UK-based startup Aligned AI claims to have developed an algorithm that could improve the reliability of AI systems in self-driving cars, robots, and other AI-powered products. The algorithm, known as the Algorithm for Concept Extraction (ACE), aims to address the spurious correlations that current AI systems pick up during training, which can lead to catastrophic outcomes in real-world scenarios. ACE enables AI systems to form more sophisticated associations that resemble human concepts, potentially avoiding such misgeneralizations. The breakthrough could have significant implications for robotics, content moderation, and other AI applications.
Aligned AI's work highlights the danger of spurious correlations in AI systems, particularly in safety-critical applications such as self-driving cars. In 2018, an Uber self-driving car struck and killed a woman crossing the road after its AI software failed to identify her as a pedestrian because she was outside a crosswalk. The training data supplied to the system had only depicted pedestrians in crosswalks, so it misclassified her. ACE offers a way past such limitations by allowing AI systems to make more accurate associations and handle complex scenarios.
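The failure mode is easy to reproduce in miniature. The sketch below is a hypothetical toy, not Aligned AI's ACE or Uber's actual system: it trains a one-feature classifier on data in which every pedestrian happens to appear in a crosswalk, so the spurious "in crosswalk" cue is statistically indistinguishable from the causal "person-shaped" one.

```python
# Toy spurious-correlation demo (hypothetical illustration, not ACE).
# Each example is ((person_shaped, in_crosswalk), label); label 1 = pedestrian.
train = [
    ((1, 1), 1),  # pedestrian, in a crosswalk
    ((1, 1), 1),  # pedestrian, in a crosswalk
    ((0, 0), 0),  # empty road, no crosswalk
    ((0, 0), 0),  # empty road, no crosswalk
]

def accuracy(i):
    """Training accuracy of the one-feature rule 'pedestrian iff feature i == 1'."""
    return sum(x[i] == y for x, y in train) / len(train)

# On this data BOTH single-feature rules fit perfectly, so a learner that
# only minimizes training error has no reason to prefer the causal feature.
assert accuracy(0) == accuracy(1) == 1.0

# Suppose the learner settles on the spurious rule:
def predict(x):
    return x[1]  # "pedestrian iff in a crosswalk"

# A real pedestrian crossing outside a crosswalk is missed entirely.
print(predict((1, 0)))  # -> 0: classified as not-a-pedestrian
```

An algorithm like ACE, as described, would instead form a concept closer to "pedestrian" itself rather than latching onto whichever training-set correlate happens to fit.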
Aligned AI sees potential for ACE in fields such as robotics, where robots could learn to pick up objects of different sizes and shapes in different environments without retraining, and in content moderation, where it could help social media platforms detect toxic language. ACE has already shown promising results on CoinRun, a video-game benchmark, and Aligned AI is currently seeking funding and a patent for the algorithm. The company aims to develop ACE further to achieve “zero-shot learning” and to make AI systems more interpretable by expressing their objectives in natural language.