For decades, the realm of artificial intelligence (AI) has been a hotbed of research and development. Experts in computer vision, planning, and reasoning have tirelessly worked to solve complex problems, pushing the boundaries of what machines can achieve. While early progress was compartmentalized, recent years have witnessed a remarkable convergence, bringing together disparate AI disciplines to create systems exhibiting truly advanced intelligence. From IBM’s Watson, capable of winning at Jeopardy!, to AI mastering complex games like poker, and even algorithms that can recognize cats in online images, the leaps in AI are undeniable.
These groundbreaking advancements were showcased at the 29th conference of the Association for the Advancement of Artificial Intelligence (AAAI) in Austin, Texas. The conference, chaired by Shlomo Zilberstein, a computer science professor at the University of Massachusetts Amherst, highlighted the growing trend of interdisciplinary and applied AI research. Zilberstein himself is deeply involved in studying how artificial agents plan actions, particularly in semi-autonomous systems that collaborate with humans or other devices.
Examples of these semi-autonomous systems are becoming increasingly prevalent, ranging from co-robots in manufacturing to search-and-rescue robots managed remotely by humans. However, it’s the domain of “driverless” cars that has truly captured Zilberstein’s attention and the imagination of the public. The alluring vision painted by automotive marketing campaigns is one where passengers, no longer burdened by driving, can transform their commute into productive or leisure time. Concepts like swiveling seats creating mobile living rooms and Google’s driverless car prototype, devoid of steering wheels and pedals, fuel this futuristic image.
However, Zilberstein cautions that the path to full autonomy in vehicles, and many other areas, is not as straightforward as these visions suggest. He argues that “in many areas, there are lots of barriers to full autonomy,” extending beyond mere technological hurdles to encompass legal, ethical, and economic considerations.
The Co-Pilot Approach to Autonomous Driving
Instead of a sudden shift to fully autonomous vehicles, Zilberstein proposes a more realistic near-future scenario: a prolonged phase of human-machine co-piloting. In this model, humans and AI systems would share driving responsibilities. The vehicle would handle routine driving tasks, while the human driver would intervene in challenging or ambiguous situations. This necessitates sophisticated communication between car and driver, with the car proactively alerting the driver when intervention is needed. Furthermore, in critical situations where the driver is unresponsive, the self-driving system must be capable of making autonomous decisions to safely pull over and stop.
This introduces the concept of “fault-tolerant planning.” As Zilberstein explains, “What happens if the person is not doing what they’re asked or expected to do, and the car is moving at sixty miles per hour?” The answer calls for AI systems that can anticipate and manage human unpredictability and error, keeping the vehicle safe even when the human’s input deviates from the plan.
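To make the idea concrete, here is a minimal Python sketch of the kind of fallback logic fault-tolerant planning implies: the car requests a handover and, if the driver never responds, executes a contingency of its own. The vehicle interface (request_human_control, driver_is_in_control, continue_autonomous_driving, execute_safe_stop) and the timeout values are illustrative assumptions, not details of Zilberstein’s system.

```python
import time

# Hypothetical sketch of a fault-tolerant handover: the car asks the driver to
# take over and falls back to a safe stop if no response arrives in time.
# The vehicle interface and timeout values are illustrative assumptions.

HANDOVER_TIMEOUT_S = 8.0   # how long the car waits for the driver to respond
POLL_INTERVAL_S = 0.1      # how often the car checks for a response

def transfer_control_or_fail_safe(vehicle):
    """Request that the human take over; if they never do, pull over safely."""
    vehicle.request_human_control()            # alert the driver (audio/visual)
    deadline = time.monotonic() + HANDOVER_TIMEOUT_S

    while time.monotonic() < deadline:
        if vehicle.driver_is_in_control():     # the driver took over in time
            return "human_in_control"
        vehicle.continue_autonomous_driving()  # stay in a safe state meanwhile
        time.sleep(POLL_INTERVAL_S)

    # The driver never responded: the planner needs a contingency of its own.
    vehicle.execute_safe_stop()                # e.g., pull over and stop
    return "safe_stop_executed"
```

The key point of the sketch is that the planner never blocks on the driver: it keeps the vehicle in a safe state while it waits, and it has its own contingency ready when the handover fails.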
With support from the National Science Foundation (NSF), Zilberstein’s research delves into these practical questions surrounding artificial agents operating within human environments. He collaborates with human behavior experts from academia and industry to understand the nuances of human behavior that are critical for developing effective semi-autonomous robots. This understanding is then translated into computer programs that enable autonomous vehicles to plan their actions and, crucially, to devise contingency plans for unforeseen events.
Decoding Human Driving Cues: A Challenge for AI Programmers
Safe driving is replete with subtle, often unspoken cues that humans instinctively understand. Consider a four-way stop. While the official rule dictates right-of-way to the first car at the intersection, actual navigation involves a delicate dance of observation and communication. “There is a slight negotiation going on without talking,” Zilberstein notes. “It’s communicating by your action such as eye contact, the wave of a hand, or the slight revving of an engine.”
Current autonomous vehicles often struggle at these very intersections, becoming paralyzed by their inability to interpret these human cues. Research by Alan Winfield at Bristol Robotics Laboratory highlights this issue, demonstrating how robots facing difficult decisions can become trapped in prolonged processing loops and miss critical opportunities to act. Zilberstein’s research aims to address this by designing planning algorithms that keep the system in a “live state,” able to keep acting safely, even in situations where a timely human intervention is essential but may not come.
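One common way to avoid this kind of paralysis is “anytime” decision-making, in which the planner always keeps a best-so-far choice and commits to it once a deadline expires. The Python sketch below illustrates that general idea; it is offered only as an illustration under assumed names, not as the specific algorithm used by Zilberstein or Winfield.

```python
import time

def decide_with_deadline(candidate_actions, evaluate, budget_s):
    """Anytime-style decision loop: commit to *some* action by a deadline.

    candidate_actions: possible actions at the intersection, e.g. "wait",
        "creep_forward", "proceed" (illustrative placeholders).
    evaluate: a scoring function for an action; assumed to be the slow step.
    budget_s: wall-clock time budget in seconds.
    """
    deadline = time.monotonic() + budget_s
    best_action, best_score = None, float("-inf")

    for action in candidate_actions:
        if best_action is not None and time.monotonic() >= deadline:
            break                          # out of time: act on the best so far
        score = evaluate(action)           # refine the estimate for this action
        if score > best_score:
            best_action, best_score = action, score

    return best_action                     # never stall waiting for certainty
```

At a four-way stop, the candidate actions might be as coarse as waiting, creeping forward, or proceeding, with the scoring function refined as new observations of the other drivers arrive.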
Tailoring Autonomous Driving to Human Needs
Beyond basic navigation, Zilberstein’s research explores how semi-autonomous driving can be tailored to human-centered factors, such as a driver’s level of attentiveness or preferences like avoiding highways. In collaboration with Kyle Wray and Abdel-Illah Mouaddib, Zilberstein developed a novel model and planning algorithm that enables semi-autonomous systems to make sequential decisions in situations involving competing objectives, such as balancing safety and speed.
Their experiments focused on scenarios where the transfer of control between human and vehicle was dependent on driver fatigue. The results showed that their new algorithm enabled vehicles to prioritize roads suitable for autonomous driving when the driver was fatigued, thereby enhancing driver safety. “In real life, people often try to optimize several competing objectives,” Zilberstein points out. “This planning algorithm can do that very quickly when the objectives are prioritized. For example, the highest priority may be to minimize driving time and a lower priority objective may be to minimize driving effort. Ultimately, we want to learn how to balance such competing objectives for each driver based on observed driving patterns.”
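A toy Python sketch shows what this kind of prioritized (lexicographic) comparison can look like: candidate plans are filtered on the highest-priority objective first, and lower-priority objectives break the remaining ties. The function and objective names, and the tolerance parameter, are illustrative assumptions rather than details of the published algorithm.

```python
def pick_plan(plans, objectives, tolerance=0.0):
    """Choose a plan under prioritized (lexicographic) cost objectives.

    plans: candidate plans, e.g. alternative routes (illustrative).
    objectives: cost functions ordered from highest to lowest priority,
        e.g. [driving_time, driving_effort] (illustrative names).
    tolerance: how much a plan may lose on a higher-priority objective and
        still stay in contention; 0.0 means strict lexicographic ordering.
    """
    candidates = list(plans)
    for cost in objectives:
        best = min(cost(p) for p in candidates)
        # Keep only the plans that are (near-)optimal on this objective, then
        # let the next, lower-priority objective break the remaining ties.
        candidates = [p for p in candidates if cost(p) <= best + tolerance]
        if len(candidates) == 1:
            break
    return candidates[0]
```

For example, pick_plan(routes, [lambda r: r.time, lambda r: r.effort]) would first keep the fastest routes and only then prefer the one requiring the least driver effort; in a driver-fatigue scenario, an objective favoring roads suitable for autonomous driving could be promoted to the top of the list.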
The Future is Collaborative: AI and Human Expertise
The field of artificial intelligence is undeniably in a period of rapid advancement. Decades of foundational research are now bearing fruit, with machine learning being applied across diverse fields in ways previously unimaginable. Héctor Muñoz-Avila, program director in NSF’s Robust Intelligence cluster, credits this integration of long-term AI research with producing “remarkable successes.”
NSF’s Robust Intelligence program has been instrumental in supporting the fundamental AI research that underpins these transformative smart systems. Moreover, it supports researchers like Zilberstein who are tackling the complex questions that arise with emerging technologies. As Zilberstein concludes, “When we talk about autonomy, there are legal issues, technological issues and a lot of open questions… NSF has been able to identify these as important questions and has been willing to put money into them. And this gives the U.S. a big advantage.” The programming of self-driving cars is not solely the domain of software engineers; it requires a multidisciplinary approach, drawing upon expertise from AI researchers, roboticists, ethicists, legal scholars, and policymakers, all working collaboratively to navigate the multifaceted challenges and opportunities of autonomous vehicles.