It’s 2034. Imagine a scenario: a drunk pedestrian stumbles off the curb and directly into the path of an autonomous vehicle. The car, unable to brake in time, hits and fatally injures him. In the past, with a human driver, this would be a tragic accident, the fault lying squarely with the pedestrian. However, the rise of self-driving cars, which has drastically reduced accident rates since the 2020s, has shifted the legal landscape. The benchmark is no longer the “reasonable person,” but the “reasonable robot.” The victim’s family decides to sue the car manufacturer. Their argument? While braking was impossible, the car could have swerved, crossing the double yellow line and colliding with an empty self-driving car in the adjacent lane. Data from the car’s sensors confirms this possibility. The software designer is asked a pivotal question in court: “Why didn’t the car swerve?”
This question, “Why?”, is unprecedented in traditional accident investigations involving human drivers. Human actions in the moments leading up to a crash are often attributed to panic, instinct, or lack of thought. But with robots at the wheel, “Why?” becomes a legitimate and critical inquiry. Human ethical standards, imperfectly reflected in law, rest on assumptions that engineers are now forced to confront. The most significant is the expectation that a person with sound judgment will sometimes disregard the literal interpretation of the law to uphold its intended spirit. The challenge for engineers is to instill this element of “good judgment” into self-driving cars and other autonomous machines; essentially, to program ethics into robots.
The journey toward computerized driving started in the 1970s with anti-lock braking systems. Today, increasingly sophisticated features like automated steering, acceleration, and emergency braking are commonplace. Fully automated vehicle testing, with a safety driver present, is permitted in various locations, including parts of the UK, the Netherlands, Germany, and Japan, and across numerous states in the USA. Google and automakers such as Nissan and Ford have projected fully driverless operation within the next decade.
This technological leap necessitates a fundamental shift in how we understand and assign responsibility in vehicle accidents. Manufacturers and software developers will face scrutiny unlike anything seen with human drivers, forced to defend the decision-making process of their autonomous creations.
Self-driving cars rely on a suite of sensors – video cameras, ultrasonic sensors, radar, and lidar – to perceive their surroundings. In California, for example, autonomous vehicle testing regulations mandate that all sensor data for the 30 seconds preceding any collision must be submitted to the Department of Motor Vehicles. This data-rich environment, coupled with accident records (including incidents where Google’s cars were deemed at fault), provides engineers with unprecedented insight. They can reconstruct accident scenarios with remarkable precision, analyzing what the vehicle sensed, the alternative actions considered, and the logic underpinning its choices. Essentially, we can ask a computer to explain its reasoning, much like we might ask a human to narrate their decisions in a driving simulator or video game.
This level of transparency and accountability means regulators and legal systems can hold autonomous vehicles to safety standards exceeding human capabilities and subject them to intense post-accident analysis, even for rare incidents. Manufacturers must be prepared to justify a car’s actions in ways unimaginable for today’s drivers.
Driving, by its very nature, involves risk. The allocation of this risk – among drivers, pedestrians, cyclists, and even property – is inherently ethical. Therefore, it’s crucial for both engineers and the public to understand that a self-driving car’s decision-making system must incorporate the ethical implications of its actions.
A common, seemingly straightforward approach to morally ambiguous situations is to adhere to the law while minimizing harm. This strategy appears attractive, allowing developers to justify a car’s actions by simply stating, “It complied with all traffic laws.” It also conveniently shifts the responsibility of defining ethical behavior to lawmakers. However, this approach rests on the flawed assumption that the law is comprehensive and covers every conceivable scenario, which is far from reality.
Often, traffic law relies heavily on driver common sense and provides little guidance on split-second decisions immediately before a crash. Consider the initial scenario again: a self-driving car programmed to strictly follow the law might refuse to cross a double yellow line to avoid hitting a pedestrian, even if the only thing in the opposite lane is an empty autonomous vehicle. Laws rarely account for such specific emergencies as a pedestrian suddenly falling into the road. While some jurisdictions, like Virginia, have clauses allowing deviations from traffic laws in emergencies, they often use vague language like “provided such movement can be made safely.” In our scenario, it falls to the car’s developer to pre-define what constitutes “safe” in the context of crossing a double yellow line to avoid a pedestrian.
[Image: close-up of vehicle damage, with debris caught between car doors, after a self-driving car collision.]
The challenge lies in the inherent uncertainty. A self-driving car will rarely have absolute certainty about road conditions. It might estimate a 98% or 99.99% confidence level that crossing a double yellow line is safe. Engineers must pre-determine the acceptable confidence threshold for such maneuvers and how this threshold might vary depending on the severity of the situation – is it a plastic bag or a pedestrian it’s trying to avoid?
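To make that concrete, a toy version of such a policy might look like the sketch below. Every severity value and threshold is an invented assumption, not a figure from any real vehicle; it only illustrates the kind of decision the developer must write down in advance.

```python
# Hypothetical sketch of a pre-defined policy for crossing a double yellow
# line. All severities and thresholds are invented assumptions.

SEVERITY = {"plastic_bag": 0.0, "debris": 0.3, "animal": 0.6, "pedestrian": 1.0}

def required_confidence(hazard: str) -> float:
    """More severe hazards justify acting under more uncertainty about
    whether the oncoming lane is clear."""
    severity = SEVERITY.get(hazard, 0.5)
    # Interpolate from 99.99% (trivial hazard) down to 98% (pedestrian at risk).
    return 0.9999 - severity * (0.9999 - 0.98)

def should_cross_double_yellow(hazard: str, lane_clear_confidence: float) -> bool:
    return lane_clear_confidence >= required_confidence(hazard)

# Swerve for a pedestrian at 99% confidence that the lane is clear,
# but not for a plastic bag at the same confidence.
print(should_cross_double_yellow("pedestrian", 0.99))   # True
print(should_cross_double_yellow("plastic_bag", 0.99))  # False
```

The point is not these particular numbers but that someone has to choose them before the emergency ever occurs.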
Even today, self-driving cars already exhibit what could be termed “judgment” by intentionally breaking the law in specific circumstances. Google has publicly acknowledged allowing its vehicles to exceed speed limits to keep up with traffic when driving slower would be more dangerous. Many would likely accept the same logic in other emergencies, like rushing someone to the hospital. Researchers at Stanford University, Chris Gerdes and Sarah Thornton, have argued against rigidly encoding traffic laws into software, observing that drivers often treat laws as flexible guidelines to be bent when doing so improves efficiency or safety. Imagine being stuck behind a slow cyclist for miles simply because your car is programmed never to cross a double yellow line, even briefly.
Even within legal boundaries, autonomous vehicles make numerous subtle safety decisions. For example, traffic laws are generally silent on lane positioning. Given that most lanes are significantly wider than vehicles, human drivers instinctively use this extra space to navigate around debris or maintain distance from erratic drivers.
Google has further explored this concept in a 2014 patent, detailing how an autonomous vehicle might optimize its lane position to minimize risk. They use the example of a self-driving car on a three-lane road with a large truck to its right and a smaller car to its left. To enhance its safety, the autonomous car would subtly shift its position within the lane, moving slightly closer to the smaller car and further from the truck.
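One could imagine that lateral bias implemented as a simple heuristic like the sketch below; the risk weights and offsets are invented for illustration and are not taken from the patent.

```python
# Hypothetical lane-positioning heuristic: bias the car within its lane
# away from the riskier neighbor. Risk weights are invented and are not
# taken from the patent.

RISK_WEIGHT = {None: 0.0, "car": 1.0, "motorcycle": 1.5, "truck": 3.0}

def lane_offset(left_neighbor, right_neighbor, max_offset_m=0.5):
    """Lateral offset in meters from lane center.
    Negative shifts left, positive shifts right."""
    left_risk = RISK_WEIGHT.get(left_neighbor, 1.0)
    right_risk = RISK_WEIGHT.get(right_neighbor, 1.0)
    total = left_risk + right_risk
    if total == 0:
        return 0.0  # nothing nearby: hold the lane center
    # Shift away from the riskier side, in proportion to the imbalance.
    return max_offset_m * (left_risk - right_risk) / total

# Smaller car to the left, large truck to the right: shift slightly left.
print(lane_offset(left_neighbor="car", right_neighbor="truck"))  # -0.25
```

In effect, the car edges toward the neighbor that would do it the least harm.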
This seems intuitively sensible and mirrors what many human drivers do unconsciously. However, it raises ethical questions about risk distribution. Should the driver of the smaller car inherently bear slightly more risk simply because of their vehicle’s size? While such minor risk redistribution might be negligible in individual human driving habits, if formalized and applied universally to all self-driving cars, it could have significant aggregate consequences.
In each of these examples, the car is making decisions based on weighing different values – the potential harm to objects or people it might hit, and the safety of its own occupants. Unlike humans, who make these decisions instinctively, an autonomous vehicle must rely on a pre-programmed strategy of risk management. Risk, in this context, is defined as the severity of a potential negative outcome multiplied by its probability.
In another 2014 patent, Google describes a risk-management application in which a vehicle might change lanes to gain a better view of a traffic light. The vehicle weighs the potential benefit of seeing the traffic light sooner against the small risk of a lane-change collision (perhaps due to a sensor malfunction). Each potential outcome is assigned a probability and a positive or negative value. By multiplying probability and value, and summing the results, the car can quantitatively assess whether the benefits of changing lanes outweigh the risks.
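In spirit, the calculation reduces to an expected-value comparison. The sketch below uses entirely invented probabilities and magnitudes to show the arithmetic such a system would perform.

```python
# Expected-value sketch of the lane-change decision. All probabilities
# and benefit/cost magnitudes are invented for illustration.

outcomes_change_lane = [
    # (probability, value): positive values are benefits, negative are harms
    (0.9500, +10.0),    # sees the light sooner, smoother trip
    (0.0499, -5.0),     # minor cost: aborted maneuver, extra braking
    (0.0001, -5000.0),  # rare lane-change collision (e.g., sensor fault)
]

outcomes_stay_in_lane = [
    (1.0, 0.0),         # baseline: no change in view, no added risk
]

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

# Change lanes only if the risk-weighted benefit beats staying put.
if expected_value(outcomes_change_lane) > expected_value(outcomes_stay_in_lane):
    print("change lanes")
else:
    print("stay in lane")
```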
The challenge lies in the incredibly low probability of collisions. In the US, the average driver experiences a collision roughly every 257,000 kilometers (160,000 miles), or about every 12 years. Even with the vast amounts of driving data generated by autonomous vehicles, it will take considerable time to establish reliable crash probabilities for every conceivable driving scenario.
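Back-of-the-envelope arithmetic shows why. In the sketch below, the per-scenario share of crashes and the target number of observations are arbitrary assumptions; only the 257,000 km figure comes from the text above.

```python
# Rough arithmetic on how sparse crash data really is. The per-scenario
# share of crashes and the target number of observations are arbitrary
# assumptions; only the 257,000 km figure comes from the text above.

km_per_crash = 257_000
crash_prob_per_km = 1 / km_per_crash
print(f"{crash_prob_per_km:.1e} crashes per km")        # ~3.9e-06

# Suppose one specific scenario accounts for 1 in 1,000 crashes and we
# want roughly 100 observations of it to estimate its probability.
scenario_share = 1 / 1_000
km_needed = 100 / (crash_prob_per_km * scenario_share)
print(f"{km_needed:.1e} km of driving required")        # ~2.6e+10 km
```

Tens of billions of kilometers for a single well-defined scenario is why engineers cannot simply wait for the data to accumulate.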
Assigning a value to the magnitude of damage is even more complex. Property damage costs can be estimated using insurance industry data. However, valuing injuries and deaths is a far more ethically fraught issue. The concept of “the value of a statistical life” has been used for decades, typically expressed as the amount of money justified to prevent one statistical fatality. For instance, a safety improvement with a 1% chance of saving 100 lives prevents, in expectation, one statistical fatality (0.01 × 100 = 1). The US Department of Transportation recommends a figure of $9.1 million to prevent a fatality, a number derived from market data, including wage premiums for hazardous jobs and consumer willingness to pay for safety equipment like smoke detectors. Beyond safety, the USDOT also considers the value of lost mobility and time, estimated at $26.44 per hour for personal travel.
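Plugging those USDOT figures into the same expected-value framework makes the trade-off concrete. The routing scenario and its risk-reduction estimate below are invented for illustration.

```python
# Worked example using the USDOT figures quoted above. The routing
# scenario and its risk-reduction estimate are invented.

VALUE_OF_STATISTICAL_LIFE = 9_100_000   # USD, USDOT guidance
VALUE_OF_TRAVEL_TIME = 26.44            # USD per hour, personal travel

# A routing choice adds 6 minutes per trip but is estimated to cut
# fatality risk by 1 in 2,000,000 per trip.
time_cost = (6 / 60) * VALUE_OF_TRAVEL_TIME                # ~ $2.64 per trip
safety_benefit = VALUE_OF_STATISTICAL_LIFE / 2_000_000     # ~ $4.55 per trip

print(f"time cost:      ${time_cost:.2f}")
print(f"safety benefit: ${safety_benefit:.2f}")
print("take the safer route" if safety_benefit > time_cost else "take the faster route")
```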
While seemingly systematic, this risk-benefit calculus based solely on lives saved and time lost overlooks crucial moral considerations surrounding risk exposure. For example, if an autonomous vehicle treated all human lives equally, it would logically need to give more space to an unhelmeted motorcyclist than to a fully geared rider, since the former is statistically less likely to survive a crash. This raises an ethical question about fairness: should a safety-conscious rider be penalized for their responsible behavior?
Another critical distinction between robot ethics and human ethics is the potential for unintended biases to creep into algorithms, even with well-intentioned programmers. Imagine a self-driving car algorithm that adjusts the buffer space it maintains around pedestrians based on accident settlement data from different districts. While seemingly efficient and data-driven, this could inadvertently penalize pedestrians in low-income neighborhoods if, for example, lower settlement amounts are due to socioeconomic factors rather than less severe injuries. The algorithm would then, unintentionally, provide less buffer space in poorer areas, subtly increasing pedestrian risk.
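A toy version of that algorithm shows how the bias slips in without anyone coding it deliberately; the settlement figures are invented.

```python
# Toy illustration of how a data-driven proxy can encode bias. The
# per-district settlement figures are invented.

avg_settlement_usd = {"district_A": 1_500_000, "district_B": 400_000}

def pedestrian_buffer_m(district: str, base_m: float = 1.0) -> float:
    """Scale the pedestrian buffer by expected settlement cost,
    normalized to the highest-cost district."""
    max_settlement = max(avg_settlement_usd.values())
    return base_m * avg_settlement_usd[district] / max_settlement

print(pedestrian_buffer_m("district_A"))  # 1.0 m of clearance
print(pedestrian_buffer_m("district_B"))  # ~0.27 m: same pedestrian, less protection
```

Nothing in the function mentions income or neighborhood, yet the proxy it optimizes carries that information anyway.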
It’s tempting to dismiss these concerns as abstract academic exercises. However, the literal nature of computer programs means these ethical considerations must be addressed proactively, during the design phase, not as after-the-fact patches.
This is why researchers often employ hypothetical ethical dilemmas, such as the famous “trolley problem.” In this scenario, a runaway trolley is about to hit a group of children. The only way to stop it is to push a large person onto the tracks to derail the trolley, sacrificing one life to save many. The dilemma forces us to confront whether it is ethically permissible to take a direct action to cause one death to prevent multiple deaths. If you answer “no,” consider the alternative: by inaction, you are effectively allowing multiple deaths. How can one justify this apparent contradiction?
The ethics of autonomous vehicle operation is, ultimately, a solvable problem. Other fields, such as organ donation allocation and military conscription exemptions, have successfully navigated comparable ethical complexities and risk-benefit trade-offs in a safe and reasonable manner.
However, self-driving cars present a unique challenge. They must make rapid decisions with incomplete information in unforeseen situations, guided by ethics explicitly encoded in software. Fortunately, the public doesn’t expect superhuman moral wisdom from these machines. What is expected is a rational and justifiable decision-making process that demonstrably considers ethical implications. The solution doesn’t need to be perfect, but it must be thoughtful, defensible, and transparent.