Reprinted with the permission of the Daily Journal. Originally printed on May 14, 2014.
Midnight, Downtown Los Angeles. Dave walks out of a crowded bar to smoke a cigarette. He sees a group of friends from college who say they’re going to grab some burgers and ask him to join. Embarrassed, he realizes he’s too drunk to drive home, but says he’ll meet them there. Calmly, Dave clicks his smartphone and activates his new sports car. Within minutes, an unmanned vehicle pulls up to the curb and Dave gets in the back. “Take me to Burger City,” he says to the empty vehicle. As the car drives perfectly to its destination, Dave passes out drunk in the back seat.
There’s a certain buzz in the air, if you haven’t noticed. The future is here. And the days of self-driving vehicles have arrived. Google and other car companies are manufacturing so-called autonomous vehicles (I call them “autobots”), and some predict they’ll be ready for market by 2020. Google’s autobot has already driven more than 435,000 miles in California by itself.
Right now, more than 30,000 people die each year in car accidents, with over 2 million accidents resulting in injury. But 90 percent of all accidents are caused by driver error, and 40 percent of all fatal accidents involve alcohol, distraction or fatigue. With autobots driving, those numbers will theoretically vanish. More importantly, think about how many people this will help. Now the blind, those too young or too old, the mentally disabled – or even those who are too drunk – will all be able to drive.
Yet, these autobots raise a stack of legal questions that need to be addressed. For example, since the car drives itself, does that mean people can drink and drive? What happens if a person is reading a book when the vehicle gets into an accident? Is the driver negligent? Should he have overridden the system? What if the driver was blind? What if the car was completely empty, but the owner sent it to pick up his son? Who’s to blame? The owner? The occupant? The manufacturer? Clearly, the courts and legislators are going to have a field day working out these issues.
Some have suggested that autobot manufacturers should be entirely responsible for all accidents, regardless of what the occupants were doing at the time of the accident. The simple reasoning is that if manufacturers are truly making autonomous vehicles, how can a passenger ever be liable? I disagree – an autobot driver should be subject to comparative fault just like a conventional driver. To be sure, manufacturers will never be able to shift all liability to the autobot occupant, but in specific situations some fault should be apportioned to the driver, owner or occupant of the vehicle. This will largely take place through a new emerging theory of comparative fault.
In a conventional accident, the best defense for a manufacturer is to argue that the driver of the vehicle is comparatively at fault for operating the vehicle negligently. In this context, the jury apportions fault to all the parties involved. Accordingly, the question of whether the driver was negligent when operating the vehicle is of crucial importance under a comparative fault system. This doesn’t work in the autobot context. Autobots are being designed for the purpose of giving an occupant the luxury of reading a book, sleeping, watching a movie or doing anything else while traveling. Yet, under a traditional analysis, these occupants would all be 100 percent at fault because they were acting negligently while driving. Therefore, the law should recognize a new theory of comparative fault. One author, Jeffrey Gurney, believes this could be resolved if, instead of focusing on the actions of the driver at the time of the accident, the law focuses on the ability or capacity of the driver to avoid the accident. See generally Jeffrey K. Gurney, “Sue My Car Not Me: Products Liability and Accidents Involving Autonomous Vehicles,” 2013 U. Ill. J.L. Tech. & Pol’y 247. If the driver fails to exercise ordinary care, fault should be apportioned based on the driver’s ability or capacity to prevent the accident. This will benefit manufacturers in the long run, because it apportions some fault to the occupants. The following analysis is based on Gurney’s theory.
Since this defense theory focuses on the ability or capacity of the driver, it’s going to be very fact specific. Ideally, it will act as a safety valve, releasing liability pressure off of the manufacturer. Therefore, I call this the “Safety Valve Theory” of comparative fault. Let’s explore the theory in five specific situations.
It Wasn’t Me, It Was My Car! Okay, You Got Me, It Was Me: The Case of the Manual Driver
This in no way should be controversial. The driver of an autobot should be liable if he or she is driving the vehicle while it’s not in autonomous mode. The case should be assessed the same as a traditional car accident and the manufacturer should be able to point to driver error as the cause of the accident. End of story.
Cat-Like Reflexes: The Case of the Alert Driver
Okay, but now imagine that Superman is sitting in the driver’s compartment of his autobot, watching cautiously as the car drives itself. He notices the car is accelerating dangerously towards an elementary school. Although he believes the course of action is dangerous, he trusts his vehicle. The car crashes into the principal’s office. Is Superman negligent for not overriding the autobot’s autonomous mode? Yes. Superman was not distracted; he was fully alert and aware of the road conditions around him. He recognized that the car was taking a dangerous route, but allowed it to proceed. He had no cognitive problems, and could have taken over if he chose to. Some might argue that since the car was in autonomous mode, Superman had no “duty” to take control of the car. But although a person may not have a duty of care in a particular situation, he or she may assume that duty by his or her conduct. Here, Superman assumed a duty to override the vehicle in case of an accident when he monitored its driving performance. Therefore, Superman should be held partially liable.
As of now, autobots are legal in California, Nevada, Florida, Michigan and the District of Columbia. However, all jurisdictions require a human driver behind the wheel in case of a needed override. So not only could Superman assume a duty to override the autobot – he may be statutorily required to do so.
Ain’t Nobody Got Time for That: The Case of the Distracted Driver
How should we apportion liability to a distracted driver? On one hand, the purpose of an autobot is to give drivers the freedom to do anything but operate the vehicle. On the other hand, drivers cannot delegate total responsibility to the machine. This is why focusing on the driver’s ability to intervene – and not on what he or she is doing – is so important. There will be situations when drivers will need to take control. And when that happens, if the driver is capable of overriding the system but does not, he or she should be held partially liable.
Imagine that Wolf Blitzer is reading the news while being driven by his autobot when it suddenly starts to snow. Wolf knows a user should never allow the autobot to drive in bad weather conditions, but decides to risk it. Wolf’s autobot skids while making a turn and crashes into another car. Here, Wolf was clearly distracted because he was reading the news, but he has the right to be while riding in an autonomous vehicle. Yet, because Wolf was not disabled or mentally incapacitated, he was fully able to intervene, override his car’s autonomous mode, and drive it safely in the snowy conditions. Since he failed to do so, he is partially at fault.
Driving Miss Daisy: The Case of the Diminished Driver and Blind as a Bat: The Case of the Disabled Driver
How do you apportion fault to a driver who is blind? Although manufacturers will be able to apportion fault to the alert driver and the distracted driver under the Safety Valve Theory, they will have no recourse against the diminished capacity driver or the disabled driver. Why? A diminished capacity “driver” is someone with cognitive problems, so it’s highly unlikely he or she would be able to override the vehicle’s autonomous mode in case of an accident. Since such a driver could not intervene, a manufacturer’s defense would ring hollow. The same is true for disabled drivers. Someone who is seriously physically disabled and could not operate a vehicle could not be expected to override it in case of an emergency. It would be equally true if the person were blind or mentally handicapped. Realistically, it would also look horrible to argue in front of a jury that this poor blind driver was somehow at fault, even though the autobot was marketed as a “self-driving vehicle.”
The rise of the autobots is upon us. They have the potential to make society a safer place. But they also have the ability to cripple manufacturers with liability. The best way to remedy this is to allow manufacturers to apportion some fault to the driver, owner or occupant in some cases. This can be done through a new theory of comparative fault used for autobot litigation: the Safety Valve Theory of comparative negligence.