Tesla, Nissan, Google, and several other companies have declared that they will have commercial self-driving cars on the highways before the end of this decade. Experts at the Institute of Electrical and Electronics Engineers predict that 75 percent of cars will be self-driving by 2040. So far California, Nevada, Florida, Michigan, and the District of Columbia have passed laws explicitly legalizing self-driving vehicles, and many other states are looking to do so.
The coming era of autonomous autos raises concerns about legal liability and safety, but there are good reasons to believe that robot cars may outperform human drivers in practical and even ethical decision making.
More than 90 percent of all traffic accidents are the result of human error. In 2011, there were 5.3 million automobile crashes in the United States, resulting in more than 2.2 million injuries and 32,000 deaths. Americans spend $230 billion annually to cover the costs of accidents, accounting for approximately 2 to 3 percent of GDP.
Proponents of autonomous cars argue that they will be much safer than vehicles driven by distracted and error-prone humans. The longest-running safety tests have been conducted by Google, whose autonomous vehicles have traveled more than 700,000 miles so far with only one accident (when a human driver rear-ended the car). So far, so good.
Stanford University law professor Bryant Walker Smith, however, correctly observes that no engineered system is perfectly safe. Smith has roughly calculated that "Google's cars would need to drive themselves more than 725,000 representative miles without incident for us to say with 99 percent confidence that they crash less frequently than conventional cars." Given expected improvements in sensor technologies, algorithms, and computation, it seems likely that this safety benchmark will soon be met.
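Where does a number like 725,000 miles come from? Here is a minimal sketch of the underlying statistics, assuming crashes arrive as a Poisson process and using an illustrative human baseline crash rate (roughly one crash, reported or not, per 157,000 miles; not necessarily the figure Smith used):

```python
import math

# Zero-failure confidence calculation: how many crash-free miles are needed
# before we can say, at a given confidence level, that the autonomous crash
# rate is below the human baseline? (Baseline rate below is illustrative.)

def miles_needed(confidence: float, human_crash_rate_per_mile: float) -> float:
    """With zero crashes in m miles, P(zero crashes) = exp(-rate * m), so we
    need exp(-human_rate * m) <= 1 - confidence, i.e.
    m >= -ln(1 - confidence) / human_rate."""
    return -math.log(1.0 - confidence) / human_crash_rate_per_mile

baseline = 1.0 / 157_000  # illustrative: one crash per ~157,000 miles
print(f"{miles_needed(0.99, baseline):,.0f} miles")  # ~723,000 miles
```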
Still, all systems fail eventually. So who will be liable when a robot car, however rarely, crashes into someone?
An April 2014 report from the Brookings Institution, a good-government think tank, argues that the current liability system can handle the vast majority of claims that might arise from damages caused by self-driving cars. A similar April 2014 report from the free market Competitive Enterprise Institute (CEI) largely agrees: "Products liability is an area that may be able to sufficiently evolve through common law without statutory or administrative intervention."
A January 2014 RAND Corporation study suggests that one way to handle legal responsibility for accidents might be to extend a no-fault liability system, in which victims recover damages from their own auto insurers after a crash. Another RAND idea would be to legally establish an irrebuttable presumption of owner control over the autonomous vehicle. Legislation could require that "a single person be responsible for the control of the vehicle. This person could delegate that responsibility to the car, but would still be presumed to be in control of the vehicle in the case of a crash."
This would essentially leave the current liability system in place. To the extent that liability must be determined in some cases, the fact that self-driving cars will be embedded with all sorts of sensors, including cameras and radar, will provide a pretty comprehensive record of what happened during a crash.
Should we expect robot cars to be more ethical than human drivers? In a fascinating March 2014 Transportation Research Record study, Virginia Tech researcher Noah Goodall wonders about "Ethical Decision Making During Automated Vehicle Crashes." Goodall observes that engineers will necessarily install software in automated vehicles enabling them to "predict various crash trajectory alternatives and select a path with the lowest damage or likelihood of collision."
To illustrate the challenge, Stanford's Smith considers a case in which you are driving on a narrow mountain road between two big trucks. "Suddenly, the brakes on the truck behind you fail, and it rapidly gains speed," he imagines. "If you stay in your lane, you will be crushed between the trucks. If you veer to the right, you will go off a cliff. If you veer to the left, you will strike a motorcyclist. What do you do? In short, who dies?"
Fortunately such fraught situations are rare. Although it may not be the moral thing to do, most drivers will react in ways that they hope will protect themselves and their passengers. So as a first approximation, autonomous vehicles should be programmed to choose actions that aim to protect their occupants.
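To make that tradeoff concrete, here is a toy version of the cost-minimizing trajectory selection Goodall describes, applied to Smith's mountain-road scenario. Every probability and harm score below is a hypothetical placeholder, and raising the occupant weight implements the "protect the occupants first" approximation:

```python
from dataclasses import dataclass

# Toy trajectory selection: estimate expected harm for each candidate
# maneuver and pick the cheapest. All numbers are hypothetical.

@dataclass
class Maneuver:
    name: str
    collision_prob: float  # estimated chance this path ends in a collision
    occupant_harm: float   # expected harm to the car's occupants (0 to 1)
    other_harm: float      # expected harm to people outside the car (0 to 1)

def expected_cost(m: Maneuver, occupant_weight: float = 1.0) -> float:
    """Expected harm of a maneuver; occupant_weight > 1 skews the policy
    toward protecting the car's own occupants."""
    return m.collision_prob * (occupant_weight * m.occupant_harm + m.other_harm)

candidates = [
    Maneuver("brake in lane",            0.95, 0.9, 0.05),
    Maneuver("veer right (cliff)",       0.70, 0.9, 0.0),
    Maneuver("veer left (motorcyclist)", 0.80, 0.1, 0.9),
]

for w in (1.0, 2.0):
    best = min(candidates, key=lambda m: expected_cost(m, w))
    print(f"occupant_weight={w}: choose '{best.name}'")
```

With equal weights the toy policy sacrifices the car to spare the motorcyclist; doubling the occupant weight flips the choice, which is exactly the kind of value judgment that has to be written into the software.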
Once the superior safety of driverless cars is established, they will dramatically change the shape of cities and the ways in which people live and work.
Roadway engineers estimate that typical highways now accommodate a maximum throughput of 2,200 human-driven vehicles per lane per hour, utilizing only about 5 percent of roadway capacity. Because self-driving cars would be safer and could thus drive closer together and faster, switching to mostly self-driving cars would dramatically increase roadway throughput. One estimate by the University of South Florida's Center for Urban Transportation Research in November 2013 predicts that a 50 percent autonomous road fleet would boost highway capacity by 22 percent, an 80 percent robot fleet would goose capacity by 50 percent, and a fully automated highway would see its throughput zoom by 80 percent.
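The capacity gain follows from shorter following headways. A back-of-envelope sketch, using illustrative speed, headway, and vehicle-length values rather than figures from the USF study:

```python
# Lane throughput as a function of following headway. Values are
# illustrative assumptions, not figures from the study cited above.

def lane_throughput(speed_mps: float, headway_s: float,
                    vehicle_len_m: float = 5.0) -> float:
    """Vehicles per lane per hour at a given speed and time headway."""
    spacing_m = vehicle_len_m + speed_mps * headway_s  # road consumed per car
    return 3600.0 * speed_mps / spacing_m

speed = 26.8  # ~60 mph in meters per second
print(f"human (1.5 s headway): {lane_throughput(speed, 1.5):,.0f} veh/lane/hr")  # ~2,100
print(f"robot (0.6 s headway): {lane_throughput(speed, 0.6):,.0f} veh/lane/hr")  # ~4,600
```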
Autonomous vehicles would also likely shift the way people think about car ownership. Currently most automobiles sit idle most of the day in driveways or parking lots as their owners go about their lives. Truly autonomous vehicles could be on the road much more of the time, essentially providing taxi service to users who summon them via mobile devices. Once riders are done with the cars, the vehicles can be dismissed to serve other patrons. Self-driving cars will also increase the mobility of the disabled, elderly, and those too young to drive.
Researchers at the University of Texas devised a realistic simulation of vehicle usage in cities that takes into account congestion and rush-hour patterns. They found that if all cars were driverless, each shared autonomous vehicle could replace 11 conventional cars. In their simulations, riders waited an average of 18 seconds for a driverless vehicle to show up, and each vehicle served 31 to 41 travelers per day. Less than one half of one percent of travelers waited more than five minutes for a ride.
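The logic of such a simulation is easy to sketch, though the real model is far richer. In the toy version below, trip rates, trip lengths, and the length of the service day are invented for illustration; the fleet grows only when no shared vehicle is idle, which is what drives the replacement ratio:

```python
import random

# Toy shared-fleet simulation: assign each trip request to an idle vehicle,
# adding a vehicle only when none is free. All parameters are invented.

random.seed(0)

MINUTES = 16 * 60        # one simulated service day
vehicle_free_at = []     # minute at which each fleet vehicle becomes idle
trips_served = 0

for minute in range(MINUTES):
    for _ in range(random.randint(2, 6)):     # trip requests this minute
        trip_len = random.randint(5, 25)      # door-to-door trip, in minutes
        idle = [i for i, t in enumerate(vehicle_free_at) if t <= minute]
        if idle:
            v = idle[0]                       # reuse an idle shared vehicle
        else:
            vehicle_free_at.append(0)         # grow the fleet on demand
            v = len(vehicle_free_at) - 1
        vehicle_free_at[v] = minute + trip_len
        trips_served += 1

print(f"fleet size: {len(vehicle_free_at)} vehicles; "
      f"trips per vehicle: {trips_served / len(vehicle_free_at):.1f}")
```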
By one estimate in a 2013 study from Columbia University's Earth Institute, shared autonomous vehicles would cut an individual's average cost of travel by as much as 75 percent compared to now. There are some 600 million parking spaces in American cities, occupying about 10 percent of urban land. In addition, 30 percent of city congestion originates from drivers seeking parking spaces close to their destinations. A fleet of shared driverless cars would free up lots of valuable urban land while at the same time reducing congestion on city streets. During low demand periods, vehicles would go to central locations for refueling and cleaning.
Since driving will be cheaper and more convenient, demand for travel will surely increase. People who can work while they commute might be willing to live even farther out from city centers. But more vehicle miles traveled would not necessarily translate into more fuel burned. For example, safer autonomous vehicles could be built much lighter than conventional vehicles and thus consume less fuel. Smoother acceleration and deceleration would reduce fuel consumption by up to 10 percent. Optimized autonomous vehicles could cut both the fuel used and pollutants emitted per mile. And poor countries could "leapfrog" to autonomous vehicles instead of embracing the personal ownership model of the 20th century West.
If driverless cars are in fact safer, every day of delay imposes a huge cost. People a generation hence will marvel at the carnage we inflicted as we hurtled down highways relying on just our own reflexes to keep us safe.