Driverless Utopia

Automated vehicles, drivers and insurers are caught between the now and the not yet.

There is an oft-quoted statistic that invites misleading assumptions about the safety potential of driverless cars. The figure is parroted with the presumption that once fully automated vehicles are doing the driving, accidents will become comparatively rare.

The statistic, that 93 percent of accidents are caused by human error, originated in a decade-old National Highway Traffic Safety Administration (NHTSA) study. More recently, an NHTSA report that offers guidance for automated vehicles and related development reiterated that 9 out of 10 “serious roadway crashes” are due to “human behavior.”1

Although there is evidence that automation can help humans drive more safely, says David Zuby, chief research officer for the Insurance Institute for Highway Safety’s (IIHS) Vehicle Research Center, “there is no proof whatsoever that automated driving is going to be safer.”

There are also signs that in some instances and circumstances, automated technology can introduce new accident risks, such as greater hacking vulnerability or insufficient warning when the vehicle tells the driver to take the wheel. But since little relevant information is available about current automated vehicle features on the market and those being tested, it is pretty tough to gauge their safety today, let alone the distant tomorrow.

Automated vehicles, human drivers and insurers are caught between the now and the not yet. In the now, some cars have “Level 2” automation capability (see Figure 1). Driverless utopia, when automated vehicles are always or nearly always doing the driving, is in the not yet. Reaching “Level 5” could still take decades to become a reality for most drivers.

Meanwhile, questions about how driverless cars will affect everything from safety to premiums to liability, and perhaps even the structure of insurance itself, are on the table. These are discussed in great detail in the Casualty Actuarial Society’s Automated Vehicle Task Force’s recently released report, “Automated Vehicles and the Insurance Industry: A Pathway to Safety: The Case for Collaboration.” This article focuses on safety and liability.

Statistical Deconstruction

The CAS Automated Vehicle Task Force’s first report, released in 2014, deconstructed the NHTSA statistic. It concluded that driverless cars could address only 78 percent, not 93 percent, of accidents, because the technology cannot overcome weather, vehicle errors and inoperable traffic control devices (AR November/December 2015).2 The National Motor Vehicle Crash Causation Survey (NMVCCS), on which the statistic is based, attributes the remaining 7 percent of accidents to vehicles, the environment and “unknown critical reasons.” The survey drew on 6,950 police-reported crashes from 2005 to 2007, before automated technology became available on the market.
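The arithmetic behind the restating can be sketched in a few lines. In this illustration, the 93 and 78 percent figures come from the reports cited above; the 15-point overlap between driver-caused crashes and limiting conditions is simply implied by those two numbers, not drawn from the task force’s own tables.

```python
# Rough illustration of the 2014 restating: start from the NMVCCS share of
# crashes with a driver-related critical reason, then remove crashes that
# also involve weather, vehicle errors or inoperable traffic control
# devices, which automation is assumed unable to overcome.

driver_critical_reason = 0.93                  # NMVCCS driver-related share
non_driver_share = 1 - driver_critical_reason  # vehicles, environment, "unknown" (~7%)
limiting_overlap = 0.15                        # implied: 93% - 15% = 78%

addressable = driver_critical_reason - limiting_overlap
print(f"Addressable by automation: {addressable:.0%}")  # 78%
print(f"Not driver-related: {non_driver_share:.0%}")    # 7%
```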

Figure 1.

Source: National Highway Traffic Safety Administration, https://www.nhtsa.gov/technology-innovation/automated-vehicles-safety

Figure 2.

Source: Favarò FM, Nader N, Eurich SO, Tripp M, Varadaraju N (2017) Examining accident reports involving autonomous vehicles in California. PLoS ONE 12(9): e0184952. https://doi.org/10.1371/journal.pone.0184952

Figure 3.

Source: NHTSA.

According to the 2014 task force report, 32.4 percent of accidents are caused by human behavior, while 21.3 percent relate to “technology issues.” The percentages do not sum to 100 percent because accidents with multiple causes are counted only once. (See Figure 3.) The 2014 report also states that the CAS task force “has re-evaluated the NMVCCS in the context of an automated vehicle world. It found that 49% of accidents contain at least one limiting factor that could disable the technology or reduce its effectiveness.”


Certainly the 2008 NHTSA figure mentioned earlier, that 93 percent of accidents are due to human behavior, deserves an update, especially given the safety features introduced since 2007. What insurers find vexing, however, is the lack of information about automated features in cars entering the market and driverless cars being tested.

Information about driverless car safety and accidents is not publicly available in a national clearinghouse. Trying to find out something as basic as how many people suffered injuries related to more fully automated vehicles being tested is a time-consuming endeavor.

Most of what is known about driverless cars comes from a smattering of investigations, independent studies and manufacturer reports to states. And for the most part, driverless car experiments have been operating in ideal driving conditions where the vehicles might not be ready to be tested in real-world chaotic situations.

Not surprisingly, approaches to and conclusions about the safety of driverless cars vary considerably. IIHS reviewed two studies, both of which accounted for the underreporting of human crashes to police but used different methods. A University of Michigan study found that the Google crash rate was higher than the human crash rate, although the autonomous cars were rarely at fault. A Virginia Tech study, which also compared naturalistic driving data with Google-reported incident information, concluded that Google cars were safer than human motorists.

IIHS took another approach, comparing Google’s automated cars with human drivers in conventional vehicles around Mountain View, California. Zuby says IIHS found that the Google cars’ rate of police-reported crashes was one-third the rate for human drivers in Mountain View, and that the crashes involving driverless cars were less severe.


To reach the closest apples-to-apples comparison, IIHS looked at the difference between Google cars and human drivers in the area where Google did most of its testing during the comparison period. This way, geography, traffic density, other drivers, weather and additional factors were similar. IIHS sorted through all Google car crashes that met the characteristics of human crashes typically reported to police. Notably, three-quarters of the crashes involving automated vehicles occurred when another driver rear-ended the driverless car, a crash type that accounts for a smaller share of police-reported incidents involving conventional vehicles in Mountain View, Zuby adds.
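The comparison itself reduces to normalizing crash counts by exposure. A minimal sketch of that calculation, with entirely hypothetical counts and mileage standing in for the study’s actual data:

```python
# Crash frequency per million vehicle-miles traveled (VMT), the usual
# normalization for comparisons like the one IIHS describes. Every input
# below is a hypothetical placeholder, not a figure from the study.

def crashes_per_million_miles(crashes: float, miles: float) -> float:
    """Crash frequency per million vehicle-miles traveled."""
    return crashes / (miles / 1_000_000)

underreporting_factor = 1.0  # scale human counts up if not already adjusted

av_rate = crashes_per_million_miles(crashes=3, miles=1_400_000)
human_rate = crashes_per_million_miles(crashes=900 * underreporting_factor,
                                       miles=100_000_000)

print(f"AV: {av_rate:.2f} vs. human: {human_rate:.2f} crashes per million VMT")
```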

Insurers still don’t know much about the automated technology currently available to consumers. Manufacturers are simply not used to handing over what they consider to be proprietary information. As a result, there is not enough information on Level 2 cars already on the road, including automated features and body specifications. “You can’t tell from a VIN which vehicles have automated driver assist or auto braking,” says Robert Passmore, assistant vice president of personal lines policy for the Property Casualty Insurers Association of America (PCI).

Last October, PCI advocated for a bipartisan provision that would require manufacturers to share more information about the vehicles they make. It was added to U.S. Senate Bill S. 1885, the American Vision for Safer Transportation through Advancement of Revolutionary Technologies (AV START) Act, which is intended to encourage development and deployment of highly automated vehicles in a safe and responsible manner.3 In February 2018, however, three senators blocked the full bill due to safety concerns about automated vehicles.4

Identifying New Risk Factors

Part of the CAS task force’s objective is to encourage greater collaboration. “Depending on the problem we’re looking to solve, we will need unique types of collaboration models. A data clearinghouse on automated vehicle data will assist in proper ratemaking and pricing of a risk,” says Jonathan Charak, assistant vice president at Zurich North America and vice chair of the CAS Automated Vehicle Task Force. “Further collaboration across the legislature, engineers, manufacturers and risk management professionals can lead to the safest possible introduction of automated vehicles to the public.”

To that end, the task force’s recent report offers some dataset suggestions for assessing the risk potential. (See sidebar, “A Call for New Datasets.”) Three of the datasets recommended by the task force — random errors, hacking and “pass-off” risk — are particularly relevant because they highlight how technology can also be a source of accidents.

Since technology is not perfect, “random errors” take place as new technology continuously evolves and learns from itself. In 2016, an Uber semi-driverless car ran a red light on its own on a busy San Francisco street.5 The same thing happened in Phoenix.6 In both cases, the professional drivers had no time to respond.

Even in conventional vehicles, new technology can introduce unanticipated hazards. In 2015, for instance, Toyota had to recall 31,000 full-sized Lexus and Toyota cars because the automatic braking system radar mistook steel joints or plates in the road for objects ahead and deployed the brakes, the Associated Press reports. That same year, Ford recalled 37,000 F-150 pickups because the vehicles stopped even when nothing was in the way.7


Another potential technology-related cause of incidents is vehicular vulnerability to hacks. It is a very serious issue that has already been demonstrated in conventional cars. “Autonomous vehicles are at the apex of all the terrible things that can go wrong,” Charlie Miller, one of the masterminds behind the hacks inflicted on a Toyota Prius, Ford Escape and Jeep Cherokee, tells Wired.8 That is because in a driverless car, the computer controls everything.

“Cars are already insecure, and you’re adding a bunch of sensors and computers that are controlling them … If a bad guy gets control of that, it’s going to be even worse,” adds Miller, who worked at Uber and other companies before securing a position at Didi, a Chinese company working on autonomous ridesharing. It does not stop there. Vehicles can be hacked and remotely hijacked using internet-connected devices that are illegally plugged into the vehicles’ on-board diagnostic ports.9

There is also “pass-off risk,” which can arise when a human driver either chooses or is forced to take control from the technology. Further, the task force report warns that drivers can become too reliant on the cars and more prone to distraction. It is also possible that motorists might not respond quickly enough to the car’s warning system.

Machine vs. Man

Pass-off risk is a gray area where the technology, the driver or both can blur accident cause, which complicates liability issues.

Experimental studies show that automated driving assistance systems can unexpectedly stop functioning in common driving situations. “Typical scenarios include heading uphill when lane markers on the other side become obscured, going around certain bends and sections where the number of lanes increase or decrease,” Zuby says. This is a concern because a driver whose hands are off the wheel and whose eyes are off the road may not be able to keep the vehicle from crashing. “One of the big unanswered questions about partial automation is how to design it in a way that the human driver knows or understands the system’s limitations as well as his or her own responsibilities.”

The need for “immediate interaction between drivers and the vehicle could prove problematic,” observes Chris Nyce, a principal with KPMG and coauthor of the consulting firm’s report, “The Chaotic Middle: The Autonomous Vehicle and Disruption in Automobile Insurance.”

Figure 4. Real-world benefits of crash avoidance technologies
HLDI and IIHS study the effects of crash avoidance features by comparing rates of police-reported crashes and insurance claims for vehicles with and without the technologies.

© 2018, Insurance Institute for Highway Safety, Highway Loss Data Institute, 501(c)(3) organizations. Used with permission.

“Many in the automobile industry are considering whether that phase should be skipped over, in favor of more immediate introduction of Level 4 technology, self-driving within boundaries,” Nyce says.

The first fatal semi-automated car accident demonstrates how both the driver and the technology can contribute to causation. (See sidebar, “Fatal Lessons.”) How this affects liability when accidents occur raises a host of new questions.

NHTSA, which offers guidance for automated vehicle development, has changed its emphasis on liability. In 2016, its “Federal Automated Vehicles Policy” took the position that liability will depend on whether the human operator or the automated system is primarily responsible for monitoring the driving environment.10 However, in its 2017 report, “Automated Driving Systems 2.0: A Vision for Safety,” the U.S. Department of Transportation put questions of liability back in the hands of states, which regulate insurance. The report stresses the responsibility of states to allocate liability, to determine who must carry vehicle insurance and to consider rules and laws allocating tort liability.


“Ultimately the courts will guide the process of assigning financial responsibility for collisions involving automated vehicles,” Charak says. The CAS Automated Vehicle Task Force report looks deeply into the advantages and disadvantages of personal auto and product liability coverage and how each would affect drivers, manufacturers, insurers and other parties. It also explores legal costs and potential insurance approaches, such as no-fault coverage. More exploration is needed to determine how commercial auto, workers’ compensation and cyber coverage will come into play.

“An additional worry I have is that if product liability becomes involved in routine automobile accidents,” Nyce says, “the ability of the legal system to promptly compensate accident victims may become less timely, as products cases tend to take much longer compared to automobile liability cases.”

Perhaps the saving grace “is that the vehicle gathers a lot of data,” says PCI’s Passmore. “In order for the legal system to adapt to the change in the nature of driving risk, that data is going to be accessible in reasonable terms.”


Most cases will be pretty clear because the vehicle will or will not have violated the vehicle code, says Robert W. Peterson, a recently retired law professor who specialized in torts and product liability at Santa Clara University’s School of Law in California.

Peterson also sees room for other types of insurance coverage. For example, if a trucker drives the truck into a tree, workers’ compensation may be the only legal remedy. “If the truck drives the truck into a tree, now there is a fully compensable tort claim against the OEM (original equipment manufacturer).” Cyber attacks may spawn OEM liability as well.

Conclusion

While there is evidence demonstrating the safety advantages of automated technology, there is also proof that safety features in conventional cars are already reducing accidents. Automatic braking systems reduce rear-end crashes involving conventional vehicles by about 50 percent, while forward collision warning systems reduce them by 27 percent, according to the IIHS study, “Effectiveness of forward collision warning and autonomous emergency braking systems in reducing front-to-rear crash rates,” published in 2017 in Accident Analysis and Prevention.
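To see what such effectiveness figures mean for a whole fleet, here is a small worked example; the baseline crash count and fleet shares are hypothetical, and only the 50 and 27 percent reductions come from the study cited above.

```python
# Expected rear-end crashes for a mixed fleet: each segment is weighted by
# its share of the fleet and by the crash reduction of its equipment.

fleet = [
    (0.50, 0.00),  # no crash-avoidance feature (hypothetical share)
    (0.30, 0.50),  # automatic emergency braking: ~50% reduction (IIHS)
    (0.20, 0.27),  # forward collision warning only: ~27% reduction (IIHS)
]
baseline = 1_000.0  # hypothetical rear-end crashes with no features at all

expected = sum(share * baseline * (1 - effectiveness)
               for share, effectiveness in fleet)
print(f"Expected rear-end crashes: {expected:.0f}")  # 500 + 150 + 146 = 796
```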

“Unfortunately, the discussion (about automated vehicles) is way ahead of the technology,” Zuby says, adding that enforcing existing laws and making proven safety features standard would go a long way toward reducing crashes.

Driverless utopia, the vision that fully automated vehicles will safely transport distracted and tired people from place to place, remains a long way off. Until then, pass-off risk will complicate causation.

The CAS task force’s call for more data, so insurers can adjust to automated technology, is important. “Pricing a risk appropriately will ensure a potentially lifesaving product will reach the market in the most efficient manner — too expensive and it may hinder vehicle sales, while not charging enough will lead to conventional vehicles subsidizing a new hazard on the road. As an actuary, data collection is crucial for proper pricing,” Charak says.
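Charak’s pricing point rests on the most basic actuarial identity: the pure premium is claim frequency times claim severity, so misjudging either input, for lack of data, misprices the risk in one direction or the other. A minimal sketch with hypothetical figures:

```python
# Pure premium = expected claim frequency x expected claim severity.
# All inputs are hypothetical placeholders for illustration.

def pure_premium(frequency: float, severity: float) -> float:
    """Expected loss cost per vehicle-year."""
    return frequency * severity

conventional = pure_premium(frequency=0.05, severity=8_000)  # $400
av_assumed = pure_premium(frequency=0.02, severity=15_000)   # $300
av_actual = pure_premium(frequency=0.04, severity=15_000)    # $600

# Charging $300 when the true cost is $600 leaves a shortfall that the
# rest of the book, i.e., conventional policyholders, ends up subsidizing.
print(conventional, av_assumed, av_actual)
```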

Until manufacturers, insurers, lawmakers, regulators, researchers and others can be better informed, the automated car dialogue will continue to be plagued by hopeful statistics of a truly uncertain future.


Annmarie Geddes Baribeau has been covering insurance and actuarial topics for more than 25 years. Her blog can be found at www.insurancecommunicators.com.


1 NHTSA, “Automated Driving Systems 2.0: A Vision for Safety,” https://www.nhtsa.gov/sites/nhtsa.dot.gov/files/documents/13069a-ads2.0_090617_v9a_tag.pdf
2 “Restating the National Highway Transportation Safety Administration’s National Motor Vehicle Crash Causation Survey for Automated Vehicles,” The CAS Autonomous Vehicle Task Force, December 2014.
3 https://www.gpo.gov/fdsys/pkg/CRPT-115srpt187/pdf/CRPT-115srpt187.pdf
4 http://www.thedrive.com/sheetmetal/17962/federal-autonomous-car-legislation-blocked-in-senate
5 https://www.nytimes.com/2017/02/24/technology/anthony-levandowski-waymo-uber-google-lawsuit.html
6 https://www.washingtonpost.com/news/innovations/wp/2017/03/29/we-know-more-about-that-crash-involving-ubers-self-driving-car/?utm_term=.c4bc56675710
7 https://apnews.com/ee71bd075fb948308727b4bbff7b3ad8
8 https://www.wired.com/2017/04/ubers-former-top-hacker-securing-autonomous-cars-really-hard-problem/
9 https://www.wired.com/2017/04/ubers-former-top-hacker-securing-autonomous-cars-really-hard-problem/
10 https://www.transportation.gov/sites/dot.gov/files/docs/AV%20policy%20guidance%20PDF.pdf, page 10.

A Call for New Datasets

To measure the risk potential of driverless cars, the Casualty Actuarial Society’s Automated Vehicle Task Force’s latest report, “Automated Vehicles and the Insurance Industry: A Pathway to Safety: The Case for Collaboration,” recommends the following datasets for collection:

  • Driver Skill Deterioration. The more the technology is in control, the more likely human drivers are to fall out of practice. This dynamic risk needs constant monitoring, as driver proficiency may change over time.
  • Pass-Off Risk. This occurs when technological control transfers to human drivers, either by their choice or when the vehicle encounters a scenario it is unable to handle.
  • Other Interaction with Drivers, Pedestrians and Bikers. Drivers’ reactions to others can change due to age, experience, technology familiarity, mood, etc.
  • Animal Hits. Animals may be even more unpredictable than people. State Farm, for example, estimates that collisions between vehicles and deer exceed 1.2 million annually. Meanwhile, the National Highway Traffic Safety Administration’s 2008 “National Motor Vehicle Crash Causation Survey” lists animals as the cause in 1.0 percent of police-reported accidents.
  • Hacking. The more technology in the vehicle, the greater the potential vulnerability to hacking.
  • Random Errors. The task force assumes technological errors will still occur.
  • Unknown. It is important to include a placeholder for unpredictable events.
  • Incident Severity Risks. By dividing the automated vehicles into their respective risk components, actuaries can create a risk management structure that minimizes severity of unpreventable incidents. These data measures include speed, pedestrians, location and vehicle design.

 

Fatal Lessons

The first driverless car fatality provides insight into the complexities of causation and pass-off risk.

By several accounts, Joshua D. Brown of Canton, Ohio, was a driverless car enthusiast. On May 7, 2016, the former Navy SEAL and founder of Nexu Innovations11 was relying on the Autopilot feature of his 2015 Tesla Model S 70D while driving near Williston, Florida.

As a white tractor-trailer crossed an intersection that lacked a traffic light, neither the car nor the driver detected the impending crash. The automobile, purportedly set at 74 miles per hour on cruise control,12 barreled under the truck, which sheared off the car’s roof, before continuing through a drainage culvert and two wire fences, breaking a utility pole and finally landing in a residential front yard.13

The observations and conclusions of two federal agencies showcase the complexities of determining causation in accidents involving automated vehicle technology and human drivers.

The National Highway Traffic Safety Administration’s (NHTSA’s) January 2017 incident inspection report emphasizes the need for drivers to pay constant attention to traffic conditions so they can respond to potential incidents while an advanced driver assistance system (ADAS) is operating. Among its findings, the National Transportation Safety Board’s (NTSB’s) September 2017 investigation noted that both the truck driver and Brown had sufficient time to prevent the crash.

The NHTSA investigation did not identify defects in the Autopilot system’s design or performance, though it allowed for potential safety defects in the car. However, the NTSB report determined that the forward collision warning system “did not provide an alert and the automatic emergency braking did not activate.”14 Further, the operational design of Tesla’s Autopilot permitted prolonged driver disengagement and allowed the driver to use the automation in ways inconsistent with the automaker’s guidelines and warnings.15 These factors were also noted as part of the accident’s probable cause.

For its part, Tesla has since upgraded the software to rely more on radar than on cameras to improve its accuracy in detecting hazards. The update also adds a feature that disables Autopilot if the driver repeatedly ignores requests to hold the steering wheel.16

The incident also highlights the “pass-off risk” discussed in the Casualty Actuarial Society’s Automated Vehicle Task Force’s latest report, “Automated Vehicles and the Insurance Industry: A Pathway to Safety: The Case for Collaboration.” Unless an automated vehicle can successfully navigate all the potential hazards that arise when driving, pass-off risk will play a role in accident cause and, potentially, liability.

In 2018, two more people died in incidents involving driverless car technology. Like Brown, a California driver was killed while his Tesla Model X was in Autopilot mode.17 In Arizona, a pedestrian was struck and killed by an Uber self-driving car while crossing the street.18

11 http://www.legacy.com/obituaries/triblive-murrysville-star/obituary.aspx?n=joshua-d-brown&pid=179986286&fhid=9878
12 http://www.abajournal.com/magazine/article/selfdriving_liability_highly_automated_vehicle
13 https://www.ntsb.gov/news/events/Documents/2017-HWY16FH018-BMG-abstract.pdf, page 1.
14 https://www.ntsb.gov/news/events/Documents/2017-HWY16FH018-BMG-abstract.pdf, page 2.
15 https://www.ntsb.gov/news/events/Documents/2017-HWY16FH018-BMG-abstract.pdf, page 3.
16 http://www.iihs.org/iihs/sr/statusreport/article/51/8/1
17 http://www.bbc.com/news/world-us-canada-43604440
18 https://www.scientificamerican.com/article/uber-self-driving-car-fatality-reveals-the-technologys-blind-spots1/