BY JEREMY ROGERS, ESQ.
In the near future, driving just won’t be the same as it used to be.
Tasks such as placing your hands at 10 and 2 on the steering wheel, looking for cross traffic at intersections, checking your blind spot before changing lanes, and staying attentive to surrounding traffic and weather conditions may well become obsolete because of advances in automotive technology.
In place of old driving maneuvers, “drivers” will be able to program their autonomous, self-driving vehicles by simply speaking the desired destination into an in-dash microphone and by keeping their hands off the wheel (if the vehicle is even equipped with a steering wheel).
Indeed, the beginning of the driving experience revolution is already here. Many drivers are quite familiar with electronic stability control and parking assistance, which parallel parks the vehicle without any driver input or manual operation of the steering wheel. Current vehicles are also equipped with warnings for forward collisions, lane departures and intersection violations. All of these features have been implemented to make the driving experience safer.
Accidents will happen
Yet despite all of these current and future safety advances, as long as there are at least two vehicles on the road, it is inevitable that they will find a way to crash into each other. As everyone knows, technology is fallible and accidents happen. And once this new technology fails and accidents involving autonomous cars do occur, the types of claims and potential at-fault parties will likely be much different from those seen today.
Instead of a simple tort negligence action between two drivers, future collisions may trigger product liability actions against automakers, component manufacturers, federal and state agencies, and various municipalities.
Here is what insurance professionals need to know in order to manage the upcoming product liability risks posed by self-driving vehicles.
Fully autonomous vehicles are coming down the road
According to the U.S. Department of Transportation’s National Highway Traffic Safety Administration (NHTSA), in its May 2013 “Preliminary Statement of Policy Concerning Automated Vehicles,” there are five levels of vehicle automation:
- Level 0 (no automation). The driver is in complete and sole control of the primary vehicle controls and is responsible for monitoring the roadway and traffic conditions.
- Level 1 (function-specific automation). The vehicle has one or more specific control functions (braking, steering, throttle or motive power) that are automated, but the driver maintains overall control of the vehicle.
- Level 2 (combined function automation). The vehicle has at least two primary control functions designed to work in unison to relieve the driver of control of those functions, but the driver is still responsible for monitoring the roadway.
- Level 3 (limited self-driving automation). The driver cedes all key driving functions under specific traffic or other conditions, but must be available to resume control once those conditions no longer exist.
- Level 4 (full self-driving automation). The vehicle is designed to perform all primary driving functions and monitor roadway conditions for the entire length of travel.
Self-driving vehicles at Levels 3 and 4 are still in the testing phase.
At present, California, Florida, Michigan, Nevada, North Dakota and the District of Columbia have enacted statutes authorizing the testing of autonomous vehicles. Tennessee has a law in place forbidding its municipalities from prohibiting automated vehicles that otherwise comply with all local safety regulations. Virginia’s governor has entered a partnership allowing testing of self-driving cars to take place in the “Virginia Automated Corridors.” And Arizona’s governor ordered state agencies to support the testing of automated vehicles on its roadways and established an oversight committee. Many other state legislatures are also considering regulations on how to incorporate testing of autonomous vehicles within their state lines.
What’s under the hood of a self-driving car?
Automakers such as Audi, BMW, Ford, Honda, Mercedes, Nissan, Tesla, Toyota and Volvo are currently testing some form of self-driving cars. While each manufacturer will certainly include features unique to its brand, all autonomous vehicles will likely contain several shared components that allow them to operate properly, such as cameras, sensors, GPS tracking, radar, lasers, cybersecurity software, and “vehicle-to-vehicle” (V2V) communication technology.
In a February 2013 press release, the NHTSA suggested that V2V technology “would improve safety by allowing vehicles to ‘talk’ to each other and ultimately avoid many crashes altogether by exchanging basic safety data, such as speed and position, ten times per second.”
Computers and cars both crash (into each other)
In its February 2015 “Critical Reasons for Crashes Investigated in the National Motor Vehicle Crash Causation Survey,” the NHTSA estimated that 94% of motor vehicle accidents were attributable to human behavior (e.g., drunk driving, speeding and driver inattentiveness), as opposed to 2% for vehicle conditions, 2% for environmental conditions and 2% for unknown reasons. The primary purpose of autonomous vehicles is to reduce the number of accidents, in part, by eliminating human error. However, the risk of automotive accidents will always exist because of human behavior, emergencies, unpredictable events and automated system failures.
While autonomous technology is coming, a segment of the driving public will prefer to remain low-tech (and retain full control of their vehicles). Thus, the self-driving cars of the future will need to co-exist with the human drivers of the past, which introduces an innumerable combination of collision scenarios. At present, for instance, most accidents involve only human driver vs. human driver situations. In the coming years, however, there will be accidents involving human driver vs. semi-automated driver, human driver vs. fully automated driver, semi-automated driver vs. semi-automated driver, semi-automated driver vs. fully automated driver, and fully automated driver vs. fully automated driver, as well as other combinations (e.g., single-vehicle collisions and multi-vehicle chain-reaction collisions).
Determining the causes of self-driving vehicle collisions will also be more complicated and present some unique challenges. Was the accident due to human error in operating a vehicle with no or limited automation? Did the collision occur due to an inappropriate or unsuccessful driver override? Did the vehicle’s software malfunction? Was it infected by a computer virus, breached by hackers or cyber terrorists, or simply outdated because it had not received regular system upgrades?
Assessing liability in autonomous vehicle accidents
As technology’s role in the driving experience increases and the element of human control decreases, the types of claims most likely to arise from self-driving vehicle collisions will be based on product liability theories. Generally, claimants in product liability actions can sue anyone involved in the manufacture or sale of the product, including retailers, distributors, importers and original manufacturers.
In the autonomous vehicle context, product liability claims have the potential to involve automakers, manufacturers of the cars’ critical technology and components, and car dealerships.
Additionally, based on the type of collision involved, product liability claims may be pursued under a design defect theory (that is, the product was dangerous as designed, and a safer alternative was available and should have been used), a manufacturing defect theory (an error occurred in the manufacturing process that made the product dangerous), or under both theories.
Parties involved in autonomous vehicle collisions may also attempt to file a negligence action (claiming that the manufacturer or retailer knew or should have known of a specific danger of the product) or a breach of warranty action (the manufacturer or retailer failed to fulfill an express or implied warranty).
Beyond the sellers and manufacturers of autonomous vehicles, litigants may also seek to file suit against and attribute fault to various federal and state agencies and local municipalities. This is so because the infrastructure — accurate traffic and roadway conditions, communication devices and towers, electronic information bandwidth, etc. — upon which autonomous technology relies will likely be built, regulated and maintained by governmental entities. In the event this infrastructure fails because of lack of funding, natural disaster, acts of terrorism or human error, an injured claimant (and his or her counsel) may sue such entities to ensure that there are no empty chairs left at the defense table at trial.
Buckle-up and get ready for the ride
Self-driving vehicles are on the horizon and will be in dealership showrooms relatively soon. Fortunately, there is adequate time for the insurance industry to decide how it wants to get into the flow of autonomous vehicle traffic. Insurers will need to consider how to incorporate self-driving vehicles into their business models, from policy declarations and endorsements to property damage assessment and replacement, claims handling, liability determinations and apportionments of fault. The coming of autonomous vehicles will be a sea change in the driving experience.