
Liability for autonomous vehicles



    Thanks to driver assistance systems, semi-automated vehicles are already being driven on our roads. It seems likely that it is only a matter of time before fully automated vehicles are also rolling off the assembly lines. How will these developments affect liability for claims due to road accidents?
    To date, many national legal systems have stipulated that each vehicle must have a driver. This requirement is based primarily on the 1968 Vienna Convention on Road Traffic, which has 76 signatory countries worldwide, including almost all European states. Article 8 of the Convention states: “Every moving vehicle or combination of vehicles shall have a driver.” In 2014, however, this regulation was modified to permit automated vehicles, provided drivers can deactivate or override assistance systems at all times. As soon as this is implemented in national legal systems, the way will be clear for fully automated vehicles, although at least one person must still be on board to take control of the vehicle if necessary.

    At the same time, a number of countries have set out the requirements for testing automated vehicles on public roads. This ensures that certain safety standards are maintained and that victims can be indemnified if an accident occurs. Fully automated vehicles can therefore now be tested under everyday conditions.

    More, fewer or different risks?

    The majority of road accidents are caused by human error. At first glance, this would suggest that automated vehicles are safer than those driven (only) by humans: automated vehicles do not drive under the influence of alcohol or drugs, do not fall asleep, cannot be distracted, observe speed limits and maintain minimum distances. At the same time, however, automating vehicles also creates new risks. On the one hand, driver assistance systems can perform many routine tasks more effectively than a human driver – for example, braking optimally, keeping in lane or judging the distance to other road users. On the other hand, they do not possess all the skills at the disposal of an experienced human driver – at least not yet. This is particularly evident in complex situations where a quick reaction is needed and the driver has to choose between several hazards. Even the best program for controlling driver assistance systems cannot predefine every potential set of circumstances arising on the road.

    To complicate matters further, numerous legal systems have to be taken into account when programming extensively automated vehicles: after all, vehicles regularly cross national borders. Even standardising legal requirements within the EU or between US states is difficult, lengthy and will probably never be entirely comprehensive. On a global scale, such standardisation is unlikely to be achieved in the foreseeable future. Consequently, the vehicle itself must detect when it crosses a national border and adapt its driving behaviour to the local legal system. Furthermore, errors in the programs controlling the driver assistance systems can cause accidents which would not have occurred had those systems not been in use. And the more closely automated vehicles are connected to other systems, the greater their exposure to typical cyber risks: hackers could access the on-board computer and cause an accident, or the data collected by an on-board computer to control a vehicle could be accessed and misused by unauthorised parties.
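    The idea of a vehicle adapting its behaviour to the local legal system on crossing a border can be illustrated with a minimal sketch. All country codes, rule values and the fallback defaults below are invented for illustration; real regulatory parameters and their on-board representation would be far more extensive.

```python
# Hypothetical sketch: adapting driving parameters to the detected
# jurisdiction. All values are invented for illustration only.
from dataclasses import dataclass


@dataclass(frozen=True)
class TrafficRules:
    max_speed_kmh: int       # default motorway speed limit
    min_gap_seconds: float   # required following distance
    overtake_right: bool     # whether overtaking on the right is allowed


# Invented example rule sets, keyed by country code
RULES_BY_COUNTRY = {
    "DE": TrafficRules(max_speed_kmh=130, min_gap_seconds=1.8, overtake_right=False),
    "FR": TrafficRules(max_speed_kmh=130, min_gap_seconds=2.0, overtake_right=False),
    "US": TrafficRules(max_speed_kmh=113, min_gap_seconds=2.0, overtake_right=True),
}


def on_border_crossing(country_code: str) -> TrafficRules:
    """Return the rule set for the detected country, falling back to
    conservative defaults if the jurisdiction is unknown."""
    return RULES_BY_COUNTRY.get(
        country_code,
        TrafficRules(max_speed_kmh=80, min_gap_seconds=2.5, overtake_right=False),
    )
```

    The conservative fallback reflects the point made above: where the applicable rules cannot be determined, the system should default to the most cautious behaviour rather than carry over the previous jurisdiction's parameters.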

    Who is liable in the case of an accident?

    Whenever automated vehicles are involved in road accidents, the question of liability arises. The vehicle keeper’s liability is not affected by the vehicle’s automation. In the event of a faulty driver assistance system, however, it is supplemented by the manufacturer’s liability. That the manufacturer of a vehicle is held liable after a road accident is nothing new. Such product liability cases have been brought, for instance, when the liability cover of the keeper of the vehicle which caused the accident was inadequate – a situation which is not improbable following serious road accidents in the United States – or when numerous similar accidents occurred due to defects in a large number of vehicles. However, it may be assumed that, as driver assistance systems become more widespread, manufacturers worldwide will be exposed to liability claims much more frequently than in the past.

    Modifying the duty of care

    As long as vehicles are not fully automated, or as long as the law requires the physical presence of a human driver, the driver’s duties of care will have to be modified: When does not using installed driver assistance systems qualify as negligence? And in which situations would it be negligent for a driver to let the driver assistance system make decisions instead of taking control personally? These questions could well lead to uncertainty for many years: given the innumerable constellations possible, the point at which a certain form of behaviour constitutes negligence can never be exhaustively defined by law. Instead, it will invariably remain a matter of case law, which can only develop gradually once a sufficient number of different cases have been decided by the higher courts.

    Trade-off by the on-board computer

    Programming driver assistance systems for situations in which a crash appears unavoidable is also legally difficult. Not uncommonly, two risks have to be weighed up and a decision made in a split second. While human drivers act intuitively, driver assistance systems have to be programmed in advance to deal with such situations. This is simple enough when damage to the car has to be weighed against the loss of a human life. But what if the driver assistance system has to choose between running over several pedestrians and risking serious injury to the people inside the vehicle under its control?
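    The split-second trade-off described above can be reduced, in the simplest case, to minimising expected harm across the available manoeuvres. The sketch below is purely illustrative: the manoeuvre names, probabilities and severity scores are invented, and reducing the ethical and legal weighing of harms to a single numeric score is precisely what makes such pre-programmed decisions so contested.

```python
# Hypothetical sketch of a pre-programmed crash trade-off: choose the
# manoeuvre with the lowest expected harm. All options and scores are
# invented; real systems and the legal weighting of harms are far more
# complex and contested.

def expected_harm(probability: float, severity: float) -> float:
    """Expected harm of an outcome: likelihood of the outcome occurring
    multiplied by an (invented) severity score."""
    return probability * severity


def choose_manoeuvre(options: dict) -> str:
    """Pick the option minimising expected harm.

    `options` maps a manoeuvre name to a (probability, severity) pair.
    """
    return min(options, key=lambda name: expected_harm(*options[name]))


# Invented example: near-certain property damage vs. a lower-probability
# but far more severe personal injury risk
options = {
    "brake_hard": (0.9, 1.0),    # almost certain, but only vehicle damage
    "swerve_left": (0.3, 10.0),  # less likely, but severe injury risk
}
```

    Under these invented numbers, hard braking has the lower expected harm (0.9 versus 3.0) and would be selected – but the choice of severity scores is exactly the kind of predefined value judgement that, as the text notes, cannot anticipate every set of circumstances.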

    Liability and “nudging”

    Further legal aspects of automated vehicles could become increasingly important, particularly in highly litigious jurisdictions such as the United States. Automated vehicles are supposed to relieve and assist the driver – among other things, by noting the driver’s customary destinations and habits. But what if the driver intends to use the vehicle for illegal purposes? Or if the vehicle supports the driver’s unhealthy or illegal habits, for instance by drawing the attention of an alcoholic to a shop selling alcoholic beverages in the neighbourhood?


    Most legal obstacles for automated vehicles will be overcome in the next few years. Even so, not everything that is technically possible is likely to be unconditionally permitted on public roads. Particularly in the early days, there will be considerable legal uncertainty surrounding accidents involving automated vehicles. At the same time, it will be many years before automated vehicles are introduced and become widespread – not only for technical reasons and because of the costs associated with the technology, but also, and not least, because of the public acceptance required. Lawmakers and courts will consequently have sufficient time to consider the legal problems associated with automated vehicles and to prepare solutions. It is important that the insurance industry likewise develops the required know-how in good time, so that it can respond appropriately to this development.
    Munich Re Experts
    Ina Ebert
    Leading Expert, Liability and Insurance Law

