by Franziska Jandl
The use of intelligent robots and other autonomous systems is creating gaps in responsibility as autonomy advances. To close the gaps, the EU Commission, science and industry are debating the introduction of an electronic person, among other options.
It is becoming more and more difficult to decide who is responsible for an action as systems increasingly develop and act on their own. For example, if an autonomous machine exceeds its authority when placing an order, or a system learns something incorrectly and this results in damage without any human intervention – who is to blame?
One solution being discussed is the introduction of an electronic person. Liability assets would be assigned to this e-person so that injured parties could recover from those assets when the injuring party itself cannot be sued.
Industry associations and the insurance industry are not in favor of this idea. One problem they see is that a machine, unlike a human being, has no drive to safeguard its economic existence and therefore no incentive not to put its liability capital at risk.
For the foreseeable future, gaps in responsibility with regard to concluding contracts or the contractual liability of autonomous robots can be closed with the aid of legal fiction and partial legal capacity.
However, there is a need for action when it comes to non-contractual liability and a corresponding change in product liability law aimed at solving problems related to evidence.
Company lawyers are advised to keep an eye on the discussions about strengthening producer liability. For intelligent products with a higher degree of autonomy, it will be especially important for the future to ensure that systems do not behave unpredictably in practice.
The behavior of autonomous systems with artificial intelligence can be predicted and understood only to a certain degree. “As autonomy progresses, it is becoming increasingly difficult to attribute actions to computers that are making their own decisions,” explains Dr. Martin Ebers, lecturer at Humboldt University of Berlin and Chairman of the Robotics & AI Law Society (RAILS). The society’s interdisciplinary members promote the responsible development of intelligent systems and participate, among other bodies, in the EU Commission’s High-Level Expert Group on Artificial Intelligence, contributing to proposals for a legal framework that facilitates technological development as well as transparency and equal treatment.
According to Ebers, gaps in responsibility arise, for example, in non-contractual liability: “Under the applicable law, a tortious claim presupposes fault on the part of the injuring party.” The user of an autonomous system could, however, argue that the damage was not foreseeable and that the malfunction could not have been avoided even with the best possible monitoring in place. The manufacturer’s product liability is excluded if the defect could not have been detected according to the state of the art in science and technology.
In regard to contractual liability, the applicable law is based on a human actor such as a manufacturer, operator or user who could have foreseen and prevented the system’s damaging behavior.
In the case of fully autonomous machines, a further question arises: Who is responsible for a declaration of intent if the machine oversteps its mandate or is simply mistaken about its level of authority?
“After considerable opposition to the proposal to introduce an e-person, the Commission is considering how to adapt product liability instead.”
– Dr. Martin Ebers, Chairman of the Executive Board of Robotics & AI Law Society (RAILS)
The proposed solution
Taking this into account, the European Parliament called on the EU Commission last year to address the legal and ethical implications of artificial intelligence. “The proposal to introduce the status of an electronic person was considered so that robots could be held responsible for decisions and damages they cause once a certain threshold of autonomy is exceeded,” reports RAILS Chairman Martin Ebers. If the system causes damage or exceeds its authority when making a legal declaration, it would not be the operator but the robot itself that is held liable. With the manufacturer’s help, the e-person could be endowed with recoverable assets – for example through compulsory insurance or public funds – and this information could be recorded in a register.
Industry associations do not think highly of the e-person: “This is not a targeted approach and would raise many new issues instead of solving the existing ones,” says Patrick Schwarzkopf, Managing Director at the VDMA Robotics + Automation Association. Representatives of the insurance industry and of the Plattform Industrie 4.0 – which includes the German Academy of Engineering Sciences, representatives from the University of Kassel, and high-ranking members of industry and trade unions – fear that a machine lacks the incentive not to arbitrarily put its own liability assets at risk because, unlike a human being, it is not driven by the need to safeguard its existence. Another concern is that manufacturers could evade their responsibility, as the liability assets would be tied only to the e-person, similar to the situation for German limited liability companies (GmbH). “With this initiative, the Parliament has taken a broad approach to this issue. Now it is up to the Commission to put it under a microscope from a technological and legal point of view,” says Dr. Susanne Bieller, Project Manager of the European robotics association EUnited Robotics, which represents industrial and professional service robotics.
“The introduction of an electronic person is not a targeted approach and would raise many new issues instead of solving the existing ones.”
– Patrick Schwarzkopf, Managing Director of the VDMA Robotics + Automation Association
What are the alternatives?
According to Bieller, members are far more concerned with the issue of liability than with the e-person. After considerable opposition to the proposal of introducing an e-person, the Commission and other bodies in Brussels are now considering how to adapt product liability instead. One conceivable option is strict and unlimited liability for manufacturers of robots and other autonomous systems, which would close gaps in non-contractual liability. “If manufacturers are placed at such high risk, they will never develop and market these systems,” Susanne Bieller of EUnited Robotics points out.
According to RAILS Chairman Ebers, this could be avoided if the system’s user – and not the manufacturer – were held liable, regardless of fault, for operating a particularly dangerous technology. For example, the operator of an autonomous production machine could be made liable in the same way as drivers, who are treated as a source of risk in road traffic. Risks such as injury to a worker or damage caused by a defect in the end product could be covered by insurance.
There are also alternatives to introducing an e-person for gaps in responsibility that arise when concluding a contract. Martin Ebers: “The legal system already uses legal fiction in many ways such as tolerated and apparent authority (Duldungs- und Anscheinsvollmacht). If an autonomous system makes a declaration of intent that is not in line with the operator’s specific business intention, you can get far with this approach.”
“If manufacturers are placed at such high risk, they will never develop and market these systems.”
– Dr. Susanne Bieller, Project Manager, EUnited Robotics
Consequences for company lawyers
What do company lawyers need to do? During the development phase of products that incorporate artificial intelligence, documentation is a must: What measures have been taken to avoid damage? And how was the algorithm trained? According to RAILS Chairman Ebers, the most important factor in the future will be ensuring that a system does not behave unpredictably in practice (see also box: “Reduce liability risks”).
In a conversation with the Deutsche Presse-Agentur (German press agency) at the end of September, IBM manager Bob Lord predicted that computers would not make autonomous decisions. According to Patrick Schwarzkopf, the autonomy of robots and similar systems is likewise subject to narrowly defined limits: “It must be managed within the existing legal framework, i.e. in line with the responsibility of natural and legal persons.” However, in view of the rapid pace of technological development and increasing autonomy, company lawyers should keep themselves informed about the current discussions on liability and collaborate with colleagues from the development department to pool technical and legal expertise.
“It is particularly important to get people involved in the process and provide them with effective training.”
– Friedrich Wimmer, Head of IT Forensics & Cyber Security Research at Corporate Trust Business Risk & Crisis Management GmbH
Reduce liability risks
- Development: Ensure that systems do not behave unpredictably in practice – for example, by limiting the learning environment during programming, by allowing only supervised learning under human control, or by freezing the system’s learning once it reaches the market.
- Documentation: To prove that a system was not defective when it was placed on the market and that the manufacturer could not have detected the defect even according to the state of the art, documentation is a must: What measures have been taken to avoid damage? How was the algorithm trained?
- Product monitoring: Manufacturers must retain control over the further development of the system – for example, ensuring that a self-driving car cannot update itself autonomously while driving in the operating environment.