As part of its ‘first mover’ effort to build (dictate?) the world’s benchmark AI regulatory regime, the European Commission has recently released two proposals seeking to adapt liability rules to the digital age: one tweaks the long-standing Product Liability Directive to fold in AI systems; the other would establish a new AI Liability Directive specifying new procedural rules for the application of Member States’ existing non-contractual civil liability laws to AI.
Why does AI require specific liability rules?
The Commission considered it needed to act (rather than leaving it to individual Member States) because existing liability frameworks appear ill-prepared for the emerging dominance of AI-enabled products in international and local markets. In its 2021 study, the Commission found that uncertainty over liability ranked highly among the barriers faced by companies seeking to implement AI.
The Commission identified two major shortcomings:
- current liability frameworks are not, in practice, well suited to AI-enabled products:
“… victims need to prove a wrongful action or omission by a person who caused the damage. The specific characteristics of AI, including complexity, autonomy and opacity (the so-called “black box” effect), may make it difficult or prohibitively expensive for victims to identify the liable person and prove the requirements for a successful liability claim. In particular, when claiming compensation, victims could incur very high up-front costs and face significantly longer legal proceedings, compared to cases not involving AI.”
- the courts of Member States will struggle to adapt historic civil liability rules to rapidly developing AI models, producing inconsistent and ad hoc outcomes. Victims bringing compensation claims may find the rules applied differently depending on the specific characteristics of the AI involved, making it difficult to ensure a fair outcome, while businesses face a growing burden because they cannot predict how those liability rules will be applied to their technologies across Europe.
What has the EC done about it?
Rather than adopt a single comprehensive liability framework, the European Commission endeavours to ‘cobble together’ an approach which amends the existing EU-level product liability rules to deal with physical loss or damage caused by AI, and deals with other ‘wrongs’ caused by AI by establishing “uniform rules for access to information and alleviation of the burden of proof”.
AI Liability Directive
There are two key features of the AI Liability Directive:
Presumption of Causality:
At a high level, a rebuttable presumption of causality is proposed to address the difficulties claimants face in establishing a causal link between non-compliance with a duty of care and the output produced by an AI system under the tort or negligence-type laws of individual Member States. This reversal of the burden of proof shifts responsibility to the defendant to rebut the presumption that the AI caused the harm to the complainant. The Commission considered a rebuttable presumption of causality to be the ‘least burdensome measure to address the need for fair compensation’ of the victim.
For most AI systems, the onus of proof will be reversed where: (a) the conduct of the developer or deployer of the AI did not meet a duty of care under EU or national law that was directly intended to protect against the damage that occurred; (b) it is reasonably likely that the fault influenced the AI system’s output (or its failure to produce an output); and (c) that output (or failure to produce an output) gave rise to the damage.
For high-risk AI systems (as defined in the EU’s AI Act), the complainant can trip the onus reversal under limb (a) above by showing that:
- the training, validation and testing data sets did not meet the quality criteria of the AI Act;
- the AI system was not designed and developed in a way that meets the transparency requirements of the AI Act;
- the AI system was not designed and developed in a way that allows for an effective oversight by natural persons;
- the AI system had inadequate cybersecurity protections; or
- when problems were discovered, appropriate corrective actions were not ‘immediately taken’.
Right to Access:
Victims will be afforded a right of access to evidence from companies and suppliers where ‘high-risk AI’ is involved. In common law jurisdictions like Australia, we have a process of discovery embedded in our legal rights and privileges. However, under the civil law systems of Europe, discovery is not a guaranteed right and is instead at the court’s discretion.
Product Liability Directive
The EU Product Liability Directive, which has been on the books since 1985, applies a no-fault (strict) liability regime for defective products which cause death, personal injury (including medically recognised psychological harm) or damage to property. The key modifications the European Commission proposes to extend the directive to AI are as follows:
- Software or AI systems and AI-enabled goods will be expressly defined as “products”. Software will be caught “irrespective of the mode of its supply or usage, and therefore irrespective of whether the software is stored on a device or accessed through cloud technologies.” But “[i]n order not to hamper innovation or research”, free and open-source software developed or supplied outside the course of a commercial activity is not caught.
- The concept of the “defectiveness” of a product will be expanded to reflect the characteristics of digital products. With the increasing prevalence of interconnection between products, this should include the impact of other products on the allegedly defective product. The effect on a product’s safety of its ability to learn should also be taken into account, “to reflect the legitimate expectation that a product’s software and underlying algorithms are designed in such a way as to prevent hazardous product behaviour.” As AI developers typically retain ongoing involvement with AI once deployed, the assessment of defectiveness should not, unlike with a lawnmower or other physical product, end when the AI is rolled out of the developer’s door. Finally, failures in cybersecurity protections should also be capable of rendering a software product ‘defective’.
- Compensation for “loss or damage” is expanded to include the loss or corruption of data, such as content deleted from a hard drive.
- As with the AI Liability Directive, courts will have enhanced powers to require ‘discovery’, and failure to comply will result in a reversal of the onus of proof. The courts will also have authority to reverse the onus of proof against a developer or deployer of AI where the court determines that, in light of the technical or scientific complexity of the case, it would be excessively difficult for the claimant to prove the product’s defectiveness or the causal link, or both. The draft Directive notes that:
“Technical or scientific complexity should be determined by national courts on a case-by-case basis, taking into account various factors. Those factors should include the complex nature of the product, such as an innovative medical device; the complex nature of the technology used, such as machine learning; the complex nature of the information and data to be analysed by the claimant; and the complex nature of the causal link, such as a link between a pharmaceutical or food product and the onset of a health condition, or a link that, in order to be proven, would require the claimant to explain the inner workings of an AI system. The assessment of excessive difficulties should also be made by national courts on a case-by-case basis.”
The response
The European Commission’s proposed approach has been lauded as a canny way of getting to a new liability regime for AI without opening the Pandora’s box of Member States’ individual civil liability laws.
However, the peak EU consumer body, BEUC, criticised the Commission’s ‘dual track’ approach of limiting strict liability to tangible loss or damage caused by AI while leaving other ‘wrongs’ to be addressed through fault-based rules:
“It is essential that liability rules catch up with the fact we are increasingly surrounded by digital and AI-driven products and services like home assistants or insurance policies based on personalised pricing. However, consumers are going to be less well protected when it comes to AI services, because they will have to prove the operator was at fault or negligent in order to claim compensation for damages…Asking consumers to do this is a real let down. In a world of highly complex and obscure ‘black box’ AI systems, it will be practically impossible for the consumer to use the new rules. As a result, consumers will be better protected if a lawnmower shreds their shoes in the garden than if they are unfairly discriminated against through a credit scoring system.”
The UK’s Ada Lovelace Institute argues that viewing AI liability through the individual rights frame inherent in both product liability and tort law misses the real damage an errant AI can cause. The opacity and complexity of AI can lead to broader social harms and inequitable outcomes from the use of AI-enabled products, concerns which are not remedied by an individualised product liability approach, however improved. The Institute argues that product liability assumes the victim is an individual and that the problem can be solved at the individual level, but AI is generating liability problems which affect entire groups of people.
Authors: Amelia Harvey and Peter Waters
Read more: Liability Rules for Artificial Intelligence