The development and deployment of artificial intelligence (AI) raises numerous legal questions – particularly regarding liability.
This article examines the current legal framework for liability for AI and AI-generated content, particularly under the German Civil Code (“BGB”) and the Product Liability Act.
However, it is also essential to look into the (not so distant) future, as liability law will be reshaped by the planned revision of the Product Liability Directive (ProdHaftRL) and the proposed AI-Liability Directive.
AI Liability under Currently Applicable Law (as of July 2024)
In the development, operation, use, and distribution of AI systems, liability is currently determined according to general legal principles – there are no AI-specific statutory provisions (yet).
I. Liability under the BGB
Liability for the use of AI or its deficiencies is not explicitly regulated in the BGB – which is not surprising for a law that came into force in 1900. Nevertheless, the law sets the standard of liability even in connection with AI:
- The user is always liable for content generated by AI – in case of doubt, this is the company that uses AI-generated texts and images in its name, whether as content on its website, in customer documents, or in marketing materials.
- “The AI” itself, however, is not liable. It lacks its own legal personality.
- The manufacturer may be liable under general principles only if, for example, the AI lacks contractually guaranteed characteristics or the manufacturer has failed to take sufficient security precautions within the AI, and this causes damage.
II. Liability under the Product Liability Act
The question of liability under the Product Liability Act is particularly exciting and passionately debated among lawyers. The key question is whether an AI system or an AI model can be classified as a product within the meaning of Sec. 2 ProdHaftG.
There are arguments that AI systems and models should be considered products within the meaning of the ProdHaftG due to their comparability with other software. The fact that they can be stored and distributed on physical media constitutes embodiment and thus satisfies the requirement of Sec. 2 ProdHaftG, which refers to “movable items.” Moreover, software is often qualified as an item within the meaning of Sec. 90 BGB, which supports its classification as a product. When software is installed and runs directly on the user’s system, this embodiment is clearly given. Furthermore, it can be argued that the use of standard software has a product character comparable to that of physical goods.
However, the wording of the law argues against this view: it explicitly mentions “electricity” as a non-physical product, which shows that the legislator recognized that this type of intangible product should also fall under product liability – yet chose not to extend the Act explicitly to software. Furthermore, the fact that software is explicitly defined as a product only in the course of the Product Liability Directive’s revision likewise argues against classifying it as a product under current law.
On balance, one must therefore assume that AI – at least currently – does not fall under the ProdHaftG. This will, however, soon change.
AI Liability in the Future
I. Coming Soon: The Amendment of the Product Liability Directive
Since January 24, 2024, the draft of the Product Liability Directive (ProdHaftRL) has been before the European Council. Its adoption before the end of 2024 is considered almost certain. Member states will then have 24 months to transpose it into national law. The new rules bring profound changes, including liability for software and thus for AI. But what exactly will change?
In a nutshell:
- Manufacturers, importers, and suppliers are liable for software, AI models, and AI systems;
- Reversal of the burden of proof: in many cases, defects or causality between defect and damage will be presumed in favour of the injured party – manufacturers, importers, and suppliers must rebut these legal presumptions to escape liability;
- Manufacturers or operators of AI systems cannot exclude their liability for substantial modifications;
- Extension of limitation periods.
1. Software as a Product
One of the key innovations is the explicit classification of software as a product under Art. 4(1)(1)(2) ProdHaftRL. This clarifies that AI models and AI systems, as software, are fundamentally subject to product liability. The accompanying discussions under current law (see above) would thus be settled. Manufacturers of AI systems would then also be liable under the ProdHaftG.
2. Reversal of the Burden of Proof
The Product Liability Directive provides for an effective reversal of the burden of proof, regulated in Art. 9 ProdHaftRL. Usually, the plaintiff must prove that a product is defective, that they suffered damage, and that the damage was caused by the product defect. The new directive stipulates that the product is presumed defective if the manufacturer fails to disclose relevant information, if the product does not meet prescribed safety standards, or if it obviously fails during normal use.
Moreover, the connection between product defect and damage is now presumed if the product is defective and the damage caused is of a kind typically consistent with the defect in question, Art. 9 para. 4 ProdHaftRL. In particularly complex cases, where proof is very difficult for the plaintiff, it will also be presumed that the product is defective and the damage was caused by it, as long as the plaintiff can credibly argue that this is likely. Manufacturers, importers, and suppliers must rebut these legal presumptions to escape liability.
3. No Liability Exclusions for Substantial Modifications
The new Product Liability Directive stipulates that manufacturers or operators of AI systems cannot exclude their liability if they make substantial modifications to a product. This includes changes through software updates or the continuous learning of an AI system. A substantial modification, defined in Art. 4(17b) ProdHaftRL, is one that alters the original performance, purpose, or nature of the product and thereby creates new hazards or increases the level of risk beyond what was foreseen in the manufacturer’s original risk assessment.
Art. 10(2) ProdHaftRL clarifies that manufacturers remain liable even if a product’s defectiveness arises from such substantial modifications (in lit. d) or from missing necessary updates (lit. c). The prerequisite is that the manufacturer must have control over these aspects.
For companies, this means paying particular attention to how they update and adapt AI systems and software. Any substantial change can affect liability, and strict documentation of these changes is crucial to minimize legal risks. Note that changes such as adding filters, adjusting metaprompts, and fine-tuning models may also be considered substantial modifications.
4. Extension of the Exclusion Period
The amendment also provides for an effective extension of the exclusion period from 10 to up to 25 years. Particularly relevant for AI is that not only products themselves but also substantially modified products are explicitly mentioned, and their placing on the market is decisive for the start of the exclusion period, Art. 14(1)(b) ProdHaftRL. For continuously self-learning systems, this means that strict documentation is required: each substantial modification can restart the clock, so without comprehensive documentation the exclusion period may effectively never expire, significantly increasing liability risks.
Action Required for AI System Providers
Attention: Providers of AI systems within the meaning of the AI Act are classified as manufacturers within the meaning of the Product Liability Directive. Based on the current draft ProdHaftRL, it can thus be assumed that providers of all AI systems, regardless of their risk classification, are affected by the directive.
Overall, the amendment necessitates significant action by providers of AI systems. Companies need to review and, where necessary, adjust their development and documentation processes to meet the new requirements, minimize liability risks, and discharge any burden of proof they may bear.
II. Coming Possibly: The AI Liability Directive
The AI-Liability Directive has been in the planning and conceptual phase for some time; a draft from the European Commission has been available since September 28, 2022. This directive, which has not yet been adopted, is intended to complement the Product Liability Directive (ProdHaftRL) by introducing specific provisions for high-risk AI systems and facilitating the enforcement of compensation claims. While the ProdHaftRL sets out general liability rules for all products, including AI systems, the AI-Liability Directive addresses the specific challenges and risks associated with high-risk AI systems under the AI Act. However, it remains uncertain whether and when the AI-Liability Directive will be adopted in its current form.
Conclusion
While manufacturers, importers, and suppliers of AI currently still benefit from general liability principles, such as the burden of proof lying with the injured party, they will soon have to contend with the tightened liability rules, especially those of the Product Liability Directive, and potentially with the requirements of the AI-Liability Directive concerning high-risk AI systems. Particularly in view of the reversal of the burden of proof, robust processes and documentation are necessary to keep one’s liability exposure as manageable as possible.
To-Dos for AI Providers:
- Ensure that all modifications and updates of AI systems are documented without gaps;
- Implement processes that meet the new requirements of the ProdHaftRL (and, as far as foreseeable, the AI-Liability Directive);
- Conduct regular risk analyses to identify potential liability risks early and minimize them through appropriate measures;
- Ensure that employees are trained on the new legal situation and the associated duties and responsibilities.
We are happy to guide you through this complex and evolving legal landscape. With our extensive expertise in IT law and AI and our practical approach, we offer you tailored solutions to minimize your legal risks and ensure your compliance. Contact us to ensure that your AI products are not only innovative but also legally compliant.