
Artificial intelligence (AI) is omnipresent; almost every company seems to be using AI or even developing its own AI systems. But it is not just companies that use AI; private individuals also make use of voice, image and video generation models. AI systems are now easily accessible and enable the creation of deceptively real, synthetic content. As a result, the call for more transparency is growing: for insight into how AI systems work, and for the use of AI to be clearly communicated, labelled and traceable. These demands were partly met by the AI Act, which came into force in August 2024 and created a legal framework for the regulation of AI for the first time.

This article provides an overview of the transparency obligations contained in the AI Act. For the most part, these obligations are vaguely formulated and leave considerable room for interpretation as to how they are to be understood and implemented in practice. More clarity is likely to come only from the guidelines to be drawn up by the Commission and the practical guidance to be prepared by the AI Office.

1. Am I affected by the transparency obligations of the AI Act?

Before addressing the individual transparency obligations, you should first check whether you fall within the scope of the AI Act at all (if you do not, there are logically no transparency obligations under the AI Act to fulfil). Are you actually using AI within the meaning of the AI Act? What is your role in relation to the AI system or model? You can find help in answering these questions in our article “Provider or operator? Deciphering the key roles in the AI Act“.

On the one hand, the scope of the transparency obligations depends on the role – are you a “provider” or an “operator”? On the other hand, the decisive factor is which category of the AI Act the AI system falls into. Our article “Risk classification according to the AI Act” provides information on the correct categorisation of your AI system.

The “provider” is defined in Art. 3 No. 3 of the AI Act. According to this, a provider is someone who is actively involved in the development of the AI system/model or who integrates an existing AI system/model into their own product and markets it under their own name or brand.

Anyone who uses an AI system under their own responsibility as part of their professional activity is categorised as an “operator” (see Art. 3 No. 4 AI Act). Private users of AI are expressly not covered by the obligations in the AI Act.

The central transparency obligations are aimed at the “providers” of AI systems/models, while “operators” have fewer obligations. Transparency obligations exist for high-risk AI systems within the meaning of Art. 6 of the AI Act, for certain AI systems pursuant to Art. 50 of the AI Act, and for general-purpose AI models pursuant to Art. 53 of the AI Act.

2. Transparency obligations for high-risk AI systems within the meaning of Art. 6 of the AI Act

a) For providers:

According to Art. 13 of the AI Act, providers are subject to the very vaguely formulated obligation to design and develop high-risk systems in such a way that their operation is sufficiently transparent for operators to interpret and use the system’s output appropriately. To this end, providers must supply instructions for use containing the information listed in Art. 13 para. 3 of the AI Act (including the name of the provider and the features, capabilities and performance limits of the AI system). In addition, Art. 11 of the AI Act requires providers to keep continuously updated technical documentation of the high-risk AI system, which must fulfil the general and specific requirements listed in Annex IV (a general description of the AI system, a detailed description of its components and development process, etc.). Art. 12 of the AI Act also stipulates that providers must ensure that the technology of the high-risk AI system enables the automatic recording of events. The records created in this way must be kept in accordance with Art. 19 of the AI Act.
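To illustrate the logging requirement of Art. 12, the following is a minimal sketch of how automatic event recording might look in practice. It is not a prescribed implementation: the field names, the system identifier and the wrapped decision are purely illustrative assumptions, not requirements taken from the AI Act or Annex IV.

```python
# Minimal sketch of automatic event recording for a hypothetical high-risk AI system.
# All field names and values are illustrative assumptions.
import json
import logging
from datetime import datetime, timezone

# One machine-readable record (JSON line) per event, appended to a log file.
logging.basicConfig(filename="ai_event_log.jsonl", level=logging.INFO, format="%(message)s")

def log_event(system_id: str, model_version: str, input_ref: str, output: str) -> None:
    """Append one record per inference event, with a timestamp for traceability."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "input_ref": input_ref,   # reference to the input data, not the data itself
        "output": output,
    }
    logging.info(json.dumps(record))

# Example: record a single decision made by a hypothetical system "credit-scoring-v2".
log_event("credit-scoring-v2", "2.3.1", "application-0815", "score=0.42")
```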

b) For operators:

Under certain conditions, operators are subject to information obligations with regard to the use of high-risk AI systems. For example, operators of high-risk AI systems that make decisions concerning natural persons, or that support such decisions, must inform the data subjects about the use of the system (see Art. 26 para. 11 of the AI Act). Operators who are employers must also inform the employee representatives and the affected employees, before putting a high-risk AI system into service or using it in the workplace, that they will be subject to its use (Art. 26 para. 7 of the AI Act).

Persons affected by a decision made with the help of a high-risk AI system have a right to information from the operator about the role of the AI system in the decision-making process (Art. 86 para. 1 of the AI Act).

3. Transparency obligations for “certain” AI systems pursuant to Art. 50 of the AI Act

a) For providers:

According to Art. 50 (1) of the AI Act, providers of AI systems that are intended for direct interaction with natural persons (e.g. chatbots or care robots) must ensure that the persons concerned are informed that they are interacting with an AI system, unless this is obvious from the point of view of a reasonably well-informed, attentive and circumspect person based on the circumstances and context of use.

Providers shall ensure that AI systems intended for direct interaction with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the perspective of a reasonably well-informed, observant and circumspect natural person based on the circumstances and context of use.

Since the provider usually does not know under what circumstances and in what context the AI system will be used, the obligation must be understood to mean that the information has to be provided technically, as a default setting built into the AI system.
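As an illustration of such a default setting, the following sketch shows a chatbot wrapper that prepends an AI disclosure to every new conversation unless this is deliberately switched off. The class, the wording of the notice and the placeholder model call are hypothetical; the AI Act prescribes neither a particular mechanism nor a particular wording.

```python
# Illustrative sketch only: a chatbot wrapper with the AI disclosure enabled by default.
AI_DISCLOSURE = "Please note: you are chatting with an AI system, not a human."

class DisclosingChatbot:
    def __init__(self, disclose: bool = True):  # disclosure is the default setting
        self.disclose = disclose
        self._greeted = False

    def generate_reply(self, user_message: str) -> str:
        # Placeholder for the actual model call.
        return f"(model answer to: {user_message})"

    def respond(self, user_message: str) -> str:
        reply = self.generate_reply(user_message)
        if self.disclose and not self._greeted:
            self._greeted = True
            return f"{AI_DISCLOSURE}\n{reply}"
        return reply

bot = DisclosingChatbot()
print(bot.respond("What are your opening hours?"))
```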

According to Art. 50 (2) of the AI Act, providers of AI systems that generate or manipulate synthetic audio, image, video or text content are obliged to ensure that the outputs of the AI system are labelled in a machine-readable format and are recognisable as artificially generated or manipulated. The labelling is intended not only to inform viewers that content has been generated or manipulated, but also to enable the operators of AI systems and other parties involved to fulfil their respective information obligations.

Providers of AI systems, including general-purpose AI systems, that generate synthetic audio, image, video or text content shall ensure that the output of the AI system is labelled in a machine-readable format and identifiable as artificially generated or manipulated. Providers shall ensure that, where technically feasible, their technical solutions are effective, interoperable, robust and reliable, taking into account the specificities and limitations of the different types of content, the costs of implementation and the generally recognised state of the art, as may be reflected in the relevant technical standards.

For example, Art. 35 para. 1 k) of the Digital Services Act (DSA) requires providers of very large online platforms such as Booking, Amazon or Instagram to label generated or manipulated media content conspicuously if it is capable of deceiving recipients about the authenticity of the content or the persons depicted. The AI Act gives no clear answers to the question of what requirements apply to the technical implementation of the labelling. Art. 50 para. 2 sentence 2 of the AI Act merely states that the technical solution must be “effective, interoperable, robust and reliable”. Recital 133 mentions watermarking, metadata identification, cryptographic methods, logging methods and fingerprints as possible techniques.
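By way of illustration, the following sketch shows metadata identification, one of the techniques mentioned in recital 133, using Pillow to embed text metadata in a PNG file. The metadata keys are illustrative assumptions rather than a recognised standard; a production-grade solution would more likely follow an established provenance standard (such as C2PA) and combine it with watermarking.

```python
# Sketch of metadata-based labelling of AI-generated images (illustrative keys only).
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(input_path: str, output_path: str, generator: str) -> None:
    """Copy an image and embed a machine-readable 'AI generated' flag as PNG metadata."""
    image = Image.open(input_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # hypothetical machine-readable flag
    meta.add_text("generator", generator)   # which AI system produced the content
    image.save(output_path, pnginfo=meta)

# Example call (assumes a file synthetic.png produced by a hypothetical image model).
label_as_ai_generated("synthetic.png", "synthetic_labelled.png", "example-image-model-v1")
```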

b) For operators

According to Art. 50 para. 3 of the AI Act, operators of AI systems for emotion recognition or biometric categorisation must inform the persons exposed to them about the operation of the system.

Operators of AI systems that generate or manipulate image, sound, video or text content are obliged to disclose that the content has been artificially generated or manipulated if it constitutes a deepfake and/or a message of public interest. What is meant by a “deepfake” is defined in Art. 3 No. 60 of the AI Act: content generated or manipulated by AI that gives the appearance of being genuine and truthful but is in fact neither genuine nor truthful. When a message concerns a matter of public interest is not defined in more detail; this remains a matter of interpretation and should be assessed in the spatial and temporal context of the publication. How the disclosure that content is AI-generated or manipulated is implemented is also left to the operator; there are no specifications or requirements for the labelling. To avoid the risk of failing to recognise a deepfake or a message of public interest, and consequently failing to label it, the operator could comply with its disclosure obligation by always retaining the labelling applied by the provider under Art. 50 para. 2. However, this is unlikely to be an attractive solution for the operator in every case.
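The following sketch illustrates this approach: the operator checks for the machine-readable label set by the provider (here the hypothetical “ai_generated” PNG metadata key from the previous example) and, where it is present, attaches a visible disclosure before publication. It is a simplified, assumption-laden example, not a compliance recipe.

```python
# Sketch: surface the provider's machine-readable label as a visible disclosure.
from PIL import Image

def disclosure_for(path: str) -> str | None:
    """Return a visible notice if the image carries the (hypothetical) provider label."""
    image = Image.open(path)
    metadata = getattr(image, "text", {})  # PNG text chunks, if any
    if metadata.get("ai_generated") == "true":
        return "This content was artificially generated or manipulated."
    return None

notice = disclosure_for("synthetic_labelled.png")
if notice:
    print(notice)  # e.g. shown as a caption alongside the published content
```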

4. Transparency obligations specifically for general purpose AI models (Art. 53 AI Act)

Here, only providers are subject to transparency obligations. According to Art. 53 (1) of the AI Act, providers are obliged to provide continuously updated documentation and relevant information about the AI model. Among other things, this is intended to ensure that downstream providers understand the functioning and capabilities of the AI model and can fulfil their own obligations under the AI Act. In addition, providers of general-purpose AI models are obliged to publish a sufficiently detailed summary of the content used to train the AI model (Art. 53 para. 1 d) of the AI Act).

5. Conclusion

The AI Act came into force in August 2024. The transparency obligations for general-purpose AI models regulated in Art. 53 of the AI Act already apply from 2 August 2025, while the other transparency obligations described above apply from 2 August 2026. In view of the considerable scope of these obligations, especially for providers, it is advisable to start implementing appropriate measures at an early stage.