The AI Act came into force this August. The first wave of obligations arising from it is fast approaching, with implementation beginning in February 2025. For companies using or intending to offer AI-based or AI-assisted products, determining whether the AI Act applies to them is crucial. In addition to the risk classification of the AI, the company’s role is one of the key factors in determining the obligations under the AI Act. You can find an overview of the AI Act in our article “Artificial Intelligence (AI) – The AI Act is here!”

The two initial questions companies should ask themselves are:

  • Is the relevant product or application “artificial intelligence” within the meaning of the AI Act? This generally applies when it concerns a general-purpose AI model or an AI system.
  • What role does my company play concerning the AI system or model?

Only after considering the product and its specific use in the context of its risk classification can the specific obligations be determined.

“AI Model” or “AI System”?

The AI Act does not explicitly define artificial intelligence but treats it as an umbrella term for “AI models” and “AI systems.”

AI System under Art. 3 No. 1 AI Act

How exactly to define an “AI system” was highly debated throughout the negotiations leading up to the final version of the clause. In the end, the EU legislator settled on a rather specific, though at first glance difficult to grasp, wording. The legal definition of an AI system can be found in Art. 3 No. 1 AI Act. According to it, an AI system is:

“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

Upon closer inspection, it becomes clear that several individual features need to be assessed to determine whether something constitutes an AI system. Below is a brief explanation of what these terms generally mean, followed by a short illustrative sketch:

  • Machine-based means that the AI system must run on machines, primarily a computing environment, which can be accessed either locally or remotely.
  • Varying levels of autonomy means that the system can complete certain tasks without human supervision or pre-set instructions.
  • Adaptiveness after deployment means that the system may continue to learn and adjust its behavior after it has been put into use. Because the definition says the system “may exhibit” adaptiveness, this feature is optional rather than mandatory.
  • The ability to infer means that the system goes beyond basic data processing: it independently derives conclusions from the input it receives, typically through learning, reasoning, or modeling processes, in order to make predictions or decisions or to provide recommendations.
  • The outputs must serve explicit or implicit objectives; these may be expressly specified (for example, in a user’s prompt) or merely implicit in the system’s design.
  • The output must be capable of influencing the digital or real world, for example generated text, images, videos, or audio.
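Purely as a reading aid, and not as a legal test, these features can be pictured as a checklist. The following minimal Python sketch (all names and the structure are invented for this illustration) shows one way such a check might be organized; adaptiveness is deliberately not a mandatory criterion, because the definition only says the system “may exhibit” it:

```python
# Illustrative reading aid only; this is not a legal test.
from dataclasses import dataclass

@dataclass
class SystemProfile:
    machine_based: bool            # runs in a computing environment
    some_autonomy: bool            # completes tasks without full human pre-setting
    infers_outputs: bool           # derives predictions, content, recommendations, or decisions
    influences_environment: bool   # outputs can affect physical or virtual environments
    adaptive_after_deployment: bool = False  # optional: the definition says "may exhibit"

def looks_like_ai_system(profile: SystemProfile) -> bool:
    # Adaptiveness is omitted on purpose: it is not a mandatory feature.
    return all([
        profile.machine_based,
        profile.some_autonomy,
        profile.infers_outputs,
        profile.influences_environment,
    ])

# Example: a chatbot embedded in a company website
chatbot = SystemProfile(machine_based=True, some_autonomy=True,
                        infers_outputs=True, influences_environment=True)
print(looks_like_ai_system(chatbot))  # True
```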

In most cases, the AI you are using will be classified as an AI system rather than a pure AI model.

AI Model (with General Purpose) under Art. 3 No. 63 AI Act

In contrast to an AI system, an “AI model” refers to a specific component within an AI system, often developed through machine learning. You can think of the AI model as the brain controlling the body, interacting with its environment through the body’s senses.

An AI model serves as the basis for decision-making or predictions within an AI system but, by itself, does not yet constitute a functional system. These models can be integrated into various AI systems but only become fully functional and classified as “AI systems” through additional components and functions. The AI Act exclusively provides a legal definition for “general-purpose AI models,” focusing on their broad applicability for a variety of tasks. The EU legislator assumes that such models pose an increased risk to natural persons, thus imposing obligations specifically for them.
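To make the relationship between model and system concrete, here is a minimal, purely illustrative Python sketch (the class names and the stubbed logic are invented for this example). The model on its own merely maps inputs to outputs; an AI system only arises once the model is embedded in further components such as an interface, input handling, and output delivery:

```python
# Illustrative only; class names and logic are invented for this sketch.

class SentimentModel:
    # Stands in for an "AI model": the trained component that maps inputs to outputs.
    def predict(self, text: str) -> str:
        # A real model would apply learned parameters; here we stub the behavior.
        return "positive" if "good" in text.lower() else "negative"

class CustomerFeedbackSystem:
    # Stands in for an "AI system": the model plus interface, processing, and output delivery.
    def __init__(self, model: SentimentModel):
        self.model = model  # the "brain" controlling the system

    def handle_feedback(self, raw_input: str) -> str:
        cleaned = raw_input.strip()           # input handling (the body's "senses")
        label = self.model.predict(cleaned)   # the model draws the conclusion
        return f"Routing ticket as: {label}"  # output that influences a virtual environment

system = CustomerFeedbackSystem(SentimentModel())
print(system.handle_feedback("The product is good!"))  # Routing ticket as: positive
```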

The AI Act defines a general-purpose AI model in Art. 3 No. 63 AI Act as:

“an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market.”

Your Company’s Role: Provider or Deployer?

The AI Act addresses a number of actors; in particular, providers, deployers, importers, and distributors are defined in Art. 3 AI Act. Classifying an actor’s role is essential for determining a company’s obligations under the AI Act.

In practice, the distinction between ‘provider’ and ‘deployer’ is the one that matters in most cases when a company determines its own role. The exact classification into one of these categories depends on how the AI is developed, integrated into the company’s own products, and/or used. This differentiation is crucial for fulfilling the respective regulatory requirements and ensuring that the legal requirements are implemented correctly.

When Are You a Provider?

According to Art. 3 No. 3 AI Act, a provider is:

“a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.”

This means that a company is considered a provider if it develops the AI itself (or has it developed) and places it on the market, or if it integrates an existing AI model into its own product and markets it under its own brand. Examples include companies offering an AI-based platform as SaaS (Software as a Service), developing and marketing an AI system for internal use, or integrating existing AI models into their own products and offering them under their own name. Whether merely embedding an AI system from another company into one’s environment (e.g., a website) is sufficient to be classified as a provider remains unclear. However, there are strong indications, particularly in the wording of the regulation, that this is not the case.

When Are You a Deployer?

According to Art. 3 No. 4 AI Act, a deployer is:

“a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.”

This means a company is classified as a deployer if it uses an AI system for internal purposes without developing it or marketing it as its own product. Examples include using an external AI tool to support customer service or deploying an AI system internally to optimize business processes. As a deployer, companies are responsible for ensuring the AI system is used in compliance with regulations, but they do not bear the comprehensive obligations of a provider.

Hidden Risks of a Gradual Transition from Deployer to Provider

The final classification of a company’s role is particularly influenced by the risk classification of the AI system used or developed. There are situations where a company that would generally be categorized as a deployer is equated with a provider and thus must fulfill the same obligations as a provider.

Article 25 of the AI Act sets out the conditions for this change of role and outlines the resulting consequences. The transition to provider primarily concerns situations where an AI system already placed on the market or put into service:

  • is a high-risk AI system and is marketed under the company’s own name or trademark (Art. 25 para. 1 lit. a);
  • is a high-risk AI system and undergoes a substantial modification in such a way that it remains high-risk (Art. 25 para. 1 lit. b); or
  • was originally not classified as high-risk, but its intended purpose is modified so that it becomes a high-risk AI system (Art. 25 para. 1 lit. c).

In practical terms, this means that a gradual transition from deployer to provider can occur through modifications such as fine-tuning, the use of Retrieval-Augmented Generation (RAG), or the inclusion of meta-prompts, i.e., standing instructions prepended to every user input.
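To illustrate the kind of technical intervention at issue, here is a minimal, purely illustrative Python sketch (the function, the company name, and the message format are invented; no specific vendor API is implied). A deployer that layers its own meta-prompt and retrieved documents over a third-party model is already shaping what the system does in practice:

```python
# Purely illustrative; names are invented and no specific vendor API is implied.
def build_prompt(user_question: str, retrieved_passages: list[str]) -> list[dict]:
    # Meta-prompt: the deployer's own standing instructions layered on top of the model.
    meta_prompt = (
        "You are the support assistant of ExampleCorp. "
        "Answer only on the basis of the provided context."
    )
    # RAG: documents retrieved from the deployer's own knowledge base are
    # injected into the model input and steer what the model produces.
    context = "\n".join(retrieved_passages)
    return [
        {"role": "system", "content": meta_prompt},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {user_question}"},
    ]

messages = build_prompt("How do I reset my device?",
                        ["Hold the power button for ten seconds."])
```

Whether such a setup actually triggers the change of role depends on the individual case, in particular on whether the modification is substantial or alters the system’s intended purpose.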

Under Art. 25 AI Act, the new responsible party is considered the provider and must ensure that all regulatory requirements are met, while the original provider is relieved of this duty but must still cooperate with the new provider. Any modification to an AI system should therefore be reviewed carefully to avoid legal and financial risks: an incorrect classification or insufficient adaptation can have serious consequences, including fines of up to EUR 15 million or 3% of the total worldwide annual turnover of the preceding financial year, whichever is higher.

Conclusion

The distinction between “provider” and “deployer” is crucial for companies developing or using AI systems to comply with the AI Act. This role determination significantly impacts the legal obligations and potential risks a company must face. The resulting responsibilities depend not only on how the AI technology is used but also on the specific adaptation and deployment of the AI system. To minimize legal and financial risks, it is essential to carefully assess any modifications to an AI system and make a clear role assignment.

We are happy to support you in correctly categorizing your company as a provider or deployer under the AI Act and offer comprehensive legal advice throughout the lifecycle of your AI systems. Let us work together to ensure your AI technologies comply with legal requirements and minimize legal risks.