The AI Act is here, and companies that use or develop artificial intelligence – or plan to – face new challenges. A key element is the risk classification, which determines the obligations and requirements for companies that use AI in their operations or products. In this article, we explain how the various risk classes are structured and how you can determine which category the AI tools you use or develop fall into.
Overview
The AI Act and Risk Classes
The AI Act aims to ensure that artificial intelligence is developed and deployed in line with the EU's high standards for health, safety, and fundamental rights. It takes a risk-based approach so that regulatory requirements scale with the intensity and scope of the risks posed by AI systems. Particularly dangerous practices are prohibited, while high-risk AI systems must meet strict requirements to ensure their safe use. Transparency obligations ensure that the use of certain AI systems remains comprehensible to affected users.
In its wording, the AI Act distinguishes only between prohibited AI practices (Art. 5 AI Act) and high-risk AI systems (Art. 6 AI Act); other risk classes are not addressed directly. A closer look at the individual provisions, however, allows for a broader classification: AI systems with limited risk and AI systems without risk should also be considered. Below, we look at the individual risk classes and clarify the core obligations for the actors affected.
Prohibited AI Practices (Art. 5 AI Act)
The AI Act prohibits the use of certain AI systems classified as posing an unacceptably high risk due to their deep interference with fundamental rights and their potential to cause significant harm.
These prohibitions particularly affect – but are not limited to – AI systems that:
- use subliminal manipulative techniques to influence people’s behavior in a way that undermines their decision-making autonomy and causes them significant harm (Art. 5 para. 1 lit. a AI Act). Such systems can subtly change individuals’ behavior through stimuli that are not consciously perceived, leading them to make decisions they would not otherwise have made. A relevant buzzword in practice is “Dark Patterns”.
- exploit the vulnerabilities of people by targeting their weaknesses due to age, disability, or social or economic situations, to influence behavior in a harmful way (Art. 5 para. 1 lit. b AI Act). This mainly concerns groups that are more susceptible to manipulation due to their living conditions. This too is a form of Dark Patterns.
- are used for social scoring, where individuals are evaluated over time based on their behavior or personal characteristics, leading to social disadvantages (Art. 5 para. 1 lit. c AI Act). These systems can have discriminatory effects by unfairly disadvantaging people based on data in social contexts.
- predict the risk of future offenses based solely on profiling (Art. 5 para. 1 lit. d AI Act). These systems rely on personal traits and characteristics of individuals without using objective and verifiable facts, deeply infringing on the rights of the affected persons.
- create or expand facial recognition databases by indiscriminately scraping facial images from the internet or video surveillance footage (Art. 5 para. 1 lit. e AI Act). Such systems may contribute to mass surveillance.
- recognize the emotions of individuals in workplaces or educational institutions, unless the system is used for medical or safety purposes (Art. 5 para. 1 lit. f AI Act). Outside these narrow exceptions, the use of such systems is prohibited because of the potentially significant negative consequences for those affected.
The prohibitions in Art. 5 AI Act aim to protect fundamental rights and ensure that AI technologies are not used in manipulative or harmful ways.
High-Risk AI systems (Art. 6 AI Act)
The central concept in the AI Act, to which the core of the regulation pertains, is the high-risk AI system. If an AI system is classified as high-risk, extensive obligations follow for providers, deployers, and other actors involved in the AI lifecycle.
The classification of an AI system as high-risk under the AI Act is conducted through a multi-stage process, as outlined in Art. 6 AI Act. In simplified terms, the assessment of whether an AI system is high-risk proceeds as follows:
- First, it is checked whether the AI system is itself a product, or is used as a safety component of a product, covered by the Union harmonization legislation listed in Annex I. If this is the case and the product is subject to a third-party conformity assessment, the AI system is classified as high-risk and is subject to the associated requirements (Art. 6 para. 1 AI Act).
- If the AI system does not fall into this category, Art. 6 para. 2 AI Act applies. Here, it is checked whether the system is used in one of the application areas listed in Annex III. Examples include (but are not limited to):
- AI systems in critical infrastructure, such as those used in energy or water supply,
- AI systems used in human resources, influencing decisions on hiring or promotions,
- AI systems determining access to essential public or private services, such as systems assessing creditworthiness.
These types of systems pose an increased risk of harm to health, safety, or fundamental rights.
Providers must regularly check whether their AI applications are affected by delegated acts of the EU, as the Commission may expand the list of high-risk AI systems. Deployers must also bear in mind that even an AI system that was not initially classified as high-risk may later be reclassified, for example because of the specific manner in which it is used (Art. 25 para. 1 lit. c AI Act) or because of updates and modifications by the provider. Companies are therefore effectively required to continuously monitor the status of their AI systems and models and adjust the risk classification accordingly.
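For readers who prefer to think in flowcharts or code, the following Python sketch condenses the simplified assessment described above. It is purely illustrative: the function name, the boolean checks, and the strict ordering are our own assumptions; in practice the categories can overlap (a high-risk system may also trigger transparency obligations under Art. 50, discussed below); and the sketch is no substitute for a legal assessment of the individual case.

```python
from enum import Enum


class RiskClass(Enum):
    PROHIBITED = "prohibited practice (Art. 5)"
    HIGH_RISK = "high-risk AI system (Art. 6)"
    LIMITED_RISK = "limited risk / transparency obligations (Art. 50)"
    NO_RISK = "no specific obligations under the AI Act"


def classify(
    is_prohibited_practice: bool,        # e.g. manipulative techniques, social scoring (Art. 5)
    is_annex_i_safety_component: bool,   # safety component of an Annex I product with third-party conformity assessment
    is_annex_iii_use_case: bool,         # listed application area, e.g. HR, critical infrastructure (Annex III)
    triggers_art_50_transparency: bool,  # e.g. chatbots, synthetic content, deep fakes
) -> RiskClass:
    """Simplified, illustrative mapping of the AI Act's risk-based approach."""
    if is_prohibited_practice:
        return RiskClass.PROHIBITED
    if is_annex_i_safety_component or is_annex_iii_use_case:
        return RiskClass.HIGH_RISK
    if triggers_art_50_transparency:
        return RiskClass.LIMITED_RISK
    return RiskClass.NO_RISK


# Example: an HR screening tool (Annex III use case) without prohibited practices
print(classify(False, False, True, False))  # RiskClass.HIGH_RISK
```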
If the assessment concludes that the system is a high-risk AI system, the extent of the obligations you face will depend on your specific role as a provider, deployer, or another actor. You can learn more about the distinction between the roles of provider and deployer – the most relevant roles – in our article “Provider or Deployer? Decoding the Key Roles in the AI Act”.
Limited Risk (Art. 50 AI Act)
AI systems that are not categorized as high-risk are generally permitted. However, for certain types of use and ways of interacting with end users, the AI Act assumes a limited but still existing risk, which can and should be minimized by providing users with appropriate information. To manage these risks, the AI Act defines certain transparency obligations addressed to providers and deployers.
This is particularly the case under Art. 50 AI Act when:
- AI systems are intended for direct interaction with natural persons (e.g., chatbots). In such cases, the provider must ensure that affected individuals are informed they are interacting with an AI system. Exceptions: The transparency requirement does not apply if it is obvious from the context that the user is interacting with an AI system, or if the AI system is lawfully used to detect, prevent, investigate, or prosecute crimes (Art. 50 para. 1 AI Act);
- AI systems are used to generate synthetic audio, image, video, or text content. The provider must ensure that the generated content is labeled as artificially generated or manipulated. Exceptions: The transparency requirement does not apply if the AI system performs a supporting function and does not significantly alter the input data provided by the deployer or its meaning, or if it is lawfully used to detect, prevent, investigate, or prosecute crimes (Art. 50 para. 2 AI Act);
- Emotion recognition systems or biometric categorization systems are used. In such cases, the deployer must inform the affected individuals about the use of the system. Exception: The transparency requirement does not apply if the system is lawfully used to detect, prevent, or prosecute crimes (Art. 50 para. 3 AI Act);
- Deep fake technologies are used. In such cases, the deployer must disclose that the generated content was artificially created or manipulated. Exceptions: For evidently artistic, creative, satirical, or fictional works, the obligation is limited to an appropriate disclosure that does not hamper the display or enjoyment of the work; the requirement also does not apply if the content is lawfully used to detect, prevent, or prosecute crimes (Art. 50 para. 4 AI Act).
The mandatory information must be provided clearly and unambiguously at the latest at the first interaction or exposure to the AI system and must comply with the applicable accessibility requirements. In addition, the EU Commission will promote the development of practical guidelines to facilitate the effective implementation of transparency obligations. If necessary, further provisions to concretize these obligations can be issued by means of implementing acts.
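The following sketch summarizes the four Art. 50 constellations as a small data structure, showing which actor owes the information duty in each case. The field names and the condensed exception wording are our own shorthand for illustration, not statutory language.

```python
from dataclasses import dataclass


@dataclass
class TransparencyDuty:
    use_case: str
    responsible_actor: str  # "provider" or "deployer"
    duty: str
    main_exceptions: str


# Condensed overview of Art. 50 paras. 1-4 (illustrative wording, not statutory text)
ART_50_DUTIES = [
    TransparencyDuty(
        "Direct interaction with natural persons (e.g. chatbots)",
        "provider",
        "inform users that they are interacting with an AI system",
        "obvious from context; lawful criminal-law use",
    ),
    TransparencyDuty(
        "Synthetic audio, image, video or text content",
        "provider",
        "label output as artificially generated or manipulated",
        "merely assistive function; lawful criminal-law use",
    ),
    TransparencyDuty(
        "Emotion recognition / biometric categorization",
        "deployer",
        "inform affected individuals about the use of the system",
        "lawful criminal-law use",
    ),
    TransparencyDuty(
        "Deep fakes",
        "deployer",
        "disclose that content was artificially created or manipulated",
        "limited duty for artistic or satirical works; lawful criminal-law use",
    ),
]

for d in ART_50_DUTIES:
    print(f"{d.use_case}: the {d.responsible_actor} must {d.duty}")
```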
No Risk
AI systems that fall outside the aforementioned risk classes are not subject to any specific restrictions under the AI Act and can be used freely.
However, it is important to note that individuals responsible for implementing and using the AI system must possess sufficient AI literacy as defined by Art. 4 AI Act. AI literacy is generally a prerequisite that applies to all risk classes. For more information on AI literacy, feel free to read our article “AI Literacy in Companies – Do Companies Need an AI Officer? And if so, How Many?”
Conclusion
The AI Act presents new challenges and tasks for companies, but with the right approach, these can be successfully managed. A systematic risk assessment and implementation of the necessary measures can not only ensure compliance but also strengthen your long-term innovation and competitiveness.
What you should do now:
- Identify your AI systems and determine which risk category they fall into.
- Assess the legal requirements for your company based on your role as a provider, deployer, or another actor.
- Implement the necessary measures to ensure compliance with the regulation.
- Ensure AI competence in your company to meet the legal requirements.
- Continuously monitor your AI systems to respond quickly to new requirements or adjustments.
How we can support you:
We are happy to assist you every step of the way – from the initial risk assessment and the implementation of the necessary measures to the ongoing legal monitoring of your AI systems. Contact us and let’s work together to ensure that you can use AI not only in a legally compliant but also in a future-proof manner.