
Access control via facial scanning, age classification in e-commerce, voice recognition in customer service, or emotion analysis in video calls – there are many applications for biometric systems in business practice. Artificial intelligence (AI) makes many of these systems better, faster, and more effective.
The use of AI with biometric data raises concerns not only among data protection advocates – against the backdrop of the AI Regulation (AI Act), many companies are also asking themselves:

Which biometric applications are prohibited under the AI Regulation, which are considered high-risk, and when do “only” transparency obligations apply?

What is biometric data?

Biometric data refers to personal data resulting from the technical processing of a person’s physical, physiological or behavioural characteristics. This includes, for example, facial images, fingerprints, iris recognition data, voice or speaker characteristics, gait analysis or typing dynamics.
Unlike the GDPR, the AI Regulation does not require that the data enable or confirm the unique identification of the person.

Checklist: Is a data record biometric? YES, if:

  • the data relates to a person’s physical, physiological or behavioural characteristics and
  • it results from technical processing (e.g. feature extraction, pattern matching).
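The two-condition checklist can be expressed as a simple predicate. This is an illustrative sketch only – the class and field names are hypothetical, and the legal assessment of a real data record always depends on the concrete circumstances:

```python
from dataclasses import dataclass

@dataclass
class DataRecord:
    # Does the data relate to physical, physiological or behavioural characteristics?
    relates_to_human_traits: bool
    # Does it result from technical processing (e.g. feature extraction, pattern matching)?
    technically_processed: bool

def is_biometric(record: DataRecord) -> bool:
    # Both checklist conditions must hold for data to count as biometric
    # within the meaning of the AI Regulation.
    return record.relates_to_human_traits and record.technically_processed
```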

The risk classes of the AI Regulation when using biometrics

Biometric data are generally special categories of personal data (Art. 9 GDPR) and their processing is therefore only permitted from a data protection perspective if there is explicit legal permission (in particular, express consent).

The AI Regulation takes a risk-based approach to artificial intelligence. It does not regulate whether the use of AI with biometric data is permitted as such, but differentiates the requirements for an AI system according to its function, purpose and context. It assigns biometric AI systems to three risk classes:

  1. Prohibited AI practices (Art. 5 AI Regulation)
  2. High-risk AI systems (Art. 6(2) in conjunction with Annex III AI Regulation)
  3. AI systems with transparency obligations (Art. 50 AI Regulation)

The relevant risk class depends on the intended function of the AI system:

  • Biometric identification – “Who is this person?”

Biometric identification occurs when biometric features (e.g. face, voice, fingerprint) are used to uniquely identify or verify a person.

  • Prohibited: only in exceptional cases
    Biometric identification is not prohibited per se. Only real-time remote identification in publicly accessible spaces by law enforcement authorities is generally not permitted.
  • Principle: high-risk AI
    Furthermore, biometric identification of natural persons is classified as high risk regardless of the area of application (Annex III No. 1 AI Regulation). This applies, for example, to:

    • Access control to buildings
    • Visitor or customer check-in
    • Retrospective facial recognition in image or video material
  • No additional transparency obligation
    However, the identification function alone does not trigger a separate transparency obligation under Article 50 of the AI Regulation.
  • Biometric categorisation – “Which group does the person belong to?”

In biometric categorisation, individuals are assigned to categories based on biometric characteristics. These include, for example, age, gender, or voice and behaviour profiles.

In risk classification, the crucial question is: How sensitive are the characteristics?

  • Prohibited: the derivation of sensitive characteristics.
    AI systems are prohibited if the biometric categorisation targets special categories of personal data, specifically ethnic origin, religion, health, political opinion or sexual orientation.
  • High-risk AI and transparency obligations: other biometric categorisation
    If an AI system performs biometric categorisation, it must be classified as high-risk (Annex III No. 1 AI Regulation) with regard to all categorisation characteristics – including less “sensitive” characteristics such as gender, age, hair colour, eye colour, tattoos or personal preferences and interests – and must also comply with the transparency obligations under Article 50(3) of the AI Regulation.
  • Exception: AI systems that are inseparably linked to another commercial service as a purely ancillary function for objective technical reasons.
    The use of AI is permitted, for example, when it is used in e-commerce to categorise facial or physical characteristics in order to enable consumers to virtually try on the clothing on offer.
  • Emotion recognition – “How does the person feel?”

Emotion recognition aims to recognise or deduce emotions or intentions from facial expressions, voice, posture or other physiological signals.

  • Prohibited: in work and education contexts
    AI-based emotion recognition in the workplace or in educational institutions is generally prohibited. Exceptions are conceivable only for medical or safety reasons, and the requirements for these are high.
  • High risk: biometric AI systems for emotion recognition
    An AI system that automatically recognises emotions such as fear, anger or surprise based on facial expressions, gestures and voice poses high risks for those affected. Therefore, providers of such systems in particular must meet the high requirements of the AI Regulation for high-risk AI systems.
  • Principle: transparency obligation
    Like AI systems for biometric categorisation, emotion recognition systems are subject to the transparency obligations in Article 50(3) of the AI Regulation – data subjects must therefore be informed by the operator about the use and processing of personal data so that they can decide for themselves whether or not they wish to be exposed to it.
  • Special case: scraping

The untargeted mass scraping (e.g. by web crawlers or bots) of facial images from the internet, social media or surveillance footage for the purpose of building or expanding facial recognition databases is strictly prohibited. Neither high-risk nor transparency rules apply here – the system simply must not be used.

However, the ban only applies to facial images, not to other biometric data such as voices or fingerprints. Furthermore, only the creation of a database for facial recognition purposes is prohibited – other purposes, such as training generative AI, are not prohibited per se.
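The breakdown above can be condensed into a rough decision sketch. This is a deliberate simplification for illustration, not legal advice – the function names and flags are hypothetical, and real classification under the AI Regulation requires a case-by-case assessment:

```python
from enum import Enum, auto

class Function(Enum):
    IDENTIFICATION = auto()       # "Who is this person?"
    CATEGORISATION = auto()       # "Which group does the person belong to?"
    EMOTION_RECOGNITION = auto()  # "How does the person feel?"
    FACE_SCRAPING = auto()        # untargeted scraping of facial images

def classify(function: Function,
             *,
             realtime_remote_public_law_enforcement: bool = False,
             derives_sensitive_traits: bool = False,
             ancillary_technical_function: bool = False,
             workplace_or_education: bool = False) -> set[str]:
    """Return the obligations suggested by the article's breakdown (simplified)."""
    if function is Function.FACE_SCRAPING:
        # Building facial recognition databases by scraping is always prohibited.
        return {"prohibited (Art. 5)"}
    if function is Function.IDENTIFICATION:
        if realtime_remote_public_law_enforcement:
            # Real-time remote identification in publicly accessible spaces
            # by law enforcement is generally not permitted.
            return {"prohibited (Art. 5)"}
        # Otherwise high-risk; no separate Art. 50 transparency duty.
        return {"high-risk (Annex III No. 1)"}
    if function is Function.CATEGORISATION:
        if derives_sensitive_traits:
            # Deriving e.g. ethnic origin, religion, health, political opinion
            # or sexual orientation is prohibited.
            return {"prohibited (Art. 5)"}
        if ancillary_technical_function:
            # Purely ancillary to another commercial service (e.g. virtual try-on).
            return {"exception: ancillary function"}
        return {"high-risk (Annex III No. 1)", "transparency (Art. 50(3))"}
    # Emotion recognition:
    if workplace_or_education:
        return {"prohibited (Art. 5)"}
    return {"high-risk (Annex III No. 1)", "transparency (Art. 50(3))"}
```

Used this way, the sketch makes the article's core point visible: the same biometric technique can fall into different risk classes depending purely on context and purpose.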

Brief summary

Companies that use or plan to use biometric AI systems should therefore always consider the requirements of the AI Regulation in addition to data protection issues and check which biometric function is used in which context and for what purpose.

If you need assistance with the risk classification of a biometric AI system for your company, please do not hesitate to contact us!

Get in touch with Marlene Schreiber
