In view of the advancing use of artificial intelligence (hereinafter “AI”), in particular generative AI such as ChatGPT, the French supervisory authority CNIL published an action plan on 16 May 2023 setting out how the authority will deal with AI systems, with a focus on the protection of personal data.
This article summarises the authority’s statement. The association of German supervisory authorities (the Conference of Independent Data Protection Authorities of the Federation and the Länder, hereinafter “DSK”) has not published a comparable focus to date, which makes a look at France all the more worthwhile.
The CNIL’s action plan was born out of the following considerations:
- The CNIL has been working for several years to anticipate and answer the questions raised by AI.
- In 2023, the focus will be in particular on so-called “augmented” cameras and on generative AI, large language models and AI chatbots.
- It will also prepare for the entry into force of the draft EU AI Regulation.
I. CNIL defines AI systems and gives examples
The CNIL starts with an explanation of AI systems and their development, in particular generative AI and Large Language Models (LLMs) such as GPT-3, BLOOM or Megatron NLG, and the chatbots derived from them (ChatGPT or Bard). AI applications for image generation (Dall-E, Midjourney, Stable Diffusion, etc.) and speech (Vall-E) are also explained:
According to the CNIL, generative AI is a system capable of creating text, images or other content (music, video, speech, etc.) based on instructions from a human user. These systems can generate new content from their training data. Because of the large amounts of data used in training, the results come close to comparable content produced without AI. However, these systems require the user to formulate their queries precisely in order to produce the expected results, and real expertise is developing in how to compose such queries (prompt engineering).
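The last point deserves a brief illustration. The following minimal sketch shows what such precise query composition can look like in code; it assumes the OpenAI Python client, and the model name and prompts are illustrative assumptions of the author, not examples given by the CNIL.

```python
# Minimal sketch of "prompt engineering": the same model, queried once with a
# vague prompt and once with a precisely specified one. Assumes the OpenAI
# Python client (pip install openai); model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

def ask(prompt: str) -> str:
    """Send a single user query to a generative model and return the answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name, not a CNIL recommendation
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# A vague query tends to produce a vague answer ...
print(ask("Tell me about AI and data protection."))

# ... while a query that fixes role, task, scope and output format steers the
# model much closer to the expected result.
print(ask(
    "You are a data protection lawyer. In exactly three bullet points, "
    "summarise which GDPR information obligations apply when a company "
    "deploys an AI chatbot towards its website visitors."
))
```

The difference between the two queries is exactly what the CNIL describes: the more precisely the request is specified, the closer the generated output comes to the expected result.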
II. CNIL drafts an action plan
The CNIL identifies use cases for the different variants of AI systems (classification, prediction, content generation, etc.) from a data protection point of view and defines four objectives intended to make the use of AI systems comprehensible and amenable to legal regulation:
- Understanding how AI systems work and their impact on humans;
- Enabling and guiding the development of AI that respects personal data;
- Supporting companies in France and Europe;
- Auditing and controlling AI systems and protecting people.
Step 1: Understanding how AI systems work and their impact on people
As a first step, the CNIL identifies a set of issues that address the existing and future impact of AI systems on people and on the protection of their personal data:
- Fairness and transparency of data processing operations underlying the operation of AI systems;
- Protection of publicly available data on the internet from being scraped or mined for tool development;
- Protection of data submitted by users when using these tools – from their collection (through an interface) to their possible reuse and processing by machine learning algorithms;
- Implications for individuals’ rights to their data, both in relation to data collected for model learning and data that may be provided by these AI systems, such as content created in the case of generative AI;
- Protection against bias and discrimination that may occur;
- Defining the security challenges for AI systems.
To clarify these issues, the CNIL has commissioned its Laboratory for Digital Innovation (LINC), which has published a dossier (in French) on the subject, including an outlook on technical and ethical developments.
Step 2: Enabling and guiding the development of AI that respects personal data
In order to support companies in the field of artificial intelligence and to prepare for the entry into force of the European AI Regulation, the CNIL has already published several fact sheets and guides on the use of AI. These can be found in English here.
From summer 2023, further publications on the use and re-use of data within AI systems will follow. The focus will also be on the data-protection-compliant design of databases, with particular attention to machine learning.
Step 3: Joining forces and supporting companies in France and Europe
The CNIL has set itself the goal of actively encouraging developers of AI systems and supporting them in their further development. To this end, it has developed projects in recent years focusing on various areas in which AI is or can be applied, conducted research, and provided targeted advice to companies, for example in the areas of health and education. In addition, there are concrete support programmes for business actors, for example in the area of data protection compliance.
Step 4: Auditing and controlling AI systems and protecting people
Finally, the last step of the CNIL’s action plan is dedicated to the ethical issues surrounding the impact and potential dangers of the use of generative AI. The focus will be in particular on the use of AI in the areas of video surveillance and fraud prevention. In addition, complaints about the use of AI will be investigated in order to identify undesirable developments at an early stage.
From a data protection perspective, the CNIL will ensure that data protection impact assessments are carried out in accordance with Article 35 of the GDPR, that data subjects are adequately informed in accordance with Articles 13 and 14 of the GDPR and that the exercise of data subjects’ rights is guaranteed.
III. Outlook
AI is a paradigm example of a “new technology” within the meaning of Art. 35(1) sentence 1 GDPR, so that, given the nature, scope and circumstances of the processing of personal data involved, a high risk to the rights and freedoms of natural persons will regularly be likely, with the consequence that controllers must carry out a data protection impact assessment before using AI.
For example, the DSK already listed the use of AI chatbots as a processing operation for which a data protection impact assessment must be carried out pursuant to Art. 35(4) GDPR in a short paper from 2018, its list of processing operations subject to a mandatory DPIA for the non-public sector (“DSFA-Muss-Liste für den nicht-öffentlichen Bereich”, item 11). However, the DSK currently lacks an action plan comparable to that of the CNIL; its “Hambach Declaration on Artificial Intelligence” dates from 2019.
If the CNIL’s standards are taken as a basis, one thing becomes clear: above all, companies using AI will first need to fulfil their data protection documentation and information obligations, in a way that shows that the risks associated with the use of AI have been recognised, mitigated and (appropriately) weighed.
The associated challenges are manifold. In addition to the catalogue of obligations under Art. 13 and 14 GDPR, Art. 35(7) GDPR must be taken into account with regard to the data protection impact assessment. Art. 35(7)(a) GDPR is already challenging: it requires the controller to systematically describe the planned processing operations, which, especially in the case of generative AI, amount to a black box into which the controller must shed as much light as possible. However, the description need not pin down the operations too precisely; it should be sufficient for the controller to describe its understanding of how the algorithm analyses the data (ZD 2022, 316 (320)). It remains to be seen how the German supervisory authorities will handle this.