
Switzerland wants to establish itself as a leading global location for artificial intelligence (“AI”), but without creating a dedicated regulatory framework; instead, the Swiss government points to the provisions of the revised Data Protection Act (nDSG). The EU, by contrast, has done exactly that with its new regulation on AI: it has created a legal framework in which the requirements for transparency and accountability are clearly defined rather than left to data protection law, and in which violations are subject to heavy penalties.

Switzerland is not bound by the EU requirements, but it is nevertheless worthwhile for Swiss companies to implement them. First, the Regulation’s scope extends to everyone who places AI systems on the market in the EU or uses them in relation to EU citizens[1]. Second, AI systems are increasingly being used in Switzerland itself, so both the EU’s regulatory framework and local use create a growing need for action in Switzerland[2]. In the following, we offer an overview of the current legal situation and of the additional transparency and accountability requirements that arise from the new EU Regulation beyond those of data protection law. We have also prepared an “AI-Toolkit” in German that you can download for free in our shop. One thing we can say in advance: “accountability is everything”, so we will make the new documentation requirements as easy as possible for you:

1. Switzerland – guidelines for AI without regulatory requirements

On 25 November 2020, the Swiss Federal Council adopted seven guidelines as an orientation framework for the federal administration’s handling of artificial intelligence. The development of AI software should focus on the common good and the protection of fundamental rights. The guidelines also address the best possible framework conditions for sustainably establishing Switzerland as a leading location for companies in the field of AI. Transparency, traceability and explainability are named as essential elements: decision-making processes based on AI should be designed so that they are verifiable and comprehensible for those affected. Furthermore, there must be clear responsibilities, and liability must be clearly defined when AI is used; it must not be possible to delegate responsibility to machines. AI systems must be designed to be secure, robust and resilient so that they have a positive impact and are not susceptible to misuse or misapplication. Finally, Switzerland wants to participate in the development of global standards and norms in accordance with its interests and values, and to advocate that all relevant groups be included in political decision-making processes[3]. As early as 2018, transparency in the sense of comprehensible processes was defined as a central topic within the framework of the “Digital Switzerland” strategy[4]. Although a need for regulatory action has also been identified for Switzerland[5], a legislative bill is still lacking[6].

2. The EU – new regulatory requirements

a. The new EU regulation on AI systems

With a new regulation, the EU wants to ensure that AI can be trusted and that the EU remains competitive. Central to this is safeguarding the safety and fundamental rights of EU citizens: the Regulation aims to ensure that AI systems are safe, transparent, ethical, impartial and under human control. The Regulation first lays down a definition of AI systems in Article 3(1). According to recital 6, this is intended to ensure legal certainty while offering sufficient flexibility to accommodate future technological developments[7]. The EU follows a risk-based approach, distinguishing three categories of systems: 1) those posing an unacceptable risk, which are prohibited; 2) those posing a high risk; and 3) those posing a low or minimal risk[8].

b. Prohibited AI

The list of prohibited practices in Title II covers all AI systems whose use is considered unacceptable because it violates Union values, such as fundamental rights. Anything considered a clear threat to EU citizens is unacceptable, from government assessment of social behaviour (social scoring) to voice-assisted toys that entice children into risky behaviour.

c. AI systems with high risks

All AI systems classified as high risk are subject to additional testing and documentation requirements and must be registered in a dedicated EU database. Chapter 1 of Title III specifies the classification rules and establishes two main categories of high-risk AI systems: AI systems intended to be used as safety components of products that are subject to prior third-party conformity assessment, and other stand-alone AI systems explicitly listed in Annex III that primarily affect fundamental rights.

The systems listed in Annex III include, in particular, the following areas (a simple classification sketch follows the list):

  • Critical infrastructure (e.g. transport) where the life and health of citizens could be put at risk;
  • Education or vocational training, where a person’s access to education and professional life could be affected (e.g. assessment of exams);
  • Safety components of products (e.g. an AI application for robotic assisted surgery);
  • Employment, human resource management and access to self-employment (e.g. software to evaluate CVs for recruitment processes);
  • Essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan);
  • Law enforcement that may interfere with people’s fundamental rights (e.g. evaluating the reliability of evidence);
  • Migration, asylum and border control (e.g. verifying the authenticity of travel documents);
  • Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).
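To make this tiering tangible, the following minimal Python sketch maps the Annex III areas listed above onto the Regulation’s risk tiers. It is our own illustration under simplifying assumptions: the area labels and the `classify` helper are shorthand, not the legal wording, and the separate limited-risk transparency duties are deliberately left out of the triage.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice (Title II)"
    HIGH = "high risk (Title III / Annex III)"
    MINIMAL = "low or minimal risk"

# Illustrative shorthand for the Annex III areas listed above;
# these labels paraphrase the Annex, they are not the legal text.
ANNEX_III_AREAS = {
    "critical_infrastructure",
    "education_and_vocational_training",
    "product_safety_components",
    "employment_and_hr",
    "essential_private_and_public_services",
    "law_enforcement",
    "migration_asylum_border_control",
    "justice_and_democratic_processes",
}

def classify(area: str, prohibited_practice: bool = False) -> RiskTier:
    """Rough triage of an AI use case into the Regulation's risk tiers."""
    if prohibited_practice:            # e.g. social scoring by authorities
        return RiskTier.UNACCEPTABLE
    if area in ANNEX_III_AREAS:        # triggers the Title III duties
        return RiskTier.HIGH
    return RiskTier.MINIMAL            # limited-risk duties not modelled here

print(classify("employment_and_hr"))   # RiskTier.HIGH -> e.g. CV-screening software
```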

Chapter 2 of Title III sets out the legal requirements that high-risk AI systems must meet with regard to data and data governance, documentation and record keeping, transparency and the provision of information to users, human oversight, robustness, accuracy and security. For this reason we have developed our own “AI-Toolkit” in German, which you can download for free in our shop.
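A practical way to approach these documentation duties is to keep a structured technical file per system that is completed before deployment. The following dataclass is a minimal sketch of our own; the field names loosely paraphrase the Annex IV headings and the Chapter 2 articles and are not the legal wording.

```python
from dataclasses import dataclass, field

@dataclass
class TechnicalDocumentation:
    """Skeleton record for a high-risk system's technical file.
    Field names loosely paraphrase Annex IV; they are illustrative only."""
    system_description: str            # intended purpose, provider, versions
    data_and_governance: str           # training data origin, labelling, curation (Art. 10)
    human_oversight_measures: str      # how a human can intervene or override (Art. 14)
    risk_management: str               # identified risks and mitigations (Art. 9)
    accuracy_robustness_security: str  # metrics and test results (Art. 15)
    change_log: list[str] = field(default_factory=list)  # record keeping (Art. 12)

    def is_complete(self) -> bool:
        """Naive completeness check before the system goes live."""
        return all([self.system_description, self.data_and_governance,
                    self.human_oversight_measures, self.risk_management,
                    self.accuracy_robustness_security])
```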

d. Limited risk systems

Finally, limited-risk systems are subject to transparency obligations: users must be informed about the technology used so that they can decide whether or not to continue using the application. Minimal-risk systems, such as AI-powered video games or spam filters, can be provided without additional requirements. Title IX, however, provides the basis for codes of conduct intended to encourage providers of non-high-risk AI systems to voluntarily apply the mandatory requirements for high-risk AI systems (under Title III); such providers may draw up and implement these codes of conduct themselves[9]. According to Article 2(1)(a), the Regulation also has the extraterritorial scope of application already familiar from the GDPR and the revised Swiss Data Protection Act. A Swiss company that offers AI systems on the European market must therefore also comply with the requirements; otherwise, under Article 71, penalties of up to 30 million euros or 6% of global annual turnover (whichever is higher) may be imposed.
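The penalty ceiling itself is simple arithmetic: the higher of a fixed amount and a share of turnover. A small illustrative helper (our own sketch, not part of the Regulation):

```python
def max_penalty_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound under Art. 71: the higher of EUR 30 million
    or 6% of worldwide annual turnover."""
    return max(30_000_000.0, 0.06 * global_annual_turnover_eur)

# A company with EUR 2 billion turnover faces a ceiling of EUR 120 million;
# below roughly EUR 500 million turnover, the fixed 30-million floor applies.
print(f"{max_penalty_eur(2_000_000_000):,.0f}")  # 120,000,000
```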

e. Transparency and other principles

In line with the data protection principles for data processing, Articles 10 to 15 of the EU Regulation contain requirements on transparency, data governance, technical security and documentation, in addition to the duty of human oversight. But how can the requirements of Art. 13 and 14 of the EU Regulation on transparency and human oversight be fulfilled given the nature of AI technology, and how do they relate to data protection? Transparency and information obligations for data processing are laid down in Art. 12 to 14 GDPR as well as in Art. 17 nDSG; a right to information in the case of automated decisions can be found in Art. 22 GDPR and Art. 19 of the revised Swiss Data Protection Act. The EU Regulation now concretises the transparency requirements for AI systems in Art. 13, in particular with regard to the risks of their application. For Switzerland, such a concretisation of the transparency obligations for AI is missing.

The requirements of data protection law are of only limited help here, because they are confined to the processing of personal data, whereas AI is not always based on the processing of personal data[10]. In addition, the data protection principle of data minimisation collides with the approach of training and improving AI on large volumes of data. Without training data and the data evaluation processes behind it, the use of AI would be neither productive nor precise; only the availability of masses of data and their rapid analysis and evaluation make the technology meaningful. At the same time, some business models and analytical methods cannot be realised with a set of minimised, pseudonymised or anonymised data. The processing of personal data is a fundamental component of business models in the data economy,[11] which makes clear that Big Data and artificial intelligence are often interlinked.[12]

Moreover, it is in the very nature of AI that not all of its processes are evident; AI operates like a “black box”[13]. The statistical variety of AI known as “machine learning” (or, in more complex networks, “deep learning”) derives its performance from “data feeds”: a predictable behaviour is learned or “trained”, but the system offers no insight into the learned solution paths. The knowledge is represented implicitly, which gives AI the character of a black box lacking transparency, explainability and comprehensibility of its results[14]. The data protection impact assessment required by the GDPR and the nDSG therefore cannot be adequately carried out for AI: because AI is a self-learning system, the algorithm is no longer comprehensible even to its developers and makes its own decisions[15].
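One practical response to the “black box” problem is systematic record keeping in the spirit of Art. 12 of the EU Regulation: every automated decision is logged with its inputs, output and model version, so that results remain reviewable after the fact even if the internal solution path is not. The following is a minimal sketch under our own assumptions (a local JSON Lines file, no retention policy, no access control), not a compliance-grade implementation:

```python
import json
import time
import uuid

def log_decision(model_version: str, inputs: dict, output,
                 path: str = "ai_audit.jsonl") -> str:
    """Append one automated decision to an audit trail and return its record ID.
    A real system would add pseudonymisation of personal data, retention
    rules and integrity protection (e.g. signed, append-only storage)."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_version": model_version,
        "inputs": inputs,          # consider minimising/pseudonymising here
        "output": output,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]

# Example: record a hypothetical credit-scoring decision.
log_decision("scoring-model-1.3", {"income_band": "B", "region": "ZH"}, "loan_denied")
```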

3. Conclusion – Accountability is everything

While the “black box” nature of AI systems and the principles of data protection appear to partially collide with the requirements of the new EU Regulation, the work mandate is clear: under Chapter 2 of Title III and Annexes III and IV of the EU Regulation, documentation requirements must be complied with whenever a system falls into the “high risk” classification.

You can download the “AI-Toolkit” in German in our shop here.

Footnotes

[1] https://ec.europa.eu/commission/presscorner/detail/de/QANDA_21_1683

[2] University of Zurich, Digital Society Initiative, Position Paper: A Legal Framework for Artificial Intelligence, November 2021, p. 2

[3] Guidelines “Artificial Intelligence” for the Federal Administration adopted (admin.ch)

[4] New guidelines for digital Switzerland (admin.ch)

[5] Braun Binder, Nadja; Burri, Thomas; Lohmann, Melinda Florina; Simmler, Monika; Thouvenin, Florent; Vokinger, Kerstin Noëlle: “Künstliche Intelligenz: Handlungsbedarf im Schweizer Recht”, in: Jusletter 28 June 2021, p. 4

[6] According to the Interdepartmental Working Group on Artificial Intelligence, there is no need for regulatory action in view of the revised Data Protection Act and the provisions contained therein; see: State Secretariat for Education, Research and Innovation SERI: “Challenges of Artificial Intelligence. Report of the Interdepartmental Working Group on Artificial Intelligence to the Federal Council”, 13.12.2019, p. 90

[7] see, in contrast, the approach that a “generally valid and accepted definition of artificial intelligence does not exist”, in: State Secretariat for Education, Research and Innovation SERI: “Challenges of Artificial Intelligence. Report of the Interdepartmental Working Group ‘Artificial Intelligence’ to the Federal Council”, 13.12.2019, p. 7

[8] resource.html (europa.eu)

[9] Artificial Intelligence – Excellence and Trust | EU Commission (europa.eu); Europe fit for the Digital Age: Artificial Intelligence (europa.eu); EU’s Proposed Artificial Intelligence Regulation: The GDPR of AI – Lexology

[10] Braun Binder, Nadja; Burri, Thomas; Lohmann, Melinda Florina; Simmler, Monika; Thouvenin, Florent; Vokinger, Kerstin Noëlle: “Künstliche Intelligenz: Handlungsbedarf im Schweizer Recht”, in: Jusletter 28 June 2021, p. 7

[11] Leistner, Matthias; Antoine, Lucie; Sagstetter, Thomas: Big Data, Mohr Siebeck, 2021, Tübingen, p.203, see also here on the so-called “privacy paradox”, which is expressed on the one hand in users’ growing distrust of so-called “data octopuses” (distrust of market-dominating companies and their practices), and on the other hand in the generous disclosure of one’s own data as “consideration” in the data economy.

[12] Indra Spiecker genannt Döhmann: Profiling, Big Data, Artificial Intelligence und Social Media – Gefahren für eine Gesellschaft ohne effektiven Datenschutz, in: Jusletter IT 21 December 2020. Author’s note: depending on how the specification principle of Art. 13(3)(v) of the EU Regulation is handled, this also conflicts with the data protection principles.

[13] Hogenhout, Lambert: “A Framework for Ethical AI at the United Nations”, in: UN Office for Information and Communications Technology, 15.3.2021, p. 5, 8

[14] Bitkom Bundesverband Informationswirtschaft, Telekommunikation und neue Medien e.V.: “Machine Learning und die Transparenzanforderungen der DS-GVO” (bitkom.org), 2018, p. 8, 13; Making AI’s Transparency Transparent: notes on the EU Proposal for the AI Act – European Law Blog

[15] Artificial Intelligence & Data Protection – a contradiction? (datenschutzexperte.de)