The European Union (hereinafter “the EU”) often leads the way in establishing comprehensive legal frameworks for novel issues. It was a pioneer in the area of data protection, first through its adoption of the Data Protection Directive as early as 1995, and more recently through its enactment of the General Data Protection Regulation (GDPR) in 2016, widely regarded as the most stringent data protection law in the world. Similarly, the EU is currently pushing for the adoption of a detailed regulation of artificial intelligence (hereinafter “AI”) systems: the Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (hereinafter “the EU AI Act draft”). First presented by the European Commission in April 2021, this proposal is a breakthrough endeavor that will surely have significant repercussions, both at the EU level and internationally.
Why is the EU AI Act an important piece of legislation?
The EU AI Act is currently the EU’s flagship initiative in the AI sector, seeking to ensure the safety and trustworthiness of high-risk AI systems developed and used in the EU. It is the first law to address AI exclusively, and it is expected to become a “GDPR for AI”. The EU plans to adopt the EU AI Act within the next year. The protection of fundamental human rights, as enshrined in the EU Charter of Fundamental Rights, is at the core of this regulation, which “seeks to ensure a high level of protection for those fundamental rights and aims to address various sources of risks through a clearly defined risk-based approach”. Additionally, the draft regulation adopts a “future-proof approach”1 intended to allow its rules to adapt to the fast-evolving reality of the AI sector (even though this claim stirs debate among experts, some of whom argue that the current version of the regulation fails to effectively address potential concerns raised by the use of certain AI systems).
How is AI defined in the EU AI Act draft?
The EU AI Act draft defines an “artificial intelligence system” as “software that is developed with one or more of the techniques and approaches [listed in Annex I to the draft] and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”2. This definition is quite broad and demonstrates the European Commission’s intention to establish a technology-neutral definition of AI systems under EU law.
The Commission chose to classify AI systems according to the level of risk that the technology in question presents. The EU AI Act thus puts forth a “risk-based approach”, reflecting the European Commission’s effort to produce a text that will adapt to the evolution of AI systems.
The four (4) defined levels of risk are as follows (an illustrative code sketch of this classification appears after the list):
(1) Unacceptable risk: AI systems presenting such a level of risk are considered to contravene EU values and to represent a clear threat to the safety, livelihoods and rights of people.
The EU AI Act draft prohibits “practices that have a significant potential to manipulate persons through subliminal techniques beyond their consciousness or exploit vulnerabilities of specific vulnerable groups such as children or persons with disabilities in order to materially distort their behaviour in a manner that is likely to cause them or another person psychological or physical harm”.
Furthermore, “[o]ther manipulative or exploitative practices affecting adults that might be facilitated by AI systems could be covered by the existing data protection, consumer protection and digital service legislation that guarantee that natural persons are properly informed and have free choice not to be subject to profiling or other practices that might affect their behaviour”.
Finally, the draft specifically prohibits AI-based social scoring for general purposes conducted by public authorities, as well as the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless certain limited exceptions apply3.
(2) High risk: this level of risk concerns AI systems that create a high risk to the health and safety or fundamental rights of natural persons. Such systems are permitted on the European market subject to compliance with certain mandatory requirements and an ex-ante conformity assessment.
The classification as high-risk depends not only on the function performed by the AI system, but also on the specific purpose and modalities for which that system is used. Examples of such AI systems include, without limitation, the following:
• critical infrastructure (e.g., transport), where the system could put the life and health of citizens at risk;
• educational or vocational training, where the system may determine a person’s access to education and the professional course of their life (e.g., scoring of exams);
• safety components of products (e.g., AI applications in robot-assisted surgery); and
• administration of justice and democratic processes (e.g., applying the law to a concrete set of facts).
It is to be noted that all remote biometric identification systems are considered high-risk and subject to strict requirements. High-risk AI systems may only be placed on the EU market if the following obligations are fulfilled:
• adequate risk assessment and mitigation systems;
• high quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
• logging of activity to ensure traceability of results;
• detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
• clear and adequate information to the user;
• appropriate human oversight measures to minimise risk;
• high level of robustness, security and accuracy4.
(3) Limited risk: specific transparency obligations must be fulfilled for AI systems presenting a limited risk. For instance, when interacting with an AI system such as a chatbot, users must be made aware that they are dealing with a machine so that they can make an informed decision to continue or to step back.
(4) Minimal or no risk: the draft regulation allows the free use of minimal-risk AI, which includes AI-enabled video games or spam filters. It is to be noted that the vast majority of AI systems currently in use in the EU belong to this category.
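To make the structure of this risk-based approach concrete, here is a minimal, purely illustrative Python sketch of how a compliance team might encode the four tiers and the high-risk obligations checklist listed above. The tier names track the draft, but every data structure, field name and string in the sketch is our own assumption; the regulation itself prescribes no such implementation.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act draft's risk-based approach."""
    UNACCEPTABLE = auto()  # prohibited outright (Title II)
    HIGH = auto()          # allowed subject to mandatory requirements
    LIMITED = auto()       # transparency obligations only
    MINIMAL = auto()       # free use (e.g., spam filters, video games)

# The high-risk obligations listed above, condensed into a checklist
# (our own shorthand labels, not official terminology).
HIGH_RISK_OBLIGATIONS = frozenset({
    "risk assessment and mitigation",
    "high-quality datasets",
    "activity logging",
    "detailed documentation",
    "clear information to the user",
    "human oversight",
    "robustness, security and accuracy",
})

@dataclass
class AISystem:
    """Hypothetical compliance record for a single AI system."""
    name: str
    tier: RiskTier
    obligations_met: set = field(default_factory=set)

    def may_enter_eu_market(self) -> bool:
        """Barred if prohibited; a high-risk system clears the ex-ante
        gate only once every mandatory obligation is fulfilled."""
        if self.tier is RiskTier.UNACCEPTABLE:
            return False
        if self.tier is RiskTier.HIGH:
            return HIGH_RISK_OBLIGATIONS <= self.obligations_met
        return True  # limited/minimal risk: no ex-ante conformity gate

# Example: a high-risk exam-scoring system still missing human oversight.
scorer = AISystem("exam-scorer", RiskTier.HIGH,
                  set(HIGH_RISK_OBLIGATIONS) - {"human oversight"})
print(scorer.may_enter_eu_market())  # False until oversight is in place
```

The point of the sketch is the gating logic: unacceptable-risk systems are barred outright, high-risk systems reach the market only once every mandatory obligation is satisfied, and the remaining tiers face no ex-ante conformity gate.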
What does this mean for companies?
From the outset, the EU AI Act draft introduces hefty fines for non-compliance of up to €30 million or 6% of the company’s global annual turnover, whichever is higher5.
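As a purely arithmetic illustration of that cap (the turnover figure below is hypothetical), the ceiling is simply the greater of the two amounts:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Ceiling of the fine under art. 71(3) of the draft: the higher of
    EUR 30 million or 6% of global annual turnover."""
    return max(30_000_000.0, 0.06 * global_annual_turnover_eur)

# Hypothetical company with EUR 2 billion in global annual turnover:
print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 120,000,000 (6% governs)
```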
Furthermore, just like the GDPR in the field of personal data protection, the EU AI Act will likely have extraterritorial application, meaning that it will apply to any company worldwide that sells AI systems into the EU or uses them there.
It will thus be crucial for companies doing business in the EU to establish conformity assessment processes, technical standards and AI risk management frameworks. Concerned companies will have to undertake conformity assessments to demonstrate compliance with the EU AI Act once it becomes applicable. An interesting avenue is to follow the technical standards being developed by the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC). These standards are voluntary, but organizations that follow and adopt them will benefit from a presumption of conformity with the AI Act. They are, however, yet to be developed.
Furthermore, the EU framework for AI will place the rights of the individual at its core. Indeed, on September 28, 2022, the European Commission published its proposal for the AI Liability Directive (hereinafter “the Directive”), which is designed to ensure that victims are fairly compensated if harm occurs because of an AI system. To that end, the Directive introduces procedures allowing individuals to more easily claim damages against companies for harm caused by an AI system, and it directly links non-compliance with the EU AI Act to liability for AI-induced harm.
What comes next?
Although it is not yet clear when the EU AI Act will enter into force, this is generally expected to happen in 2024 or 2025. The first phase will likely be a transitional period, allowing, inter alia, businesses to come into compliance with their new obligations. During this period, standards would be mandated and developed, and the governance structures would be set up and become operational. The regulation would then become applicable to operators, with the standards ready and the first conformity assessments being carried out.
Several issues must still be resolved before the EU institutions can reach agreement on the final text. These include the list of high-risk AI use cases, with some organizations advocating for a delimited and proportionate list, while others push for a more expansive one. The definition of AI itself is also in question, as the original proposal is seen by many as too broad.
Finally, concerning the allocation of responsibility between AI “providers” and “users”, industry bodies are advocating for placing more compliance obligations on users, as users play an important role in how an AI system operates and what impact it has.
Companies leveraging AI technologies should start acting now, as the upcoming changes concerning AI systems will be major. They will first affect AI systems used or offered in the EU, but similar laws are very likely to proliferate globally, as happened with the GDPR. Businesses selling AI systems into the EU or using them there should therefore consider establishing AI risk management frameworks; given the likelihood of similar legal changes worldwide, organizations elsewhere would do well to follow suit.
1 EU AI Act (draft), Explanatory Memorandum, point 3.5, p. 11.
2 EU AI Act (draft), art. 3(1).
3 EU AI Act (draft), Title II.
4 “Regulatory framework proposal on artificial intelligence”, European Commission, online: <https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai>.
5 EU AI Act (draft), art. 71(3), (4).