On 21 May 2024, the Council of the European Union gave final approval to the Artificial Intelligence (AI) Act, the first European regulation on AI, marking the end of a legislative journey that began in 2021. The process involved the European Parliament, the European Commission and the Council of the European Union, culminating in an initial political agreement on the AI Act in December 2023, preliminary approval in January 2024 and a first significant approval by the European Parliament on 13 March 2024.
The AI Act aims to strike a balance between protecting rights and freedoms and promoting a “space” conducive to technological innovation: its main objective is to ensure the safe introduction of AI systems in Europe and to align their use with fundamental EU values and rights, while encouraging investment and innovation on the continent.
The Regulation fits into a broader European and international framework that, although fragmented, has consistently addressed the use of AI tools with the same dual focus on protection and development. Notable examples include the OECD’s adoption of the Principles on Artificial Intelligence in 2019 and the UN Interim Report on the Governance of AI for Humanity.
Definition of Artificial Intelligence
To provide a precise definition of AI tools while allowing for future adaptability, the Act aligns with the definition of the Organisation for Economic Co-operation and Development (OECD): AI systems are machine-based systems designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infer from the inputs they receive how to generate outputs, such as predictions, content, recommendations or decisions, that can influence physical or virtual environments.
This broad definition intentionally excludes simpler traditional software systems and non-adaptive programming algorithms. The European Commission is tasked with developing specific guidelines for applying this definition.
Scope of application
The Regulation specifies which entities are covered. It applies to both public and private organizations in the EU and third countries that produce or distribute AI tools on the European market. AI systems used for military, defense or national security purposes, as well as AI systems developed exclusively for scientific research and development, are excluded from the Regulation’s scope.
Regulatory approach
The AI Act adopts a “risk-based” approach: the higher the risk to people’s safety and rights, the stricter the regulations. AI systems are classified into four risk levels:
- Unacceptable risk: AI systems that contravene EU values and principles; these are prohibited.
- High risk: systems that may have significant adverse effects on people’s rights and safety; market access is granted only if certain obligations and requirements are met, such as carrying out a conformity assessment and complying with European harmonised standards.
- Limited risk: systems subject to limited transparency rules, as they pose a relatively low risk to users.
- Minimal risk: systems not bound by specific obligations, as they pose only a negligible risk to users.
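The four-tier structure above can be sketched as a small classification model. This is purely illustrative: the tier names follow the Act, but the example use cases and the `market_access` rule are simplified assumptions, not the Act’s actual legal test, which is set out in its annexes.

```python
from enum import Enum


class RiskTier(Enum):
    """The AI Act's four risk tiers, with simplified consequence labels."""
    UNACCEPTABLE = "prohibited"
    HIGH = "conformity assessment required"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"


# Hypothetical examples of how use cases might map onto tiers;
# the real classification depends on the Act's annexes and guidance.
EXAMPLE_CLASSIFICATION = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}


def market_access(tier: RiskTier) -> bool:
    """Simplified rule: a system may reach the EU market unless its tier is prohibited."""
    return tier is not RiskTier.UNACCEPTABLE
```

The point of the tiered design is that obligations scale with risk: only the top tier is banned outright, while the lower tiers trade market access against progressively lighter requirements.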
Additionally, the Act includes provisions on General Purpose AI (GPAI) models, which it describes as models that can be used for a variety of tasks, either alone or as components of AI systems, by being trained on vast amounts of data. Due to their broad scope and potential for systemic risks, GPAI models are subject to specific transparency and documentation requirements, with stricter obligations for models that present systemic risk.
Prohibited AI Practices
The Regulation specifically prohibits AI practices such as:
- Using subliminal, manipulative or deceptive techniques beyond an individual’s awareness;
- Exploiting the vulnerabilities of an individual or of specific groups of individuals;
- Evaluating or ranking individuals or groups on the basis of social scores, leading to disadvantageous or unfavourable treatment in unrelated social contexts or in a manner disproportionate to the social behaviour;
- Using “real-time” remote biometric identification in publicly accessible spaces, except for:
  - Targeted searches for victims of kidnapping, trafficking or exploitation, or for missing persons;
  - Preventing imminent threats to life or serious harm;
  - Identifying suspects of serious crimes.
Supervision and Enforcement
The AI Act establishes a governance structure to ensure compliance:
- National supervisory authorities: designated by each Member State to enforce the Regulation at national level;
- European Artificial Intelligence Board: coordinates the national authorities and ensures consistent application across Europe;
- Market surveillance authorities: monitor the market’s compliance with the Regulation.
Regulatory Sandbox
The law introduces a “regulatory sandbox”, which is described as “a managed framework, set up by a competent authority, offering providers or prospective providers of AI systems the opportunity to develop, train, validate and test innovative AI systems in real-world conditions, where appropriate, for a limited period of time and in accordance with a sandbox plan under regulatory supervision”.
These sandboxes will allow controlled experimentation and testing of AI systems under regulatory supervision, encouraging innovation while deepening providers’ understanding of, and ensuring compliance with, EU law.
The EU is also setting up physical and virtual Testing and Experimentation Facilities (TEFs) to carry out large-scale AI testing in sectors such as agri-food, health, manufacturing and smart cities.
Control and Sanctions
The Regulation sets out a graduated sanctions regime based on the severity and nature of the violation and the operator’s turnover, with fines of up to €35 million or 7% of worldwide annual turnover for the most serious infringements. The approach aims to be a balanced deterrent that takes into account the interests of SMEs and start-ups. Member States have discretion to set sanctions within the limits established by the EU, and the Commission issues directives, mandates and guidelines to support the standardisation process.
Criticism
Initial criticism of the AI Act included:
- Ambiguity regarding the roles and responsibilities of the different actors, especially with regard to open-source AI models;
- The need for stronger consumer protection, including a broader definition of AI systems and basic principles and obligations applicable to all AI systems;
- A lack of provisions addressing systemic sustainability risks, potentially rendering the rules on prohibited or high-risk practices ineffective;
- Insufficient enforcement structures and coordination between the relevant authorities.
Next steps
The approval of the AI Act marks a historic milestone, making the EU the first jurisdiction to implement a comprehensive legal framework for AI. The Act is expected to be published in the Official Journal of the EU in the coming days and will enter into force 20 days after publication.
The rules of the AI Act will apply in phases: the prohibitions on unacceptable-risk systems will take effect six months after entry into force, and the governance rules and the obligations for general-purpose AI models after 12 months. The Regulation will become generally applicable, including most rules for high-risk systems, after 24 months, while obligations for high-risk systems embedded in regulated products will apply after 36 months.
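For illustration, the phased schedule can be sketched as a simple date calculation. The entry-into-force date used below is a placeholder assumption, since the exact date depends on publication in the Official Journal; only the month offsets come from the text above.

```python
from datetime import date


def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months (day-of-month 1, so no clamping needed)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)


# Placeholder entry-into-force date, purely for illustration.
entry_into_force = date(2024, 8, 1)

milestones = {
    "prohibitions on unacceptable-risk systems": add_months(entry_into_force, 6),
    "governance rules and GPAI obligations": add_months(entry_into_force, 12),
    "general applicability": add_months(entry_into_force, 24),
    "high-risk systems in regulated products": add_months(entry_into_force, 36),
}

for milestone, applies_from in milestones.items():
    print(f"{applies_from.isoformat()}: {milestone}")
```

Each milestone is an offset from entry into force, not from the Act’s approval, which is why the 20-day publication delay matters for the concrete dates.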