Key Points
- On May 21, 2024, the Council of the European Union gave final approval to the Artificial Intelligence Act, whose provisions will take effect in stages over the next 36 months.
- The AI Act defines AI systems very broadly and sets out the obligations of providers, deployers, importers and distributors, regardless of their geographic location, when they sell AI systems in the EU, provide services to users of AI systems in the EU, or use the “output” of AI systems in the EU.
- The Act classifies AI systems into four categories based on the risk they pose, with greater obligations attaching to greater risks. Importers and distributors have separate, specific responsibilities.
- Violations can draw significant fines comparable to those under the GDPR.
- The Act creates a complex governance system, requiring member states to appoint national supervisory authorities.
The newly approved Artificial Intelligence Act (AI Act or the Act) aims to create a safe and trustworthy environment for the development and use of AI in the European Union.
The Act, approved by the Council of the European Union on May 21, 2024, is a world first and could set a global standard for AI regulation, much as the General Data Protection Regulation (GDPR) did for privacy. Other jurisdictions have enacted laws regulating AI, but they are narrower in scope; examples in the United States include the New York City AI Bias Act and the Colorado Artificial Intelligence Act.
Companies that use AI in any form should consider preparing for the new law by assessing the risk level of their use and meeting risk management, oversight and other obligations.
Scope and Classification of AI Systems
Broad Scope
The AI Act broadly defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
The AI Act applies to providers, deployers, importers and distributors of AI systems, regardless of their location, if they do any of the following in the EU:
- sell AI systems;
- provide services to users of AI systems; or
- use the “output” of AI systems.
Exceptions include AI systems used for scientific research and development, or for personal non-professional activities. In the healthcare, pharmaceutical and life sciences sectors, it will be particularly interesting to see how the scientific research exception is interpreted.
Furthermore, AI systems provided under free and open-source licenses are exempt from the AI Act, except where they are placed on the market as high-risk or unacceptable-risk AI systems, or as certain AI systems subject to transparency obligations, such as AI systems used in medical devices or law enforcement.
Risk-Based Classification
The AI Act takes a risk-based approach and classifies AI systems into four categories:
- Unacceptable risk: systems considered to pose a threat to individuals and to violate EU fundamental rights and values (e.g., non-discrimination, data protection, the right to privacy). The AI Act does not further define what makes an AI system’s risk unacceptable, but gives some examples, such as social scoring systems (which classify individuals based on data about their social behavior), real-time remote biometric identification, and systems that manipulate behavior. These systems are prohibited.
- High risk: systems that may pose a high risk to the safety, fundamental rights and freedoms of individuals or society, such as systems that are safety components of products or that are used in certain areas such as law enforcement, immigration and education. This category includes, for example, employment tools used in recruitment, systems used to determine creditworthiness, and medical devices. A comprehensive set of requirements applies to these systems.
- Limited risk: systems that may confuse or deceive users, such as chatbots and deepfakes. Transparency obligations apply to these systems.
- Minimal risk: systems exempt from the AI Act because they pose minimal or no risk. Text generators are an example.
Obligations for High-Risk AI Systems
The AI Act introduces significant obligations for high-risk AI systems, with responsibilities that vary depending on the actor’s role.
Providers of high-risk AI systems (those who develop an AI system, or have one developed, that they plan to place on the market or put into service under their own name or trademark, whether for payment or free of charge) have the greatest responsibilities, including:
- establish a risk management system;
- ensure data quality;
- maintain technical documentation;
- implement human oversight;
- achieve standards for accuracy, robustness and cybersecurity;
- set up post-market monitoring; and
- register their AI systems.
Deployers of high-risk AI systems (those using an AI system under their own authority, except where the system is used in the course of a personal, non-professional activity) have fewer obligations, mainly relating to appropriate use and oversight.
Importers and distributors also have their own obligations: for example, they must ensure that the product or software bears the required CE marking indicating that the AI system complies with the AI Act and other applicable EU laws.
If an importer, deployer or distributor applies its own trademark to an AI system, substantially modifies the system, or uses it for a high-risk purpose not intended by the provider, it will itself be categorized as a provider and become subject to the obligations the Act imposes on providers of high-risk systems.
Timing
The AI Act will enter into force 20 days after its publication in the Official Journal of the EU, which is expected in June 2024. Specific provisions will then take effect in stages over the following three years.
The key milestones after entry into force, and the corresponding dates, are as follows:
- 6 months (December 2024): Restrictions on prohibited AI practices take effect.
- 12 months (June 2025): Rules on general-purpose AI take effect.
- 24 months (June 2026): Requirements for high-risk AI systems take effect.
- 36 months (June 2027): Rules on high-risk AI systems used as safety components in products take effect.
Governance
The AI Act establishes a complex framework for oversight and enforcement involving authorities both at the EU level (such as the European AI Office) and at the national level (notably market surveillance authorities), which could result in organizations being subject to investigations and enforcement actions in multiple EU jurisdictions simultaneously.
This contrasts with the GDPR, which generally permits organizations operating in multiple EU member states to deal with a single lead supervisory authority.
Stiff Penalties for Violations
Violations of the AI Act can result in significant fines that vary depending on the nature of the violation and the size of the organization. Violations involving prohibited AI systems can result in fines of up to €35 million ($38.1 million) or 7% of global turnover, whichever is higher. Other violations of the Act’s obligations can result in fines of up to €15 million or 3% of global turnover.
Furthermore, providing false information can result in fines of up to €7.5 million or 1.5% of global turnover.
Unlike the GDPR, the AI Act does not include an individual civil right of action.
What To Do
Given the breadth of the AI Act’s requirements, organizations should consider starting to prepare for the day the law takes effect, even if that day is not imminent. The key steps below can help organizations navigate these changes successfully.
- Identify AI systems: Start by cataloging the software and hardware products (both internal and external) used within or provided by your organization, and assess which fall under the definition of an “AI system.”
- Assess whether the Act applies: For each identified AI system, check whether it falls within the broad scope outlined in the AI Act (e.g., whether the system is provided to users in any member state).
- Classify systems: Classify AI systems according to the Act’s regulatory hierarchy, recognizing that only a subset will be classified as prohibited or high risk.
- Determine your organization’s role: Understand the specific requirements your organization must meet for these AI systems. For high-risk systems, identify your organization’s role (provider, deployer, etc.) to determine your obligations.
- Develop a compliance plan: A comprehensive plan will help ensure compliance with these obligations and integrate them seamlessly into a broader compliance framework.
It is important to remember that the AI Act does not cover all AI systems. AI systems outside its scope will continue to be governed by other frameworks, such as the GDPR and consumer protection and intellectual property laws, which also apply to AI systems that do fall within the scope of the AI Act.
Furthermore, the AI Act is likely to be complemented by the proposed AI Liability Directive (AILD) and a new Product Liability Directive (PLD).
- AILD: While the AI Act contains no provisions on liability for damages claims, the AILD would provide more certainty on liability by creating a rebuttable presumption that defects in an AI system are the responsibility of the developer. Critics, however, question how it can be established that an AI system has malfunctioned and is defective.
- PLD: The new PLD, which the EU Council is expected to adopt later this year, aims to modernize the existing rules on manufacturers’ strict liability for defective products. It gives individuals the right, based on strict liability, to claim compensation from manufacturers for damage suffered as a result of a product defect, and creates a framework that makes it easier for individuals to assert and enforce such claims.
This memorandum is provided by Skadden, Arps, Slate, Meagher & Flom LLP and its affiliates for educational and informational purposes only and is not intended to be, and should not be construed as, legal advice. This memorandum may be considered advertising under applicable state law.