The European Union (EU) is leading the global push for AI regulation, but U.S. companies are not exempt from EU regulations and should prepare their AI risk mitigation efforts accordingly.
“The EU is taking a very regulatory approach to AI,” said Edward Turtle, an associate in the London office of law firm Cooley. At its heart is “an entirely new, bespoke AI regulatory regime,” commonly known as the AI Act.
On May 21, 2024, the Council of the European Union gave final approval to the AI Act, following the European Parliament’s vote to adopt the legislation on March 13. The Council’s final vote paves the way for the act to be formally signed and published in the Official Journal of the EU.
Underscoring the AI Act’s importance, Turtle pointed out that “it is the first law in the world to specifically regulate AI technology across all fields.”
From a legal and compliance perspective, the AI Act’s extraterritorial scope means it governs AI systems deployed by companies located anywhere in the world, including U.S. companies, so long as those systems affect users in the EU.
Moreover, even companies whose AI systems don’t affect EU users should be vigilant, as the AI Act’s impact could spill over into other countries that use it as a regulatory blueprint. “Many predict that EU AI regulation will be influential in determining how AI systems are regulated in other parts of the world, including the U.S.,” Turtle said.
High-risk AI systems
As outlined by the European Commission, the AI Act will, in part:

- address risks specifically created by AI applications;
- prohibit AI practices that pose unacceptable risks;
- determine a list of high-risk applications;
- set clear requirements for AI systems used in high-risk applications; and
- define specific obligations for deployers and providers of high-risk AI applications.
The AI Act adopts a risk-based approach: the higher the risk an AI application poses, the more compliance obligations companies must meet. The European Commission said AI systems that pose a “clear threat to people’s safety, livelihoods and rights” will be banned.
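To make the tiered structure concrete, here is a minimal Python sketch of how a compliance team might model the act’s risk tiers internally. The tier names mirror the risk categories described in this article, but the obligation lists and function are illustrative assumptions, not text from the regulation.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers under the AI Act, from most to least regulated."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # conformity assessment before sale or use
    LIMITED = "limited"            # transparency obligations (e.g., chatbots)
    MINIMAL = "minimal"            # no new obligations under the act

# Illustrative mapping only -- the binding obligations are spelled out in the act.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy in the EU"],
    RiskTier.HIGH: [
        "conformity assessment before placing on the market",
        "testing, training-data, and cybersecurity requirements",
        "post-market surveillance and corrective measures",
    ],
    RiskTier.LIMITED: ["disclose machine interaction", "label AI-generated content"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the compliance tasks a system at a given tier triggers."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```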
Examples of AI systems identified as high risk include AI used in critical infrastructure that could put the life and health of citizens at risk, product safety components such as AI applications in robot-assisted surgery, and credit scoring that denies certain citizens the opportunity to obtain a loan. These are just a few examples; the act identifies several other ways AI can pose high risk.
A report published by the European Parliament explains that companies offering high-risk AI systems will have to carry out a “conformity assessment procedure” before their products can be sold or used in the EU. They will have to comply with a range of requirements covering, among other things, testing, training data, and cybersecurity. In some cases, they will also have to carry out a “fundamental rights impact assessment” to ensure their systems comply with EU law.
Conformity assessments would have to be carried out based on self-assessment or with the involvement of notified bodies. Compliance with harmonized European standards, which have yet to be developed, would give high-risk AI system providers a “presumption of conformity,” the report said. “Once such AI systems are placed on the market, providers would have to carry out post-market surveillance and take corrective measures, if necessary,” the European Parliament report explained.
The AI Act also introduces “limited risk” transparency obligations: if companies use chatbots, for example, users should know they are interacting with a machine so they can make an informed decision about whether to continue, the Commission has recommended.
The Commission further recommended that providers “ensure that AI-generated content is identifiable.” In addition, the AI Act requires that AI-generated text published to inform the public on matters of public interest, as well as “audio and video content that constitutes deepfakes,” be labeled as “artificially generated.”
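As a rough illustration of what these transparency obligations could look like in practice, the following Python sketch discloses machine interaction at the start of a chat and attaches a machine-readable “artificially generated” label to the output. The disclosure wording, metadata field names, and the generate_reply helper are hypothetical, not prescribed by the act.

```python
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def generate_reply(user_message: str) -> str:
    """Stand-in for the actual model call (hypothetical helper)."""
    return f"Echo: {user_message}"

def chatbot_turn(user_message: str, first_turn: bool) -> dict:
    """Return a reply plus metadata labeling the content as AI-generated."""
    reply = generate_reply(user_message)
    if first_turn:
        # Limited-risk transparency: tell the user up front it's a machine.
        reply = f"{AI_DISCLOSURE}\n\n{reply}"
    return {
        "text": reply,
        # Machine-readable label so downstream systems can flag the content.
        "metadata": {"artificially_generated": True},
    }

print(chatbot_turn("Hello", first_turn=True)["text"])
```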
Heavy fines
Fines for non-compliance vary depending on the type of violation, but in every case they are costly. For prohibited uses of AI systems, the most serious category, companies can be fined up to €35 million or up to 7% of the company’s worldwide annual turnover for the previous financial year, whichever is higher.
Other, less serious AI violations may be penalized with fines of up to €15 million or 3% of the company’s annual worldwide turnover for the previous financial year, whichever is higher. Supplying inaccurate or misleading information to authorities may be penalized with fines of up to €7.5 million or 1% of the company’s annual worldwide turnover, whichever is higher.
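Because each cap applies on a “whichever is higher” basis, maximum exposure is straightforward to compute. This short Python sketch applies the three tiers from the figures above; the tier labels are informal shorthand, not terms from the act.

```python
# (fixed cap in euros, share of worldwide annual turnover), per the figures above
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "other_violation": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(violation: str, annual_turnover_eur: float) -> float:
    """Maximum fine: the fixed cap or the turnover share, whichever is higher."""
    fixed_cap, turnover_share = FINE_TIERS[violation]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# A firm with EUR 2 billion in turnover: 7% (EUR 140 million) exceeds the EUR 35 million cap.
print(f"{max_fine('prohibited_practice', 2_000_000_000):,.0f}")  # 140,000,000
```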
AI Act compliance obligations
In light of the AI Act’s passage, and taking its provisions into account, businesses may want to begin with some basic measures, such as:
- Assess whether, where, and how your company is using AI. Make this a cross-functional effort, gathering input from key stakeholders across the business (a sketch of an internal AI inventory follows this list).
- Determine whether your use of AI systems falls into a high-risk or limited-risk category.
- Carry out a “conformity assessment procedure” where the AI Act requires one.
- Consider whether a “fundamental rights impact assessment” is needed; companies in the banking and insurance sectors in particular may need one to “effectively ensure that fundamental rights are protected,” the AI Act states.
- Develop appropriate risk-based policies and procedures for employees and third-party vendors, and train and communicate on those new policies and procedures.
- Define specific obligations for deployers and providers of high-risk AI applications.
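As a starting point for the first item above, here is a minimal Python sketch of an internal AI-system register. All field names and the example entry are illustrative assumptions about what a compliance team might track, not requirements taken from the act.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in a cross-functional AI inventory (illustrative fields)."""
    name: str
    business_owner: str
    use_case: str
    affects_eu_users: bool  # triggers the act's extraterritorial scope
    risk_tier: str          # e.g., "high", "limited", "minimal"
    conformity_assessment_done: bool = False

# Hypothetical example entry
inventory = [
    AISystemRecord(
        name="loan-scoring-v2",
        business_owner="Consumer Credit",
        use_case="credit scoring for loan applications",
        affects_eu_users=True,
        risk_tier="high",  # credit scoring is cited as high-risk above
    ),
]

# Flag high-risk systems that still need a conformity assessment before EU deployment.
todo = [s.name for s in inventory
        if s.risk_tier == "high" and not s.conformity_assessment_done]
print(todo)  # ['loan-scoring-v2']
```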
Legal counsel and compliance officers also need to stay up to date on the latest AI regulations and industry-leading standards. One example is the National Institute of Standards and Technology’s AI Risk Management Framework (AI RMF) and the recently published draft AI RMF Generative AI Profile. According to NIST, the profile is intended to “help organizations identify the unique risks posed by generative AI and suggest generative AI risk management actions that best align with organizational goals and priorities.”
“Compliance with international AI and life sciences regulations will be key to mitigating liability risks,” Turtle concluded, “which means it will be more important than ever to remain compliant with increasingly complex global regulations.”