Companies operating in the European Union need to prepare now to comply with the EU’s new AI law.
The global artificial intelligence (AI) landscape changed dramatically when the European Parliament approved the European Union Artificial Intelligence Act (EU AI Act) on March 13, 2024. At first glance, the act resembles the EU’s General Data Protection Regulation (GDPR), passed in 2016, but while the GDPR governs data privacy, the EU AI Act specifically regulates the development and use of AI.
The EU AI Act will reverberate around the world: it encourages a paradigm of pre-emptive regulatory governance rather than purely punitive measures, and it may serve as a precursor to legislation introduced in the United States and other countries.
An EU AI Act compliance program, or AI ethics risk and responsibility program, should be created as a company-wide effort. Designing, implementing, scaling, and maintaining the program requires responsibilities to be assigned to and carried out by a company’s board of directors, executive management, compliance professionals, and team managers.
Building an AI program
When beginning to implement these programs, it can be tempting to focus solely on technology as a solution, but the right programs include a combination of people, process, and technology from the start.
Saskia Vermeer-de Jongh, partner in AI and digital law at HVG Law LLP, part of the EY global law network, said of the law: “It clearly states that building trust in AI starts with ensuring human oversight. It is therefore good to see this value reflected, with the stipulation that safeguards must be matched to the risks, level of autonomy, and context involved in the use of AI.”
Vermeer-de Jongh continues by noting that in order to “protect the opportunities that AI brings,” we need to better understand AI’s potential risks and develop the capacity to manage them effectively. Initiatives and guiding statements from international bodies such as the Organisation for Economic Co-operation and Development (OECD), the G7 AI Principles, and the Bletchley Park Summit are evidence of this: “The detailed legislation of the EU AI Act will provide a degree of clarity and certainty to companies from different sectors in the development and deployment of AI systems.”
The law itself is likely to have global repercussions, as it covers AI systems placed on the market, put into service, or used in the EU. Its requirements generally apply to three roles: providers, deployers, and users.
The act establishes a tiered, risk-based system that determines the level of oversight an AI system requires. The first tier, unacceptable risk, carries an outright ban. The next tier, high risk, requires registration and places the burden of proof on demonstrating that the system does not pose a significant threat to health, safety, or fundamental rights; it covers technologies used in critical infrastructure, education and vocational training, product safety, border control, law enforcement, essential services, the administration of justice, and employment. The final tier covers limited- and minimal-risk systems, which are subject to transparency obligations that ensure people are informed when they are interacting with AI, fostering trust.
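As an illustration only (the tier names and domain mapping below are simplified assumptions for an internal triage tool, not the act’s legal definitions), a compliance team might model these tiers along these lines:

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified risk tiers loosely modeled on the EU AI Act."""
    UNACCEPTABLE = "banned outright"
    HIGH = "registration and conformity obligations"
    LIMITED_MINIMAL = "transparency obligations"

# Illustrative, non-exhaustive set of high-risk domains named in the act;
# real classification requires legal analysis of the act's annexes.
HIGH_RISK_DOMAINS = {
    "critical infrastructure",
    "education and vocational training",
    "product safety",
    "border control",
    "law enforcement",
    "essential services",
    "administration of justice",
    "employment",
}

def triage(domain: str) -> RiskTier:
    """Naive first-pass triage of an AI system by its stated domain."""
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    return RiskTier.LIMITED_MINIMAL

print(triage("employment"))        # RiskTier.HIGH
print(triage("customer chatbot"))  # RiskTier.LIMITED_MINIMAL
```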
The EU AI Act has three broad exceptions. First, systems developed exclusively for military, defense, or national security purposes are exempt. Second, AI developed exclusively for scientific research is exempt. Third, free and open-source AI, where the source code is freely available for anyone to use, modify, and distribute, is exempt.
The EU AI Act sets out a graduated compliance timeline. Prohibitions on banned AI systems take effect six months after the law enters into force. Requirements for general-purpose AI models, including generative AI that can perform a range of tasks either alone or integrated into other applications, apply after 12 months. Finally, requirements for high-risk AI systems take effect after 24 months.
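As a simple planning sketch, assuming a hypothetical entry-into-force date (the actual date is fixed by the act’s publication in the EU’s Official Journal), the staggered deadlines could be computed like this:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add calendar months to a date (day clamped to 28 for simplicity)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, min(d.day, 28))

# Hypothetical entry-into-force date, for illustration only.
ENTRY_INTO_FORCE = date(2024, 8, 1)

deadlines = {
    "prohibited AI systems banned": add_months(ENTRY_INTO_FORCE, 6),
    "general-purpose AI obligations apply": add_months(ENTRY_INTO_FORCE, 12),
    "high-risk AI system requirements apply": add_months(ENTRY_INTO_FORCE, 24),
}

for milestone, deadline in deadlines.items():
    print(f"{milestone}: {deadline.isoformat()}")
```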
The maximum penalty for violating the prohibitions set out in Article 5 of the EU AI Act is an administrative fine of up to €35 million or 7% of total worldwide annual turnover, whichever is higher.
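To make the “whichever is higher” rule concrete, here is a minimal sketch of the Article 5 fine ceiling; it deliberately ignores the act’s detailed criteria for setting actual fines:

```python
def max_article5_fine(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound on Article 5 fines: EUR 35M or 7% of turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# Example: a company with EUR 1 billion in worldwide annual turnover
# faces a ceiling of EUR 70 million (7% exceeds the EUR 35M floor).
print(f"EUR {max_article5_fine(1_000_000_000):,.0f}")
```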
How does this law impact business?
With potentially severe penalties and complex risk criteria, the EU AI Act could have far-reaching implications for business. “Organizations should start preparing now by regularly updating their inventory of AI systems in development or deployment, assessing which AI systems are within the scope of the law, and identifying their risk classifications and associated compliance obligations,” Vermeer-de Jongh says, adding that this is particularly important because the act’s three risk classes each demand a different level of attention.
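A minimal sketch of what one entry in such an AI-system inventory might look like (the field names and example values are assumptions for illustration, not prescribed by the act):

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an internal inventory of AI systems in development or deployment."""
    name: str
    purpose: str
    lifecycle_stage: str           # e.g. "development" or "deployed"
    in_scope_of_act: bool          # placed on the market, put into service, or used in the EU?
    risk_class: str                # "prohibited", "high", or "limited/minimal"
    compliance_obligations: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="resume-screening-model",
        purpose="employment: candidate shortlisting",
        lifecycle_stage="deployed",
        in_scope_of_act=True,
        risk_class="high",
        compliance_obligations=["registration", "human oversight", "data quality"],
    ),
]
```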
Additionally, organizations will need to thoroughly understand the act’s many requirements, risks, and opportunities, and then review, evaluate, and adjust their current AI strategies accordingly, she explains. “Companies will also need to train AI users, maintain transparency in their AI operations, ensure high-quality datasets are used to develop AI systems, and maintain robust privacy standards.”
Vermeer-de Jongh also recommends consulting legal and technical experts to help navigate the compliance process: “Finally, companies should put in place appropriate accountability and governance frameworks and keep appropriate documentation ready for when the EU AI Act comes into force. AI regulations are evolving, so companies need to stay up to date on changes to stay compliant.”
Although the EU AI Act is the first comprehensive law to regulate AI, it will not be the last, which means it is important for companies to have business and risk mitigation plans in place, regardless of their location.
“Different regions have adopted very different strategies on AI policy, reflecting diverse cultural approaches to regulation,” says Vermeer-de Jongh, “but there are some general trends in the EU AI Act, including consistency with the core principles for AI developed by the OECD and endorsed by the G20.” These core principles include respect for human rights, sustainability, transparency, and strong risk management, among others.
“While comprehensive legislation is not expected in the US in the near term, there is growing consensus on the need to limit bias, strengthen data privacy, and mitigate the impact of AI on the US workforce.”