The AI Act is expected to enter into force this summer, and policymakers must now turn to implementing this complex law. One of the first steps will be the establishment of an AI Office, a central body mandated by the Act to coordinate its application, conduct AI safety studies, develop codes of practice, and investigate compliance issues. There is a risk, however, that the Office could fall victim to the same bureaucratic obstacles that hamper many other EU institutions and bodies, producing a lengthy and complex administrative burden that would ultimately harm AI innovation in Europe.
To better support companies as Member States apply the Act, the AI Office should prioritize two tasks. First, it should work closely with Member States to support national implementation, paying attention to their specific needs. The AI Office should foster cooperation among Member States on regulatory sandboxes, as well as with the European High Performance Computing Joint Undertaking (EuroHPC JU), an initiative to coordinate and pool supercomputing resources among Member States. Second, the AI Office should develop, as soon as practicable, a code of practice outlining clear obligations for General Purpose AI (GPAI) developers, including watermarking techniques. This code of practice would serve as a placeholder for the harmonized standards that will not be available until Member States fully implement the Act. The AI Office should develop the code iteratively so that it reflects the latest research and best practices. Voluntary adherence to the code would create a presumption of compliance with the Act and go a long way toward reducing regulatory complexity for AI companies in the EU.
Supporting national implementation through the AI Board
The AI Act tasks the AI Office with coordinating its implementation, but it is unclear how the Office will do so. It should leverage its role as the secretariat for the AI Board, the group representing the 27 Member States in which the European Data Protection Supervisor (EDPS) participates as an observer, to liaise among Member States and between them and the EU level.
To achieve this, the AI Office should appoint a liaison officer to manage discussions between the Office and the AI Board. As a non-voting participant, the Office can provide independent advice on how national authorities should implement the Act. First and foremost, the AI Board should serve as a forum where the national authorities tasked with implementation can talk to each other. The AI Office should leverage this connection to provide targeted support, especially to Member States that lack domestic infrastructure and may be unable to roll out the Act on the implementation timeline. Given the structure of the Act and its prohibited and permitted use cases, part of enforcement will likely fall to national sector regulators monitoring those use cases. In coordinating and monitoring implementation across Member States, the Office should therefore respond to specific needs voiced through the AI Board, such as requests for consultation with national sector regulators.
Second, the Office should promote cross-border regulatory sandboxes. The AI Act mandates the coordinated deployment of regulatory sandboxes to foster cutting-edge AI innovation, with at least one per Member State within two years of its entry into force. That timeline is too slow given the pace of AI innovation. The Office should therefore work with the Member States through the AI Board and, by mediating between Member States with complementary regulatory frameworks, set up fewer but more coordinated cross-border sandboxes. This would ensure broader and faster deployment and make the sandboxes more attractive to companies, since they would provide access to a wider range of markets. It would also serve as a testing ground for the broader EU-wide digital single market.
Third, the Office should establish stronger collaboration with the EuroHPC JU to secure computing access for the most promising AI solutions. For example, Member States could nominate the most promising AI solutions through the AI Board, and the AI Office could use these nominations to coordinate adequate access with the EuroHPC JU. Challenge grants could also offer awards that include enhanced access to EuroHPC JU resources.
Iteratively develop a code of practice
The AI Office must develop a code of practice by Q2 2025. The code will serve as the first benchmark for compliance with the AI Act until the EU establishes harmonized standards. Creating it should therefore be a top priority, especially since GPAI developers' voluntary adherence to the code would create a presumption of compliance with the Act and significantly reduce regulatory complexity.
The code should have concrete, measurable goals and key performance indicators (KPIs). Because AI model evaluation, safety, transparency, and explainability are emerging fields, development of the code should be agile and iterative. To promote real-world compliance, the code should be grounded in technical feasibility and developed in close collaboration with a scientific panel of independent experts. The AI Office should conduct consultations to hear the views of industry and other interest groups. Similarly, since the Act applies to foreign companies wishing to operate in the EU market, the Office should engage the international research community and prioritize best practices and immediately actionable solutions.
The AI Act also requires GPAI developers to publish summaries of the data used for model training. Given the tension between protecting trade secrets and providing sufficient information, the AI Office should carefully explore the level of granularity required for these summaries. It should, for example, work with stakeholders including the EDPS, the AI Board, the scientific community, and industry to determine the minimum level of information required for compliance. Striking the right balance matters: unnecessary disclosure of training data could distort fair competition, and companies forced to share trade secrets may shy away from doing business in the EU. There is also a security dimension. It is currently unclear what information would be useful to bad actors, including state actors, and publishing details about these models could inadvertently expose sensitive information. The AI Office should keep these concerns in mind when publishing its requirements for GPAI developers.
Competitiveness Depends on Execution
The implementation of the AI Act comes at a moment of flux for both the industry and the technology itself. The AI Office must champion an innovation-oriented approach to AI governance so that AI solutions have the opportunity to develop and spread through society. Europe's competitiveness depends on this, and on the attitude of the EU's top-level institutions towards emerging technologies. The AI Office should not miss the opportunity to set the direction for the EU's next mandate and lead Europe towards AI innovation through the AI Act.
Image credit: Copyright Flickr/Lisbon Council/Creative Commons.