Meta’s AI assistant isn’t available in the European Union, at least for now: After clashing with the Irish data regulator over privacy concerns, Meta announced earlier this month that it was delaying the release of its AI assistant in the EU.
Meta had been collecting publicly shared content from Facebook and Instagram users around the world to train its large language model (LLM), Llama 3. LLMs are trained on large datasets to generate, summarize, translate and predict digital content. Powered by Llama 3, Meta’s new AI assistant integrates these capabilities into the company’s social platforms.
Meta’s AI assistant was first released in the United States in September 2023, before expanding to Australia, Canada, Ghana, Jamaica, Malawi, New Zealand, Nigeria, Pakistan, Singapore, South Africa, Uganda, Zambia and Zimbabwe. A European rollout was also imminent, but the Irish Data Protection Commission (DPC) demanded (with the implicit threat of fines and further legal action) that Meta stop using social media posts from Europeans to train Llama 3, thwarting Meta’s plans to release its AI assistant in Europe.
While AI regulation remains largely theoretical in the United States, the European Union has taken a much more aggressive stance in recent years with two major instruments: the General Data Protection Regulation (GDPR), which came into force in 2018, and the EU AI Act, which was approved in March 2024. Both regulations apply to all EU member states, with the GDPR also covering Iceland, Liechtenstein and Norway, which belong to the European Economic Area but not the EU.
Described by the EU itself as “the toughest privacy and security law in the world,” GDPR sets standards for the protection of personal data and imposes stiff fines, which can reach tens of millions of euros, on violators. GDPR applies to any organization that processes the personal data of European citizens or residents, including organizations outside the EU like Meta.
Each of the 30 countries to which the GDPR applies operates its own data protection authority responsible for monitoring compliance with the law. These authorities report to the European Data Protection Board, which ensures that the GDPR is applied uniformly across Europe and reviews appeals from parties subject to penalties. The GDPR also grants individuals a range of rights regarding access to, restriction of, and erasure of their personal data, and gives internet users a private right of action to seek damages in civil courts.
The GDPR defines data processing broadly as “any action which is performed on data, whether automated or manual,” and covers the recording, organization, storage, use and erasure of personal information. According to the law, personal data may only be processed:
- where an individual has given explicit consent to the processing of their data (for example, when subscribing to an email list);
- where the processing is necessary for the purposes of a contract, a legal obligation, or to save someone’s life;
- where the processing serves the public interest or some official function; or
- where there are other “legitimate interests” that do not conflict with the “fundamental rights and freedoms of data subjects”, particularly in the case of children.
The GDPR sets out seven principles of data protection and accountability:
- Data processing must be lawful, fair, and transparent to individuals.
- Processing of personal data must be related to the specific, legitimate purpose for which the data was originally collected.
- Data collection must be limited to what is necessary for that specific purpose.
- Processors must keep users’ personal data up to date and rectify inaccuracies “without undue delay”.
- Processors may not store personally identifiable data for any longer than is necessary for the original purpose.
- Processors must ensure “appropriate” security and confidentiality.
- Data processors must be able to demonstrate compliance with all GDPR requirements at any time.
The language of the GDPR is somewhat vague by design. The EU itself acknowledges that the regulation offers “few specifics” and justifies this vagueness as a safeguard against obsolescence. According to the EU, technology evolves quickly, so a certain degree of generality is necessary to keep the law applicable.
The law’s ambiguity gives regulators wide discretion. For example, the Irish DPC stands out as being particularly enthusiastic about enforcing the GDPR. Meta’s European headquarters is in Dublin, so the Irish regulator has taken the lead in bringing charges against the tech giant, as evidenced by this month’s move. According to its 2023 annual report, the Irish DPC accounted for 87% of GDPR fines across the EU, most of which were against Meta for privacy violations.
This contentious dynamic is likely to continue. In a June 14 statement, the DPC declared that it would “continue to engage with Meta on this issue and work with other EU data protection authorities to enforce the GDPR.” Meanwhile, Meta expressed “disappointment” with the DPC’s request, arguing that its LLM needs access to public content shared on social media platforms to “accurately understand important local language, cultural and trending topics on social media.” Meta noted that several competitors, including Google and OpenAI, still train their LLMs with EU users’ data, and stressed that it does not use private posts or messages to train its software.
The DPC’s action has temporarily halted development of Meta’s AI assistant in Europe while the company remains in discussions with regulators. “We remain strongly confident that our approach is in compliance with European laws and regulations,” Meta said in a statement.
What does the EU AI Act do?
The EU AI Act, approved in March 2024, is considered the world’s first comprehensive AI regulation. According to the European Commission, the law restricts certain forms of AI with the aim of ensuring that “AI systems respect fundamental rights, safety and ethical principles.”
The law is primarily aimed at AI developers, but it also covers individuals and organizations that use AI systems in a professional capacity, such as customer service chatbots or websites with personalized shopping recommendations. Like the GDPR, the AI Act applies to organizations that operate in the EU, regardless of where they are based. In contrast to the GDPR’s decentralized regulatory framework, the AI Act will be enforced centrally by the European AI Office.
The EU AI Act classifies AI systems into four risk categories:
- “Unacceptable risk”: prohibited. This covers AI used for manipulation, biometric categorization, building facial recognition databases, or social scoring by public authorities to assess the trustworthiness of individuals, as practiced by the Chinese Communist Party.
- “High risk”: heavily regulated. This applies to AI that profiles individuals (e.g. resume scanners for job applicants) and AI used in critical sectors such as infrastructure, education, employment, law enforcement, and justice.
- “Limited risk”: minimal transparency requirements. This targets general-purpose AI systems (GPAI) such as chatbots and deepfake generators. Providers of such systems must make people aware that they are interacting with AI or viewing AI-generated content.
- “Minimal risk”: largely unregulated. This includes spam filters and AI-enhanced video games.
A large part of the AI Act targets “high-risk” AI systems, requiring providers to build guardrails to monitor risk, ensure accuracy, allow for human oversight, and help “downstream providers” who integrate GPAI into other platforms comply with the Act’s requirements. Providers will also be required to keep detailed records demonstrating compliance for the European AI Office and other enforcement authorities.
For GPAI systems designated as “limited risk”, the regulatory burden is significantly lighter. Providers will still need to document the training process for their GPAI systems, comply with the EU Copyright Directive, and inform “downstream providers” of the system’s capabilities and limitations so that they, in turn, can comply with the Act.
The EU AI Act will be implemented in stages according to risk level. AI systems that pose an “unacceptable risk” will be banned by September, six months after the law’s enactment. “High risk” AI systems will have 24 to 36 months to comply with regulatory requirements, depending on their type, while “limited risk” GPAI systems, like Meta’s Llama 3, will be given just 12 months.
The law has yet to come into force, but tensions are already rising between the new European AI Office and major tech companies. Executives from Amazon and Meta have warned that the regulations could stifle AI research and development in the EU. “We need to make sure that innovation continues to happen and that it doesn’t move outside of Europe,” Amazon’s chief technology officer Werner Vogels told CNN. “Europe has long underinvested in research and development.”
What does this mean for future AI regulation?
The conflict between Meta and the Irish DPC illustrates the ongoing struggle to balance innovation and ethics in the EU’s complex regulatory environment. The need for vast amounts of data to train AI systems runs head-on into EU privacy restrictions, creating a zero-sum battle between regulators and developers. With the GDPR fully implemented and the AI Act on the horizon, more clashes are likely.
Meta is not the only tech company battling regulators over the development of AI capabilities: Apple recently announced it would not offer Apple Intelligence in the EU, citing concerns over the Digital Markets Act.
Michael Frank, founder and CEO of AI consulting firm Seldon Strategies, doubts that the AI Act will truly establish the “global standard” that EU regulators have proclaimed. “I don’t think it will have extraterritorial application,” Frank says. “Either the EU will weaken regulation at the implementation stage, or AI providers will exit the market.”