The EU Artificial Intelligence Act (AI Act) was adopted by the Council of the European Union on May 21, 2024. It was formally published in the Official Journal of the European Union in July and is expected to enter into force in August of this year, rather than in July as originally envisaged. In the meantime, however, many legal rules relevant to artificial intelligence (AI) are already in place.
The delay also pushes back, by a few weeks, the European Commission's internal timeline for preparing the implementation of the AI Act, which had envisaged the regulation entering into force in June or July. The AI Act will become fully applicable 24 months after its entry into force. The prohibition of AI systems posing an unacceptable risk, however, will apply just six months after entry into force, meaning that companies will likely no longer be able to use banned AI technologies after February 2025. Violations will be subject to heavy fines, comparable to those under the General Data Protection Regulation (GDPR).
Because the GDPR already applies, the use of AI involving personal data requires immediate attention, and the recommendations of the German Data Protection Conference (DSK) must already be observed. For example, the guidance "Artificial Intelligence and Data Protection – Version 1.0," published by the DSK on May 6, 2024, sets out legal requirements that apply to all AI applications in which personal data is processed by or through AI.
For example, according to the DSK, data controllers must themselves verify whether and to what extent the AI applications they use have been lawfully trained. In particular, when selecting and using generative AI systems, covered companies must check and document whether the systems they deploy or develop have been trained in accordance with applicable (data privacy) law. Where companies do not train the AI systems themselves, the DSK requires them to verify that the systems do not generate erroneous results, a requirement that is difficult to meet in practice.
The DSK requires companies to define the specific areas in which AI applications may be used. It considers that closed systems should be preferred over open (e.g. cloud-based) systems, a preference that poses at least some practical challenges. This applies in particular to AI systems used in connection with legally relevant decision-making (e.g. job application procedures). In all cases, companies should establish internal rules for the use of AI and train their employees accordingly. When AI systems from third-party providers are used, the DSK guidance requires the conclusion of a separate data processing agreement or joint controllership agreement under the GDPR.
Overall, it is clear that data privacy and AI are hardly separable. In practice, companies should bear in mind that only fully irreversible anonymization of data exempts them from the GDPR; mere pseudonymization does not.
Looking ahead, the rules issued by the new AI Office in Brussels, which is tasked with safeguarding a unified European AI governance system, will play a key role in implementing the AI Act. Companies should closely monitor new rules put forward by the AI Office and actively participate in the debate on AI.