Agreement in the trilogue on the regulation of AI

VS – 12/2023

Negotiators from the European Parliament and the Council reached a compromise on the EU's planned regulation of AI systems on Friday, 8 December, after a marathon 36-hour negotiating session. The compromise follows the risk-based approach set out in the European Commission's (EC) draft law of 21 April 2021. The points of contention in the final phase were the regulation of so-called foundation models, such as the one underlying ChatGPT, and the scope for biometric surveillance.

Risk-based approach

The vast majority of AI applications will fall into the minimal-risk category and will not have to fulfil any specific requirements. They include AI-supported recommendation systems or filters. Strict requirements, by contrast, will apply to AI systems categorised as high-risk. These requirements concern risk-mitigation systems, data-set quality, documentation, information for users and cybersecurity, as well as mandatory human oversight. High-risk AI systems include those used in critical infrastructure, law enforcement, border control and the administration of justice.

Dangerous practices that entail an unacceptable risk, such as using AI to manipulate free will or for social scoring, will generally be prohibited. Emotion recognition will also be banned in the workplace and in educational institutions.

Transparency obligations

Providers must design their systems so that synthetic content, such as audio, video, text and images, is marked in a machine-readable format as artificially generated or manipulated and can be recognised as such. When users interact with AI systems such as chatbots, it must be made clear to them that they are dealing with a machine. Deepfakes and other AI-generated content must therefore be labelled as such, and users must also be informed whenever systems for biometric categorisation or emotion recognition are used.

Disclosure of the data sources used

Special rules will apply to large so-called foundation models, which are trained on vast amounts of data and can be adapted to a wide range of tasks. This is intended to ensure transparency along the value chain. A new European Office for Artificial Intelligence will also be set up within the EC. Together with the responsible national market surveillance authorities, it will monitor how the new rules for AI models are implemented and enforced.

Next steps

The political agreement reached in the trilogue must now be formally confirmed by the European Parliament and the Council. The AI Act will enter into force 20 days after its publication in the Official Journal of the European Union and will apply two years later. The exceptions: the prohibitions will apply after six months, and the rules for large foundation models after twelve months.