New regulations for dealing with AI have come into force.

PP – 02/2025

The initial regulations for dealing with artificial intelligence (AI) have been in force in the EU since 2 February. These regulations set clear standards for the use and development of AI: they cover definitions of AI systems, requirements for AI expertise among employees, and bans on specific AI systems, and are intended to ensure the protection of fundamental rights and ethical principles when dealing with AI.

Risk-based approach to hazard assessment

The AI Act follows a risk-based approach: AI systems are classified according to the risk they pose to the safety and fundamental rights of citizens. AI applications that pose a clear threat to fundamental rights, livelihoods or safety are considered an unacceptable risk and are banned. High-risk AI systems used in critical areas such as healthcare, human resources or justice are permitted subject to strict security and transparency requirements. AI systems that pose a limited risk must be clearly identifiable as such in order to avoid deception. Applications with minimal risk, by contrast, are subject to hardly any regulation.
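The four-tier scheme described above can be sketched as a small lookup, purely for illustration. The tier names, example use cases and their assignments below are assumptions made for this sketch, not the Act's legal definitions.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers of the AI Act's risk-based approach."""
    UNACCEPTABLE = "banned outright"
    HIGH = "permitted under strict security and transparency requirements"
    LIMITED = "must be clearly identifiable as AI"
    MINIMAL = "largely unregulated"

# Hypothetical mapping of example use cases to tiers, for illustration only;
# the actual legal classification depends on the Act's detailed criteria.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "recruitment screening": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Return a one-line summary of the (illustrative) tier and its consequence."""
    tier = EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} risk -> {tier.value}"

print(obligations("social scoring"))
# social scoring: UNACCEPTABLE risk -> banned outright
```

In practice, classifying a real system depends on the Act's detailed criteria and annexes, not on a simple lookup like this.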

Special requirements for companies using AI

Since 2 February, all companies that develop or use AI systems must assess them according to their level of risk and implement appropriate measures to meet the legal requirements. These include transparency obligations that inform users when they are interacting with an AI. Furthermore, businesses are obliged to explain clearly how their AI systems work. High-risk AI systems may not make decisions fully autonomously; they require human oversight. Employees who work with AI systems must also be trained in the safe use of AI technologies. The aim of these requirements is to ensure responsible AI use.

Guidelines covering risk assessment have been published

The European Commission has also published non-legally-binding guidelines that clarify the bans on specific AI systems. For example, emotion recognition in the workplace, social scoring and manipulative AI are all banned; manipulative AI refers to systems that use subconscious stimuli or targeted deception. The guidelines are intended to make it easier for companies to assess the risk of their AI systems and to promote a uniform interpretation of the AI Act throughout the EU.

High penalties for violations of the AI Act

Companies that violate the new regulations face substantial penalties. Depending on the severity of the violation, fines of up to 35 million euros or seven per cent of annual global turnover can be imposed, whichever is higher. This is intended to ensure that AI technologies within the EU are used responsibly and in line with European values.
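The fine ceiling described above can be illustrated with a quick calculation; for the most serious violations the higher of the two amounts applies. The turnover figure below is a made-up example.

```python
def max_fine_eur(annual_global_turnover_eur: float) -> float:
    """Upper bound on a fine for the most serious AI Act violations:
    35 million euros or 7% of annual global turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * annual_global_turnover_eur)

# Hypothetical company with 2 billion euros in annual global turnover:
print(f"{max_fine_eur(2_000_000_000):,.0f} euros")  # 140,000,000 euros
```

For a company with 2 billion euros in turnover, seven per cent (140 million euros) exceeds the 35-million-euro floor, so the percentage-based cap applies.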

Mixed reactions to the new regulations

While some experts welcome the regulations for the legal certainty they create for companies, others express concerns about their scope for interpretation and practical implementation. The requirements pose major challenges, especially for small and medium-sized businesses. The extent to which the regulations will affect the rapid pace of AI development is also under discussion: some are calling for regular updates so that the rules keep pace with technological developments and do not hinder innovation. Overall, the regulations mark a significant step in European AI regulation. It remains to be seen, however, how they will be implemented in practice and what impact they will have on AI development and use in the EU.
