Implementation of the AI Act
New EU Artificial Intelligence Board meets for the first time.
HS – 09/2024
The EU
has set itself the goal of tapping the full potential of artificial
intelligence (AI), while ensuring its safe and reliable application. This is to
be achieved primarily through the Regulation
laying down harmonised rules on artificial intelligence (AI Act), which
came into force on 1 August 2024. Following the adoption of the AI Act, the new European
Artificial Intelligence Board (AI Board) met officially for the first time on
10 September 2024. Articles 65 and 66 of the AI Act form the basis for the
establishment of the AI Board.
Composition of the AI Board
The AI
Board comprises one representative from each EU Member State. Moreover, the
European Data Protection Supervisor is part of the AI
Board as an observer. Representatives of the European AI Office also take part
in meetings of the AI Board, but do not participate in voting. Other
authorities or bodies of the EU or the Member States may be invited to
individual meetings of the AI Board where the issues discussed are relevant to them. The AI Board is
chaired by a representative of the Member States.
Organisation of the AI Board
There
are two permanent subgroups in the AI Board, which provide a platform for
cooperation and exchange between national market surveillance authorities and
notifying authorities on matters relating, respectively, to market surveillance
and notified bodies. In Germany, the Federal Network Agency has been appointed
as the market surveillance authority and the German Accreditation Body as the
notifying authority.
In addition, the AI Board can set up further permanent or temporary subgroups
as required.
Tasks of the AI Board
According
to the AI Act, the AI Board is to support and advise the Commission and the
Member States, so as to facilitate the harmonised and effective application of
the law. With this aim in mind, the AI Board can, for example, issue
recommendations and written opinions on relevant issues relating to the
implementation and application of the AI Act, including the annual review of
the list of applications classified as high-risk AI systems, which are subject to
particularly strict regulation under the AI Act.
Impact on social insurance
AI
systems that are used in the area of access to and use of essential
private and public services and benefits are classified as high-risk AI
systems. This applies, for example, to AI systems that regulate access to
healthcare or rehabilitation services. The requirements for such systems include
the establishment of a risk management system as well as quality management and
information obligations. Applications used by individual social insurance
agencies do not yet fall into this category of high-risk AI systems.
Nevertheless, this could change in the future if, for example, machine learning
methods were used to assess a benefit claim.