ed* No. 02/2024

What are the implications of the AI Act for social insurance?

ed* No. 02/2024 – Chapter 4

The use of AI systems by social insurance institutions has so far been limited to rather simple systems that support and relieve employees in their day-to-day work; the final decision is always made by a human. Yet the AI Act may require action even for such lower-risk AI systems. For example, AI systems already in use must be made transparent to users, particularly for applications that involve human interaction, such as AI chatbots.


In contrast, the category of high-risk AI systems does not yet include any application used by individual social insurance institutions. This could change in the future, however, if machine learning is used to review benefit claims, for instance. In the Netherlands, an AI project is under development that is intended to predict changes in the living conditions of insured persons. Automated notifications would serve as a reminder to check for changes so that benefits can be adapted to living conditions and personalised more precisely. This would also reduce the administrative burden on insurance institutions.1

In Belgium and Estonia, automated decision-making in the area of unemployment insurance is already partially permitted.

Estonia and Belgium have gone even further, as examples from the area of unemployment insurance show. In Estonia, the law allows automated decision-making to grant or reject unemployment benefits: an AI system checks the accuracy of information in applications and issues both negative and positive notices. In Belgium, the verification of unemployment benefit applications is also partially automated. However, only positive notices are issued on the basis of AI; rejection notices are checked by a human employee before delivery.
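The Belgian routing rule described above – automated positive notices, human review before any rejection – can be sketched in a few lines. This is a purely illustrative sketch; the function and label names are invented for this example and are not taken from any real system:

```python
def route_decision(model_approves: bool) -> str:
    """Illustrative routing rule modelled on the Belgian practice:
    a positive assessment may be issued automatically as a notice,
    while a negative assessment is never sent directly but is
    queued for review by a human case worker."""
    if model_approves:
        return "issue_positive_notice"
    return "queue_for_human_review"

# A rejection is never delivered without a human check:
print(route_decision(False))  # queue_for_human_review
```

The design point is that automation is asymmetric: only the outcome that benefits the applicant bypasses human review.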


The selected examples clearly show that AI is developing rapidly and that its range of applications in the field of social security is expanding just as quickly. Against this background, social insurance institutions, as operators of AI systems, will have to engage continuously with the provisions of the AI Act from now on.

Challenges of AI in social insurance

In principle, the use of AI in the administration of public services – from simple AI bots to systems that assist with benefit decisions – requires the employment of AI experts. This is because maintaining AI systems is more complex than maintaining conventional software: they must be continuously maintained and updated. On the one hand, this follows from the need to comply with legal requirements such as those arising from the AI Act. On the other, the AI must be kept under observation so that any discrimination can be counteracted, especially when sensitive data relevant to social insurance is involved.
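The continuous maintenance mentioned here includes monitoring a deployed model for drift. A minimal sketch, assuming the simplest possible check – comparing the approval rate of a live batch of decisions against a historical baseline; the tolerance value and names are illustrative assumptions, not requirements of the AI Act:

```python
def approval_rate(decisions):
    """Share of positive decisions in a batch (True = approved)."""
    return sum(decisions) / len(decisions)

def drift_alert(baseline_rate, live_decisions, tolerance=0.05):
    """Flag a batch whose approval rate deviates from the historical
    baseline by more than the tolerance, so that staff can investigate
    whether the model or the incoming data has changed."""
    return abs(approval_rate(live_decisions) - baseline_rate) > tolerance
```

In practice such monitoring would cover far more than one rate, but even this toy check illustrates why AI systems need ongoing attention that conventional software does not.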


A second key challenge in the use of AI in social insurance concerns moral considerations regarding values, standards and principles. Not without good reason does the AI Act classify AI systems that assist, for example, in evaluating benefit or reimbursement claims as high-risk, as their information and decisions have a direct impact on people's access to essential services. This must be addressed by ensuring ethical decision-making, which requires a prior risk assessment, transparency about the use and operation of the AI system, and a guarantee of its safety for the people concerned. Any bias in the decision-making process itself must be ruled out; training the underlying AI model on suitable data is crucial for this.
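One common way to look for the kind of bias described above is to compare approval rates across groups of applicants. A minimal sketch of such a check, assuming decisions are recorded as (group, approved) pairs – the metric shown is the demographic parity gap, one of several established fairness measures, and the names here are illustrative:

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group_label, approved) pairs.
    Returns the approval rate per group."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in approval rates between any two groups.
    A large gap does not prove discrimination, but it flags the
    system for human investigation."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())
```

Such a check is only a starting point: a gap can have legitimate causes, which is precisely why the AI Act pairs statistical safeguards with human oversight.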