These developments and examples show that the enormous potential of AI also shifts the balance of power between humans and machines. The resulting feeling of powerlessness can only be countered with transparent AI. Preventing algorithms from taking power may sound like science fiction, but it has long been the subject of discussions in science and politics, including at the European level. The European Parliament has published a resolution entitled ‘A comprehensive European industrial policy on artificial intelligence and robotics’. It calls for people to have a right to know, a right of appeal and a right to redress when AI is used in decisions that may pose a significant risk to, or cause harm to, the rights and freedoms of individuals.1 The underlying fear is that the ability of AI systems to process complex data sets and learn autonomously could lead to decisions that humans can no longer fully understand or control. In the field of social security, this could have considerable consequences for insured persons. What is needed is adequate human control of AI, together with the legal and ethical framework conditions necessary to ensure it.
In its resolution, the European Parliament stresses that the development of AI systems must respect the principles of transparency and accountability with regard to the algorithms used, so that their activities can be understood by humans.2 This poses a problem, however, because it is no longer only the general public that lacks an understanding of how algorithmic systems work. This is the conclusion of a study proposing various policy measures for transparency and accountability.3 It calls for awareness-raising and capacity building so that the functioning of algorithmic systems, and their basic selection and decision-making criteria, can be better understood. Other measures concern transparency and accountability in the use of such decisions in the public sector, including by social security institutions. Here, an algorithmic impact assessment (AIA) would help to disclose where such systems are used and to evaluate their intended use and implementation.
In the private sector, however, the financial and administrative burden of such an impact assessment could be disproportionate, especially for smaller, low-risk applications. The resolution therefore proposes a legal liability framework under which reduced transparency and impact assessment requirements would be offset by more extensive liability. It also proposes establishing a regulatory authority with expertise in the analysis of algorithmic systems, supported by a network of external advisory experts.