
Ethical guidelines – which values are AI based on?

Regardless of these legal aspects, however, AI also raises ethical questions: for example, concerning potential bias and discrimination in AI decision-making, or concerning values such as self-determination, avoidance of harm, fairness and accountability, which must be taken into account when an algorithm is developed.

The Commission tasked the High-Level Expert Group on Artificial Intelligence with developing ethics guidelines, which were presented as part of Digital Day 2019.1 One of the basic principles of the guidelines is the development of integrated ethics, which would see ethical principles taken into consideration from the very beginning of the design process of AI products and services.2

Now that the main requirements for trustworthy AI have been identified, the Commission intends to launch a pilot phase in summer 2019 to gather feedback on how to improve the assessment list. Based on the feedback received, the High-Level Expert Group will review the assessment list in early 2020 and, if necessary, propose next steps to the Commission.

Artificial intelligence will also play an increasingly significant role in the field of social security. The examples given show clearly that AI can be applied across a wide range of social insurance tasks in ways that benefit insured persons. But the limitations and risks of using AI must not be ignored. A legal and ethical framework for the use of AI, together with transparency, accountability and effective monitoring of decisions taken by AI, can help build acceptance and trust among insured persons.