High-Level Expert Group publishes policy and investment recommendations.

SW – 07/2019

On 26 June 2019, the High-Level Expert Group set up by the European Commission presented its recommendations on policy and investment for trustworthy Artificial Intelligence (AI). The experts made 33 recommendations that aim to steer trustworthy AI towards sustainability, growth, competitiveness and inclusion, while strengthening, promoting and protecting people.


The experts generally recommend a risk-mitigation approach to regulating AI, taking into account proportionality and the principle of prevention of harm. The greater the impact and/or the higher the probability of a risk caused by AI, the stronger the regulation should be. The term ‘risk’ should be defined broadly so as to cover all adverse effects on both individuals and society.

For AI applications that present an unacceptable risk, however, the experts believe that the principle of prevention of harm should apply: regulators should take precautionary measures where, for example, there is scientific evidence of a risk to the environment or to human health. Exactly what constitutes an unacceptable risk should be discussed and decided following an open, transparent and accountable societal debate, taking into account the European legal framework, including the Charter of Fundamental Rights of the European Union.


AI will also become increasingly important in the field of social security, whether for medical care or for the administration of services and benefits; this will make it necessary to examine the opportunities and risks involved as well as the underlying ethical and legal framework.


The published recommendations supplement the ‘Ethics Guidelines for Trustworthy AI’ published in April and do not claim to be exhaustive; the experts intend to address only the most urgent areas for action. The policy and investment recommendations for trustworthy AI are addressed to the EU institutions and the Member States, which the expert group believes have a key role to play in achieving these objectives as key actors in the data economy, as procurers of trustworthy AI systems and as benchmarks for sound governance.

Background

In its Communication ‘Artificial Intelligence for Europe’, the European Commission set out its vision for trustworthy and human-centric AI. The aims of the Commission’s initiative are to:


  • boost public and private investment in AI in order to increase its uptake,
  • prepare for socio-economic changes resulting from AI,
  • ensure an appropriate ethical and legal framework to protect and strengthen European values.


The Commission had already set up the High-Level Expert Group on Artificial Intelligence for this purpose. The Expert Group supports the implementation of the European strategy on Artificial Intelligence by developing ethics guidelines as well as policy and investment recommendations related to AI.


In addition to the ethics guidelines published in April, the European Commission has launched a piloting process to give all interested parties the opportunity to provide feedback on the assessment list, which sets out the key requirements for trustworthy AI. More than 300 stakeholders have already registered for the pilot, which runs until 1 December 2019. In early 2020, the Expert Group will review the assessment list on the basis of the results and, if necessary, propose further steps to the Commission.