On June 26, 2019, the EU High-Level Expert Group on Artificial Intelligence (AI HLEG) announced two important developments: (1) the launch of the pilot phase of the assessment list in its Ethics Guidelines for Trustworthy AI (the “Ethics Guidelines”); and (2) the publication of its Policy and Investment Recommendations for Trustworthy AI (the “Recommendations”).

The AI HLEG is an independent expert group established by the European Commission in June 2018.  The Recommendations are the second deliverable of the AI HLEG; the first was the Group’s Ethics Guidelines of April 2019, which defined the contours of “Trustworthy AI” (see our previous blog post here).  The Recommendations are addressed to policymakers and call for 33 actions to ensure that the EU and its Member States enable, develop, and build “Trustworthy AI” – that is, AI systems and technologies that reflect the AI HLEG’s now-established Ethics Guidelines.  Neither the Ethics Guidelines nor the Recommendations are binding, but together they provide significant insight into how the EU or Member States might regulate AI in the future.

Throughout the remainder of 2019, the AI HLEG will undertake a number of sectoral analyses of “enabling AI ecosystems” — i.e., networks of companies, research institutions and policymakers — to identify the concrete actions that will be most impactful in those sectors where AI can play a strategic role.

Pilot phase of Assessment List of Ethics Guidelines

The Ethics Guidelines of April 2019 included a checklist for stakeholders to use when assessing whether an AI system is “Trustworthy.”  In the current pilot phase, stakeholders are invited to test this assessment list and provide feedback through the European AI Alliance via an online survey that will be available until December 1, 2019.  The AI HLEG will use this feedback, along with information collected from interviews with selected representatives from the private and public sectors, to prepare a revised version of the assessment list that it will present in early 2020 to the Commission.

Recommendations for Trustworthy AI

The Recommendations urge policymakers at both the European and national levels to promote the development and use of “Trustworthy AI” in Europe through the adoption of 33 actions.  The Recommendations are divided into two Chapters, which are summarized below:

Chapter I sets out recommendations for policymakers to ensure AI has a positive impact in Europe.  Each recommendation seeks to promote a human-centric approach to AI, in line with the Ethics Guidelines.  These recommendations also seek to foster cooperation between stakeholders and cross-sector collaboration, and note the importance of stakeholder consultation, particularly in the context of harmonizing and standardizing regulations.

Some of the key recommendations laid out in Chapter I are as follows:

  • Boost the uptake of AI technology and services across sectors in Europe. Policymakers should enable and foster the digitization of companies by earmarking investments in AI, fostering AI skills development through education, training and financial support, and providing technical know-how and support for SMEs.
  • Set up public-private partnerships to foster sectoral AI ecosystems. Policymakers should conduct an analysis of several selected AI ecosystems in the short term and, in the medium term, set up Sectoral Multi-Stakeholder Alliances (SMUHAs) for strategic sectors in Europe.
  • Approach government as a platform, catalyzing AI development in Europe. For contracts between a public sector organization and a company, consider introducing a requirement that data which is not proprietary to the company, and which is of general public interest, should be handed back to the public sector, allowing its reuse for beneficial innovation.
  • Increase and streamline funding for fundamental and purpose-driven research. Create incentives for interdisciplinary and multi-stakeholder research, including through the funding of AI business incubators, research labs and hackathons.
  • Promote an approach to AI centered on humans, society and protecting the environment. For instance, the Recommendations call on policymakers to refrain from using AI for disproportionate and mass surveillance of individuals (whether for commercial or government purposes), require AI systems to disclose that they are non-human when interacting with individuals, introduce a duty of care on suppliers of consumer-oriented AI systems to ensure the accessibility of services, encourage the development of tools that protect vulnerable demographics, and foster collaborative AI-human systems that promote safety and empower humans at work.

Chapter II puts forward recommendations to develop the skills, infrastructure, governance and investment necessary to deliver on the Trustworthy AI concept in the EU. Some of the key recommendations laid out in Chapter II are as follows:

  • Develop legally compliant and ethical data management and sharing initiatives in Europe. Policymakers should support research and development of industrial solutions for fast, secure and legally compliant data sharing (e.g., encryption) and common standards that promote the interoperability of datasets.  A data donor scheme, allowing individuals to donate data for specific purposes, should also be considered.
  • Develop and support AI-specific cyber-security infrastructures. The EU should build upon the Cybersecurity Act, adopted in spring 2019, to protect networks, data and users from risks.
  • Evaluate and potentially revise EU laws, starting with the most relevant legal domains. Policymakers should conduct a systematic mapping and evaluation of all existing laws that are particularly relevant to AI systems. In particular, the AI HLEG recommends considering whether data protection rules are, on the one hand, overly rigid with respect to access to public data for research purposes, and, on the other hand, under-protective by excluding non-personal data from transparency and explainability requirements.
  • Consider the need for new regulation to ensure adequate protection from adverse impacts. For AI systems with the potential to have a significant impact on human lives, policymakers should consider introducing a mandatory obligation to conduct a Trustworthy AI assessment.
  • Establish governance mechanisms for a Single Market for Trustworthy AI in Europe. Policymakers should harmonize regulation, and establish a comprehensive strategy for Member State cooperation.

The Recommendations call for significant new investment in, and resources dedicated to, transforming the regulatory and investment environment for Trustworthy AI in Europe. Both private sector and public sector organizations developing, implementing or managing AI technologies in Europe should review these Recommendations and plan for the potential opportunities and challenges on the horizon.

Lisa Peets

Lisa Peets leads the Technology Regulatory and Policy practice in the London office and is a member of the firm’s Management Committee. Lisa divides her time between London and Brussels, and her practice embraces regulatory counsel and legislative advocacy. In this context, she has worked closely with leading multinationals in a number of sectors, including many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU law issues, including data protection and related regimes, copyright, e-commerce and consumer protection, and the rapidly expanding universe of EU rules applicable to existing and emerging technologies. Lisa also routinely advises clients in and outside of the technology sector on trade related matters, including EU trade controls rules.

According to the latest edition of Chambers UK (2022), “Lisa is able to make an incredibly quick legal assessment whereby she perfectly distils the essential matters from the less relevant elements.” “Lisa has subject matter expertise but is also able to think like a generalist and prioritise. She brings a strategic lens to matters.”

Sam Jungyun Choi

Sam Jungyun Choi is an associate in the technology regulatory group in the London office. Her practice focuses on European data protection law and new policies and legislation relating to innovative technologies such as artificial intelligence, online platforms, digital health products and autonomous vehicles. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.

Sam advises leading technology, software and life sciences companies on a wide range of matters relating to data protection and cybersecurity issues. Her work in this area has involved advising global companies on compliance with European data protection legislation, such as the General Data Protection Regulation (GDPR), the UK Data Protection Act, the ePrivacy Directive, and related EU and global legislation. She also advises on a variety of policy developments in Europe, including providing strategic advice on EU and national initiatives relating to artificial intelligence, data sharing, digital health, and online platforms.