On July 17, 2020, the High-Level Expert Group on Artificial Intelligence set up by the European Commission (“AI HLEG”) published The Assessment List for Trustworthy Artificial Intelligence (“Assessment List”). The purpose of the Assessment List is to help companies identify the risks of AI systems they develop, deploy or procure, and implement appropriate measures to mitigate those risks.

The Assessment List is not mandatory, and there isn’t yet a self-certification scheme or other formal framework built around it that would enable companies to signal their adherence to it. The AI HLEG notes that the Assessment List should be used flexibly; organizations can add or ignore elements as they see fit, taking into consideration the sector in which they operate. As we’ve discussed in our previous blog post here, the European Commission is currently developing policies and legislative proposals relating to trustworthy AI, and it is possible that the Assessment List may influence the Commission’s thinking on how organizations should operationalize requirements relating to this topic.

As a preliminary step, the AI HLEG recommends that organizations perform a fundamental rights impact assessment to establish whether the AI system respects the fundamental rights enshrined in the EU Charter of Fundamental Rights and the European Convention on Human Rights. That assessment could include the following questions:

  1. Does the AI system potentially negatively discriminate against people on any basis?
    1. Have you put in place processes to test, monitor, address, and rectify potential negative discrimination bias?
  2. Does the AI system respect children’s rights?
    1. Have you put in place processes to test, monitor, address, and rectify potential harm to children?
  3. Does the AI system protect personal data relating to individuals in line with the EU’s General Data Protection Regulation (“GDPR”) (for example, requirements relating to data protection impact assessments or measures to safeguard personal data)?
  4. Does the AI system respect the rights to freedom of expression and information and/or freedom of assembly and association?
    1. Have you put in place processes to test, monitor, address, and rectify potential infringement on freedom of expression and information, and/or freedom of assembly and association?

After performing the fundamental rights impact assessment, organizations can proceed to the self-assessment for trustworthy AI. The Assessment List proposes a set of questions for each of the seven requirements for trustworthy AI set out in the AI HLEG’s earlier Ethics Guidelines for Trustworthy Artificial Intelligence. A non-exhaustive list of the key questions relating to each of the seven requirements is as follows:

  1. Human Agency and Oversight
  • Is the AI system designed to interact with, guide, or take decisions on behalf of human end-users in ways that affect humans or society?
  • Could the AI system generate confusion for some or all end-users or subjects on whether they are interacting with a human or AI system?
  • Could the AI system affect human autonomy by interfering with the end-user’s decision-making process in any other unintended and undesirable way?
  • Is the AI system a self-learning or autonomous system, or is it overseen by a Human-in-the-Loop/Human-on-the-Loop/Human-in-Command?
  • Did you establish any detection and response mechanisms for undesirable adverse effects of the AI system for the end-user or subject?
  2. Technical Robustness and Safety
  • Did you define risks, risk metrics and risk levels of the AI system in each specific use case?
  • Did you develop a mechanism to evaluate when the AI system has been changed in such a way as to merit a new review of its technical robustness and safety?
  • Did you put in place a series of steps to monitor and document the AI system’s accuracy?
  • Did you put in place a proper procedure for handling the cases where the AI system yields results with a low confidence score?
  3. Privacy and Data Governance
  • Did you put in place measures to ensure compliance with the GDPR or a non-European equivalent (e.g., data protection impact assessment, appointment of a Data Protection Officer, data minimization, etc.)?
  • Did you implement the right to withdraw consent, the right to object, and the right to be forgotten into the development of the AI system?
  • Did you consider the privacy and data protection implications of data collected, generated, or processed over the course of the AI system’s life cycle?
  4. Transparency
  • Did you put in place measures that address the traceability of the AI system during its entire lifecycle?
  • Did you explain the decision(s) of the AI system to the users?
  • Did you establish mechanisms to inform users about the purpose, criteria, and limitations of the decision(s) generated by the AI system?
  5. Diversity, Non-discrimination, and Fairness
  • Did you establish a strategy or a set of procedures to avoid creating or reinforcing unfair bias in the AI system, both regarding the use of input data as well as for the algorithm design?
  • Did you ensure a mechanism that allows for the flagging of issues related to bias, discrimination or poor performance of the AI system?
  • Did you assess whether the AI system’s user interface is usable by those with special needs or disabilities or those at risk of exclusion?
  6. Societal and Environmental Well-being
  • Where possible, did you establish mechanisms to evaluate the environmental impact of the AI system’s development, deployment and/or use (for example, the amount of energy used and carbon emissions)?
  • Could the AI system create the risk of de-skilling of the workforce? Did you take measures to counteract de-skilling risks?
  • Does the system promote or require new (digital) skills? Did you provide training opportunities and materials for re- and up-skilling?
  • Did you assess the societal impact of the AI system’s use beyond the (end-)user and subject, such as potentially indirectly affected stakeholders or society at large?
  7. Accountability
  • Did you establish mechanisms that facilitate the AI system’s auditability (e.g., traceability of the development process, the sourcing of training data and the logging of the AI system’s processes, outcomes, positive and negative impact)?
  • Did you ensure that the AI system can be audited by independent third parties?
  • Did you establish a process to discuss and continuously monitor and assess the AI system’s adherence to the Assessment List?
  • For applications that can adversely affect individuals, have redress by design mechanisms been put in place?


The Assessment List is part of the EU’s strategy on artificial intelligence outlined in the communication released by the European Commission in April 2018. A previous version of the Assessment List was included in the April 2019 Ethics Guidelines for Trustworthy AI issued by the AI HLEG, which we discussed in our prior blog post here. The revised Assessment List reflects lessons learned during the piloting phase, which ran from 26 June to 1 December 2019 and in which over 350 stakeholders participated.

Sam Jungyun Choi

Recognized by Law.com International as a Rising Star (2023), Sam Jungyun Choi is an associate in the technology regulatory group in Brussels. She advises leading multinationals on European and UK data protection law and new regulations and policy relating to innovative technologies, such as AI, digital health, and autonomous vehicles.

Sam is an expert on the EU General Data Protection Regulation (GDPR) and the UK Data Protection Act, having advised on these laws since they started to apply. In recent years, her work has evolved to include advising companies on new data and digital laws in the EU, including the AI Act, Data Act and the Digital Services Act.

Sam’s practice includes advising on regulatory, compliance and policy issues that affect leading companies in the technology, life sciences and gaming sectors on laws relating to privacy and data protection, digital services and AI. She advises clients on the design of new products and services, preparing privacy documentation, and developing data and AI governance programs. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.

Marty Hansen

Martin Hansen has over two decades of experience representing some of the world’s leading innovative companies in the internet, IT, e-commerce, and life sciences sectors on a broad range of regulatory, intellectual property, and competition issues. Martin has extensive experience in advising clients on matters arising under EU and U.S. law, UK law, the World Trade Organization agreements, and other trade agreements.

Anna Oberschelp de Meneses

Anna Sophia Oberschelp de Meneses is an associate in the Data Privacy and Cybersecurity Practice Group.

Anna is a qualified Portuguese lawyer, but is both a native Portuguese and German speaker.

Anna advises companies on European data protection law and helps clients coordinate international data protection law projects.

She has obtained a certificate for “corporate data protection officer” by the German Association for Data Protection and Data Security (“Gesellschaft für Datenschutz und Datensicherheit e.V.”). She is also Certified Information Privacy Professional Europe (CIPPE/EU) by the International Association of Privacy Professionals (IAPP).

Anna also advises companies in the field of EU consumer law and has been closely tracking the developments in this area.

Her extensive language skills allow her to monitor developments and help clients tackle EU Data Privacy, Cybersecurity and Consumer Law issues in various EU and ROW jurisdictions.

Lisa Peets

Lisa Peets is co-chair of the firm’s Technology and Communications Regulation Practice Group and a member of the firm’s global Management Committee. Lisa divides her time between London and Brussels, and her practice embraces regulatory compliance and investigations alongside legislative advocacy. In this context, she has worked closely with many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU and UK legal frameworks affecting technology providers, including data protection, content moderation, platform regulation, copyright, e-commerce and consumer protection, and the rapidly expanding universe of additional rules applicable to technology, data and online services. Lisa also routinely advises clients in and outside of the technology sector on trade related matters, including EU trade controls rules.

According to Chambers UK (2024 edition), “Lisa provides an excellent service and familiarity with client needs.”

Lindsey Tonsager

Lindsey Tonsager co-chairs the firm’s global Data Privacy and Cybersecurity practice. She advises clients in their strategic and proactive engagement with the Federal Trade Commission, the U.S. Congress, the California Privacy Protection Agency, and state attorneys general on proposed changes to data protection laws, and regularly represents clients in responding to investigations and enforcement actions involving their privacy and information security practices.

Lindsey’s practice focuses on helping clients launch new products and services that implicate the laws governing the use of artificial intelligence, data processing for connected devices, biometrics, online advertising, endorsements and testimonials in advertising and social media, the collection of personal information from children and students online, e-mail marketing, disclosures of video viewing information, and new technologies.

Lindsey also assesses privacy and data security risks in complex corporate transactions where personal data is a critical asset or data processing risks are otherwise material. In light of a dynamic regulatory environment where new state, federal, and international data protection laws are always on the horizon and enforcement priorities are shifting, she focuses on designing risk-based, global privacy programs for clients that can keep pace with evolving legal requirements and efficiently leverage the clients’ existing privacy policies and practices. She conducts data protection assessments to benchmark against legal requirements and industry trends and proposes practical risk mitigation measures.

Kristof Van Quathem

Kristof Van Quathem advises clients on information technology matters and policy, with a focus on data protection, cybercrime and various EU data-related initiatives, such as the Data Act, the AI Act and EHDS.

Kristof has been specializing in this area for over twenty years and has developed particular experience in the life sciences and information technology sectors. He counsels clients on government affairs strategies concerning EU lawmaking and their compliance with applicable regulatory frameworks, and has represented clients in non-contentious and contentious matters before data protection authorities, national courts and the Court of Justice of the EU.

Kristof is admitted to practice in Belgium.