On June 3, 2019, the UK Information Commissioner’s Office (“ICO”) released an Interim Report on a collaboration project with The Alan Turing Institute (“Institute”) called “Project ExplAIn.” The purpose of this project, according to the ICO, is to develop “practical guidance” for organizations on complying with UK data protection law when using artificial intelligence (“AI”) decision-making systems; in particular, on explaining the impact AI decisions may have on individuals. The Interim Report may be of particular relevance to organizations considering how to meet transparency obligations when deploying AI systems that make automated decisions falling within the scope of Article 22 of the GDPR.

The Interim Report summarizes the results of recent engagements with public and industry stakeholders to obtain views on how best to explain AI decision-making, which in turn will inform the ICO’s development of guidance on this issue. The research was carried out using a “citizens’ jury” method to gauge public perceptions of the issues, and through roundtables with industry stakeholders, including data scientists, researchers, Chief Data Officers, C-suite executives, Data Protection Officers, lawyers and consultants.

Following the results of the research, the Interim Report provides three key findings:

  1. the importance of context in providing the right type of explanations for AI;
  2. the need for greater education and awareness of AI systems; and
  3. the challenges of providing explanations (such as cost, commercial sensitivities, and lack of internal accountability within organizations).

In relation to context, the Institute’s engagement with members of the public found that the type and usefulness of AI explanations were highly context-dependent. For instance, most jurors felt it was less important to receive an explanation of an AI decision in the healthcare sector, but that such explanations were more important when AI is used to make decisions about recruitment and criminal justice. Participants also felt that the importance of an explanation of an AI decision is likely to vary depending on the person it is given to: in a healthcare setting, for example, it may be more important for a healthcare professional to receive an explanation of a decision than for the patient. Some participants also expressed the view that, in some situations (such as the healthcare or criminal justice scenarios), explanations of AI decisions may be too complex, or delivered at a time when individuals would not be able to take in the rationale.

Industry stakeholders presented similar but nuanced views, highlighting that using explanations to identify and address underlying system bias was a key consideration. While some industry stakeholders agreed with the jurors that explanations of AI decisions should be context-specific and reflect the way in which human decision-makers provide explanations, others argued that AI decisions should be held to higher standards. Besides the risk that such explanations of AI may be too complex, industry stakeholders also identified several additional risks with AI explanations that are too detailed, such as the risks of potential disclosure of commercially sensitive material or allowing the system to be gamed. The Interim Report provides a list of contextual factors that the research found may be relevant when considering the importance, purpose and explanations of AI decision-making (see p.23).

In terms of next steps, the ICO plans to publish a first draft of its guidance over the summer, which will be subject to public consultation. Following the consultation, the ICO plans to publish the final guidance in the autumn. The Interim Report identified three possible implications for the development of the guidance:

  1. there is no one-size-fits-all approach for explaining AI decisions;
  2. the need for board-level buy-in on explaining AI decisions; and
  3. the value in a standardized approach to internal accountability to help assign responsibility for explainable AI decision-systems.

The Interim Report offers a taster of what’s to come by setting out the currently planned format and content of the guidance, which focuses on three key principles: (i) transparency; (ii) context; and (iii) accountability. The guidance will also cover organizational controls (such as roles, policies, procedures, and documentation), technical controls (such as data collection, model selection and explanation extraction), and the delivery of explanations. Separately, the ICO will finalize its AI Auditing Framework in 2020, which will also address the data protection risks arising from AI systems.

Mark Young

Mark Young advises clients on data protection, cybersecurity and other tech regulatory matters. He has particular expertise in product counselling, GDPR regulatory investigations, and legislative advocacy. Mr. Young leads on EU cybersecurity regulatory matters, and helps to oversee our internet enforcement team.

He has been recognized in Chambers UK as “a trusted adviser – practical, results-oriented and an expert in the field.” Recent editions note that he is “deeply knowledgeable in the area of privacy and data protection,” “fast, thorough and responsive,” and has “great insight into the regulators.”

Mr. Young has over 15 years of experience advising global companies, particularly in the technology, health and pharmaceutical sectors, on all aspects of data protection and security. This includes providing practical guidance on analyzing and using personal data, transferring personal data across borders, and potential liability exposure. He specializes in advising in relation to new products and services, and providing strategic advice and advocacy on a range of EU law reform issues and references to the EU Court of Justice.

For cybersecurity matters, he counsels clients on practices to protect business-critical information and comply with national and sector-specific regulation, and on preparing for and responding to cyber-based attacks and internal threats to their networks and information. He has helped a range of organizations respond to cyber and data security incidents – including external data breaches and insider theft of trade secrets – through the stages of initial detection, containment, notification, recovery and remediation.

In the IP enforcement space, Mr. Young represents right owners in the sport, media, publishing, fashion and luxury goods industries, and helps coordinate a team of internet investigators that has nearly two decades of experience conducting global notice and takedown programs to combat internet piracy.

Gemma Nash

Gemma Nash advises emerging and leading companies on data protection and intellectual property issues, including cybersecurity, copyright, trademarks, and e-commerce. She has experience advising companies in the technology, pharmaceutical, and media sectors. Her practice encompasses regulatory compliance and advisory work. Ms. Nash regularly provides strategic advice to global companies on complying with data protection laws in Europe and the UK.