On July 25, 2019, the UK’s Information Commissioner’s Office (“ICO”) published a blog post on the trade-offs between different data protection principles when using Artificial Intelligence (“AI”).  The ICO recognizes that AI systems must comply with several data protection principles and requirements, which at times may pull organizations in different directions.  The blog identifies notable trade-offs that may arise, offers practical tips for resolving them, and provides worked examples of how to visualize and mathematically minimize them.

The ICO invites organizations with experience of considering these complex issues to provide their views.  This recent blog post on trade-offs is part of the ICO’s ongoing Call for Input on developing a new framework for auditing AI.  See also our earlier blog post on the ICO’s call for input on bias and discrimination in AI systems here.

The ICO identifies the following trade-offs that may arise in AI projects:

  • Accuracy vs. privacy. Large amounts of data are needed to improve the accuracy of AI systems, but this may impact the privacy rights of the individuals involved.
  • Fairness vs. accuracy. Certain factors need to be removed from AI algorithms to ensure that AI systems are fair and do not discriminate against individuals on the basis of any protected characteristics (or known proxies for them, such as postcode as a proxy for race).  However, this may impact the accuracy of the AI system.
  • Fairness vs. privacy. In order to test whether an AI system is discriminatory, it needs to be tested using data labelled by protected characteristics, but this may be restricted under privacy law (i.e., under the rules on processing special category personal data).
  • Explainability vs. accuracy. For complex AI systems, it may be difficult to explain the logic of the system in an easy-to-understand way that is also accurate.  The ICO considers, however, that this trade-off between explainability and accuracy is often a false dichotomy.  See our previous blog post on the ICO’s separate report on explaining AI for more on the topic.
  • Explainability vs. security. Providing detailed explanations about the logic of an AI system may inadvertently disclose information that can be used to infer private information about the individuals whose personal data was used to build the AI system.  The ICO recognizes that this area is under active research, and the full extent of the risks is not yet known.

The ICO recommends that organizations take the following steps to manage trade-offs that arise:

  1. Identify and assess existing or potential trade-offs;
  2. Consider available technical means to minimize trade-offs;
  3. Have clear criteria and lines of accountability for making trade-off decisions, including a “robust, risk-based and independent approval process”;
  4. Explain trade-offs to data subjects or humans reviewing the AI outputs;
  5. Continue to regularly review trade-offs.

The ICO makes a number of additional recommendations.  For example:

  • Organizations should document decisions to an “auditable standard”, including, where required, by performing a Data Protection Impact Assessment.  Such documentation should: (i) consider the risks to individuals’ personal data; (ii) use a methodology to identify and assess trade-offs; (iii) provide a rationale for final decisions; and (iv) explain how the decision aligns with the organization’s risk appetite.
  • When outsourcing AI solutions, assessing trade-offs should form part of organizations’ due diligence of third parties. Organizations should ensure they can request that solutions be modified to strike the right balance between the trade-offs identified above.

In the final section of the blog, the ICO offers some worked examples demonstrating mathematical approaches that can help organizations visualize trade-offs and decide how to balance them.  Although elements of trade-offs can be precisely quantified in some cases, the ICO recognizes that not all aspects of privacy and fairness can be fully quantified.  The ICO therefore recommends that such methods should “always be supplemented with a more holistic approach”.
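By way of rough illustration only (this sketch is not taken from the ICO’s blog, and the model names and scores below are invented), a multi-objective comparison of this kind can be expressed in a few lines of Python: each candidate model is scored on accuracy and on a privacy metric, and only the Pareto-efficient options, where neither score can be improved without worsening the other, are kept for a human decision-maker to weigh.

```python
# Illustrative sketch only -- hypothetical models and scores, not from the ICO blog.
# Each candidate is scored on (accuracy, privacy), both on a 0-1 scale, higher is better.
candidates = {
    "model_a": (0.92, 0.40),
    "model_b": (0.88, 0.65),
    "model_c": (0.85, 0.80),
    "model_d": (0.80, 0.60),  # dominated: model_b beats it on both objectives
}

def dominated(name: str, scores: dict) -> bool:
    """True if another option is at least as good on both objectives and strictly better on one."""
    acc, priv = scores[name]
    return any(
        other != name
        and o_acc >= acc and o_priv >= priv
        and (o_acc, o_priv) != (acc, priv)
        for other, (o_acc, o_priv) in scores.items()
    )

# The Pareto front: the genuine trade-off options left for a holistic, human decision.
pareto_front = {n: s for n, s in candidates.items() if not dominated(n, candidates)}
print(pareto_front)  # {'model_a': (0.92, 0.4), 'model_b': (0.88, 0.65), 'model_c': (0.85, 0.8)}
```

Plotting the surviving options as a curve is the kind of visualization the ICO describes; choosing which point on the curve to adopt remains a governance decision, consistent with the ICO’s caution that quantitative methods be supplemented with a more holistic approach.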

The ICO has published a separate blog post on the use of fully automated decision-making AI systems and the right to human intervention under the GDPR.  The ICO provides practical advice for organizations on how to ensure compliance with the GDPR, such as: (i) considering the requirements necessary to support a meaningful human review; (ii) providing training for human reviewers; and (iii) supporting and incentivizing staff to escalate concerns raised by data subjects.  For more information, read the ICO’s blog here.

The ICO intends to publish a formal consultation paper on the framework for auditing AI in January 2020, followed by the final AI Auditing Framework in the spring.  In the meantime, the ICO welcomes feedback on its current thinking, and has provided a dedicated email address to obtain views (available at the bottom of the blog).  We will continue to monitor the ICO’s developments in this area and will keep you apprised on this blog.

Mark Young

Mark Young, an experienced tech regulatory lawyer, advises major global companies on their most challenging data privacy compliance matters and investigations.

Mark also leads on EMEA cybersecurity matters at the firm. He advises on evolving cyber-related regulations, and helps clients respond to incidents, including personal data breaches, IP and trade secret theft, ransomware, insider threats, and state-sponsored attacks.

Mark has been recognized in Chambers UK for several years as “a trusted adviser – practical, results-oriented and an expert in the field”; “fast, thorough and responsive”; “extremely pragmatic in advice on risk”; and having “great insight into the regulators”.

Drawing on over 15 years of experience advising global companies on a variety of tech regulatory matters, Mark specializes in:

  • Advising on potential exposure under GDPR and international data privacy laws in relation to innovative products and services that involve cutting-edge technology (e.g., AI, biometric data, Internet-enabled devices, etc.).
  • Providing practical guidance on novel uses of personal data, responding to individuals exercising rights, and data transfers, including advising on Binding Corporate Rules (BCRs) and compliance challenges following Brexit and Schrems II.
  • Helping clients respond to investigations by data protection regulators in the UK, EU and globally, and advising on potential follow-on litigation risks.
  • GDPR and international data privacy compliance for life sciences companies in relation to:
    • clinical trials and pharmacovigilance;
    • digital health products and services; and
    • marketing programs.
  • International conflict of law issues relating to white collar investigations and data privacy compliance.
  • Cybersecurity issues, including:
    • best practices to protect business-critical information and comply with national and sector-specific regulation;
    • preparing for and responding to cyber-based attacks and internal threats to networks and information, including training for board members;
    • supervising technical investigations; advising on PR, engagement with law enforcement and government agencies, notification obligations and other legal risks; and representing clients before regulators around the world; and
    • advising on emerging regulations, including during the legislative process.
  • Advising clients on risks and potential liabilities in relation to corporate transactions, especially involving companies that process significant volumes of personal data (e.g., in the adtech, digital identity/anti-fraud, and social network sectors).
  • Providing strategic advice and advocacy on a range of EU technology law reform issues including data privacy, cybersecurity, ecommerce, eID and trust services, and software-related proposals.
  • Representing clients in connection with references to the Court of Justice of the EU.
Sam Jungyun Choi

Sam Jungyun Choi is an associate in the technology regulatory group in the London office. Her practice focuses on European data protection law and new policies and legislation relating to innovative technologies such as artificial intelligence, online platforms, digital health products and autonomous vehicles. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.

Sam advises leading technology, software and life sciences companies on a wide range of matters relating to data protection and cybersecurity issues. Her work in this area has involved advising global companies on compliance with European data protection legislation, such as the General Data Protection Regulation (GDPR), the UK Data Protection Act, the ePrivacy Directive, and related EU and global legislation. She also advises on a variety of policy developments in Europe, including providing strategic advice on EU and national initiatives relating to artificial intelligence, data sharing, digital health, and online platforms.