On 29 March 2023, the UK Government published a White Paper entitled “A pro-innovation approach to AI regulation” (“White Paper”). The White Paper elaborates on the approach to AI set out by the Government in its 2022 AI Governance and Regulation Policy Statement (“Policy Statement” – covered in our blog post here). The announcement follows the Government’s commitments in the Spring Budget 2023 to establish an expert taskforce to develop the UK’s capabilities in AI foundation models and to produce guidance on the relationship between intellectual property law and generative AI (for more details of these initiatives, see here).

In its White Paper, the UK Government confirms that, unlike the EU, it does not plan to adopt new legislation to regulate AI, nor will it create a new regulator for AI (for further details on the EU’s proposed AI regulation see our blog posts here and here). Instead, the UK would require existing regulators, including the UK Information Commissioner’s Office (“ICO”), to take responsibility for the establishment, promotion, and oversight of responsible AI in their respective sectors. Regulators’ activities would be reinforced by the establishment of new support and oversight functions within central Government. This approach is already beginning to play out in certain regulated areas in the UK. For example, in October 2022, the Bank of England and Financial Conduct Authority (“FCA”) jointly released a Discussion Paper on Artificial Intelligence and Machine Learning considering how AI in financial services should be regulated and, in March 2023, the ICO updated its Guidance on AI and Data Protection.  

The following is a summary of the key elements of the White Paper and the ICO’s response to the Government’s proposals.

Scope – Defining AI

The White Paper confirms the Government’s decision (initially proposed in its Policy Statement) to define AI by reference to two functional characteristics that call for a specific regulatory response:

  • Adaptive systems that operate by inferring patterns in data which are often not easily discernible or envisioned by their human programmers. The ‘adaptivity’ of AI can make it difficult to explain the intent or logic of the system’s outcomes.
  • Autonomous systems that can automate complex tasks and make decisions without the express intent or ongoing control of a human. The ‘autonomy’ of AI can make it difficult to assign responsibility for outcomes.

The UK’s regulatory framework will be focused on addressing the challenges created by these unique characteristics of AI. This approach stands in contrast to the EU’s proposed AI Act which adopts a general definition of AI (for further details see our blog post here).

Cross-sectoral Principles

The White Paper outlines five core principles regulators will be expected to consider to guide the safe and innovative use of AI in their industries:

  • Safety, Security and Robustness – AI systems should function in a robust, secure and safe way;
  • Transparency and Explainability – organizations developing and deploying AI should be able to communicate the purpose of AI systems, how they work, when they will be used, and their decision-making processes;
  • Fairness – AI systems should not discriminate against individuals or undermine their rights, nor should they create unfair commercial outcomes;
  • Accountability and Governance – appropriate measures should be taken to ensure effective oversight of AI systems and clarity as to who is responsible for their output; and
  • Contestability and Redress – there must be clear routes to dispute harmful outcomes or decisions generated by AI.

These cross-sectoral principles are consistent with many of the EU AI Act’s objectives and align with international AI principles including the OECD AI Principles (2019), the Council of Europe’s 2021 paper on a legal framework for artificial intelligence (for more information, see here), and the Blueprint for an AI Bill of Rights proposed by the White House’s Office of Science and Technology Policy in 2022 (for further details, see here).

Over the next 12 months, regulators will be expected to issue guidance for businesses on how the principles apply in practice and cooperate with each other by issuing joint guidance in cases where their remits overlap. The Government may later impose a statutory duty on regulators to have regard to the cross-sectoral principles in the performance of their tasks.

Central Coordination and Oversight

The White Paper recognizes that there are risks with a decentralized regulatory framework, including inconsistent enforcement or guidance across regulators. To address this, the White Paper proposes to create new functions in central Government to encourage regulatory consistency and support regulators in implementing the cross-sectoral principles. The support functions include:

  • assessment of the effectiveness of the decentralized regulatory framework, including a commitment to remain responsive and adapt the framework if necessary;
  • central monitoring of AI risks arising in the UK;
  • public education and awareness-raising around AI; and
  • testbeds and sandbox initiatives for the development of new AI-based technologies.

The White Paper also recognizes the likely importance of technical standards as a way of providing consistent, cross-sectoral assurance that AI has been developed responsibly and safely. To this end, the Government will continue to invest in the AI Standards Hub, formed in 2022, whose role is to lead the UK’s contribution to the development of international technical standards for AI systems.

ICO’s Response to the White Paper

On 11 April 2023, the ICO issued its response to the Government’s consultation on the White Paper. The ICO recognizes that, because a substantial portion of AI systems are ‘powered’ by personal data, the development and use of AI that involves the processing of personal data will fall within the ICO’s remit.

The ICO’s response welcomes the opportunity to work with Government to ensure that the White Paper’s principles are interpreted in a way that is compatible with existing data protection principles. It also requests clarification of how the White Paper’s proposals interact with Article 22 of the UK GDPR. The White Paper provides that, where automated decisions have a legal or significant effect on a person, regulators must consider the suitability of requiring AI system operators to provide an appropriate justification for that decision to affected parties. The ICO points out that, where Article 22 UK GDPR applies, such a justification is already required, rather than merely a matter for regulators to consider. The ICO therefore asks the Government to clarify this point to avoid confusion and contradictory standards.

Next Steps

The Government is inviting AI sector participants to provide feedback on the White Paper until 21 June 2023 (for details of how to respond to the public consultation, see here). Following this, the Government will respond to the consultation and publish an AI Regulation Roadmap. Regulators are expected to issue guidance on how to apply the White Paper’s principles within the next year.

—-

Covington regularly advises the world’s top technology companies on their most challenging regulatory, compliance, and public policy issues in the UK, EU and other major markets. We are monitoring developments in AI policy and regulation very closely and will be updating this site regularly – please watch this space for further updates.

Marianna Drake

Marianna Drake counsels leading multinational companies on some of their most complex regulatory, policy and compliance-related issues, including data privacy and AI regulation. She focuses her practice on compliance with UK, EU and global privacy frameworks, and new policy proposals and regulations relating to AI and data. She also advises clients on matters relating to children’s privacy, online safety and consumer protection and product safety laws.

Her practice includes defending organizations in cross-border, contentious investigations and regulatory enforcement in the UK and EU Member States. Marianna also routinely partners with clients on the design of new products and services, drafting and negotiating privacy terms, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of AI technologies.

Marianna’s pro bono work includes providing data protection advice to UK-based human rights charities, and supporting a non-profit organization in conducting legal research for strategic litigation.

Jasmine Agyekum

Jasmine Agyekum advises clients on a broad range of technology, AI, data protection, privacy and cybersecurity issues. She focuses her practice on providing practical and strategic advice on compliance with the EU and UK General Data Protection Regulations (GDPR), EU e-Privacy laws and the UK Data Protection Act. Jasmine also advises on a variety of policy proposals and developments in Europe, including on the EU’s proposed Data Governance Act and AI Regulation.

Jasmine’s experience includes:

  • Advising a leading technology company on GDPR compliance in connection with the launch of an ad-supported video-on-demand and live streaming service.
  • Advising global technology companies on the territorial application of the GDPR and EU Member State data localization laws.
  • Representing clients in numerous industries, including life sciences, consumer products, digital health and technology, and gaming, in connection with privacy due diligence in cross-border corporate mergers & acquisitions.
  • Advising clients on responding to data breaches and security incidents, including rapid incident response planning and notifications to data protection authorities and data subjects.

Jasmine’s pro bono work includes providing data protection advice to a mental health charity in connection with its launch of a directory of mental health and wellbeing support for children, and working with a social mobility non-profit organization focused on widening access to opportunities in the law for individuals from various socio-economic backgrounds.

Marty Hansen

Martin Hansen has represented some of the world’s leading information technology, telecommunications, and pharmaceutical companies on a broad range of cutting edge international trade, intellectual property, and competition issues. Martin has extensive experience in advising clients on matters arising under the World Trade Organization agreements, treaties administered by the World Intellectual Property Organization, bilateral and regional free trade agreements, and other trade agreements.

Drawing on ten years of experience in Covington’s London and DC offices, his practice focuses on helping innovative companies solve challenges on intellectual property and trade matters before U.S. courts, the U.S. government, and foreign governments and tribunals. Martin also represents software companies and a leading IT trade association on electronic commerce, Internet security, and online liability issues.

Lisa Peets

Lisa Peets leads the Technology Regulatory and Policy practice in the London office and is a member of the firm’s Management Committee. Lisa divides her time between London and Brussels, and her practice embraces regulatory counsel and legislative advocacy. In this context, she has worked closely with leading multinationals in a number of sectors, including many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU law issues, including data protection and related regimes, copyright, e-commerce and consumer protection, and the rapidly expanding universe of EU rules applicable to existing and emerging technologies. Lisa also routinely advises clients in and outside of the technology sector on trade related matters, including EU trade controls rules.

According to the latest edition of Chambers UK (2022), “Lisa is able to make an incredibly quick legal assessment whereby she perfectly distils the essential matters from the less relevant elements.” “Lisa has subject matter expertise but is also able to think like a generalist and prioritise. She brings a strategic lens to matters.”

Mark Young

Mark Young, an experienced tech regulatory lawyer, advises major global companies on their most challenging data privacy compliance matters and investigations.

Mark also leads on EMEA cybersecurity matters at the firm. He advises on evolving cyber-related regulations, and helps clients respond to incidents, including personal data breaches, IP and trade secret theft, ransomware, insider threats, and state-sponsored attacks.

Mark has been recognized in Chambers UK for several years as “a trusted adviser – practical, results-oriented and an expert in the field;” “fast, thorough and responsive;” “extremely pragmatic in advice on risk;” and having “great insight into the regulators.”

Drawing on over 15 years of experience advising global companies on a variety of tech regulatory matters, Mark specializes in:

  • Advising on potential exposure under GDPR and international data privacy laws in relation to innovative products and services that involve cutting-edge technology (e.g., AI, biometric data, Internet-enabled devices, etc.).
  • Providing practical guidance on novel uses of personal data, responding to individuals exercising rights, and data transfers, including advising on Binding Corporate Rules (BCRs) and compliance challenges following Brexit and Schrems II.
  • Helping clients respond to investigations by data protection regulators in the UK, EU and globally, and advising on potential follow-on litigation risks.
  • GDPR and international data privacy compliance for life sciences companies in relation to:
    • clinical trials and pharmacovigilance;
    • digital health products and services; and
    • marketing programs.
  • International conflict of law issues relating to white collar investigations and data privacy compliance.
  • Cybersecurity issues, including:
    • best practices to protect business-critical information and comply with national and sector-specific regulation;
    • preparing for and responding to cyber-based attacks and internal threats to networks and information, including training for board members;
    • supervising technical investigations; advising on PR, engagement with law enforcement and government agencies, notification obligations and other legal risks; and representing clients before regulators around the world; and
    • advising on emerging regulations, including during the legislative process.
  • Advising clients on risks and potential liabilities in relation to corporate transactions, especially involving companies that process significant volumes of personal data (e.g., in the adtech, digital identity/anti-fraud, and social network sectors).
  • Providing strategic advice and advocacy on a range of EU technology law reform issues including data privacy, cybersecurity, ecommerce, eID and trust services, and software-related proposals.
  • Representing clients in connection with references to the Court of Justice of the EU.