The UK Government recently published its AI Governance and Regulation: Policy Statement (the “AI Statement”), setting out its proposed approach to regulating Artificial Intelligence (“AI”) in the UK. The AI Statement was published alongside the draft Data Protection and Digital Information Bill (see our blog post here for further details on the Bill) and is intended to sit alongside the Government’s post-Brexit data protection reforms. The AI Statement builds on the UK Government’s ten-year National AI Strategy (as detailed in our blog post here) and includes a call for views and evidence that closes on 26 September.

Unlike the EU’s cross-sector AI Act, which is currently making its way through the legislative process (see our post on the proposed Regulation here), the AI Statement does not propose a new standalone regulatory framework for AI. The UK Government does not consider it necessary at this stage to introduce legislation on AI; rather, it envisages adopting a set of high-level AI principles to be developed and implemented by sectoral regulators. Regulators will be asked to consider guidance or voluntary measures in the first instance, and the UK Government will keep this non-statutory approach under review.

Scope – defining AI

The AI Statement does not put forward a universally applicable definition of AI, and instead suggests two broad characteristics that would put an AI system within the scope of regulation:

  • Adaptive systems that operate on the basis of instructions that “have not been expressly programmed with human intent”, highlighting in particular the difficulty of explaining the logic or intent by which an output has been produced; and
  • Autonomous systems that can operate in dynamic environments by automating complex tasks and making decisions without the ongoing control of a human, highlighting the challenges of assigning responsibility for actions taken by AI systems.

The UK Government hopes that this approach will be as flexible as the technology requires, and that regulators will be able to develop a more granular and bespoke definition of AI within each sector.

Cross-sectoral AI principles

The AI Statement identifies several key challenges the Government will seek to address as part of its approach to regulating AI: a lack of clarity over how the UK’s existing laws apply to AI; inconsistency between the powers of regulators to address the use of AI within their remits; and current and future AI risks that are not adequately addressed by existing legislation. The AI Statement confirms that the UK is heading towards a sector-based approach to AI regulation, and asserts that regulators are best placed to shape the approach for their area of expertise. The intention is that AI regulation should be context-specific, and that interventions should be based on “real, identifiable, unacceptable levels of risk” so as not to stifle innovation.

Acknowledging that a sector-based approach offers less uniformity, the AI Statement proposes that all regulation of AI systems be subject to six overarching principles to ensure coherence and streamline the framework. The six principles are based on the OECD’s Principles on AI (as previously discussed on this blog) and reflect the UK’s commitment to them:

  1. Ensure that AI is used safely
  2. Ensure that AI is technically secure and functions as designed
  3. Make sure that AI is appropriately transparent and explainable
  4. Embed considerations of fairness into AI
  5. Define legal persons’ responsibility for AI governance
  6. Clarify routes to redress or contestability

These principles will be interpreted and implemented by existing regulators. The AI Statement notes that, in considering the roles, powers, remits and capabilities of regulators, the Government will work with a broad selection of regulators, such as the Information Commissioner’s Office (“ICO”), the Competition and Markets Authority (“CMA”), Ofcom, the Medicines and Healthcare products Regulatory Agency (“MHRA”) and the Equality and Human Rights Commission (“EHRC”). Importantly, the Government will consider whether the powers and remits of these regulators need to be updated to apply the new regime, while noting that regulators do not require equal powers or a uniform approach.

Next steps

In addition to the AI Statement, the Government published its AI Action Plan, rounding up actions taken and planned to deliver the UK’s National AI Strategy. The plan confirms that a white paper on AI governance will be published towards the end of this year, alongside a public consultation. The call for views that forms part of the AI Statement will be open until 26 September.

* * * *

Covington regularly advises the world’s top technology companies on their most challenging regulatory, compliance, and public policy issues in the UK and other major markets. We are monitoring the UK’s developments very closely and will be updating this site regularly – please watch this space for further updates.

Marianna Drake

Marianna Drake counsels leading multinational companies on some of their most complex regulatory, policy and compliance-related issues, including data privacy and AI regulation. She focuses her practice on compliance with UK, EU and global privacy frameworks, and new policy proposals and regulations relating to AI and data. She also advises clients on matters relating to children’s privacy, online safety and consumer protection and product safety laws.

Her practice includes defending organizations in cross-border, contentious investigations and regulatory enforcement in the UK and EU Member States. Marianna also routinely partners with clients on the design of new products and services, drafting and negotiating privacy terms, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of AI technologies.

Marianna’s pro bono work includes providing data protection advice to UK-based human rights charities, and supporting a non-profit organization in conducting legal research for strategic litigation.

Mark Young

Mark Young, an experienced tech regulatory lawyer, advises major global companies on their most challenging data privacy compliance matters and investigations.

Mark also leads on EMEA cybersecurity matters at the firm. He advises on evolving cyber-related regulations, and helps clients respond to incidents, including personal data breaches, IP and trade secret theft, ransomware, insider threats, and state-sponsored attacks.

Mark has been recognized in Chambers UK for several years as “a trusted adviser – practical, results-oriented and an expert in the field;” “fast, thorough and responsive;” “extremely pragmatic in advice on risk;” and having “great insight into the regulators.”

Drawing on over 15 years of experience advising global companies on a variety of tech regulatory matters, Mark specializes in:

  • Advising on potential exposure under GDPR and international data privacy laws in relation to innovative products and services that involve cutting-edge technology (e.g., AI, biometric data, Internet-enabled devices, etc.).
  • Providing practical guidance on novel uses of personal data, responding to individuals exercising rights, and data transfers, including advising on Binding Corporate Rules (BCRs) and compliance challenges following Brexit and Schrems II.
  • Helping clients respond to investigations by data protection regulators in the UK, EU and globally, and advising on potential follow-on litigation risks.
  • GDPR and international data privacy compliance for life sciences companies in relation to:
    • clinical trials and pharmacovigilance;
    • digital health products and services; and
    • marketing programs.
  • International conflict of law issues relating to white collar investigations and data privacy compliance.
  • Cybersecurity issues, including:
    • best practices to protect business-critical information and comply with national and sector-specific regulation;
    • preparing for and responding to cyber-based attacks and internal threats to networks and information, including training for board members;
    • supervising technical investigations; advising on PR, engagement with law enforcement and government agencies, notification obligations and other legal risks; and representing clients before regulators around the world; and
    • advising on emerging regulations, including during the legislative process.
  • Advising clients on risks and potential liabilities in relation to corporate transactions, especially involving companies that process significant volumes of personal data (e.g., in the adtech, digital identity/anti-fraud, and social network sectors).
  • Providing strategic advice and advocacy on a range of EU technology law reform issues, including data privacy, cybersecurity, ecommerce, eID and trust services, and software-related proposals.
  • Representing clients in connection with references to the Court of Justice of the EU.
Lisa Peets

Lisa Peets leads the Technology Regulatory and Policy practice in the London office and is a member of the firm’s Management Committee. Lisa divides her time between London and Brussels, and her practice embraces regulatory counsel and legislative advocacy. In this context, she has worked closely with leading multinationals in a number of sectors, including many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU law issues, including data protection and related regimes, copyright, e-commerce and consumer protection, and the rapidly expanding universe of EU rules applicable to existing and emerging technologies. Lisa also routinely advises clients in and outside of the technology sector on trade-related matters, including EU trade controls rules.

According to the latest edition of Chambers UK (2022), “Lisa is able to make an incredibly quick legal assessment whereby she perfectly distils the essential matters from the less relevant elements.” “Lisa has subject matter expertise but is also able to think like a generalist and prioritise. She brings a strategic lens to matters.”

Tomos Griffiths

Tomos Griffiths is a Trainee. He attended Durham University.