On 22 September 2021, the UK Government published its 10-year strategy on artificial intelligence (“AI”; the “UK AI Strategy”).

The UK AI Strategy has three main pillars: (1) investing in and planning for the long-term requirements of the UK’s AI ecosystem; (2) supporting the transition to an AI-enabled economy across all sectors and regions of the UK; and (3) ensuring that the UK gets the national and international governance of AI technologies “right”.

The approach to AI regulation as set out in the UK AI Strategy is largely pro-innovation, in line with the UK Government’s Plan for Digital Regulation published in July 2021.

Each of the three pillars is discussed in turn below:

(1) Investment and planning

This pillar focuses on the need to invest in the skills and resources that lead to AI innovation, with the aim of increasing the type, frequency, and scale of AI discoveries in the UK. The pillar has twelve action points, with an emphasis on the importance of access to and availability of data. These action points include (among others):

  • Publishing a policy framework setting out the Government’s plans to enable better data availability in the wider economy. This framework will include supporting the activities of data intermediaries, including data trusts, that provide stewardship services between those sharing and accessing data.
  • Exploring how privacy-enhancing technologies can remove barriers to data sharing by more effectively managing the risks associated with sharing commercially sensitive and personal data.
  • Continuing to publish open and machine-readable public data on which AI models for both public and commercial benefit can depend.
  • Considering what datasets the Government should incentivize or curate to accelerate the development of valuable AI applications.
  • Consulting on the potential of expanding the UK’s capability in “cyber-physical infrastructure”: how common, interoperable digital tools and platforms, as well as physical testing and innovation spaces, can be brought together to form a digital and physical shared infrastructure for innovators (e.g., digital twins, test beds, and living labs).
  • Subject to the outcomes of the public consultation Data: A new direction, the Government could more explicitly permit the collection and processing of sensitive and protected characteristics data to monitor and mitigate bias in AI systems.

(2) Supporting the diffusion of AI across the whole economy

This pillar aims to ensure that the benefits of AI innovation are shared across all sectors and regions of the UK economy. An important element of this pillar is ensuring that businesses are able to commercialize their intellectual property (“IP”) rights in AI technologies. The UK Government has already launched a consultation on AI and IP (see here) and will seek to launch a further consultation on copyright and patents for AI through the Intellectual Property Office (“IPO”). This will enable businesses to understand and identify their AI-related intellectual assets, and to protect, exploit, and enforce their rights in AI technologies.

Other actions to be taken by the Government under the second pillar include:

  • The imminent publication of the Ministry of Defence’s AI strategy, which will explain how the UK can achieve technological advantage in defence, including details on establishing a new Defence AI Centre and on galvanizing a stronger relationship between industry and defence.
  • The publication of the National Strategy for AI in Health and Social Care. This will set the direction for AI in health and social care up to 2030, and is expected to be published in early 2022.
  • Considering how the UK can use climate technologies to support the delivery of the Government’s net zero targets. This will be complemented by the extension of UK aid to support local AI innovation ecosystems in developing nations.
  • Extending the UK’s science partnerships in international development and diplomacy to ensure that collaboration unlocks AI’s potential to accelerate progress on global challenges, from climate change to poverty reduction.

(3) Regulatory and governance framework

The AI Strategy recognizes that building a trusted and pro-innovation system necessitates addressing the potential risks and harms posed by AI. These include concerns around fairness, bias, accountability, safety, liability, and transparency of AI systems.

The AI Strategy notes that, while the UK currently regulates many aspects of the development and use of AI through cross-sectoral legislation (including competition, data protection, and financial services legislation), this sector-led approach can lead to overlaps or inconsistencies. To address these inconsistencies, the Strategy’s third pillar proposes a number of measures, including:

  • The Office for AI will publish a White Paper on regulating AI in early 2022, which will set out the risks and harms of AI and outline proposals to address them.
  • The Centre for Data Ethics and Innovation (“CDEI”) will publish a roadmap to ensure that AI systems are safe, fair and trustworthy. The roadmap will clarify the activities needed to build a mature assurance ecosystem and identify the roles and responsibilities of different stakeholders across these activities.
  • Working with The Alan Turing Institute to update existing guidance on AI ethics and safety in the public sector, to ensure that it keeps pace with continuing developments in responsible AI innovation.
  • Piloting an AI Standards Hub to coordinate UK engagement in AI standardization globally, and exploring the development of an AI standards engagement toolkit to help the AI ecosystem engage with the global AI standardization landscape.
  • Developing a cross-government standard for algorithmic transparency of AI systems used in the public sector.
  • Continuing the UK’s engagement on the international stage in helping to shape international frameworks, norms and standards for governing AI to reflect human rights, democratic principles, and the rule of law.
  • Considering reforms to the UK data protection framework, within the broader context of AI governance, through the Data: A new direction public consultation (see our previous blog post here).

Political analysis

The Parliamentary Under Secretary of State at the Department for Digital, Culture, Media and Sport stressed this pro-innovation vision when he highlighted the UK Government’s intent to keep regulation to a minimum, including by using existing frameworks and structures. The rationale for this approach is that less regulation will encourage innovation in the sector.

The UK’s de minimis position on regulation is at odds with the EU’s stance. In April 2021, the European Commission published its proposal for an AI Regulation (see our blog on this issue), making clear that it intends to strictly regulate “high risk” AI. The UK Government views AI as a sector of major competitive economic advantage. It was this vision that lay behind the UK-Japan Trade Agreement, which, among other things, was intended to facilitate Japanese access to UK AI and UK access to Japanese robotics. However, divergence from the EU on key points, such as those related to data, may ultimately make it harder for UK AI developers and companies using AI technologies to operate in the EU (and vice versa), and could have an impact on the EU’s willingness to maintain its data adequacy decision in respect of the UK (see our blog on this issue).

The timelines for AI regulation also play a key role. The European Commission’s AI proposal is now going through the legislative stages in the European Parliament and the Council. By the time the UK publishes its own regulatory proposals for AI, the EU is likely to have already adopted its legislation, giving the EU first-mover advantage and depriving the UK of the opportunity to act as a trendsetter for AI regulatory standards. The UK could instead find itself obliged to align with EU standards.

Next steps

The UK AI Strategy sets out the short, medium, and long-term plans for achieving each of the three pillars. The Office for AI, which sits under the Department for Digital, Culture, Media & Sport and the Department for Business, Energy & Industrial Strategy, will be responsible for overall delivery of the strategy, monitoring progress and enabling its implementation across Government, industry, academia, and civil society. Additionally, the AI Strategy outlines how the UK Government envisages the various consultations and policies relating to data and innovation will work together to create a pro-innovation environment for AI.

*  *  *  *  *

Covington regularly advises the world’s top technology companies on their most challenging regulatory, compliance, and public policy issues in the UK and other major markets. We are monitoring the UK’s developments very closely and will be updating this site regularly – please watch this space for further updates.

If you have questions about the UK’s AI Strategy, or other tech regulatory or public policy matters, please feel free to reach out to any of the following:

Dan Cooper

Marty Hansen

Lisa Peets

Mark Young

Thomas Reilly

Dan Cooper

Daniel Cooper is co-chair of Covington’s Data Privacy and Cyber Security Practice, and advises clients on information technology regulatory and policy issues, particularly data protection, consumer protection, AI, and data security matters. He has over 20 years of experience in the field, representing clients in regulatory proceedings before privacy authorities in Europe and counseling them on their global compliance and government affairs strategies. Dan regularly lectures on the topic, and was instrumental in drafting the privacy standards applied in professional sport.

According to Chambers UK, his “level of expertise is second to none, but it’s also equally paired with a keen understanding of our business and direction.” It was noted that “he is very good at calibrating and helping to gauge risk.”

Dan is qualified to practice law in the United States, the United Kingdom, Ireland and Belgium. He has also been appointed to the advisory and expert boards of privacy NGOs and agencies, such as Privacy International and the European security agency, ENISA.

Marty Hansen

Martin Hansen has represented some of the world’s leading information technology, telecommunications, and pharmaceutical companies on a broad range of cutting edge international trade, intellectual property, and competition issues. Martin has extensive experience in advising clients on matters arising under the World Trade Organization agreements, treaties administered by the World Intellectual Property Organization, bilateral and regional free trade agreements, and other trade agreements.

Drawing on ten years of experience in Covington’s London and DC offices, his practice focuses on helping innovative companies solve challenges on intellectual property and trade matters before U.S. courts, the U.S. government, and foreign governments and tribunals. Martin also represents software companies and a leading IT trade association on electronic commerce, Internet security, and online liability issues.

Lisa Peets

Lisa Peets leads the Technology Regulatory and Policy practice in the London office and is a member of the firm’s Management Committee. Lisa divides her time between London and Brussels, and her practice embraces regulatory counsel and legislative advocacy. In this context, she has worked closely with leading multinationals in a number of sectors, including many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU law issues, including data protection and related regimes, copyright, e-commerce and consumer protection, and the rapidly expanding universe of EU rules applicable to existing and emerging technologies. Lisa also routinely advises clients in and outside of the technology sector on trade related matters, including EU trade controls rules.

According to the latest edition of Chambers UK (2022), “Lisa is able to make an incredibly quick legal assessment whereby she perfectly distils the essential matters from the less relevant elements.” “Lisa has subject matter expertise but is also able to think like a generalist and prioritise. She brings a strategic lens to matters.”

Mark Young

Mark Young, an experienced tech regulatory lawyer, advises major global companies on their most challenging data privacy compliance matters and investigations.

Mark also leads on EMEA cybersecurity matters at the firm. He advises on evolving cyber-related regulations, and helps clients respond to incidents, including personal data breaches, IP and trade secret theft, ransomware, insider threats, and state-sponsored attacks.

Mark has been recognized in Chambers UK for several years as “a trusted adviser – practical, results-oriented and an expert in the field;” “fast, thorough and responsive;” “extremely pragmatic in advice on risk;” and having “great insight into the regulators.”

Drawing on over 15 years of experience advising global companies on a variety of tech regulatory matters, Mark specializes in:

  • Advising on potential exposure under GDPR and international data privacy laws in relation to innovative products and services that involve cutting-edge technology (e.g., AI, biometric data, Internet-enabled devices, etc.).
  • Providing practical guidance on novel uses of personal data, responding to individuals exercising rights, and data transfers, including advising on Binding Corporate Rules (BCRs) and compliance challenges following Brexit and Schrems II.
  • Helping clients respond to investigations by data protection regulators in the UK, EU and globally, and advising on potential follow-on litigation risks.
  • GDPR and international data privacy compliance for life sciences companies in relation to:
    • clinical trials and pharmacovigilance;
    • digital health products and services; and
    • marketing programs.
  • International conflict of law issues relating to white collar investigations and data privacy compliance.
  • Cybersecurity issues, including:
    • best practices to protect business-critical information and comply with national and sector-specific regulation;
    • preparing for and responding to cyber-based attacks and internal threats to networks and information, including training for board members;
    • supervising technical investigations; advising on PR, engagement with law enforcement and government agencies, notification obligations and other legal risks; and representing clients before regulators around the world; and
    • advising on emerging regulations, including during the legislative process.
  • Advising clients on risks and potential liabilities in relation to corporate transactions, especially involving companies that process significant volumes of personal data (e.g., in the adtech, digital identity/anti-fraud, and social network sectors).
  • Providing strategic advice and advocacy on a range of EU technology law reform issues including data privacy, cybersecurity, ecommerce, eID and trust services, and software-related proposals.
  • Representing clients in connection with references to the Court of Justice of the EU.

Thomas Reilly

Ambassador Thomas Reilly, Covington’s Head of UK Public Policy and a key member of the firm’s Global Problem Solving Group and Brexit Task Force, draws on over 20 years of diplomatic and commercial roles to advise clients on their strategic business objectives.

Ambassador Reilly was most recently British Ambassador to Morocco between 2017 and 2020, and prior to this, the Senior Advisor on International Government Relations & Regulatory Affairs and Head of Government Relations at Royal Dutch Shell between 2012 and 2017. His former roles with the Foreign and Commonwealth Office included British Ambassador to Morocco & Mauritania (2017-2018), Deputy Head of Mission at the British Embassy in Egypt (2010-2012), Deputy Head of the Climate Change & Energy Department (2007-2009), and Deputy Head of the Counter Terrorism Department (2005-2007). He has lived or worked in a number of countries including Jordan, Kuwait, Yemen, Libya, Iraq, Saudi Arabia, Bahrain, and Argentina.

At Covington, Ambassador Reilly works closely with our global team of lawyers and investigators as well as over 100 former diplomats and senior government officials, with significant depth of experience in dealing with the types of complex problems that involve both legal and governmental institutions.

Ambassador Reilly started his career as a solicitor specializing in EU and commercial law but no longer practices as a solicitor.

Sam Jungyun Choi

Sam Jungyun Choi is an associate in the technology regulatory group in the London office. Her practice focuses on European data protection law and new policies and legislation relating to innovative technologies such as artificial intelligence, online platforms, digital health products and autonomous vehicles. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.

Sam advises leading technology, software and life sciences companies on a wide range of matters relating to data protection and cybersecurity issues. Her work in this area has involved advising global companies on compliance with European data protection legislation, such as the General Data Protection Regulation (GDPR), the UK Data Protection Act, the ePrivacy Directive, and related EU and global legislation. She also advises on a variety of policy developments in Europe, including providing strategic advice on EU and national initiatives relating to artificial intelligence, data sharing, digital health, and online platforms.

Marianna Drake

Marianna Drake counsels leading multinational companies on some of their most complex regulatory, policy and compliance-related issues, including data privacy and AI regulation. She focuses her practice on compliance with UK, EU and global privacy frameworks, and new policy proposals and regulations relating to AI and data. She also advises clients on matters relating to children’s privacy, online safety and consumer protection and product safety laws.

Her practice includes defending organizations in cross-border, contentious investigations and regulatory enforcement in the UK and EU Member States. Marianna also routinely partners with clients on the design of new products and services, drafting and negotiating privacy terms, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of AI technologies.

Marianna’s pro bono work includes providing data protection advice to UK-based human rights charities, and supporting a non-profit organization in conducting legal research for strategic litigation.