On June 10, 2019, the UK Government Digital Service and the Office for Artificial Intelligence released guidance on using artificial intelligence in the public sector (the “Guidance”).  The Guidance aims to give public sector organizations practical advice on implementing artificial intelligence (AI) solutions.

The Guidance will be of interest to companies that provide AI solutions to UK public sector organizations, as it will influence what kinds of AI projects public sector organizations will be interested in pursuing, and the processes that they will go through to implement AI systems.  Because the UK’s National Health Service (NHS) is a public sector organization, this Guidance is also likely to be relevant to digital health service providers that are seeking to provide AI technologies to NHS organizations.

The Guidance consists of three sections, summarized below: (1) understanding AI; (2) assessing, planning and managing AI; and (3) using AI ethically and safely. The Guidance also links to summaries of examples where AI systems have been used in the public sector and elsewhere.

Understanding AI

The introductory section of the Guidance on understanding AI defines AI as “the use of digital technology to create systems capable of performing tasks commonly thought to require intelligence.”  The Guidance provides that AI systems must comply with applicable laws, calling out in particular the GDPR, and specifically the obligations on automated decision-making. (As discussed in our earlier blog post, the ICO has previously highlighted the relevance of Article 22 of the GDPR on automated decision-making in their Interim Report on Project ExplAIn.)

The Guidance also explains that the UK Government has created three new bodies and two new funds to help integrate AI into the private and public sectors. The three new bodies are the AI Council, the Office for AI, and the Centre for Data Ethics and Innovation; the two funds are the Gov-Tech Catalyst and the Regulator’s Pioneer Fund.

Assessing, Planning and Managing AI

When assessing AI systems, and in particular whether to build or buy them, the Guidance recommends that public sector organizations:

  • Assess which AI technology is suitable for the situation. The Guidance describes, at a high level, several types of common machine learning techniques and applications of machine learning;
  • Obtain approval from the Government Digital Service by carrying out a discovery to demonstrate feasibility. Most AI solutions are categorized as ‘novel’ and therefore require further scrutiny;
  • Define their purchasing strategy, in the same way as they would for any other technology;
  • Address ethical concerns and comply with forthcoming guidance from the Office for AI and the World Economic Forum on AI procurement;
  • Allocate responsibility and governance for AI projects with partnering organizations and make sure that the team building and managing the AI project has appropriate skills and resources.

The Guidance also outlines a three-phase plan that organizations typically follow when planning and preparing to implement AI systems:

  1. Discovery. In this phase, organizations must assess whether AI is right for their needs. If it is, they will prepare their data and build an AI implementation team (typically comprising a data scientist, data engineer, data architect, and ethicist). Data should be secured in accordance with guidance from the National Cyber Security Centre (“NCSC”) and in compliance with applicable data protection law.
  2. Alpha Phase. Data is divided into a training set, a validation set and a test set. A base model is used as a benchmark, and more complex models are created to suit the organization’s problem. The best of these models is then tested and evaluated economically, ethically and socially.
  3. Beta Phase. The chosen model is integrated and performance-tested. The product is continually evaluated, and improved versions are created and deployed; a specialist team is maintained to carry out these improvements.
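The Alpha-phase workflow described above — splitting data into training, validation and test sets and scoring a simple base model as a benchmark — can be sketched as follows. This is a minimal illustration only; the function names, the 60/20/20 split proportions, and the majority-class baseline are our own assumptions, not taken from the Guidance.

```python
import random

def split_data(records, seed=42, train=0.6, validation=0.2):
    """Shuffle records and divide them into training/validation/test sets.

    Hypothetical helper: the Guidance does not prescribe split ratios.
    """
    records = list(records)
    random.Random(seed).shuffle(records)
    n = len(records)
    n_train = int(n * train)
    n_val = int(n * validation)
    return (records[:n_train],
            records[n_train:n_train + n_val],
            records[n_train + n_val:])

# Toy dataset: (features, label) pairs.
data = [((i,), i % 2) for i in range(100)]
train_set, val_set, test_set = split_data(data)

def baseline_accuracy(dataset):
    """Benchmark model: always predict the majority label seen in training.

    More complex models would then be compared against this score.
    """
    labels = [label for _, label in train_set]
    majority = max(set(labels), key=labels.count)
    return sum(1 for _, label in dataset if label == majority) / len(dataset)

print(len(train_set), len(val_set), len(test_set))  # → 60 20 20
print(baseline_accuracy(val_set))
```

Any candidate model that cannot beat the baseline score on the validation set would not justify the added complexity — which is the point of the benchmark step the Guidance describes.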

The Guidance stresses the importance of having appropriate governance in place in order to manage the risks that arise from the implementation of AI systems. The section on managing AI projects outlines a number of factors that organizations should consider when running AI projects, and provides a table of common risks that arise in AI projects along with recommended mitigation measures.

Using AI Ethically and Safely

The section of the Guidance on using AI ethically and safely is addressed to all parties involved in the design, production, and deployment of AI projects, including data scientists, data engineers, domain experts, delivery managers and departmental leads.  The Guidance summarizes the Alan Turing Institute’s detailed guidance, published as part of their public policy programme, and is designed to work within the UK Government’s August 2018 Data Ethics Framework.

The Guidance focuses heavily on the need for a human-centric approach to AI systems.  This aligns with positions of other forums (such as the European Commission’s High-Level Expert Group on AI’s Ethics Guidelines for Trustworthy AI – see our blog here). The Guidance stresses the importance of building a culture of responsible innovation, and recommends that the governance architecture of AI systems should consist of: (1) a framework of ethical values; (2) a set of actionable principles; and (3) a process-based governance framework.

The Guidance points to the Alan Turing Institute’s recommended ethical values:

  • Respect the dignity of individuals;
  • Connect with each other sincerely, openly, and inclusively;
  • Care for the wellbeing of all; and
  • Protect the priorities of social values, justice, and public interest.

Organizations should pursue these ethical values through four “FAST Track principles”, which are:

  • Fairness (being unbiased and using fair data);
  • Accountability (having a clear chain of accountability and system of review);
  • Sustainability (making sure the project is safe and has longevity); and
  • Transparency (decisions should be explained and justified).

Organizations should bring these values and principles together in an integrated process-based governance framework, which should encompass:

  • the relevant team members and roles involved in each governance action;
  • the relevant stages of the workflow in which intervention and targeted consideration are necessary to meet governance goals;
  • explicit timeframes for any evaluations, follow-up actions, re-assessments, and continuous monitoring; and
  • clear and well-defined protocols for logging activity and for implementing mechanisms to support end-to-end auditability.

Governance and ethics of AI systems is currently a hot topic, with a number of different guidelines and approaches emerging in the UK, the EU and other jurisdictions. Organizations developing AI technologies or adopting AI solutions should keep abreast of the evolving landscape in this field, and consider providing input to policymakers.

Lisa Peets

Lisa Peets is co-chair of the firm’s Technology and Communications Regulation Practice Group and a member of the firm’s global Management Committee. Lisa divides her time between London and Brussels, and her practice embraces regulatory compliance and investigations alongside legislative advocacy. In this context, she has worked closely with many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU and UK legal frameworks affecting technology providers, including data protection, content moderation, platform regulation, copyright, e-commerce and consumer protection, and the rapidly expanding universe of additional rules applicable to technology, data and online services. Lisa also routinely advises clients in and outside of the technology sector on trade related matters, including EU trade controls rules.

According to Chambers UK (2024 edition), “Lisa provides an excellent service and familiarity with client needs.”

Marty Hansen

Martin Hansen has over two decades of experience representing some of the world’s leading innovative companies in the internet, IT, e-commerce, and life sciences sectors on a broad range of regulatory, intellectual property, and competition issues. Martin has extensive experience in advising clients on matters arising under EU and U.S. law, UK law, the World Trade Organization agreements, and other trade agreements.

Sam Jungyun Choi

Recognized by Law.com International as a Rising Star (2023), Sam Jungyun Choi is an associate in the technology regulatory group in Brussels. She advises leading multinationals on European and UK data protection law and new regulations and policy relating to innovative technologies, such as AI, digital health, and autonomous vehicles.

Sam is an expert on the EU General Data Protection Regulation (GDPR) and the UK Data Protection Act, having advised on these laws since they started to apply. In recent years, her work has evolved to include advising companies on new data and digital laws in the EU, including the AI Act, Data Act and the Digital Services Act.

Sam’s practice includes advising on regulatory, compliance and policy issues that affect leading companies in the technology, life sciences and gaming sectors on laws relating to privacy and data protection, digital services and AI. She advises clients on the design of new products and services, preparing privacy documentation, and developing data and AI governance programs. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.