On June 10, 2019, the UK’s Government Digital Service and the Office for Artificial Intelligence released guidance on using artificial intelligence in the public sector (the “Guidance”). The Guidance aims to give public sector organizations practical advice on implementing artificial intelligence (AI) solutions.
The Guidance will be of interest to companies that provide AI solutions to UK public sector organizations, as it will influence what kinds of AI projects public sector organizations will be interested in pursuing, and the processes that they will go through to implement AI systems. Because the UK’s National Health Service (NHS) is a public sector organization, this Guidance is also likely to be relevant to digital health service providers that are seeking to provide AI technologies to NHS organizations.
The Guidance consists of three sections, each summarized below: (1) understanding AI; (2) assessing, planning and managing AI; and (3) using AI ethically and safely. The Guidance also links to summaries of examples of AI systems used in the public sector and elsewhere.
Understanding AI
The introductory section of the Guidance on understanding AI defines AI as “the use of digital technology to create systems capable of performing tasks commonly thought to require intelligence.” The Guidance provides that AI systems must comply with applicable laws, calling out in particular the GDPR, and specifically the obligations on automated decision-making. (As discussed in our earlier blog post, the ICO has previously highlighted the relevance of Article 22 of the GDPR on automated decision-making in its Interim Report on Project ExplAIn.)
The Guidance also explains that the UK Government has created three new bodies and two new funds to help integrate AI into the private and public sectors. The three new bodies are the AI Council, the Office for AI, and the Centre for Data Ethics and Innovation; the two funds are the GovTech Catalyst and the Regulators’ Pioneer Fund.
Assessing, Planning and Managing AI
When assessing AI systems, and in particular how to build or buy them, the Guidance recommends that public sector organizations should:
- Assess which AI technology is suitable for the situation. The Guidance describes, at a high level, several types of common machine learning techniques and applications of machine learning;
- Obtain approval from the Government Digital Service by carrying out a discovery exercise to show feasibility. Most AI solutions are categorized as ‘novel’ and therefore require further scrutiny;
- Define their purchasing strategy, in the same way as they would for any other technology;
- Address ethical concerns and comply with forthcoming guidance from the Office for AI and the World Economic Forum on AI procurement; and
- Allocate responsibility and governance for AI projects shared with partner organizations, and make sure that the team building and managing the AI project has appropriate skills and resources.
The Guidance also outlines a three-phase plan that organizations typically follow when planning and preparing to implement AI systems:
- Discovery. In this phase, organizations must assess whether AI is right for their needs. If it is, they will prepare their data and build an AI implementation team (normally comprising a data scientist, a data engineer, a data architect, and an ethicist). Data should be secured in accordance with guidance from the National Cyber Security Centre (“NCSC”) and in compliance with applicable data protection law.
- Alpha Phase. Data is divided into a training set, a validation set and a test set. A baseline model is used as a benchmark, and more complex models are built to address the problem at hand. The best of these models is then tested and evaluated economically, ethically and socially.
- Beta Phase. The chosen model is integrated and performance-tested. The product is continually evaluated, and improved versions are created and deployed; a specialist team is maintained to carry out these improvements.
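For readers less familiar with the data-splitting step mentioned in the alpha phase, the sketch below shows one common way to divide a dataset into training, validation and test sets. This is an illustrative, hypothetical example only; the function name and split proportions are our own assumptions and are not taken from the Guidance.

```python
import random

def train_validation_test_split(records, train=0.7, validation=0.15, seed=42):
    """Randomly partition `records` into training, validation and test sets.

    The training set is used to fit models, the validation set to compare
    candidate models, and the test set for a final, unbiased evaluation.
    """
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = records[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train)
    n_val = int(len(shuffled) * validation)
    return (
        shuffled[:n_train],                    # training set
        shuffled[n_train:n_train + n_val],     # validation set
        shuffled[n_train + n_val:],            # test set (the remainder)
    )

# Example: 100 records split 70 / 15 / 15
train_set, val_set, test_set = train_validation_test_split(list(range(100)))
```

In practice, public sector teams would likely use an established library rather than hand-rolled code, but the principle — three disjoint sets with distinct roles — is the same.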
The Guidance stresses the importance of having appropriate governance in place in order to manage the risks that arise from the implementation of AI systems. The section on managing AI projects outlines a number of factors that organizations should consider when running AI projects, and provides a table of common risks that arise in AI projects along with recommended mitigation measures.
Using AI Ethically and Safely
The section of the Guidance on using AI ethically and safely is addressed to all parties involved in the design, production, and deployment of AI projects, including data scientists, data engineers, domain experts, delivery managers and departmental leads. The Guidance summarizes the Alan Turing Institute’s detailed guidance, published as part of their public policy programme, and is designed to work within the UK Government’s August 2018 Data Ethics Framework.
The Guidance focuses heavily on the need for a human-centric approach to AI systems. This aligns with the positions of other bodies (such as the European Commission’s High-Level Expert Group’s Ethics Guidelines for Trustworthy AI – see our blog here). The Guidance stresses the importance of building a culture of responsible innovation, and recommends that the governance architecture of AI systems should consist of: (1) a framework of ethical values; (2) a set of actionable principles; and (3) a process-based governance framework.
The Guidance points to the Alan Turing Institute’s recommended ethical values:
- Respect the dignity of individuals;
- Connect with each other sincerely, openly, and inclusively;
- Care for the wellbeing of all; and
- Protect the priorities of social values, justice, and the public interest.
Organizations should pursue these ethical values through four “FAST Track principles”, which are:
- Fairness (being unbiased and using fair data);
- Accountability (having a clear chain of accountability and system of review);
- Sustainability (making sure the project is safe and has longevity); and
- Transparency (decisions should be explained and justified).
Organizations should bring these values and principles together in an integrated process-based governance framework, which should encompass:
- the relevant team members and roles involved in each governance action;
- the relevant stages of the workflow in which intervention and targeted consideration are necessary to meet governance goals;
- explicit timeframes for any evaluations, follow-up actions, re-assessments, and continuous monitoring; and
- clear and well-defined protocols for logging activity and for implementing mechanisms to support end-to-end auditability.
The governance and ethics of AI systems are currently hot topics, with a number of different guidelines and approaches emerging in the UK, the EU and other jurisdictions. Organizations developing AI technologies or adopting AI solutions should keep abreast of this evolving landscape, and consider providing input to policymakers.