United Kingdom

On January 6, 2021, the UK’s AI Council (an independent government advisory body) published its AI Roadmap (“Roadmap”). In addition to calling for a Public Interest Data Bill to ‘protect against automation and collective harms’, the Roadmap acknowledges the need to counteract public suspicion of AI and makes 16 recommendations, based on three main pillars, to guide the UK Government’s AI strategy.
Continue Reading AI Update: The Future of AI Policy in the UK

In April 2019, the UK Government published its Online Harms White Paper and launched a Consultation. In February 2020, the Government published its initial response to that Consultation. In its 15 December 2020 full response to the Online Harms White Paper Consultation, the Government outlined its vision for tackling harmful content online through a new regulatory framework, to be set out in a new Online Safety Bill (“OSB”).

This development comes at a time of heightened scrutiny of, and regulatory changes to, digital services and markets. Earlier this month, the UK Competition and Markets Authority published recommendations to the UK Government on the design and implementation of a new regulatory regime for digital markets (see our update here).

The UK Government is keen to ensure that policy initiatives in this sector are coordinated with similar legislation, including those in the US and the EU. The European Commission also published its proposal for a Digital Services Act on 15 December, proposing a somewhat similar system for regulating illegal online content that puts greater responsibilities on technology companies.

Key points of the UK Government’s plans for the OSB are set out below.
Continue Reading UK Government Plans for an Online Safety Bill

Over the past nine months, the UK has been hammering out the shape of its future trading relationship with the EU, as well as with many other trading partners, and there are apparently signs of progress in the past few days as a result of intensified talks between the two sides. Some are
Continue Reading Inside Privacy Audiocast: Episode 7 – Brexit and the Future of UK Data Privacy Law

The English High Court has recently awarded damages in a data privacy case, with two features of particular interest.  First, the nature of the claim is more reminiscent of a claim in defamation than for data privacy breaches, which is a development in the use of data protection legislation.  Secondly, the damages awarded (perhaps influenced by the nature of the case) were unusually high for a data privacy case.

The decision highlights an unusual use of data protection in English law, as a freestanding form of quasi-defamation claim: the claimants sought damages for reputational harm (as well as distress) solely under the Data Protection Act 1998 (the “DPA”, since replaced by the Data Protection Act 2018, which supplemented the General Data Protection Regulation ((EU) 2016/679) (“GDPR”) in the UK) rather than in a libel or defamation claim, or in parallel with such a claim.  It also sets a potentially unhelpful precedent by awarding two of the claimants £18,000 each for inaccurate processing of their personal data, an amount that is significantly higher than has been awarded in other data protection cases brought under the DPA.  If such awards were to be made in the context of a class action, the potential liability for data controllers could be significant.
Continue Reading English High Court Awards Damages for Quasi-Defamation Data Claim

On July 30, 2020, the UK Information Commissioner’s Office (“ICO”) published its final guidance on Artificial Intelligence (the “Guidance”).  The Guidance sets out a framework for auditing AI systems for compliance with data protection obligations under the GDPR and the UK Data Protection Act 2018.  The Guidance builds on the ICO’s earlier commitment to enable good data protection practice in AI, and on previous guidance and blogs issued on specific issues relating to AI (for example, on explaining decisions on AI, trade-offs, and bias and discrimination, all covered in Covington blogs).
Continue Reading UK ICO publishes guidance on Artificial Intelligence

On April 17, 2020, the UK’s Information Commissioner’s Office (“ICO”) issued an opinion on the recently announced Apple-Google initiative to develop a Bluetooth-based Contact Tracing Framework (“CTF”) to help prevent the spread of COVID-19.  The ICO opinion is generally supportive of the Apple-Google proposal and perceives it to be, at this early phase, aligned with principles of data protection by design and by default.  The ICO also cautions that since apps developed under the CTF could also be used to collect additional data using other techniques beyond those currently planned, developers of such apps must ensure compliance with data protection laws.
Continue Reading UK ICO Issues Opinion on Apple-Google Initiative for a Contact Tracing Framework

On 1 April 2020, the UK Supreme Court handed down its ruling in WM Morrison Supermarkets plc v Various Claimants [2020] UKSC 12.  The Court ruled that Morrisons was not vicariously liable for a data breach deliberately perpetrated by an employee.  The judgment is significant in that it overturned the decisions of the two lower courts (the High Court and Court of Appeal) and provides guidance for employers on when they may be held vicariously liable for data breaches and other violations of the GDPR involving employees who act as independent controllers in their own right.
Continue Reading UK Supreme Court Rules That Supermarket Is Not Vicariously Liable For Data Breach Committed By Employee

On November 14, 2019, the UK Information Commissioner’s Office (“ICO”) published detailed guidance on the processing of special category data.  The guidance sets out (i) what the special categories of data are; (ii) the rules that apply to the processing of special category data under the General Data Protection Regulation (“GDPR”) and UK Data Protection Act 2018 (“DPA”); (iii) the conditions for processing special category data; and (iv) additional guidance on the substantial public interest condition, including what constitutes an “appropriate policy document”.

Under the GDPR, stricter rules apply to the processing of special category data, which includes genetic and biometric data as well as information about a person’s health, sex life, sexual orientation, racial or ethnic origin, political opinions, religious or philosophical beliefs, and trade union membership.  As noted in the guidance, there is a presumption that “this type of data needs to be treated with greater care” because the “use of this data could create significant risks to the individual’s fundamental rights and freedoms”.  This blog post provides a summary of the key takeaways from the ICO’s guidance.
Continue Reading UK ICO Publishes New Guidance on Special Category Data

On October 31, 2019, Elizabeth Denham, the UK’s Information Commissioner, issued an Opinion and an accompanying blog urging police forces to slow down their adoption of live facial recognition technology and take steps to justify its use.  The Commissioner calls on the UK government to introduce a statutory, binding code of practice on the use of biometric technology such as live facial recognition.  The Commissioner also announced that the ICO is separately investigating the use of facial recognition by private sector organizations, and will report on those findings in due course.

The Opinion follows the ICO’s investigation into the use of live facial recognition technology in trials conducted by the Metropolitan Police Service (“MPS”) and South Wales Police (“SWP”).  The ICO’s investigation was triggered by the recent UK High Court decision in R (Bridges) v The Chief Constable of South Wales (see our previous blog post here), where the court held that the use of facial recognition technology by SWP was lawful.

The ICO had intervened in the case.  In the Opinion, the Commissioner notes that, in some areas, the High Court did not agree with the Commissioner’s submissions.  The Opinion states that the Commissioner respects and acknowledges the decision of the High Court, but does not consider that the decision should be seen as a blanket authorization to use live facial recognition in all circumstances.
Continue Reading AI/IoT Update: UK’s Information Commissioner Issues Opinion on Use of Live Facial Recognition Technology by Police Forces

R (on the application of Edward Bridges) v The Chief Constable of South Wales [2019] EWHC 2341 (Admin)

Case Note

Introduction

In Bridges, an application for judicial review, the UK High Court (Lord Justice Haddon-Cave and Mr Justice Swift) considered the lawfulness of policing operations conducted by South Wales Police (“SWP”) which utilised Automated Facial Recognition (“AFR”) technology.  The Court rejected Mr Bridges’ allegations that SWP’s conduct was unlawful as contrary to Article 8 of the European Convention on Human Rights (“ECHR”), the Data Protection Acts 1998 and 2018 (“DPA 98 and 18”), and the Equality Act 2010.  In this blog post we consider several key aspects of the case.
Continue Reading UK Court upholds police use of automated facial recognition technology