Facial Recognition Technology

On October 9, 2020, the French Supervisory Authority (“CNIL”) issued guidance on the use of facial recognition technology for identity checks at airports (available here, in French).  The CNIL indicates that it has issued this guidance in response to a request from several operators and service providers of airports in France who are planning to deploy this technology on an experimental basis.  In this blog post, we summarize the main principles that the CNIL says airports should observe when deploying biometric technology.
Continue Reading French Supervisory Authority Releases Strict Guidance on the Use of Facial Recognition Technology at Airports

On October 31, 2019, Elizabeth Denham, the UK’s Information Commissioner, issued an Opinion and an accompanying blog urging police forces to slow down adoption of live facial recognition technology and take steps to justify its use.  The Commissioner calls on the UK government to introduce a statutory, binding code of practice on the use of biometric technology such as live facial recognition.  The Commissioner also announced that the ICO is separately investigating the use of facial recognition by private sector organizations, and will report on those findings in due course.

The Opinion follows the ICO’s investigation into the use of live facial recognition technology in trials conducted by the Metropolitan Police Service (“MPS”) and South Wales Police (“SWP”).  The ICO’s investigation was triggered by the recent UK High Court decision in R (Bridges) v The Chief Constable of South Wales (see our previous blog post here), in which the court held that the SWP’s use of facial recognition technology was lawful.

The ICO had intervened in the case.  In the Opinion, the Commissioner notes that, in some areas, the High Court did not agree with the Commissioner’s submissions.  The Opinion states that the Commissioner respects and acknowledges the decision of the High Court, but does not consider that the decision should be seen as a blanket authorization to use live facial recognition in all circumstances.
Continue Reading AI/IoT Update: UK’s Information Commissioner Issues Opinion on Use of Live Facial Recognition Technology by Police Forces

R (on the application of Edward Bridges) v The Chief Constable of South Wales [2019] EWHC 2341 (Admin)

Case Note

Introduction

In Bridges, an application for judicial review, the UK High Court (Lord Justice Haddon-Cave and Mr. Justice Swift) considered the lawfulness of policing operations conducted by the South Wales Police force (“SWP”) which utilised Automated Facial Recognition (“AFR”) technology.  The Court rejected Mr Bridges’ allegations that the SWP’s conduct was unlawful as contrary to the European Convention on Human Rights (“ECHR”), Article 8, the Data Protection Acts 1998 and 2018 (“DPA 98 and 18”), and the Equality Act 2010.  In this blog post we consider several key aspects of the case.
Continue Reading UK Court upholds police use of automated facial recognition technology

On March 29, 2019, the ICO opened the beta phase of its “regulatory sandbox” scheme (the “Sandbox”), a new service designed to support organizations that are developing innovative and beneficial projects that use personal data.  The application process for participating in the Sandbox is now open, and applications must be submitted to the ICO by noon on Friday, May 24, 2019.  The ICO has published on its website a Guide to the Sandbox, which explains the scheme in detail.

The purpose of the Sandbox is to support organizations that are developing innovative products and services using personal data, and to develop a shared understanding of what compliance looks like in particular innovative areas.  Organizations participating in the Sandbox will benefit from the opportunity to liaise directly with the regulator on innovative projects that raise complex data protection issues.  The Sandbox will also be an opportunity for market leaders in innovative technologies to influence the ICO’s approach to use cases that present challenging data protection compliance questions, or where there is uncertainty about what compliance requires.

The beta phase of the Sandbox is planned to run from July 2019 to September 2020.  Around 10 organizations from private, public and third sectors will be selected to participate.  In the beta phase, the ICO is focusing on data processing that falls within the remit of UK data protection law.  
Continue Reading ICO opens beta phase of privacy “regulatory sandbox”

Last week, the multistakeholder group convened by the National Telecommunications and Information Administration (“NTIA”) to create a set of voluntary best practices for the commercial use of facial recognition technology finalized its guidelines.  While the three-page code of conduct was praised by industry groups, including the Software & Information Industry Association and the Consumer Technology Association, many consumer groups, which had withdrawn from the process before the guidelines were finalized, criticized the final product as weak and flawed.

The guidelines are the result of a more than two-year process, first announced by the NTIA in December 2013.  They recommend that commercial entities do the following:

  • Disclose their practices regarding collection, storage, and use of facial template data to consumers, including any sharing, retention, and de-identification policies;
  • Provide notice to consumers where facial recognition is used on a physical premises;
  • Consider privacy concerns when developing data management programs;
  • Protect facial recognition data by implementing a program that contains administrative, technical, and physical safeguards appropriate to the entity’s size, complexity, the nature of its activities, and the sensitivity of the data;
  • Take reasonable steps to maintain the integrity of the data collected; and
  • Provide a means for consumers to contact the entity regarding its use of the data.

Continue Reading NTIA Multistakeholder Group Reaches Consensus on Best Practices for Commercial Use of Facial Recognition Technology

The National Telecommunications & Information Administration (“NTIA”) announced today that it will convene a series of meetings about the commercial uses of facial recognition technology.  The goal of the meetings will be to develop a voluntary, enforceable code of conduct specifying how the Obama Administration’s “Consumer Privacy Bill of Rights” applies to facial recognition technology.

This week, the FTC released a staff report urging companies to adopt best practices for commercial uses of facial recognition technology.  The report, entitled Facing Facts: Best Practices for Common Uses of Facial Recognition Technologies, follows a workshop held last December and more than 80 public comments addressing issues raised at the workshop.  Facing Facts largely discusses how the core privacy principles from the Commission’s March 2012 privacy report — privacy by design, simplified choice, and transparency — should inform the use of facial recognition technologies, such as digital signs that can assess the age and gender of consumers standing before them as well as online photo tagging tools. 

In this post, we provide an overview of the staff report’s guidance on how each of the principles should be applied by companies that employ facial recognition in their products and services.
Continue Reading FTC Issues Guidance on Best Practices for Facial Recognition Technology

The Electronic Frontier Foundation and the Immigration Policy Center last week released an interesting report on law enforcement’s increasing efforts to gather biometric data, and associated risks of data inaccuracy, racial profiling, erroneous deportations, security breaches, and privacy invasions.  The report calls for greater accountability in the biometrics context, including collection and retention limitations; clear rules for collection, use, and sharing; robust security; notice requirements; and independent oversight. 

In recent months, a number of policymakers have raised concerns about both public and private collection of biometric data.  For example,
Continue Reading Biometric Data Under the Privacy Microscope

The Article 29 Working Party (WP29) yesterday published an opinion on facial recognition in online and mobile services.  The WP29 states this technology requires “specific attention” as it presents “a range of data protection concerns”. 

The opinion focuses on facial technology being used in three main contexts: identifying people in social networks; authenticating and verifying users to control access to services; and categorising individuals, e.g., in the gaming context to enhance the user experience, allow/deny access to age-related content, or to display in-game targeted advertising. 

The opinion places a heavy emphasis on the need to obtain the informed consent of individuals prior to processing their data in connection with facial recognition technologies.  Perhaps of most interest to social networks and the public is the conclusion that facial recognition should not be used to automatically suggest the names of people who are not registered users of social networks for the purpose of tagging them in photographs.
Continue Reading Facial Recognition Opinion Targets Social Networks, Authentication Services and Games Consoles

Following up on its “Face Facts” workshop that brought together a variety of stakeholders to discuss the privacy issues relating to commercial uses of facial recognition technology, the FTC has announced that it is seeking public comment on the issues raised at the workshop.  According to the Commission, these issues include: 

  • What are the current