On 6 October 2021, the European Parliament (“EP”) voted in favor of a resolution calling for a ban on the use of facial recognition technology (“FRT”) by law enforcement in public spaces. The resolution forms part of a non-legislative report on the use of artificial intelligence (“AI”) by the police and judicial authorities in criminal matters (“AI Report”) published by the EP’s Committee on Civil Liberties, Justice and Home Affairs (“LIBE”) in July 2021. The AI Report will now be sent to the European Commission, which has three months to either (i) submit, or indicate it will submit, a legislative proposal on the use of AI by the police and judicial authorities as set out in the AI Report; or (ii) if it chooses not to submit a proposal, explain why.

The AI Report acknowledges the potential opportunities and advantages presented by the use of AI in law enforcement (e.g., FRT, speaker identification, aural surveillance (i.e., gunshot detection algorithms), social media monitoring, etc.), particularly in allowing law enforcement agencies to combat certain types of crimes more efficiently (e.g., financial crime, money laundering and terrorist financing, and cybercrime). However, the AI Report stresses the potential risks posed by AI applications, such as opaque decision-making, bias, intrusion into private lives, and challenges to the protection of personal data, human dignity, and freedom of expression and information.

To balance the opportunities and risks presented by AI, the AI Report sets out some key recommendations including (among others):

  • Calling for a ban on the use of FRT for law enforcement purposes until the technical standards can be considered fully compliant with fundamental rights, the results derived are non-biased, the legal framework provides strict safeguards against misuse, and there is empirical evidence of the necessity and proportionality of deploying FRT.
  • Permanently prohibiting law enforcement from using automated analysis of other human features, such as gait, fingerprints, DNA, voice, and other biometric and behavioral signals.
  • Subjecting the use of biometric data to remotely identify people for law enforcement purposes (e.g., border control gates that use automated recognition) to additional requirements and safeguards.
  • Banning the use of private FRT databases by law enforcement and intelligence services.
  • Opposing the use of predictive policing based on behavioral and historic data about individuals or groups.
  • Supporting a ban on mass-scale social scoring systems, which seek to rate the trustworthiness of citizens based on their behavior or personality.
  • Emphasizing the need for human supervision and strong legal powers to prevent discrimination (e.g., human operators must always make the final decisions and subjects monitored by AI-powered systems must have access to remedy).

In parallel to the EP’s vote on the AI Report, in April 2021, the European Commission proposed a Regulation laying down Harmonized Rules on Artificial Intelligence (“AI Regulation”) (see our previous blog post here). The proposed AI Regulation contains specific provisions on the use of real time remote biometric identification systems by law enforcement authorities. The concerns raised in the AI Report indicate the positions the EP is likely to take in upcoming negotiations with the European Commission and the Council on the AI Regulation.

The use of AI and FRT by law enforcement is already subject to regulation under data protection law regimes, and is being closely examined by government and data protection authorities around the world. Some developments we have covered in our previous blogs include:

  • The UK’s Information Commissioner’s Office (“ICO”) published its opinion on the use of live FRT by police forces (see our previous blog post here).
  • The French Supervisory Authority (“CNIL”) issued strict guidance on the use of FRT at airports (see our previous blog post here) and general guidance on the use of FRT (see our previous blog post here).
  • Washington state in the U.S. passed a bill that regulates state and local government agencies’ use of facial recognition services (see our previous blog post here).

We will continue to closely monitor the EU’s regulatory and policy developments on AI and FRT and will be updating this site regularly – please watch this space for further updates.

Lisa Peets

Lisa Peets is co-chair of the firm’s Technology and Communications Regulation Practice Group and a member of the firm’s global Management Committee. Lisa divides her time between London and Brussels, and her practice encompasses regulatory compliance and investigations alongside legislative advocacy. For more than two decades, she has worked closely with many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU and UK legal frameworks affecting technology providers, including data protection, content moderation, artificial intelligence, platform regulation, copyright, e-commerce and consumer protection, and the rapidly expanding universe of additional rules applicable to technology, data and online services.

Lisa also supports Covington’s disputes team in litigation involving technology providers.

According to Chambers UK (2024 edition), “Lisa provides an excellent service and familiarity with client needs.”

Marty Hansen

Martin Hansen has over two decades of experience representing some of the world’s leading innovative companies in the internet, IT, e-commerce, and life sciences sectors on a broad range of regulatory, intellectual property, and competition issues, including related to artificial intelligence. Martin has extensive experience in advising clients on matters arising under EU and U.S. law, UK law, the World Trade Organization agreements, and other trade agreements.

Sam Jungyun Choi

Recognized by Law.com International as a Rising Star (2023), Sam Jungyun Choi is an associate in the technology regulatory group in Brussels. She advises leading multinationals on European and UK data protection law and new regulations and policy relating to innovative technologies, such as AI, digital health, and autonomous vehicles.

Sam is an expert on the EU General Data Protection Regulation (GDPR) and the UK Data Protection Act, having advised on these laws since they started to apply. In recent years, her work has evolved to include advising companies on new data and digital laws in the EU, including the AI Act, Data Act and the Digital Services Act.

Sam’s practice includes advising on regulatory, compliance and policy issues that affect leading companies in the technology, life sciences and gaming sectors on laws relating to privacy and data protection, digital services and AI. She advises clients on the design of new products and services, preparing privacy documentation, and developing data and AI governance programs. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.

Marianna Drake

Marianna Drake counsels leading multinational companies on some of their most complex regulatory, policy and compliance-related issues, including data privacy and AI regulation. She focuses her practice on compliance with UK, EU and global privacy frameworks, and new policy proposals and regulations relating to AI and data. She also advises clients on matters relating to children’s privacy, online safety and consumer protection and product safety laws.

Her practice includes defending organizations in cross-border, contentious investigations and regulatory enforcement in the UK and EU Member States. Marianna also routinely partners with clients on the design of new products and services, drafting and negotiating privacy terms, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of AI technologies.

Marianna’s pro bono work includes providing data protection advice to UK-based human rights charities, and supporting a non-profit organization in conducting legal research for strategic litigation.