On June 5, 2025, the UK’s Information Commissioner’s Office (“ICO”) launched its new AI and biometrics strategy. Under the strategy, the ICO will increase its scrutiny of AI and biometric technologies, focusing on three priority situations: where the stakes are high; where there is clear public concern about the technology; and where regulatory clarity can have immediate impact.

The ICO identified three areas of focus in its strategy:

  1. Transparency and explainability, i.e., when and how the technologies affect people;
  2. Bias and discrimination, particularly where the technologies have been trained on “flawed, incomplete or unrepresentative information”; and
  3. Rights and redress, i.e., making sure that systems are accurate, appropriate safeguards are in place to protect people’s rights, and that there are ways to challenge and correct outcomes that result in harm.

The ICO’s previous work

The ICO’s new strategy builds on its previous work on both AI and biometric technologies. Past work includes, among other things: the ICO’s AI Tools in recruitment audit outcomes report; its consultation series on Generative AI; and its guidance on AI and data protection (discussed in our previous blogpost here). The ICO has also published biometric data guidance and conducted various studies on the privacy implications of biometric technologies.

The ICO’s strategy in more detail

The ICO has set out a “plan of action” to “ensure that organisations can develop and deploy AI and biometric technologies with confidence and that people are safeguarded from harm”. The plan touches on a wide range of AI regulatory issues and includes the following objectives:

  1. Provide certainty on the responsible use of AI and automated decision making (“ADM”) under data protection law. This will include consulting on ADM and profiling guidance by autumn 2025 and developing a statutory code of practice on AI and ADM to address the issues of transparency and explainability, bias and discrimination, and rights and redress.
  2. Ensure high standards of ADM in central government. The ICO plans to support the scaling up of the responsible use of AI across central government, including by setting out regulatory expectations and learning from early adopters of ADM within the government.
  3. Establish clear expectations for the responsible use of ADM in recruitment. The ICO will increase its focus on the use of ADM by major employers and recruiters, particularly with respect to transparency, discrimination and risks to redress, and intends to hold employers and recruiters to account for failures to respect people’s rights, for example by publishing findings.
  4. Scrutinise foundation model developers to ensure they protect people’s information and prevent harm. This involves seeking assurances from developers regarding the safeguarding of personal information, setting clear regulatory expectations to strengthen compliance where needed, and taking action where a model causes harm or is at risk of causing harm.
  5. Support and ensure the police’s use of rights-respecting and proportionate Facial Recognition Technology (“FRT”). The ICO’s biometric strategy will focus on the use of FRT by police forces; in particular, it plans to publish guidance on the governance and use of FRT by police forces in compliance with data protection law, audit police forces using FRT, and advise the government on changes to the law in this area.
  6. Consider emerging AI risks. The ICO will ramp up its focus on the data protection implications of agentic AI, including by engaging with industry and publishing a Tech Futures report examining accountability and redress. It also plans to establish a high threshold of lawfulness for AI systems that “infer subjective traits, intentions or emotions based on physical or behavioural characteristics”, continuing to survey use cases and act where systems cause harm.

This blog was drafted with the assistance of Emilia De Rosa, a trainee in the London office.

Madelaine Harrington

Madelaine Harrington is an associate in the technology and media group. Her practice covers a wide range of regulatory and policy matters at the cross-section of privacy, content moderation, artificial intelligence, and free expression. Madelaine has deep experience with regulatory investigations, and has counseled multi-national companies on complex cross-jurisdictional fact-gathering exercises and responses to alleged non-compliance. She routinely counsels clients on compliance within the EU regulatory framework, including the General Data Protection Regulation (GDPR), among other EU laws and legislative proposals.

Madelaine’s representative matters include:

coordinating responses to investigations into the handling of personal information under the GDPR;
counseling major technology companies on the use of artificial intelligence, specifically facial recognition technology in public spaces;
advising a major technology company on the legality of hacking defense tactics; and
advising a content company on compliance obligations under the DSA, including rules regarding recommender systems.

Madelaine’s work has previously involved representing U.S.-based clients on a wide range of First Amendment issues, including defamation lawsuits, access to courts, and FOIA. She maintains an active pro-bono practice representing journalists with various news-gathering needs.

Stacy Young

Stacy Young is an associate in the London office. She advises technology and life sciences companies across a range of privacy and regulatory issues spanning AI, clinical trials, data protection and cybersecurity.

Emilia De Rosa

Emilia De Rosa is an associate in the firm’s Commercial Litigation Practice Group. Emilia has experience advising companies on commercial disputes and class actions.