Sam Jungyun Choi

Recognized by Law.com International as a Rising Star (2023), Sam Jungyun Choi is an associate in the technology regulatory group in Brussels. She advises leading multinationals on European and UK data protection law and new regulations and policy relating to innovative technologies, such as AI, digital health, and autonomous vehicles.

Sam is an expert on the EU General Data Protection Regulation (GDPR) and the UK Data Protection Act, having advised on these laws since they started to apply. In recent years, her work has evolved to include advising companies on new data and digital laws in the EU, including the AI Act, Data Act and the Digital Services Act.

Sam's practice includes advising leading companies in the technology, life sciences and gaming sectors on regulatory, compliance and policy issues relating to privacy and data protection, digital services and AI. She advises clients on the design of new products and services, preparing privacy documentation, and developing data and AI governance programs. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.

On March 25, 2026, the UK’s Office of Communications (“Ofcom”) and the Information Commissioner’s Office (“ICO”) published a joint statement setting out their common expectations for age assurance on online services (“Joint Statement”). The Joint Statement is aimed at services likely to be accessed by children that fall within the scope of the Online Safety Act 2023 (“OSA”) and UK data protection legislation, and is designed to help providers comply with both their online safety and data protection obligations when deploying age assurance.

The Joint Statement arrives alongside a broader push from both regulators—including Ofcom’s recent call to action directed at major tech firms, an open letter from the ICO urging platforms to strengthen their age checks, and several enforcement actions by both regulators.

Continue Reading Ofcom and ICO Issue Joint Statement on Age Assurance

On March 2, 2026, the UK Department for Science, Innovation and Technology (“DSIT”) launched its consultation, titled “Growing up in the online world: a national conversation”. The consultation is open until May 26, 2026, after which the government will publish a summary of responses and its proposed approach. DSIT has indicated that it intends to move quickly on the consultation’s findings, drawing on newly granted powers that allow for accelerated implementation of online safety measures.

The consultation seeks views on a wide range of potential measures to strengthen children’s safety and wellbeing online, including more robust age‑assurance mechanisms, a statutory minimum age for social media, raising the UK’s age of digital consent, restrictions on certain features (such as livestreaming and disappearing messages), and new obligations for AI chatbots and generative‑AI services.

DSIT’s proposals could significantly expand regulatory expectations beyond the Online Safety Act 2023 (“OSA”)—including potential age‑based access limits (including differing safeguards as between teens and younger children), feature‑level restrictions, and enhanced duties for AI‑enabled services. Early engagement will be important to ensure that the government takes account of the views of affected service providers and understands the operational and technical implications of the measures proposed.

Continue Reading UK Government Launches Consultation on Children’s Online Experiences, Including New Obligations for AI

On February 19, 2026, the UK Court of Appeal handed down its decision in DSG Retail Limited v The Information Commissioner [2026] EWCA Civ 140. The Court ruled that a controller’s data security duty applies to all personal data for which it acts as controller – irrespective of whether the information would constitute personal data in the hands of a third party (in this case, an attacker). Note that the case concerns events before the GDPR came into force, so the legal context is provided by the UK Data Protection Act 1998 (“DPA 1998”), although the Court did take into account more recent jurisprudence, including CJEU case law.

The case adds useful colour to ongoing debates surrounding the definition of “personal data.” The Court of Appeal confirmed that a controller’s duty to implement appropriate measures to protect personal data applies to data that is “personal” from the perspective of the controller—even if a third-party attacker could not identify individuals from the exfiltrated dataset. This dovetails with the CJEU’s clarification in EDPS v SRB that whether data is “personal” can depend on the context, while a controller’s obligations (such as transparency) must be assessed from the controller’s perspective at the relevant time (which, for the transparency principle, is the time of collection of the data). (For more information on EDPS v SRB, see our prior post here.)

Continue Reading UK Court of Appeal Rules on the Concept of Personal Data in the Context of Data Security

On February 18, 2026, the European Data Protection Board (“EDPB”) published its Report on Stakeholder Event on Anonymisation and Pseudonymisation of 12 December 2025 (the “Report”). The Report summarises feedback from a remote stakeholder event convened to inform the EDPB’s ongoing work on Guidelines 01/2025 on Pseudonymisation (version for public consultation available here) and forthcoming guidance on anonymisation. The event gathered input from 115 participants spanning industry, NGOs, academia, law firms, and public sector bodies.

The objective of the Report is to capture stakeholder insights on how the General Data Protection Regulation (“GDPR”) applies to anonymisation and pseudonymisation, particularly following the Court of Justice of the European Union’s (“CJEU”) judgment in EDPS v SRB (C‑413/23 P). (See our previous blog post here.)

Continue Reading EDPB Publishes Report on Stakeholder Event on Anonymisation and Pseudonymisation

On October 8, 2025, the European Commission published its Apply AI Strategy (the “Strategy”), a comprehensive policy framework aimed at accelerating the adoption and integration of artificial intelligence (“AI”) across strategic industrial sectors and the public sector in the EU.

The Strategy is structured around three pillars: (1) introducing sectoral flagships to boost AI use in key industrial sectors; (2) addressing cross-cutting challenges; and (3) establishing a single governance mechanism to provide sectoral stakeholders a way to participate in AI policymaking.

The Apply AI Strategy is accompanied by the AI in Science Strategy, and it will be complemented by the Data Union Strategy (which is anticipated later this year).

Continue Reading European Commission Publishes Apply AI Strategy to Accelerate Sectoral AI Adoption Across the EU

On June 26, 2025, the European Parliament’s Committee on Employment and Social Affairs published a draft report (“Draft Report”) recommending that the Commission initiate the legislative process for an EU Directive on algorithmic management in the workplace.  The Draft Report defines algorithmic management as the use of automated systems, including those involving artificial intelligence, to monitor, assess, or make decisions affecting workers and solo self-employed persons.

This Draft Report follows a Commission study published in March 2025 (“Commission Study”), which found that while existing EU legislation, such as the GDPR, addresses some risks to workers from algorithmic management, other risks remain unaddressed.  The Commission Study also notes, as a concern, that the AI Act does not establish specific rights for workers in the context of AI use.

The Draft Report includes the proposed text for a new Directive on algorithmic management in the workplace (“Proposed Directive”).  The Draft Report has not yet been endorsed by the European Parliament.

Continue Reading European Parliament Committee Recommends Commission to Propose EU Directive on Algorithmic Management

In February 2025, the European Commission published two sets of guidelines to clarify key aspects of the EU Artificial Intelligence Act (“AI Act”): Guidelines on the definition of an AI system and Guidelines on prohibited AI practices. These guidelines are intended to provide guidance on the set of AI Act obligations that started to apply on February 2, 2025 – which includes the definitions section of the AI Act, obligations relating to AI literacy, and prohibitions on certain AI practices.

This article summarizes the key takeaways from the Commission’s guidelines on the definition of AI systems (the “Guidelines”). Please see our blogs on the guidelines on prohibited AI practices here, and our blog on AI literacy requirements under the AI Act here.

Continue Reading European Commission Guidelines on the Definition of an “AI System”

In February 2025, the European Commission published two sets of guidelines to clarify key aspects of the EU Artificial Intelligence Act (“AI Act”): Guidelines on the definition of an AI system and Guidelines on prohibited AI practices. These guidelines are intended to provide guidance on the set of AI Act obligations that started to apply on February 2, 2025 – which includes the definitions section of the AI Act, obligations relating to AI literacy, and prohibitions on certain AI practices.

This article summarizes the key takeaways from the Commission’s guidelines on prohibited AI practices (“Guidelines”). Please see our blogs on the guidelines on the definition of AI systems here, and our blog on AI literacy requirements under the AI Act here.

Continue Reading European Commission Guidelines on Prohibited AI Practices under the EU Artificial Intelligence Act

On April 3, 2025, the Budapest District Court made a request for a preliminary ruling to the Court of Justice of the European Union (“CJEU”) relating to the application of EU copyright rules to outputs generated by large language model (“LLM”)-based chatbots, specifically Google’s Gemini (formerly Bard), in response to a user prompt. The case, C-250/25, involves a dispute between Like Company, a Hungarian news publisher, and Google Ireland Ltd.

Continue Reading CJEU Receives Questions on Copyright Rules Applying to AI Chatbot

On June 2, 2025, the Global Cross-Border Privacy Rules (“CBPR”) Forum officially launched the Global CBPR and Privacy Recognition for Processors (“PRP”) certifications.  Building on the existing Asia-Pacific Economic Cooperation (“APEC”) CBPR framework, the Global CBPR and PRP systems aim to extend privacy certifications beyond the APEC region.  They will allow controllers and processors to voluntarily undergo certification for their privacy and data governance measures under a framework that is recognized by many data protection authorities around the world.  The Global CBPR and PRP certifications are also expected to be recognized in multiple jurisdictions as a legitimizing mechanism for cross-border data transfers.

Continue Reading Global CBPR and PRP Certifications Launched: A New International Data Transfer Mechanism