Sam Jungyun Choi

Recognized by Law.com International as a Rising Star (2023), Sam Jungyun Choi is an associate in the technology regulatory group in Brussels. She advises leading multinationals on European and UK data protection law and new regulations and policy relating to innovative technologies, such as AI, digital health, and autonomous vehicles.

Sam is an expert on the EU General Data Protection Regulation (GDPR) and the UK Data Protection Act, having advised on these laws since they started to apply. In recent years, her work has evolved to include advising companies on new data and digital laws in the EU, including the AI Act, the Data Act, and the Digital Services Act.

Sam's practice includes advising leading companies in the technology, life sciences, and gaming sectors on regulatory, compliance, and policy issues arising under laws relating to privacy and data protection, digital services, and AI. She advises clients on the design of new products and services, the preparation of privacy documentation, and the development of data and AI governance programs. She also advises clients on matters relating to children’s privacy and on policy initiatives relating to online safety.

On 8 October 2025, the European Commission published its Apply AI Strategy (the “Strategy”), a comprehensive policy framework aimed at accelerating the adoption and integration of artificial intelligence (“AI”) across strategic industrial sectors and the public sector in the EU.

The Strategy is structured around three pillars: (1) introducing sectoral flagships to boost AI use in key industrial sectors; (2) addressing cross-cutting challenges; and (3) establishing a single governance mechanism to provide sectoral stakeholders a way to participate in AI policymaking.

The Apply AI Strategy is accompanied by the AI in Science Strategy, and it will be complemented by the Data Union Strategy (which is anticipated later this year).
Continue Reading European Commission Publishes Apply AI Strategy to Accelerate Sectoral AI Adoption Across the EU

On June 26, 2025, the European Parliament’s Committee on Employment and Social Affairs published a draft report (“Draft Report”) recommending that the Commission initiate the legislative process for an EU Directive on algorithmic management in the workplace.  The Draft Report defines algorithmic management as the use of automated systems, including those involving artificial intelligence, to monitor, assess, or make decisions affecting workers and solo self-employed persons.

This Draft Report follows a Commission study published in March 2025 (“Commission Study”), which found that while existing EU legislation, such as the GDPR, addresses some risks to workers from algorithmic management, others remain.  The Commission Study also notes, as a concern, that the AI Act does not establish specific rights for workers in the context of AI use.

The Draft Report sets out the proposed text for a new Directive on algorithmic management in the workplace (“Proposed Directive”).  The Draft Report has not yet been endorsed by the European Parliament.
Continue Reading European Parliament Committee Recommends Commission to Propose EU Directive on Algorithmic Management

In February 2025, the European Commission published two sets of guidelines to clarify key aspects of the EU Artificial Intelligence Act (“AI Act”): Guidelines on the definition of an AI system and Guidelines on prohibited AI practices. These guidelines are intended to provide guidance on the set of AI Act obligations that started to apply on February 2, 2025 – which includes the definitions section of the AI Act, obligations relating to AI literacy, and prohibitions on certain AI practices.

This article summarizes the key takeaways from the Commission’s guidelines on the definition of AI systems (the “Guidelines”). Please see our blog on the guidelines on prohibited AI practices here, and our blog on AI literacy requirements under the AI Act here.
Continue Reading European Commission Guidelines on the Definition of an “AI System”

In February 2025, the European Commission published two sets of guidelines to clarify key aspects of the EU Artificial Intelligence Act (“AI Act”): Guidelines on the definition of an AI system and Guidelines on prohibited AI practices. These guidelines are intended to provide guidance on the set of AI Act obligations that started to apply on February 2, 2025 – which includes the definitions section of the AI Act, obligations relating to AI literacy, and prohibitions on certain AI practices.

This article summarizes the key takeaways from the Commission’s guidelines on prohibited AI practices (the “Guidelines”). Please see our blog on the guidelines on the definition of AI systems here, and our blog on AI literacy requirements under the AI Act here.
Continue Reading European Commission Guidelines on Prohibited AI Practices under the EU Artificial Intelligence Act

On April 3, 2025, the Budapest District Court made a request for a preliminary ruling to the Court of Justice of the European Union (“CJEU”) relating to the application of EU copyright rules to outputs generated by large language model (LLM)-based chatbots, specifically Google’s Gemini (formerly Bard), in response to a user prompt. The case (C-250/25) involves a dispute between Like Company, a Hungarian news publisher, and Google Ireland Ltd.
Continue Reading CJEU Receives Questions on Copyright Rules Applying to AI Chatbot

On June 2, 2025, the Global Cross-Border Privacy Rules (“CBPR”) Forum officially launched the Global CBPR and Privacy Recognition for Processors (“PRP”) certifications.  Building on the existing Asia-Pacific Economic Cooperation (“APEC”) CBPR framework, the Global CBPR and PRP systems aim to extend privacy certifications beyond the APEC region.  They will allow controllers and processors to voluntarily undergo certification for their privacy and data governance measures under a framework that is recognized by many data protection authorities around the world.  The Global CBPR and PRP certifications are also expected to be recognized in multiple jurisdictions as a legitimizing mechanism for cross-border data transfers.
Continue Reading Global CBPR and PRP Certifications Launched: A New International Data Transfer Mechanism

AI chatbots are transforming how businesses handle consumer inquiries and complaints, offering speed and availability that traditional channels often cannot match.  However, the European Commission’s recent Digital Fairness Act Fitness Check has spotlighted a gap: EU consumers currently lack a cross-sectoral right to demand human contact when interacting with AI chatbots in business-to-consumer settings.  It is still unclear whether and how the European Commission proposes to address this.  The Digital Fairness Act could do so, but the Commission’s proposal is not expected to be published until the third quarter of 2026.  This post highlights key consumer protection considerations for companies deploying AI chatbots in the EU market.
Continue Reading Digital Fairness Act Series: Topic 2 – Transparency and Disclosure Obligations for AI Chatbots in Consumer Interactions

On May 7, 2025, the European Commission published a Q&A on the AI literacy obligation under Article 4 of the AI Act (the “Q&A”).  The Q&A builds upon the Commission’s guidance on AI literacy provided in its webinar in February 2025, covered in our earlier blog here.  Among other things, the Commission clarifies that the AI literacy obligation started to apply from February 2, 2025, but that the national market surveillance authorities tasked with supervising and enforcing the obligation will start doing so from August 3, 2026 onwards.
Continue Reading European Commission Publishes Q&A on AI Literacy

The “market” for AI contracting terms continues to evolve, and whilst there is no standardised approach (as much will depend on the use cases, technical features, and commercial terms), a number of attempts have been made to put forward contracting models. One of the latest comes from the EU’s Community of Practice on Public Procurement of AI, which published an updated version of its non-binding EU AI Model Contractual Clauses (“MCC-AI”) on March 5, 2025. The MCC-AI are template contractual clauses intended to be used by public organizations that procure AI systems developed by external suppliers.  An initial draft had been published in September 2023.  This latest version has been updated to align with the EU AI Act, which entered into force on August 1, 2024 but whose obligations apply in stages.  Two templates are available: one for public procurement of “high-risk” AI systems, and another for non-high-risk AI systems. A commentary, which provides guidance on how to use the MCC-AI, is also available.
Continue Reading EU’s Community of Practice Publishes Updated AI Model Contractual Clauses

On November 8, 2024, the UK’s communications regulator, the Office of Communications (“Ofcom”), published an open letter to online service providers operating in the UK regarding the Online Safety Act (“OSA”) and generative AI (the “Open Letter”).  In the Open Letter, Ofcom reminds online service providers that generative AI tools, such as chatbots and search assistants, may fall within the scope of regulated services under the OSA.  More recently, Ofcom has also published several pieces of guidance (some of which are under consultation) that include further commentary on how the OSA applies to generative AI services.
Continue Reading Ofcom Explains How the UK Online Safety Act Will Apply to Generative AI