
Jiayen Ong is an associate in the technology regulatory group in London. Her practice focuses on regulatory compliance and advisory work relating to European data protection law, as well as new policies and legislation on innovative technologies, including advising on EU and national initiatives relating to artificial intelligence, cybersecurity and data sharing.


On April 17, 2023, the UK applied to join the Global Cross-Border Privacy Rules (“CBPR”) Forum as an Associate member. It is the first country to apply to participate in the Global CBPR Forum as an Associate since the Forum’s inception one year ago. Alongside its application, the UK co-hosted the Global CBPR Forum workshop “At One Year: Challenges and Opportunities”, which took place from April 17 to April 20, 2023.

Continue Reading Global CBPR Forum: A New International Data Transfer Mechanism

2023 is set to be an important year for developments in AI regulation and policy in the EU. At the end of last year, on December 6, 2022, the Council of the EU (the “Council”) adopted its general approach and compromise text on the proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the “AI Act”), bringing the AI Act one step closer to being adopted. The European Parliament is currently developing its own position on the AI Act which is expected to be finalized by March 2023. Following this, the Council, Parliament and European Commission (“Commission”) will enter into trilogue discussions to finalize the Act. Once adopted, it will be directly applicable across all EU Member States and its obligations are likely to apply three years after the AI Act’s entry into force (according to the Council’s compromise text).  

In 2022, the Commission also put forward new liability rules for AI systems via the proposed AI Liability Directive (“AILD”) and updates to the Product Liability Directive (“PLD”). The AILD establishes rules for non-contractual, fault-based civil claims involving AI systems. Specifically, the proposal establishes rules that would govern the preservation and disclosure of evidence in cases involving high-risk AI, as well as rules on the burden of proof and corresponding rebuttable presumptions. Meanwhile, the revised PLD harmonizes rules that apply to no-fault liability claims brought by persons who suffer physical injury or damage to property caused by defective products. Software, including AI systems, is explicitly named as a “product” under the proposal, meaning that an injured person can claim compensation for damage caused by AI (see our previous blog post for further details on the proposed AILD and PLD). Both pieces of legislation will be reviewed, and potentially amended, by the Council and the European Parliament in 2023.

Continue Reading EU AI Policy and Regulation: What to look out for in 2023

On December 9, 2022, the European Commissioner for Justice and Consumer Protection, Didier Reynders, announced that the European Commission will focus its 2023 mandate on regulating dark patterns, alongside transparency in the online advertising market and cookie fatigue. As part of this mandate, the EU’s Consumer Protection Cooperation (“CPC”) Network conducted a sweep of 399 retail websites and apps for dark patterns, and found that nearly 40% of online shopping websites rely on manipulative practices that exploit consumers’ vulnerabilities or trick them.

The EU does not have a single piece of legislation that regulates dark patterns; instead, several instruments address dark patterns and may be used as tools to protect consumers from them. These include the General Data Protection Regulation (“GDPR”), the Digital Services Act (“DSA”), the Digital Markets Act (“DMA”) and the Unfair Commercial Practices Directive (“UCPD”), as well as proposed legislation such as the AI Act and the Data Act.

As a result, there are several regulations and guidelines that organizations must consider when assessing whether their practices may be deemed a dark pattern. In this blog post, we provide a snapshot of the current EU legislation that regulates dark patterns, as well as upcoming legislative developments that will apply alongside the existing legal framework.

Continue Reading The EU Stance on Dark Patterns

On October 12, 2022, the UK Information Commissioner’s Office (“ICO”) opened a public consultation seeking feedback on its draft guidance on employment practices, specifically relating to monitoring at work (the “Monitoring at Work Guidance”). The draft aims to provide practical advice and good practices for monitoring workers in accordance with data protection legislation.

Continue Reading UK Information Commissioner’s Office released a New Draft Employment Guidance for Monitoring at Work

On 6 October 2021, the European Parliament (“EP”) voted in favor of a resolution banning the use of facial recognition technology (“FRT”) by law enforcement in public spaces. The resolution forms part of a non-legislative report on the use of artificial intelligence (“AI”) by the police and judicial authorities in criminal matters (“AI Report”) published by the EP’s Committee on Civil Liberties, Justice and Home Affairs (“LIBE”) in July 2021. The AI Report will now be sent to the European Commission, which has three months to either (i) submit, or indicate it will submit, a legislative proposal on the use of AI by the police and judicial authorities as set out in the AI Report; or (ii) if it chooses not to submit a proposal, explain why.

Continue Reading European Parliament Votes in Favor of Banning the Use of Facial Recognition in Law Enforcement

On 22 September 2021, the UK Government published its 10-year strategy on artificial intelligence (“AI”; the “UK AI Strategy”).

The UK AI Strategy has three main pillars: (1) investing and planning for the long-term requirements of the UK’s AI ecosystem; (2) supporting the transition to an AI-enabled economy across all sectors and regions of the UK; and (3) ensuring that the UK gets the national and international governance of AI technologies “right”.

The approach to AI regulation as set out in the UK AI Strategy is largely pro-innovation, in line with the UK Government’s Plan for Digital Regulation published in July 2021.

Continue Reading The UK Government Publishes its AI Strategy

On 2 September 2021, the transition year for the Children’s code (or Age Appropriate Design Code) published by the UK Information Commissioner’s Office (“ICO”) ended. The ICO’s Children’s code was first published in September 2020, with a 12-month transition period. In an accompanying blog post, the ICO stated that it will be “proactive in requiring social media platforms, video and music streaming sites and the gaming industry to tell [the ICO] how their services are designed in line with the code.”

Over the summer, the ICO also approved two certification schemes under the UK GDPR. The certification schemes provide organizations with a mechanism to demonstrate their high level of commitment to data protection compliance.

Continue Reading UK ICO’s Children’s Code Transition Year Ends and ICO Approves Related Certification Schemes