On April 4, 2023, the European Commission announced that the EU and Japan had successfully completed the first periodic review of the Japan-EU mutual adequacy arrangement, adopted in 2019.  The mutual adequacy recognition – whereby Japan and the EU have each recognized the other’s data protection regime as adequate to protect personal data – complements the parties’ other bilateral partnerships, such as the EU-Japan Economic Partnership Agreement, the Strategic Partnership Agreement, and the recently launched EU-Japan Digital Partnership (see our previous blog post here).

The review process led to the adoption of two reports by the Commission and the Personal Information Protection Commission of Japan (“PPC”), each discussing the functioning of their respective adequacy decisions.  According to the Commission’s report, the convergence between the EU and Japanese data protection frameworks has further increased in recent years, and the mutual adequacy arrangement appears to be functioning well.  We provide below a brief overview of the Commission’s main findings.

Continue Reading European Commission Announces Conclusion of First Review of Japan-EU Adequacy Arrangement

On 29 March 2023, the UK Information Commissioner’s Office (“ICO”) published updated Guidance on AI and data protection (the “Guidance”) following “requests from UK industry to clarify requirements for fairness in AI”. AI has been a strategic priority for the ICO for several years. In 2020, the ICO published its first set of guidance on AI (as discussed in our blog post here) which it complemented with supplementary recommendations on Explaining Decisions Made with AI and an AI and Data Protection risk toolkit in 2022. The updated Guidance forms part of the UK’s wider efforts to adopt a “pro-innovation” approach to AI regulation which will require existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors (for further information on the UK Government’s approach to AI regulation, see our blog post here).

The updated Guidance covers the ICO’s view of best practice for data protection-compliant AI, as well as how the ICO interprets data protection law in the context of AI systems that process personal data. The Guidance has been restructured in line with the UK GDPR’s data protection principles, and features new content, including guidance on fairness, transparency, lawfulness and accountability when using AI systems.

Continue Reading UK ICO Updates Guidance on Artificial Intelligence and Data Protection

On 29 March 2023, the UK Government published a White Paper entitled “A pro-innovation approach to AI regulation” (“White Paper”). The White Paper elaborates on the approach to AI set out by the Government in its 2022 AI Governance and Regulation Policy Statement (“Policy Statement” – covered in our blog post here). This announcement comes following the Government’s commitments, in the Spring Budget 2023, to build an expert taskforce to develop the UK’s capabilities in AI foundation models and produce guidance on the relationship between intellectual property law and generative AI (for more details of these initiatives, see here).

In its White Paper, the UK Government confirms that, unlike the EU, it does not plan to adopt new legislation to regulate AI, nor will it create a new regulator for AI (for further details on the EU’s proposed AI regulation see our blog posts here and here). Instead, the UK would require existing regulators, including the UK Information Commissioner’s Office (“ICO”), to take responsibility for the establishment, promotion, and oversight of responsible AI in their respective sectors. Regulators’ activities would be reinforced by the establishment of new support and oversight functions within central Government. This approach is already beginning to play out in certain regulated areas in the UK. For example, in October 2022, the Bank of England and Financial Conduct Authority (“FCA”) jointly released a Discussion Paper on Artificial Intelligence and Machine Learning considering how AI in financial services should be regulated and, in March 2023, the ICO updated its Guidance on AI and Data Protection.  

Continue Reading UK Government Adopts a “Pro-Innovation” Approach to AI Regulation

On April 11, 2023, the Indiana legislature passed comprehensive state privacy legislation in the form of S.B. 5.  S.B. 5 shares similarities with the state privacy laws in Virginia, Connecticut, Colorado, Utah, and most recently Iowa.  If signed into law, S.B. 5 would take effect on January 1, 2026.  This blog post summarizes the statute’s key takeaways.

Continue Reading Indiana Passes Comprehensive Privacy Statute

Washington’s My Health My Data Act (“HB 1155” or the “Act”), which would expand privacy protections for the health data of Washington consumers, recently passed the state Senate after advancing through the state House of Representatives.  Provided that the House approves the Senate’s amendments, the Act could head to the governor’s desk for signature in the coming days and become law.  The Act was introduced in response to the United States Supreme Court’s Dobbs decision overturning Roe v. Wade.   If enacted, the Act could dramatically affect how companies treat the health data of Washington residents. 

This blog post summarizes a few key takeaways in the statute.

Continue Reading Washington’s My Health My Data Act Passes State Senate

On April 11, 2023, the Cyberspace Administration of China (“CAC”) released draft Administrative Measures for Generative Artificial Intelligence Services (《生成式人工智能服务管理办法(征求意见稿)》) (“draft Measures”) (official Chinese version available here) for public consultation.  The deadline for submitting comments is May 10, 2023.

The draft Measures would regulate generative Artificial Intelligence (“AI”) services that are “provided to the public in mainland China.”  These requirements cover a wide range of issues that are frequently debated in relation to the governance of generative AI globally, such as data protection, non-discrimination, bias and the quality of training data.  The draft Measures also highlight issues arising from the use of generative AI that are of particular concern to the Chinese government, such as content moderation, the completion of a security assessment for new technologies, and algorithmic transparency.  The draft Measures thus reflect the Chinese government’s objective to craft its own governance model for new technologies such as generative AI.

Further, and notwithstanding the requirements introduced by the draft Measures (described in greater detail below), the text states that the government encourages the indigenous development of, and international cooperation in relation to, generative AI technology, and encourages companies to adopt “secure and trustworthy software, tools, computing and data resources” to that end.

Notably, the draft Measures do not distinguish between generative AI services offered to individual consumers and those offered to enterprise customers, although certain requirements appear to be directed more at consumer-facing services than at enterprise services.

This blog post identifies a few highlights of the draft Measures.

Continue Reading China Proposes Draft Measures to Regulate Generative AI

On March 24, 2023, the Italian data protection authority (“Garante”) approved a Code of Conduct (“Code”) on telemarketing and telesales activities.  The Code was promoted by various Italian industry and consumer associations, pursuant to Article 40 of the GDPR.

The Garante notes that the Code reflects broad industry consensus, and welcomes it as an important step toward ensuring the lawful performance of the covered activities.  The Garante has historically been active in regulating telemarketing and telesales companies, and has imposed some of its largest fines in this sector.  We provide below an overview of the Code’s key provisions and obligations.

Continue Reading Italian Garante Approves Code of Conduct on Telemarketing and Telesales

The UK Information Commissioner’s Office (“ICO”) recently published detailed draft guidance on what “likely to be accessed” by children means in the context of its Age-Appropriate Design Code (“Code”), which came into force on September 2, 2020. The Code applies to online services “likely to be accessed by children” in the UK. For these purposes, “children” are individuals under the age of 18. In order to determine whether an online service is “likely to be accessed” by children, companies must assess whether the nature and content of the service has “particular appeal for children” and consider “the way in which the service is accessed”. This new draft guidance provides further assistance on how to make this assessment, and is undergoing a public consultation until May 19, 2023.

Continue Reading UK ICO Provides Guidance On When A Service Is “Likely To Be Accessed By Children” And Needs To Comply With Its Age-Appropriate Design Code

Regulators in Europe and beyond have been ramping up their efforts related to online safety for minors, through new legislation, guidance, and by promoting self-regulatory tools.  We discuss below recent developments in the EU and UK on age verification online.

Continue Reading Age Verification: State of Play and Key Developments in the EU and UK

On March 15, 2023, the Colorado Attorney General filed final rules implementing the Colorado Privacy Act (“CPA”) with the Secretary of State.  The Attorney General first released proposed draft rules on October 10, 2022, and subsequently released revised draft rules on December 21, 2022 and January 27, 2023 following public comment.  The final rules will take effect on July 1, 2023, the same day the CPA itself becomes effective.