Artificial Intelligence (AI)

On September 8, 2023, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO), Chair and Ranking Member of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, announced a new bipartisan framework for artificial intelligence (“AI”) legislation.  Senator Blumenthal said, “This bipartisan framework is a milestone – the first tough, comprehensive legislative blueprint for real, enforceable AI protections. It should put us on a path to addressing the promise and peril AI portends.” He also told CTInsider that he hopes to have a “detailed legislative proposal” ready for Congress by the end of this year.

Continue Reading Senators Release Bipartisan Framework for AI Legislation

On August 25, 2023, China’s National Information Security Standardization Technical Committee (“TC260”) released the final version of the Practical Guidelines for Cybersecurity Standards – Method for Tagging Content in Generative Artificial Intelligence Services (《网络安全标准实践指南——生成式人工智能服务内容标识方法》) (“Tagging Standard”) (Chinese version available here), following a draft version circulated earlier this month.

Continue Reading Labeling of AI Generated Content: New Guidelines Released in China

On July 13, 2023, the Cyberspace Administration of China (“CAC”), in conjunction with six other agencies, jointly issued the Interim Administrative Measures for Generative Artificial Intelligence Services (《生成式人工智能服务管理暂行办法》) (“Generative AI Measures” or “Measures”) (official Chinese version here).  The Generative AI Measures are set to take effect on August 15, 2023.

As the first comprehensive AI regulation in China, the Measures cover a wide range of topics touching upon how Generative AI Services are developed and how such services can be offered.  These topics range from AI governance, training data, and tagging and labeling to data protection and user rights.  In this blog post, we spotlight a few of the most important points that could affect a company’s decision to develop and deploy Generative AI Services in China.

This final version follows the first draft, which was released for public consultation in April 2023 (see our previous post here). Several requirements were removed from the April 2023 draft, including, for example, the prohibition of user profiling, user real-name verification, and the requirement to take measures within three months, through model optimization training, to prevent illegal content from being generated again.  However, several provisions in the final version remain vague (potentially by design) and leave room for future regulatory guidance as the generative AI landscape continues to evolve.

Continue Reading Key Takeaways from China’s Finalized Generative Artificial Intelligence Measures

On 21 June 2023, at the close of a roundtable meeting of the G7 Data Protection and Privacy Authorities, regulators from the United States, France, Germany, Italy, the United Kingdom, Canada, and Japan published a joint “Statement on Generative AI” (“Statement”) (available here). In the Statement, the regulators identify a range of data protection-related concerns they believe are raised by generative AI tools, including the legal authority for processing personal information, as well as transparency, explainability, and security. The regulators also call on companies to “embed privacy in the design conception, operation, and management” of generative AI tools.

In advance of the G7 meeting, on 15 June 2023, the UK Information Commissioner’s Office (“ICO”) separately announced that it will be “checking” whether businesses have addressed privacy risks before deploying generative AI, and “taking action where there is risk of harm to people through poor use of their data”.

Continue Reading UK and G7 Privacy Authorities Warn of Privacy Risks Raised by Generative AI

On 31 May 2023, at the close of the fourth meeting of the EU-US Trade and Technology Council (“TTC”), Margrethe Vestager, the European Union’s Executive Vice President responsible for competition and digital strategy, announced that the EU and US are working together to develop a voluntary AI Code of Conduct in advance of formal regulation taking effect. The goal, according to Vestager, is to develop non-binding international standards on risk audits, transparency, and other requirements for companies developing AI systems. The AI Code of Conduct, once finalized, would be put before G7 leaders as a joint transatlantic proposal, and companies would be encouraged to sign up voluntarily.

Continue Reading EU and US Lawmakers Agree to Draft AI Code of Conduct

On 11 May 2023, members of the European Parliament’s internal market (IMCO) and civil liberties (LIBE) committees agreed their final text on the EU’s proposed AI Act. After MEPs formalize their position through a plenary vote (expected this summer), the AI Act will enter the last stage of the legislative process: “trilogue” negotiations among the European Commission, the Parliament, and the Council, which adopted its own amendments in late 2022 (see our blog post here for further details). European lawmakers hope to adopt the final AI Act before the end of 2023, ahead of the European Parliament elections in 2024.

In perhaps the most significant change from the Commission and Council draft, under MEPs’ proposals, providers of foundation models – a term defined as an AI model that is “trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks” (Article 3(1c)) – would be subject to a series of obligations. For example, providers would be under a duty to “demonstrate through appropriate design, testing and analysis that the identification, the reduction and mitigation of reasonably foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law prior and throughout development” (Article 28b(2)(a)), as well as to draw up “extensive technical documentation and intelligible instructions for use” to help those that build AI systems using the foundation model (Article 28b(2)(e)).

Continue Reading EU Parliament’s AI Act Proposals Introduce New Obligations for Foundation Models and Generative AI

On 4 May 2023, the UK Competition and Markets Authority (“CMA”) announced it is launching a review into AI foundation models and their potential implications for the UK competition and consumer protection regime. The CMA’s review is part of the UK’s wider approach to AI regulation which will require existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors (for further information on the UK Government’s strategy, including its recent AI White Paper, see our blog post here). The UK Information Commissioner’s Office (“ICO”) has also recently published guidance for businesses on best practices for data protection-compliant AI (see our post here for more details).

Continue Reading UK’s Competition and Markets Authority Launches Review into AI Foundation Models

On May 1, 2023, the White House Office of Science and Technology Policy (“OSTP”) announced that it will release a Request for Information (“RFI”) to learn more about automated tools used by employers to “surveil, monitor, evaluate, and manage workers.”  The White House will use the insights gained from the RFI to develop policy and best practices surrounding the use of AI in the workplace.

Continue Reading White House Issues Request for Comment on Use of Automated Tools with the Workforce

On 29 March 2023, the UK Information Commissioner’s Office (“ICO”) published updated Guidance on AI and data protection (the “Guidance”) following “requests from UK industry to clarify requirements for fairness in AI”. AI has been a strategic priority for the ICO for several years. In 2020, the ICO published its first set of guidance on AI (as discussed in our blog post here) which it complemented with supplementary recommendations on Explaining Decisions Made with AI and an AI and Data Protection risk toolkit in 2022. The updated Guidance forms part of the UK’s wider efforts to adopt a “pro-innovation” approach to AI regulation which will require existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors (for further information on the UK Government’s approach to AI regulation, see our blog post here).

The updated Guidance covers the ICO’s view of best practice for data protection-compliant AI, as well as how the ICO interprets data protection law in the context of AI systems that process personal data. The Guidance has been restructured in line with the UK GDPR’s data protection principles, and features new content, including guidance on fairness, transparency, lawfulness and accountability when using AI systems.

Continue Reading UK ICO Updates Guidance on Artificial Intelligence and Data Protection

On April 25, 2023, four federal agencies — the Department of Justice (“DOJ”), Federal Trade Commission (“FTC”), Consumer Financial Protection Bureau (“CFPB”), and Equal Employment Opportunity Commission (“EEOC”) — released a joint statement on their efforts to address discrimination and bias in automated systems.

Continue Reading DOJ, FTC, CFPB, and EEOC Statement on Discrimination and AI