On 12 July 2024, the EU Artificial Intelligence Act (“AI Act”) was published in the Official Journal of the European Union, a first-of-its-kind regulation aiming to harmonise rules on AI models and systems across the EU. The AI Act prohibits certain AI practices and sets out rules for “high-risk” AI systems, certain AI systems that pose transparency risks, and general-purpose AI (GPAI) models.
Council of Europe Adopts International Treaty on Artificial Intelligence
On May 17, 2024, the Council of Europe adopted the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (the “Convention”). The Convention represents the first international treaty on AI that will be legally binding on the signatories. The Convention will be open for signature on September 5, 2024.
The Convention was drafted by representatives from the 46 Council of Europe member states, the European Union, and 11 non-member states (Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the United States of America, and Uruguay). The Convention is not directly applicable to businesses – it requires the signatories (the “CoE signatories”) to implement laws or other legal measures to give it effect. The Convention represents an international consensus on the key aspects of AI legislation that are likely to emerge among the CoE signatories.
Italy Proposes New Artificial Intelligence Law
On May 20, 2024, a proposal for a law on artificial intelligence (“AI”) was laid before the Italian Senate.
The proposed law sets out (1) general principles for the development and use of AI systems and models; (2) sectorial provisions, particularly in the healthcare sector and for scientific research for healthcare; (3) rules on the national strategy on AI and governance, including designating the national competent authorities in accordance with the EU AI Act; and (4) amendments to copyright law.
We provide below an overview of the proposal’s key provisions.
OMB Issues First Governmentwide AI Policy for Federal Agencies
On March 28, 2024, the White House Office of Management and Budget (OMB) released guidance on governance and risk management for federal agency use of artificial intelligence (AI). The guidance was issued in furtherance of last fall’s White House AI Executive Order, which established goals to promote the safe, secure, and trustworthy development and use of AI systems.
French CNIL Opens Public Consultation On Guidance On The Creation Of AI Training Databases
On October 11, 2023, the French data protection authority (“CNIL”) issued a set of “how-to” sheets on artificial intelligence (“AI”) training databases. The sheets are open to consultation until December 15, 2023, and all AI stakeholders (including companies, researchers, and NGOs) are encouraged to provide comments.
From Washington to Brussels: A Comparative Look at the Biden Administration’s Executive Order and the EU’s AI Act
On October 30, 2023, days ahead of government leaders convening in the UK for an international AI Safety Summit, the White House issued an Executive Order (“EO”) outlining an expansive strategy to support the development and deployment of safe and secure AI technologies (for further details on the EO, see our blog here). As readers will be aware, the European Commission released its proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the EU “AI Act”) in 2021 (see our blog here). EU lawmakers are currently negotiating changes to the Commission text, with hopes of finalizing the text by the end of this year, although many of its obligations would only begin to apply to regulated entities in 2026 or later.
The EO and the AI Act stand as two important developments shaping the future of global AI governance and regulation. This blog post discusses key similarities and differences between the two.
UK and G7 Privacy Authorities Warn of Privacy Risks Raised by Generative AI
On 21 June 2023, at the close of a roundtable meeting of the G7 Data Protection and Privacy Authorities, regulators from the United States, France, Germany, Italy, the United Kingdom, Canada, and Japan published a joint “Statement on Generative AI” (“Statement”) (available here). In the Statement, the regulators identify a range of data protection-related concerns they believe are raised by generative AI tools, including the legal authority for processing personal information, as well as transparency, explainability, and security. The regulators also call on companies to “embed privacy in the design conception, operation, and management” of generative AI tools.
In advance of the G7 meeting, on 15 June 2023, the UK Information Commissioner’s Office (“ICO”) separately announced that it will be “checking” whether businesses have addressed privacy risks before deploying generative AI, and “taking action where there is risk of harm to people through poor use of their data”.
EU and US Lawmakers Agree to Draft AI Code of Conduct
On 31 May 2023, at the close of the fourth meeting of the US-EU Trade & Tech Council (“TTC”), Margrethe Vestager – the European Union’s Executive Vice President, responsible for competition and digital strategy – announced that the EU and US are working together to develop a voluntary AI Code of Conduct in advance of formal regulation taking effect. The goal, according to Vestager, is to develop non-binding international standards on risk audits, transparency, and other requirements for companies developing AI systems. The AI Code of Conduct, once finalized, would be put before G7 leaders as a joint transatlantic proposal, and companies would be encouraged to voluntarily sign up.
EU Parliament’s AI Act Proposals Introduce New Obligations for Foundation Models and Generative AI
On 11 May 2023, members of the European Parliament’s internal market (IMCO) and civil liberties (LIBE) committees agreed their final text on the EU’s proposed AI Act. After MEPs formalize their position through a plenary vote (expected this summer), the AI Act will enter the last stage of the legislative process: “trilogue” negotiations between the European Parliament, the Council (which adopted its own amendments in late 2022; see our blog post here for further details), and the European Commission. European lawmakers hope to adopt the final AI Act before the end of 2023, ahead of the European Parliament elections in 2024.
In perhaps the most significant change from the Commission and Council drafts, under MEPs’ proposals, providers of foundation models – a term defined as an AI model that is “trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks” (Article 3(1c)) – would be subject to a series of obligations. For example, providers would be under a duty to “demonstrate through appropriate design, testing and analysis that the identification, the reduction and mitigation of reasonably foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law prior and throughout development” (Article 28b(2)(a)), as well as to draw up “extensive technical documentation and intelligible instructions for use” to help those that build AI systems using the foundation model (Article 28b(2)(e)).
UK’s Competition and Markets Authority Launches Review into AI Foundation Models
On 4 May 2023, the UK Competition and Markets Authority (“CMA”) announced it is launching a review into AI foundation models and their potential implications for the UK competition and consumer protection regime. The CMA’s review is part of the UK’s wider approach to AI regulation, which will require existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors (for further information on the UK Government’s strategy, including its recent AI White Paper, see our blog post here). The UK Information Commissioner’s Office (“ICO”) has also recently published guidance for businesses on best practices for data protection-compliant AI (see our post here for more details).