Ahead of its December 8 board meeting, the California Privacy Protection Agency (CPPA) has issued draft risk assessment regulations. The CPPA has yet to initiate the formal rulemaking process and has stated that it expects to begin formal rulemaking next year, at which time it will also consider draft regulations covering “automated decisionmaking technology” (ADMT), cybersecurity audits, and revisions to existing regulations. Accordingly, the draft risk assessment regulations are subject to change. Below are the key takeaways:
Ahead of its December 8 board meeting, the California Privacy Protection Agency (CPPA) has issued draft “automated decisionmaking technology” (ADMT) regulations. The CPPA has yet to initiate the formal rulemaking process and has stated that it expects to begin formal rulemaking next year. Accordingly, the draft ADMT regulations are subject to change. Below are the key takeaways:
On October 11, 2023, the French data protection authority (“CNIL”) issued a set of “how-to” sheets on artificial intelligence (“AI”) training databases. The sheets are open to consultation until December 15, 2023, and all AI stakeholders (including companies, researchers, and NGOs) are encouraged to provide comments.
On October 12, 2023, the Italian Data Protection Authority (“Garante”) published guidance on the use of AI in healthcare services (“Guidance”). The document builds on principles enshrined in the GDPR, as well as national and EU case law. Although the Guidance focuses on Italian national healthcare services, it offers considerations relevant to the use of AI in the healthcare space more broadly.
We provide below an overview of key takeaways.
On October 30, 2023, days ahead of government leaders convening in the UK for an international AI Safety Summit, the White House issued an Executive Order (“EO”) outlining an expansive strategy to support the development and deployment of safe and secure AI technologies (for further details on the EO, see our blog here). As readers will be aware, the European Commission released its proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the EU “AI Act”) in 2021 (see our blog here). EU lawmakers are currently negotiating changes to the Commission text, with hopes of finalizing the text by the end of this year, although many of its obligations would only begin to apply to regulated entities in 2026 or later.
The EO and the AI Act stand as two important developments shaping the future of global AI governance and regulation. This blog post discusses key similarities and differences between the two.
Earlier today, the White House issued a Fact Sheet summarizing its Executive Order on a comprehensive strategy to support the development of safe and secure artificial intelligence (“AI”). The Executive Order follows a number of actions by the Biden Administration on AI, including its Blueprint for an AI Bill of Rights and voluntary commitments from certain developers of AI systems. According to the Administration, the Executive Order establishes new AI safety and security standards, protects privacy, advances equity and civil rights, protects workers, consumers, and patients, promotes innovation and competition, and advances American leadership. This blog post summarizes these key components.
On August 22, 2023, the Spanish Council of Ministers approved the Statute of the Spanish Agency for the Supervision of Artificial Intelligence (“AESIA”) thus creating the first AI regulatory body in the EU. The AESIA will start operating from December 2023, in anticipation of the upcoming EU AI Act (for a summary of the AI Act, see our EMEA Tech Regulation Toolkit). In line with its National Artificial Intelligence Strategy, Spain has been playing an active role in the development of AI initiatives, including a pilot for the EU’s first AI Regulatory Sandbox and guidelines on AI transparency.
On 9 October 2023, the European Parliament’s Internal Market and Consumer Protection Committee (IMCO) and Committee on Legal Affairs (JURI) agreed revised wording to amend the European Commission’s (the “EC”) proposed new Product Liability Directive (the “Directive”). The vote passed with 33 votes in favour to 2 against. If adopted, the Directive will replace the existing (almost 40-year-old) Directive 85/374/EEC on Liability for Defective Products, which imposes a form of strict liability on product manufacturers for harm caused by their defective products.
On October 3, the Federal Trade Commission (“FTC”) released a blog post titled Consumers Are Voicing Concerns About AI, which discusses consumer concerns about artificial intelligence (“AI”) received via the FTC’s Consumer Sentinel Network, as well as priority areas the agency is watching. Although the FTC’s blog post acknowledged that it did not investigate…
This quarterly update summarizes key legislative and regulatory developments in the third quarter of 2023 related to key technologies and related topics, including Artificial Intelligence (“AI”), connected and automated vehicles (“CAVs”), and data privacy and cybersecurity.