On 31 May 2023, at the close of the fourth meeting of the EU-US Trade and Technology Council (“TTC”), Margrethe Vestager – the European Union’s Executive Vice President responsible for competition and digital strategy – announced that the EU and US are working together to develop a voluntary AI Code of Conduct ahead of formal regulation taking effect. The goal, according to Vestager, is to develop non-binding international standards on risk audits, transparency, and other requirements for companies developing AI systems. Once finalized, the AI Code of Conduct would be put before G7 leaders as a joint transatlantic proposal, and companies would be encouraged to sign up voluntarily.
Vestager’s announcement follows the May 2023 meeting of G7 leaders, at which they pledged to advance “international discussions on inclusive artificial intelligence (AI) governance and interoperability”, including discussions on topics such as copyright, transparency, and the threat of disinformation.
The European Commission proposed its AI Act – establishing binding rules on banned and “high-risk” AI systems – in 2021 (see our blog here); however, the law is still being reviewed by lawmakers and is not expected to come into force before 2026. In May 2023, the European Parliament’s internal market (IMCO) and civil liberties (LIBE) committees agreed on their final amendments to the Commission’s AI Act, including new requirements for providers of foundation models and generative AI (see our post here for further details on the proposals). After Members of the European Parliament (“MEPs”) formalize their position, the AI Act will enter the last stage of the legislative process: “trilogue” negotiations among the European Parliament, the Council – which adopted its own amendments in late 2022 (see here) – and the European Commission. Vestager said the AI Act is “well on track” but that its effective implementation is some way off, and the EU needs “to act now” in light of recent developments in generative AI.
In 2022, the EU and US began developing joint standards for “trustworthy AI” through the TTC and published a Joint Roadmap for Trustworthy AI and Risk Management (see our previous post here for further details). The closing statement of the May 2023 TTC meeting explains that the EU and US are continuing to implement the Roadmap by launching three expert groups, focused respectively on AI standards, the terminology needed to assess AI risks, and the monitoring of existing and emerging risks.
The Covington team continues to monitor developments on the AI Act, and we regularly advise the world’s top technology companies on their most challenging regulatory and compliance issues in the EU and other major markets. If you have questions about the AI Act or other tech regulatory matters, we are happy to assist.