Life Sciences & Digital Health

Digital health apps are increasingly used in practice, and they raise a range of questions under regulatory, data protection, and data security laws. On November 6, 2023, the German Conference of the Independent Data Protection Supervisory Authorities (Datenschutzkonferenz, DSK), a national body that brings together Germany’s federal and regional data protection authorities, issued a paper on the GDPR’s application to cloud-based digital health applications (“health apps”) that are not subject to the German Digital Health Applications Ordinance (Digitale Gesundheitsanwendungen-Verordnung, the “DiGA Regulation”).

On 21 June 2023, at the close of a roundtable meeting of the G7 Data Protection and Privacy Authorities, regulators from the United States, France, Germany, Italy, the United Kingdom, Canada, and Japan published a joint “Statement on Generative AI” (“Statement”) (available here). In the Statement, the regulators identify a range of data protection concerns they believe are raised by generative AI tools, including the legal authority for processing personal information, as well as transparency, explainability, and security. The regulators also call on companies to “embed privacy in the design conception, operation, and management” of generative AI tools.

In advance of the G7 meeting, on 15 June 2023, the UK Information Commissioner’s Office (“ICO”) separately announced that it will be “checking” whether businesses have addressed privacy risks before deploying generative AI, and “taking action where there is risk of harm to people through poor use of their data”.

On 31 May 2023, at the close of the fourth meeting of the US-EU Trade & Tech Council (“TTC”), Margrethe Vestager – the European Union’s Executive Vice President, responsible for competition and digital strategy – announced that the EU and US are working together to develop a voluntary AI Code of Conduct in advance of formal regulation taking effect. The goal, according to Vestager, is to develop non-binding international standards on risk audits, transparency, and other requirements for companies developing AI systems. The AI Code of Conduct, once finalized, would be put before G7 leaders as a joint transatlantic proposal, and companies would be encouraged to voluntarily sign up.

On May 1, 2023, the White House Office of Science and Technology Policy (“OSTP”) announced that it will release a Request for Information (“RFI”) to learn more about automated tools used by employers to “surveil, monitor, evaluate, and manage workers.” The White House will use the insights gained from the RFI to create policy and best practices surrounding the use of AI in the workplace.

On April 25, 2023, four federal agencies — the Department of Justice (“DOJ”), Federal Trade Commission (“FTC”), Consumer Financial Protection Bureau (“CFPB”), and Equal Employment Opportunity Commission (“EEOC”) — released a joint statement on the agencies’ efforts to address discrimination and bias in automated systems.