
On January 19, 2023, the National Institute of Standards and Technology (“NIST”) published a Concept Paper setting out “Potential Significant Updates to the Cybersecurity Framework.”  Originally released in 2014, the NIST Cybersecurity Framework (“CSF” or “Framework”) is designed to assist organizations with developing, aligning, and prioritizing “cybersecurity activities with [] business/mission requirements, risk tolerances, and resources.”  Organizations, industries, and government agencies around the world have increasingly relied upon the Framework to establish cybersecurity programs and measure their maturity.  NIST last updated the CSF in 2018 and now seeks public comment on the latest changes outlined in the Concept Paper.

Continue Reading NIST Requests Comments on Potential Significant Updates to the Cybersecurity Framework

On September 16, the Fifth Circuit issued its decision in NetChoice L.L.C. v. Paxton, upholding Texas HB 20, a law that limits the ability of large social media platforms to moderate content and imposes various disclosure and appeal requirements on them.  The Fifth Circuit vacated the district court’s preliminary injunction, which previously blocked the Texas Attorney General from enforcing the law.  NetChoice is likely to ask the U.S. Supreme Court to review the Fifth Circuit’s decision.

Continue Reading Fifth Circuit Upholds Texas Law Restricting Online “Censorship”

On June 26, 2019, the EU High-Level Expert Group on Artificial Intelligence (AI HLEG) announced two important developments: (1) the launch of the pilot phase of the assessment list in its Ethics Guidelines for Trustworthy AI (the “Ethics Guidelines”); and (2) the publication of its Policy and Investment Recommendations for Trustworthy AI (the “Recommendations”).

The AI HLEG is an independent expert group established by the European Commission in June 2018.  The Recommendations are the second deliverable of the AI HLEG; the first was the Group’s Ethics Guidelines of April 2019, which defined the contours of “Trustworthy AI” (see our previous blog post here).  The Recommendations are addressed to policymakers and call for 33 actions to ensure the EU, together with its Member States, enable, develop, and build “Trustworthy AI” – that is, AI systems and technologies that reflect the AI HLEG’s now-established ethics guidelines.  Neither the Ethics Guidelines nor the Recommendations are binding, but together they provide significant insight into how the EU or Member States might regulate AI in the future.

Throughout the remainder of 2019, the AI HLEG will undertake a number of sectoral analyses of “enabling AI ecosystems” — i.e., networks of companies, research institutions and policymakers — to identify the concrete actions that will be most impactful in those sectors where AI can play a strategic role.

Continue Reading Two new developments from the EU High-Level Expert Group on AI: launch of pilot phase of Ethics Guidelines and publication of Policy and Investment Recommendations for Trustworthy AI