Lee Tiedrich brings together an undergraduate education in electrical engineering and more than twenty years of legal experience to advise clients on a broad range of intellectual property and technology transaction matters. Her work spans several industries, including ehealth, life sciences, consumer products, communications, and media. She counsels private and public companies, as well as venture capital firms and corporate venture groups in their investments. Ms. Tiedrich has extensive experience negotiating complex intellectual property acquisition, licensing, and development agreements, and regularly counsels clients on strategic issues, such as developing and maintaining intellectual property portfolios and evaluating and addressing intellectual property-related assets and risks.

In this edition of our regular roundup on legislative initiatives related to artificial intelligence (AI), cybersecurity, the Internet of Things (IoT), and connected and autonomous vehicles (CAVs), we focus on key developments in the European Union (EU).

Continue Reading AI, IoT, and CAV Legislative Update: EU Spotlight (Third Quarter 2020)

In a new post on Covington's Inside Tech Media blog, our colleagues discuss the National Institute of Standards and Technology's draft Four Principles of Explainable Artificial Intelligence (NISTIR 8312), which seeks to define the principles that capture the fundamental properties of explainable AI systems. Comments on the draft will be accepted.

On July 17, 2020, the High-Level Expert Group on Artificial Intelligence set up by the European Commission (“AI HLEG”) published The Assessment List for Trustworthy Artificial Intelligence (“Assessment List”). The purpose of the Assessment List is to help companies identify the risks of AI systems they develop, deploy or procure, and implement appropriate measures to mitigate those risks.

The Assessment List is not mandatory, and there is not yet a self-certification scheme or other formal framework built around it that would enable companies to signal their adherence to it. The AI HLEG notes that the Assessment List should be used flexibly; organizations can add or ignore elements as they see fit, taking into consideration the sector in which they operate. As we've discussed in our previous blog post here, the European Commission is currently developing policies and legislative proposals relating to trustworthy AI, and it is possible that the Assessment List may influence the Commission's thinking on how organizations should operationalize requirements relating to this topic.
Continue Reading AI Update: EU High-Level Working Group Publishes Self Assessment for Trustworthy AI

Trustworthy AI has garnered attention from policymakers and other stakeholders around the globe. How can organizations operationalize trustworthy AI for COVID-19 and other AI applications as the legal landscape continues to evolve? Lee Tiedrich and Lala R. Qadir share ten steps in this Law360 article.