Artificial Intelligence (AI)

In a new post on the Covington Inside Tech Media Blog, our colleagues discuss the National Institute of Standards and Technology’s draft of the Four Principles of Explainable Artificial Intelligence (NISTIR 8312), which seeks to define the principles that capture the fundamental properties of explainable AI systems.  NIST is accepting public comments on the draft.

On July 30, 2020, the UK Information Commissioner’s Office (“ICO”) published its final guidance on Artificial Intelligence (the “Guidance”).  The Guidance sets out a framework for auditing AI systems for compliance with data protection obligations under the GDPR and the UK Data Protection Act 2018.  The Guidance builds on the ICO’s earlier commitment to enable good data protection practice in AI, and on previous guidance and blogs issued on specific issues relating to AI (for example, on explaining decisions made with AI, trade-offs, and bias and discrimination, all covered in Covington blogs).


On July 17, 2020, the High-Level Expert Group on Artificial Intelligence set up by the European Commission (“AI HLEG”) published The Assessment List for Trustworthy Artificial Intelligence (“Assessment List”). The purpose of the Assessment List is to help companies identify the risks of AI systems they develop, deploy or procure, and implement appropriate measures to mitigate those risks.

The Assessment List is not mandatory, and there isn’t yet a self-certification scheme or other formal framework built around it that would enable companies to signal their adherence to it.  The AI HLEG notes that the Assessment List should be used flexibly; organizations can add or ignore elements as they see fit, taking into consideration the sector in which they operate. As we’ve discussed in our previous blog post here, the European Commission is currently developing policies and legislative proposals relating to trustworthy AI, and it is possible that the Assessment List may influence the Commission’s thinking on how organizations should operationalize requirements relating to this topic.

On June 2, 2020, the French Supervisory Authority (“CNIL”) published a paper on algorithmic discrimination prepared by the French independent administrative authority known as “Défenseur des droits”.  The paper is divided into two parts: the first part discusses how algorithms can lead to discriminatory outcomes, and the second part includes recommendations on how to identify and minimize algorithmic biases.  This paper follows from a 2017 paper published by the CNIL on “Ethical Issues of Algorithms and Artificial Intelligence”.

Trustworthy AI has garnered attention from policymakers and other stakeholders around the globe.  How can organizations operationalize trustworthy AI for COVID-19 and other AI applications as the legal landscape continues to evolve?  Lee Tiedrich and Lala R. Qadir share ten steps in this article with Law360.

On February 10, 2020, the UK Government’s Committee on Standards in Public Life (the “Committee”) published its Report on Artificial Intelligence and Public Standards (the “Report”). The Report examines potential opportunities and hurdles in the deployment of AI in the public sector, including how such deployment may implicate the “Seven Principles of Public Life” applicable to holders of public office, also known as the “Nolan Principles” (available here). It also sets out practical recommendations for use of AI in public services, which will be of interest to companies supplying AI technologies to the public sector (including the UK National Health Service (“NHS”)), or offering public services directly to UK citizens on behalf of the UK Government. The Report elaborates on the UK Government’s June 2019 Guide to using AI in the public sector (see our previous blog here).


In this final instalment of our series of blogs on the European Commission’s plans for AI and data, announced on 19 February 2020, we discuss some potential effects on companies in the digital health sector. As discussed in our previous blog posts (here, here and here), the papers published by the European Commission cover broad concepts and apply generally — but, in places, they specifically mention healthcare and medical devices.

The Commission recognizes the important role that AI and big data analysis can play in improving healthcare, but also notes the specific risks that could arise given the effects that such new technologies may have on individuals’ health, safety, and fundamental rights. The Commission also notes that existing EU legislation already affords a high level of protection for individuals, including through medical devices laws and data protection laws. The Commission’s proposals therefore focus on addressing the gap between these existing rules and the residual risks that remain in respect of new technologies. Note that the Commission’s proposals in the White Paper on AI are open for public consultation until 19 May 2020.



In November 2019, the Council of Europe’s Committee of Experts on Human Rights of Automated Data Processing and Different Forms of Artificial Intelligence (the “Committee”) finalized its draft recommendations on the human rights impacts of algorithmic systems (the “Draft Recommendations”).  The Draft Recommendations, which are non-binding, set out guidelines on how the Council of Europe member states should legislate to ensure that public and private sector actors appropriately address human rights issues when designing, developing and deploying algorithmic systems.


On 19 February 2020, the new European Commission published two Communications relating to its five-year digital strategy: one on shaping Europe’s digital future, and one on its European strategy for data (the Commission also published a White Paper proposing its strategy on AI; see our previous blogs here and here).  In both Communications, the Commission sets out a vision of the EU powered by digital solutions that are strongly rooted in European values and EU fundamental rights.  Both Communications also emphasize the intent to strengthen “European technological sovereignty”, which in the Commission’s view will enable the EU to define its own rules and values in the digital age.  The Communications set out the Commission’s plans to achieve this vision.
