On October 12, 2023, the Italian Data Protection Authority (“Garante”) published guidance on the use of AI in healthcare services (the “Guidance”). The document builds on principles enshrined in the GDPR and on national and EU case law. Although the Guidance focuses on Italian national healthcare services, it offers considerations relevant to the use of AI in the healthcare space more broadly.
We provide below an overview of key takeaways.
Lawfulness of processing
The “substantial public interest” derogation for the processing of health data (Article 9(2)(g) of the GDPR) must be grounded in EU law or in specific provisions of national law. Moreover, when relying on this ground, profiling and automated decision-making may only take place if expressly provided for by law.
Accountability, definition of roles and privacy by design and by default
The Garante stresses the importance of the principles of privacy by design and by default, in connection with accountability. Controllers should carefully consider the design of their systems and apply appropriate data protection safeguards throughout the entire AI lifecycle. Additionally, the roles of all stakeholders involved should be clearly defined.
Data protection impact assessment (“DPIA”)
The Garante unequivocally states that the processing of health data through AI to deliver healthcare services at the national level, which involves systematic and large-scale processing, qualifies as “high risk” and therefore requires conducting a DPIA. Among other things, the DPIA should take into account specific risks, such as discrimination, linked to the use of algorithms to identify trends, draw conclusions from certain datasets, and take automated decisions based on profiling. The DPIA should also carefully outline the role of human intervention in those decision-making processes.
Key principles for performing public interest tasks through AI tools and algorithms
The Garante recalls three key principles, established by recent national case law, that apply when processing personal data by means of AI tools and algorithms in the public interest, namely:
- Transparency: data subjects have a right to know about the existence of decision-making based on automated processing, and to be informed about the logic involved;
- Human intervention: human intervention capable of controlling, confirming, or refuting an automated decision should be guaranteed; and
- Non-discrimination: controllers should ensure that they use reliable AI systems, implement appropriate measures to reduce opacity and errors, and periodically review the systems’ effectiveness, given the potential discriminatory effects that the processing of health data may yield.
Quality, integrity and confidentiality of data
Ensuring the accuracy and quality of the data processed is paramount in this context, not least to guarantee adequate and safe therapeutic assistance. Controllers should therefore carefully evaluate the underlying risks and take appropriate measures to address them.
Moreover, the authority highlights the risks connected with potential biases arising in the development and use of the analyses, and/or in the volume of data used, which may result in negative impacts on, or discriminatory effects for, individuals. Controllers should mitigate these risks by taking the following measures: (1) clarifying the algorithmic logic used by the AI to generate data and services; (2) keeping a record of the checks performed to avoid biases and of the measures implemented; and (3) monitoring risks.
Transparency and fairness
To ensure transparency and fairness in automated decision-making processes, and in the particular context of national healthcare services, the Garante recommends implementing the following measures:
- ensure clarity, predictability and transparency of the legal basis, including by conducting dedicated information campaigns and ensuring effective methods for data subjects to exercise their rights;
- consult stakeholders and data subjects in the context of conducting a DPIA, and publish at least an excerpt of the DPIA;
- inform data subjects in clear, concise and comprehensible terms, not only of the elements prescribed by Articles 13 and 14 of the GDPR, but also about (i) whether the processing is performed in the algorithm’s training phase or in its subsequent application, including a description of the logic and characteristics of the processing; (ii) whether any obligations and responsibilities are imposed on healthcare professionals using AI-based healthcare systems; and (iii) the advantages, with regard to diagnostics and therapy, resulting from the use of such technology;
- when used for therapeutic purposes, ensure that data processing based on AI is only executed on the basis of an express request by the healthcare professional, and not automatically; and
- regulate the healthcare practitioner’s professional responsibility.
Human supervision
The Garante highlights the potential risks that exclusively automated decision-making poses to individuals’ rights and freedoms, and endorses effective human intervention through highly skilled supervision. The authority recommends ensuring a central role for human supervision, in particular by the healthcare professional, in the algorithm’s training phase.
Principles relating to human dignity and personal identity
The Guidance concludes with some general considerations on the role of ethics in the future development of AI systems in the health space, with a view to safeguarding human dignity and personal identity, especially with regard to vulnerable subjects. The Garante recommends carefully selecting and engaging reliable suppliers of AI services, including by preliminarily verifying documentation such as an AI impact assessment (for more information on AI impact assessments, see our previous blog post here).
***
Covington’s Data Privacy and Cybersecurity Team regularly advises clients on the laws surrounding AI and continues to monitor developments in the field of AI. If you have any questions about AI in the healthcare space, our team and Covington’s Life Sciences Team would be happy to assist.