With the growing use of AI systems and the increasing complexity of the legal framework governing that use, the need for appropriate methods and tools to audit AI systems is becoming more pressing for both professionals and regulators. The French Supervisory Authority (“CNIL”) recently tested tools that could help its auditors understand how an AI system functions.

Overview of the tools tested by the CNIL

The CNIL tested two different tools, IBEX and Algocate. While IBEX aims at explaining an AI system, Algocate seeks to justify the decisions made by an AI system by checking each decision against specific standards. Both tools enable “black box” audits, meaning that they focus on an AI system’s inputs and outputs rather than on its internal functioning. Both tools also rely on local explanatory methods, which explain the decision reached for a particular data input, rather than on global explanatory methods, which would attempt to explain all possible decisions simultaneously.
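A local, black-box explanation method of this kind can be sketched in a few lines: the auditor queries the system only through its inputs and outputs, perturbing one feature at a time around a specific input to estimate that feature’s local influence on the decision. The scoring function, feature names, and numbers below are hypothetical illustrations of the general technique, not the IBEX or Algocate tools themselves.

```python
# Hypothetical illustration of a local, black-box explanation method.
# We can only query the system's inputs and outputs, so we estimate the
# influence of each feature by perturbing it around one specific input.
# The scoring function and features are invented for this sketch.

def black_box_score(features):
    """Stand-in for an opaque decision system (e.g., a loan-scoring model)."""
    income, debt, age = features
    return 0.5 * income - 0.8 * debt + 0.1 * age

def local_explanation(score_fn, features, delta=1.0):
    """Perturb each feature by `delta` and record the change in the score.

    Each entry approximates the system's local sensitivity to one feature
    at this particular input -- a local explanation, valid only near
    `features`, not a description of the system's global behaviour.
    """
    base = score_fn(features)
    influences = []
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] += delta
        influences.append(score_fn(perturbed) - base)
    return influences

applicant = [100.0, 40.0, 30.0]  # income, debt, age (arbitrary units)
print(local_explanation(black_box_score, applicant))
```

For this toy linear model the estimated sensitivities happen to match the hidden coefficients; for a real system they would hold only in the neighbourhood of the audited input, which is precisely the limitation of local analyses noted in the CNIL’s experiment.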

Test and conclusions

The CNIL asked some of its agents to use these tools in a theoretical scenario and to consider the following questions:

  • Were the explanations provided by the tool helpful to understand the functioning of the AI system?
  • Were such explanations understandable by the participants?
  • Would these tools facilitate the work of the CNIL’s auditors?

The CNIL’s agents noted challenges with each tool, particularly in relation to real-life use and the tools’ complexity. The experiment also showed that some users would have preferred an explanation of the generic functioning of the system rather than local analyses.

It therefore seems that the tools will require further improvement before regulators can use them effectively. Other French public initiatives are looking into different audit models relying, for example, on global explanatory methods (e.g., the Pôle d’expertise de la régulation numérique’s study on methodologies for auditing content recommendation algorithms – available in French here).

We will keep monitoring this topic moving forward, and relay any updates from the CNIL relating to auditing tools for AI systems.

Kristof Van Quathem

Kristof Van Quathem advises clients on information technology matters and policy, with a focus on data protection, cybercrime and various EU data-related initiatives, such as the Data Act, the AI Act and EHDS.

Kristof has been specializing in this area for over twenty years and developed particular experience in the life science and information technology sectors. He counsels clients on government affairs strategies concerning EU lawmaking and their compliance with applicable regulatory frameworks, and has represented clients in non-contentious and contentious matters before data protection authorities, national courts and the Court of the Justice of the EU.

Kristof is admitted to practice in Belgium.

Alix Bertrand

Alix advises clients on EU data protection and technology law, with a particular focus on French privacy and data protection requirements. She regularly assists clients in relation to international data transfers, direct marketing rules as well as IT and data protection contracts. Alix is a member of the Paris and Brussels Bars.