On June 2, 2020, the French Supervisory Authority (“CNIL”) published a paper on algorithmic discrimination prepared by the French independent administrative authority known as the “Défenseur des droits”. The paper is divided into two parts: the first discusses how algorithms can lead to discriminatory outcomes, and the second sets out recommendations on how to identify and minimize algorithmic biases. This paper follows a 2017 paper published by the CNIL on the “Ethical Issues of Algorithms and Artificial Intelligence”.
According to this new paper, each stage of the development and deployment of an algorithmic system is potentially susceptible to bias – indeed, even the maintenance of such a system can be vulnerable to this problem. Biases are often the result of the data fed into a system, which may itself be skewed or contain information already affected by bias. The paper gives the example of a facial recognition system whose algorithms were trained mainly with data relating to white men, and which therefore performed less reliably for other groups. Alternatively, while the data fed into a system may be “neutral” and representative, the combination of various data types may lead to discriminatory effects later on. Here, the paper gives the example of a university that uses applicants’ place of residence as a criterion to discriminate against applicants of immigrant origin.
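The proxy mechanism behind the second example can be illustrated with a short, purely hypothetical sketch that is not taken from the paper: even when a protected attribute such as origin is never given to a model, a “neutral” feature like place of residence can correlate with it strongly enough to reproduce historical bias. All names and numbers below are invented for illustration.

```python
import random

random.seed(0)

# Invented toy data: district "A" is home mostly to applicants of immigrant
# origin, and historical admission decisions were biased against that group.
applicants = []
for _ in range(10_000):
    district = random.choice(["A", "B"])
    origin = random.random() < (0.8 if district == "A" else 0.2)  # proxy correlation
    qualified = random.random() < 0.5                             # independent of origin
    # Biased historical outcome: qualified applicants of immigrant origin
    # were rejected half the time.
    admitted = qualified and not (origin and random.random() < 0.5)
    applicants.append((district, origin, qualified, admitted))

# A naive "model" trained only on the neutral feature (district): it simply
# learns the historical admission rate per district -- origin is never used.
def admission_rate(district):
    group = [a for a in applicants if a[0] == district]
    return sum(a[3] for a in group) / len(group)

for d in ("A", "B"):
    print(f"district {d}: predicted admission score {admission_rate(d):.2f}")
# District A scores markedly lower, reproducing the original bias even though
# the protected attribute was excluded from the model's inputs.
```

In this sketch, removing the protected attribute does nothing to prevent the discriminatory outcome, which is why the paper focuses on auditing outcomes and data rather than simply excluding sensitive fields.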
The paper concludes that automated systems “tend to stigmatize members of already disadvantaged and dominated social groups”; moreover, the developers of algorithms and the companies using them are currently “not vigilant enough to avoid this invisible form of automated discrimination”. The paper advocates for companies to implement measures that will help ensure that algorithmic biases are identified and that individuals who apply discriminatory decisions are sanctioned. Finally, the paper lists the following recommendations to help effect change in this area:
- training and raising awareness among professionals who create and use algorithmic systems;
- supporting research to develop studies on bias and methodologies to prevent it;
- imposing stricter transparency obligations which reinforce the need to explain the logic behind algorithms (and allow third parties, and not only those affected by an automated decision, to access the criteria used by the algorithms); and
- conducting impact assessment studies to anticipate the discriminatory effects of algorithms (e.g., similar to the Algorithmic Impact Assessment platform recently implemented by the Canadian Federal government).