On December 1, 2022, a committee of the Brazilian Senate presented a report (currently available only in Portuguese) with research on the regulation of artificial intelligence (“AI”) and a draft AI law (see pages 15-58) (“Draft AI Law”) that will serve as the starting point for deliberations by the Senate on new AI legislation.  When preparing the 900+ page report and Draft AI Law, the Senate committee drew inspiration from earlier proposals for regulating AI in Brazil and its research into how OECD countries are regulating (or planning to regulate) in this area, as well as input received during a public hearing and in the form of written comments from stakeholders.  This blog post highlights 13 key aspects of the Draft AI Law.

(1) Principles

The Draft AI Law provides that the development, implementation, and use of AI in Brazil must adhere to the principle of good faith, as well as (among others): self-determination and freedom of choice; transparency, explainability, intelligibility, traceability, and auditability (to avoid risks of both intentional and unintentional uses); human participation in (and supervision of) the “AI life cycle”; non-discrimination, justice, equity, and inclusion; due process, contestability, and compensation for damages; reliability and robustness of AI and information security; and proportionality/efficacy when using AI.

(2) Definition of an “AI System” 

The Draft AI Law defines an “AI system” as a computational system with varying degrees of autonomy, designed to infer how to achieve a given set of objectives, using approaches based on machine learning and/or logic and knowledge representation, via input data from machines or humans, with the goal of producing predictions, recommendations, or decisions that can influence the virtual or real environment.  This definition aligns, at least partly, with the OECD definition of the same term, which other regimes have also adopted or drawn inspiration from when formulating their own AI legislative proposals. 

(3) Risk Assessment

Providers and users of AI systems must conduct and document a risk assessment prior to placing any AI system on the market.

(4) High-Risk AI Systems 

The Draft AI Law offers an enumerated list of “high-risk” AI systems, which include AI systems used in the following contexts: securing the operation of critical infrastructure; education and vocational training; recruiting; credit scoring; use of autonomous vehicles (if such use could cause physical harm to natural persons); and biometric identification.  Notably, the Draft AI Law also classifies health applications (e.g., medical devices) as high-risk AI systems.  The competent authority (see “Enforcement” below) is responsible for periodically updating the list in accordance with a number of criteria set out in the Draft AI Law.

(5) Public Database of High-Risk AI Systems

The competent authority is also tasked with creating and maintaining a publicly accessible database of high-risk AI systems, which will contain (among other information) the completed risk assessments of providers and users of such systems.  Such assessments will be protected under applicable intellectual property and trade secret laws.    

(6) Prohibited AI Systems

Brazil’s Draft AI Law prohibits AI systems that (i) deploy subliminal techniques, or (ii) exploit the vulnerabilities of specific groups of natural persons, whenever such techniques or exploitation are intended to harm, or have the effect of harming, the health or safety of the end user.  The Draft AI Law also prohibits public authorities from engaging in social scoring and from using biometric identification systems in publicly accessible spaces, unless a specific law or court order expressly authorizes the use of such systems (e.g., for the prosecution of crimes).

(7) Rights of Individuals 

The Draft AI Law grants persons affected by AI systems the following rights vis-à-vis “providers” and “users” of AI systems, regardless of the risk-classification of the AI system:

  • Right to information about their interactions with an AI system prior to using it – in particular, by making available information that discloses (among other things): the use of AI, including a description of its role, any human involvement, and the decision(s)/recommendation(s)/prediction(s) it is used for (and their consequences); the identity of the provider of the AI system and the governance measures adopted; the categories of personal data used; and the measures implemented to ensure security, non-discrimination, and reliability;
  • Right to an explanation about a decision, recommendation, or prediction made by an AI system within 15 days of the request – in particular, information about the criteria and procedures used, and the main factors affecting the particular forecast or decision (e.g., rationale and logic of the system, how much it affected the decision made, and so forth);
  • Right to challenge decisions or predictions of AI systems that produce legal effects or significantly impact the interests of the affected party;
  • Right to human intervention in decisions made solely by AI systems, taking into account the context and the state of the art of technological development;
  • Right to non-discrimination and the correction of discriminatory bias, particularly where it results from the use of sensitive personal data leading to (a) a disproportionate impact arising from protected personal characteristics, or (b) disadvantages/vulnerabilities for people belonging to a specific group, even when apparently neutral criteria are used; and
  • Right to privacy and the protection of personal data, in accordance with the Brazilian General Data Protection Law (“LGPD”).

(8) Governance and Codes of Conduct

Providers and users of all AI systems must establish governance structures and internal processes capable of ensuring the security of such systems and facilitating the rights of affected individuals, including (among others) testing and privacy-by-design measures.

Providers and users of “high-risk” AI systems must implement heightened measures, such as: conducting an algorithmic impact assessment, which must be made publicly available and may need to be periodically repeated; designating a team to ensure the AI system is informed by diverse viewpoints; and implementing technical measures to assist with explainability.

Further, providers and users of AI systems may also draw up codes of conduct and governance to support the practical implementation of the Draft AI Law’s requirements.

(9) Serious Security Incidents

Providers and users of AI systems must notify the competent authority of the occurrence of serious security incidents, including where there is risk to human life or the physical integrity of people, interruption of critical infrastructure operations, serious damage to property or the environment, as well as any other serious violations of fundamental human rights.

(10) Civil liability 

Providers and users of AI systems are responsible for damage caused by the AI system, regardless of the system’s degree of autonomy.  Further, providers and users of “high-risk” AI systems are strictly liable to the extent of their participation in the damage, and their fault in causing the damage is presumed.

(11) Copyright 

The automated use of existing works by AI systems – such as their extraction, reproduction, storage, and transformation in text- and data-mining processes – for activities carried out by research organizations and institutions, journalists, museums, archives, and libraries will not necessarily constitute copyright infringement under certain scenarios listed in the Draft AI Law.

(12) Sandboxes 

The Draft AI Law provides that the competent authority may regulate testing environments to support the development of innovative AI systems.

(13) Enforcement

The Brazilian Government must designate a competent authority to oversee the implementation and enforcement of the Draft AI Law.  Depending on the violation, administrative fines may be imposed of up to 50 million Reais (approximately 9 million Euros) or 2% of a company’s turnover.

Next steps

The Senate will use the Draft AI Law as a basis for drafting and approving a bill, which will then be discussed in the Chamber of Deputies.

*                      *                      *

Covington regularly advises the world’s top technology companies on their most challenging regulatory and compliance issues in the U.S., Europe, and other major markets. If you have questions about the regulation of Artificial Intelligence, or other tech regulatory matters, please do not hesitate to contact us.

Anna Sophia Oberschelp de Meneses is an associate in the Data Privacy and Cybersecurity Practice Group.  Anna is a qualified Portuguese lawyer, but is both a native Portuguese and German speaker.  Anna advises companies on European data protection law and helps clients coordinate international data protection law projects.  She has obtained a certificate for “corporate data protection officer” by the German Association for Data Protection and Data Security (“Gesellschaft für Datenschutz und Datensicherheit e.V.”). She is also Certified Information Privacy Professional Europe (CIPPE/EU) by the International Association of Privacy Professionals (IAPP).  Anna also advises companies in the field of EU consumer law and has been closely tracking the developments in this area.  Her extensive language skills allow her to monitor developments and help clients tackle EU Data Privacy, Cybersecurity and Consumer Law issues in various EU and ROW jurisdictions.

Nicholas Shepherd is an associate in Covington’s Washington, DC office, where he is a member of the Data Privacy and Cybersecurity Practice Group, advising clients on compliance with all aspects of the European General Data Protection Regulation (GDPR), ePrivacy Directive, European direct marketing laws, and other privacy and cybersecurity laws worldwide. Nick counsels on topics that include adtech, anonymization, children’s privacy, cross-border transfer restrictions, and much more, providing advice tailored to product- and service-specific contexts to help clients apply a risk-based approach in addressing requirements in relation to transparency, consent, lawful processing, data sharing, and others.

A U.S.-trained and qualified lawyer with 7 years of working experience in Europe, Nick leverages his multi-faceted legal background and international experience to provide clear and pragmatic advice to help organizations address their privacy compliance obligations across jurisdictions.

Nicholas is a member of the Bar of Texas and Brussels Bar (Dutch Section, B-List). District of Columbia bar application pending; supervised by principals of the firm.

Kristof Van Quathem advises clients on information technology matters and policy, with a focus on data protection, cybercrime and various EU data-related initiatives, such as the Data Act, the AI Act and EHDS.

Kristof has been specializing in this area for over twenty years and developed particular experience in the life science and information technology sectors. He counsels clients on government affairs strategies concerning EU lawmaking and their compliance with applicable regulatory frameworks, and has represented clients in non-contentious and contentious matters before data protection authorities, national courts and the Court of the Justice of the EU.

Kristof is admitted to practice in Belgium.