ICO’s Call for Input on Bias and Discrimination in AI systems

On June 25, 2019, as part of its continuing work on the AI Auditing Framework, the UK Information Commissioner’s Office (ICO) published a blog post setting out its views on human bias and discrimination in AI systems. The ICO has also called for input on specific questions relating to human bias and discrimination, set out below.

The ICO explains in its blog how flaws in training data can result in algorithms that perpetuate or magnify unfair biases. The ICO identifies three broad approaches to mitigate this risk in machine learning models:

  1. Anti-classification: making sure that algorithms do not make judgments based on protected characteristics such as sex, race or age, or on proxies for protected characteristics (e.g., occupation or post code);
  2. Outcome and error parity: comparing how the model treats different groups. Outcome parity means all groups should have equal rates of positive and negative outcomes. Error parity means all groups should have equal error rates (such as false positive or false negative rates). A model is fair if it achieves outcome parity and error parity across members of different protected groups.
  3. Equal calibration: comparing the model’s estimate of the likelihood of an event against the actual frequency of that event for different groups. A model is fair if it is equally calibrated between members of different protected groups.
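As an illustration only (this sketch is not part of the ICO’s blog post), the three fairness checks above can be expressed in a few lines of Python. All group data and the score threshold below are synthetic and purely hypothetical:

```python
# Hypothetical sketch of the three fairness checks described above.
# All data is synthetic; a real audit would use the model's actual
# predictions, scores, and ground-truth labels for each protected group.

def outcome_rate(preds):
    """Outcome parity check: share of positive predictions in a group."""
    return sum(preds) / len(preds)

def false_positive_rate(preds, labels):
    """Error parity check: rate of positive predictions among true negatives."""
    negatives = [p for p, y in zip(preds, labels) if y == 0]
    return sum(negatives) / len(negatives) if negatives else 0.0

def calibration(scores, labels, threshold=0.5):
    """Equal calibration check: observed event frequency among cases
    the model scores at or above the threshold."""
    flagged = [y for s, y in zip(scores, labels) if s >= threshold]
    return sum(flagged) / len(flagged) if flagged else 0.0

# Synthetic predictions, ground-truth labels, and scores for two groups
group_a = {"preds": [1, 1, 0, 0], "labels": [1, 0, 0, 0],
           "scores": [0.9, 0.6, 0.2, 0.1]}
group_b = {"preds": [1, 0, 0, 0], "labels": [1, 0, 0, 0],
           "scores": [0.8, 0.4, 0.3, 0.1]}

for name, g in [("A", group_a), ("B", group_b)]:
    print(name,
          outcome_rate(g["preds"]),
          false_positive_rate(g["preds"], g["labels"]),
          calibration(g["scores"], g["labels"]))
```

A model satisfying outcome and error parity would show roughly equal figures across the groups, while equal calibration would show similar observed frequencies among high-scoring cases in each group. In practice these criteria can conflict with one another, which is one reason the ICO asks organisations which measures they actually use.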

The blog post stresses the importance of appropriate governance measures to manage the risks of discrimination in AI systems. Organizations may take different approaches depending on the purpose of the algorithm, but they should document the approach adopted from start to finish. The ICO also recommends that organizations adopt clear, effective policies and practices for collecting representative training data to reduce discrimination risk; that organizations’ governing bodies be involved in approving anti-discrimination approaches; and that organizations continually monitor algorithms through regular testing to identify unfair biases. Organizations should also consider using a diverse team when implementing AI systems, which can provide additional perspectives that may help to spot areas of potential discrimination.

The ICO seeks input from industry stakeholders on two questions:

  • If your organisation is already applying measures to detect and prevent discrimination in AI, what measures are you using or have you considered using?
  • In some cases, if an organisation wishes to test the performance of their ML model on different protected groups, it may need access to test data containing labels for protected characteristics. In these cases, what are the best practices for balancing non-discrimination and privacy requirements?

The ICO also continues to seek input from industry on the development of an auditing framework for AI; organizations should contact the ICO if they wish to provide feedback.

UK Government’s Guide to Using AI in the Public Sector

On June 10, 2019, the UK’s Government Digital Service and the Office for Artificial Intelligence released guidance on using artificial intelligence (AI) in the public sector (the “Guidance”).  The Guidance aims to provide practical advice for public sector organizations implementing AI solutions.

The Guidance will be of interest to companies that provide AI solutions to UK public sector organizations, as it will influence what kinds of AI projects public sector organizations will be interested in pursuing, and the processes that they will go through to implement AI systems.  Because the UK’s National Health Service (NHS) is a public sector organization, this Guidance is also likely to be relevant to digital health service providers that are seeking to provide AI technologies to NHS organizations.

The Guidance consists of three sections, summarized below: (1) understanding AI; (2) assessing, planning and managing AI; and (3) using AI ethically and safely. The Guidance also links to summaries of examples where AI systems have been used in the public sector and elsewhere.


Privacy Shield Ombudsperson Confirmed by the Senate

On June 20, 2019, Keith Krach was confirmed by the U.S. Senate to become the Trump administration’s first permanent Privacy Shield Ombudsperson at the State Department.  The role of the Privacy Shield Ombudsperson is to act as an additional redress avenue for EU and Swiss data subjects whose data is transferred to the U.S. under the EU-U.S. and Swiss-U.S. Privacy Shield Frameworks, respectively.

As Ombudsperson, Krach will be responsible for dealing with complaints and requests from individuals in the EU and Switzerland, including in relation to U.S. national security access to data transmitted from the EU or Switzerland to the U.S.  The Ombudsperson works with other Government officials and independent oversight bodies to review and respond to requests.  Krach’s role as Ombudsperson forms part of his duties as the Under Secretary for Economic Growth, Energy and the Environment.  The Under Secretary is independent from the intelligence services and reports directly to the Secretary of State.

The formal approval of a permanent Privacy Shield Ombudsperson will be welcomed at EU level.  As we have previously reported, the European Data Protection Board praised the appointment of a permanent Ombudsperson in its January report regarding the second annual review of the Privacy Shield.  In addition, the Commission has emphasized that the Ombudsperson is “an important mechanism that ensures complaints concerning access to personal data by U.S. authorities are addressed.”  This appointment comes at a time when both the EU-U.S. Privacy Shield and the Standard Contractual Clauses are under scrutiny in the European courts.

Legislation Seeks to Regulate Privacy and Security of Wearables and Genetic Testing Kits

Last week, Senators Amy Klobuchar (D-MN) and Lisa Murkowski (R-AK) introduced the Protecting Personal Health Data Act (S. 1842), which would direct the Department of Health and Human Services (“HHS”) to establish new privacy and security rules for technologies that collect personal health data, such as wearable fitness trackers, social-media sites focused on health data or conditions, and direct-to-consumer genetic testing services, among other technologies. Specifically, the legislation would direct the HHS Secretary to issue regulations relating to the privacy and security of health-related consumer devices, services, applications, and software. These new regulations would also cover a new category of personal health data that is not otherwise protected health information under HIPAA.


NIST Announces and Seeks Public Comment on 800-171 Update and Related Documents

Today, Susan Cassidy, Ashden Fein, Moriah Daugherty, and Melinda Lewis posted an article on Inside Government Contracts about the June 19, 2019 announcement by the National Institute of Standards and Technology (“NIST”) of the long-awaited update to Special Publication (“SP”) 800-171 Rev. 1, Protecting Controlled Unclassified Information in Nonfederal Systems and Organizations.

The update includes separate but related documents: SP 800-171 Rev. 2, which includes minor editorial updates to SP 800-171 Rev. 1; SP 800-171B, Protecting Controlled Unclassified Information in Nonfederal Systems and Organizations: Enhanced Security Requirements for Critical Programs and High Value Assets, which contains recommended enhanced security requirements designed to protect against advanced persistent threats (“APTs”); and a DoD cost estimate for complying with the requirements of SP 800-171B, released alongside the Request for Comments on Draft NIST Special Publication (SP) 800-171B, Protecting Controlled Unclassified Information in Nonfederal Systems and Organizations – Enhanced Security Requirements for Critical Programs and High Value Assets.

The article can be read here.

Thailand Adopts Personal Data Protection Act

On May 27, 2019, the Thai government published the Personal Data Protection Act B.E. 2562 (2019) (the “PDPA”) in its official gazette, meaning the law is now in effect and companies have a one-year period, until May 27, 2020, to bring their practices into compliance.

Notably, the PDPA adopts a broad definition of “personal data” (essentially, any information which directly or indirectly identifies an individual) and an extraterritorial scope that extends its obligations to organizations outside of Thailand that either (i) offer products and services to individuals in Thailand, or (ii) monitor the behavior of individuals in Thailand. The PDPA also adopts the concepts of “controller” and “processor” consistent with various other privacy regimes.

The PDPA requires, among other things, organizations to:

  • have a legal basis to collect and use personal information (in some cases requiring consent);
  • respect heightened requirements for sensitive personal data;
  • implement appropriate security measures and notify data breaches; and
  • facilitate the exercise of rights of individuals relating to their personal data.

Organizations which meet certain criteria may also be required to appoint a data protection officer (“DPO”) and/or a local representative in Thailand.

The PDPA establishes the Personal Data Protection Committee (“PDPC”), which will enforce the law and publish guidance to help organizations ensure compliant practices.  Violations of the PDPA may result in administrative fines, civil damages (including punitive damages), and the possibility for criminal prosecution.

Additional legislation will be published in the near future to further specify certain requirements of the PDPA and to align other national legislation with it.

ICO Publishes Report on Impact of GDPR

On May 30, 2019, the United Kingdom’s ICO released a report, “GDPR: One Year On” (the “Report”), discussing the impact of the GDPR one year after its implementation.  The Report provides valuable insight into the ICO’s enforcement practices, EU-wide cooperation, support functions, innovative practices, and further growth plans, and will likely help map out the direction the ICO will take during the coming year and beyond.


AI/IoT Update:  Congress Considers Measures to Support AI and IoT Technologies

As policymakers weigh the implications of artificial intelligence (“AI”) and the Internet of Things (“IoT”), members of Congress have introduced a handful of measures focusing on Government support for and adoption of these emerging technologies.

In May, Senators Deb Fischer (R-NE), Brian Schatz (D-HI), Cory Gardner (R-CO), and Cory Booker (D-NJ) reintroduced the Developing and Growing the Internet of Things (“DIGIT”) Act.  An earlier version of the legislation passed the Senate last year, but stalled in the House.

As reintroduced, the DIGIT Act would convene a working group of federal entities that would consult with private sector stakeholders to provide Congress with recommendations to encourage the growth of IoT technologies.  Specifically, and among other measures, the bill would require the working group to:

  • identify governmental activities that inhibit or could inhibit the growth of IoT
  • consider policies or programs that encourage and improve coordination among federal agencies relevant to IoT
  • examine how federal agencies can benefit from IoT, the IoT technologies currently used by agencies, and how prepared agencies are to adopt new IoT technologies
  • consider any additional security measures federal agencies may need to take to safely and securely use IoT and enhance the resiliency of federal systems against cyber threats to IoT

The working group would include governmental entities, who would be directed to consult with non-governmental stakeholders, including industry representatives from non-technology companies in the transportation, energy, agriculture, or health care sectors.  The DIGIT Act would also create a steering group of private entities to advise the working group.  The working group would be required to submit a report to Congress within 18 months of the Act’s enactment.

The DIGIT Act would also require the Federal Communications Commission (“FCC”) to study and provide a report to Congress on the spectrum needs to support an IoT ecosystem.

Two other federal bills would support new uses of AI technologies.  The AI in Government Act of 2019 (H.R. 2575), sponsored by Rep. Jerry McNerney (D-CA-9), would create an AI Center of Excellence to advise and promote efforts to develop innovative uses of AI by the federal Government. In the Senate, the Artificial Intelligence Initiative Act (S. 1558), sponsored by Sen. Martin Heinrich (D-NM), would establish a coordinated federal initiative to accelerate research and development of AI.

China Seeks Public Comments on Draft Measures related to the Cross-border Transfer of Personal Information

On June 13, 2019, the Cyberspace Administration of China (“CAC”) issued the draft Measures on Security Assessment of the Cross-border Transfer of Personal Information (“Draft Measures”) for public comment. (The official Chinese version of the Draft Measures is available here, and an unofficial English translation is available here.) The comment period ends on July 13, 2019.

The issuance of the Draft Measures marks another major development in the implementation of China’s Cybersecurity Law (“CSL”) over the past month.  The Draft Measures aim to create a cross-border data transfer mechanism that would govern all transfers of personal information conducted by network operators (defined as “owners and managers of networks, as well as network service providers”).

CAC released two earlier versions of its draft Measures on Security Assessment of Cross-border Transfer of Personal Information and Important Data in 2017, which imposed security assessment obligations on network operators when they transfer both personal information and important data outside of China (see Covington’s previous alert here).  The latest and long-anticipated Draft Measures focus only on the cross-border transfer of personal information; the cross-border transfer of important data will be subject to a separate approval mechanism introduced by the draft Measures for Data Security Management released by CAC on May 28, 2019.  The Draft Measures also set out new requirements that bear resemblance to the Standard Contractual Clauses under the EU’s General Data Protection Regulation (“GDPR”).

We discuss the key requirements of the Draft Measures in greater detail below.


Nevada’s New Consumer Privacy Law Departs Significantly From The California CCPA

On May 29, 2019, the Governor of Nevada signed into law Senate Bill 220 (“SB 220”), an act relating to Internet privacy and amending Nevada’s existing law requiring websites and online services to post a privacy notice.  In short, Nevada’s law will require operators of Internet websites and online services to follow a consumer’s direction not to sell his or her personal data.  The Nevada law differs from the California Consumer Privacy Act (“CCPA”) enacted last year in notable ways, and could signal the coming of a patchwork of fifty-plus different data privacy standards across the country, much like the state data breach notification laws.

Unlike the CCPA (which applies to both online and offline business operations), SB 220 applies only to operators of Internet websites and online services, and defines “operators” as people who (1) own or operate an Internet website or online service for commercial purposes; (2) collect and maintain covered information from consumers who reside in Nevada and use or visit the Internet website or online service; and (3) engage in any activity that constitutes a sufficient nexus with Nevada to satisfy the requirements of the United States Constitution.  Such activity includes purposefully directing activities toward Nevada, consummating a transaction with Nevada or a Nevada resident, or purposefully taking advantage of the privilege of conducting activity in Nevada.

SB 220 does not apply to the following entities: an entity that is regulated by the Gramm-Leach-Bliley Act or the Health Insurance Portability and Accountability Act; a service provider to an operator; or a manufacturer of a motor vehicle or a person who services a motor vehicle who processes covered information that is either (1) retrieved from a motor vehicle in connection with a technology or service related to the motor vehicle, or (2) provided by a consumer in connection with a subscription or registration for a technology or service related to the motor vehicle.
