China Issues Draft Regulations on Protecting Minors in Cyberspace

China’s top internet regulator, the Cyberspace Administration of China (“CAC”), continues to show interest in setting more stringent rules governing the protection of minors in the context of online activities and data privacy. Immediately prior to the October holiday, CAC released for public comment new draft regulations aimed at protecting minors on the Internet, the Regulations on the Protection of Minors in Cyberspace (“Draft Regulations”), which contain significant provisions addressing minors’ data privacy. Notably, the scope of the Draft Regulations is broader than that of the U.S. Children’s Online Privacy Protection Act (“COPPA”), which focuses primarily on children’s privacy issues.

Luxembourg Bill Amending the Data Protection Act with regard to the Authorization Regime

On August 31, 2016, a bill was presented to the Luxembourg Parliament (the “Bill”) to amend the Law of August 2, 2002, on the Protection of Persons with regard to the Processing of Personal Data.

The Bill aims to reduce the current administrative burden and anticipates the application of the General Data Protection Regulation (“GDPR”) on May 25, 2018, by abolishing certain requirements to obtain prior authorization from the Luxembourg data protection authority (“CNPD”).

Should the Bill be passed, companies will no longer need to obtain an authorization from the CNPD for international data transfers based on Model Transfer Clauses approved by the European Commission or on Binding Corporate Rules approved by the competent data protection authorities of other EU Member States.

Additionally, the need to obtain an authorization from the CNPD for certain processing operations will be dropped. This will be the case for (i) processing operations for supervision purposes (including supervision at the workplace); (ii) the combination of data; and (iii) processing relating to the credit status and solvency of data subjects.

A notification to the CNPD will still be required until the GDPR applies.

CJEU Confirms Dynamic IP Addresses To Be Personal Data

On Wednesday, October 19, 2016, the Court of Justice of the European Union (“CJEU”) issued its judgment in Case C-582/14, Patrick Breyer v Germany.

The CJEU held that a “dynamic” IP address constitutes personal data, agreeing with the Opinion of the Advocate General issued in May 2016.  Dynamic IP addresses qualify as personal data even if the website operator in question cannot itself identify the user behind the IP address, because the users’ internet service or access providers (“ISPs”) hold data that, in combination with the IP address, can identify the users in question.
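
To make the “combination” point concrete, the following is a minimal, hypothetical sketch (all records are invented, not drawn from the judgment): a website operator’s access log alone does not name the visitor, but an ISP’s dynamic-IP assignment records, joined on the IP address and time window, do.

```python
# Illustrative only: all records are invented.  A website operator's log
# records a dynamic IP; the ISP's lease records map that IP, during a
# time window, to a subscriber.  Combined, they identify the visitor.
website_log = [
    {"ip": "203.0.113.7", "time": "2016-10-19T10:00Z", "page": "/news"},
]
isp_leases = [
    {"ip": "203.0.113.7", "from": "2016-10-19T09:00Z",
     "to": "2016-10-19T11:00Z", "subscriber": "P. Example"},
]

for hit in website_log:
    for lease in isp_leases:
        # ISO-8601 timestamps in the same format compare correctly as text.
        if hit["ip"] == lease["ip"] and lease["from"] <= hit["time"] <= lease["to"]:
            print(f'{hit["page"]} was visited by {lease["subscriber"]}')
```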

The CJEU also concluded that domestic law (in this case, German law) could not adopt a more restrictive interpretation of the “legitimate interests” legal basis for processing than is set out under the EU Data Protection Directive.  In that vein, the continued processing of personal data without the user’s consent may be justified as falling within a legitimate interest, e.g., ensuring the continued security and functioning of a website, including protecting it against cyberattacks.

Digital Advertising Alliance Will Begin Enforcing its Cross-Device Guidance February 1, 2017

The Digital Advertising Alliance (DAA), a consortium of the nation’s largest media and marketing associations that has established self-regulatory standards for online behavioral advertising, announced on October 13 that the Council of Better Business Bureaus and the Direct Marketing Association will begin enforcement of the Application of the DAA Principles of Transparency and Control to Data Used Across Devices (DAA Cross-Device Guidance) on February 1, 2017.

The DAA Cross-Device Guidance explains how the existing Transparency and Consumer Control principles contained in the DAA’s Self-Regulatory Principles for Online Behavioral Advertising and Multi-Site Data and Guidance on the Application of Self-Regulatory Principles to the Mobile Environment apply to practices that utilize data collected across devices.  Under the Guidance, a user’s choice to opt out of online behavioral advertising on a particular browser or device prevents: (1) data collected from that browser or device from being used on other linked devices for online behavioral advertising, (2) data collected from other linked devices from being used for behavioral advertising on the opted-out browser or device, and (3) the transfer of data collected from the opted-out browser or device for online behavioral advertising.  (These three rules are sketched in code below.)
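
As a minimal, hypothetical sketch of those three rules, assuming an invented device graph and helper names (this is not a DAA-specified API):

```python
# Hypothetical device graph for one user; names are illustrative.
LINKED_DEVICES = {"laptop-browser", "phone-app", "tablet-browser"}
OPTED_OUT = {"laptop-browser"}  # the user opted out on this browser

def may_use_for_oba(source: str, target: str) -> bool:
    """May data collected on `source` be used for online behavioral
    advertising (OBA) on `target`, given the user's opt-out choices?"""
    # (1) Data from an opted-out device may not be used on linked devices.
    if source in OPTED_OUT:
        return False
    # (2) Data from other linked devices may not be used on the
    #     opted-out device.
    if target in OPTED_OUT:
        return False
    return True

def may_transfer_for_oba(source: str) -> bool:
    # (3) Data from an opted-out device may not be transferred for OBA.
    return source not in OPTED_OUT

assert not may_use_for_oba("laptop-browser", "phone-app")   # rule (1)
assert not may_use_for_oba("phone-app", "laptop-browser")   # rule (2)
assert not may_transfer_for_oba("laptop-browser")           # rule (3)
assert may_use_for_oba("phone-app", "tablet-browser")       # unaffected pair
```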

As discussed previously on InsidePrivacy, the FTC held its cross-device tracking workshop in November 2015. At the workshop, self-regulatory efforts like the DAA Principles were highlighted, as was the need for companies to be mindful of their representations in the context of cross-device linking.

FTC Seeks Rehearing of Ninth Circuit Dismissal of Throttling Suit

Last week, the Federal Trade Commission (“FTC”) filed a petition for en banc (full court) review of a Ninth Circuit opinion dismissing the FTC’s lawsuit alleging that AT&T’s throttling practices violated Section 5 of the FTC Act.

As we previously reported, in October 2014, the FTC challenged AT&T’s practice of reducing—or “throttling”—the data speeds for its unlimited data plan customers once they reached a certain data usage threshold as both unfair and deceptive under Section 5.  In March 2015, the district court denied AT&T’s motion to dismiss the FTC’s action.  The district court held that the exemption in Section 5 for, among others, “common carriers subject to the Acts to regulate commerce” applies only when the entity both has the status of a common carrier and is engaged in common carriage activity.  In August 2016, a three-judge panel of the Ninth Circuit reversed the district court’s ruling on the grounds that AT&T was a common carrier and therefore exempt from the FTC Act.  The panel reasoned that the common carrier exemption in Section 5 is based on the company’s status, as AT&T argued, not on the company’s activities, as the FTC argued.

In deciding whether to grant the FTC’s petition for rehearing, the Ninth Circuit will consider whether en banc review “is necessary to secure or maintain uniformity of the court’s decisions” or whether “the proceeding involves a question of exceptional importance.”  If a majority of active judges answer either of these questions in the affirmative, an en banc panel will consider whether the common carrier exemption in Section 5 is status-based, such that an entity is exempt from regulation as long as it has the status of a common carrier under the “Acts to regulate commerce,” or activity-based, such that an entity with the status of a common carrier is exempt only when the activity the FTC is attempting to regulate is a common carrier activity.

G-7 Publishes Fundamental Elements of Cybersecurity for the Financial Sector

On October 11, 2016, the finance ministers and central bank governors of the Group of 7 (G-7) countries announced the publication of the Fundamental Elements of Cybersecurity for the Financial Sector, a non-binding guidance document for financial sector entities.  The publication describes eight fundamental “elements” of effective cybersecurity risk management to guide public and private sector entities in designing cybersecurity programs based on their specific risk profile and culture.  The goal of the G-7 is to provide a common framework for the financial sector to develop security programs that will “help bolster the overall cybersecurity and resiliency of the international financial system.”

The eight elements describe the core components of a comprehensive cybersecurity program, while leaving the strategic and operational details to each entity.  The publication is not intended to serve as a binding, one-size-fits-all set of requirements; rather, it describes high-level programmatic “building blocks” that each entity can customize to its own security strategy and operating structure.  Each entity should tailor its application of the elements based on an evaluation of its “operational and threat landscape, role in the sector, and legal and regulatory requirements,” and be informed by its specific “approach to risk-management and culture.”

White House Releases Report on the Future of Artificial Intelligence

On October 12, 2016, the White House released a report entitled Preparing for the Future of Artificial Intelligence.  The report surveys the current state of Artificial Intelligence (AI), its existing and potential applications, and the questions that progress in AI raises for society and public policy.  The publication of the report follows a series of public outreach activities conducted by the White House Office of Science and Technology Policy (OSTP), including public workshops on AI topics and requests for input from the public.

The report identifies many ways in which AI could contribute to economic growth and enhance social welfare.  It describes AI’s potential to solve some of society’s greatest challenges and inefficiencies by opening up new opportunities for progress in critical areas such as healthcare, education, transportation, criminal justice, economic inclusion, energy, and the environment.  In light of these potential benefits, the report calls for increased AI funding at many federal agencies, particularly those working on poverty reduction and issues related to economic inequality.  The OSTP cautions, however, that the AI industry must manage the risks and challenges of AI technology in order to ensure that all members of the public have the opportunity to help build and benefit from an AI-enhanced society.  The report also identifies ways in which the regulatory system might adapt to AI in order to lower barriers to innovation without adversely impacting safety or market fairness.

The report identifies several policy concerns raised by AI, including its potential impact on privacy, fairness, safety, jobs, and the economy:

  • Privacy of Personal Information. The report identifies concerns with the ability to guarantee transparency in AI-related data collection.  The report suggests that transparency is challenging to achieve in AI in part because AI data and algorithms can be opaque.  Furthermore, it is challenging to obtain explanations for AI-based determinations, the report notes, as “there are inherent challenges in trying to understand and predict the behavior of advanced AI systems.”
  • Fairness. The report emphasizes that widespread adoption of AI might have the capacity to compromise fairness.  AI, on its own, can lack a built-in understanding of relevant historical context.  As AI replaces decisions traditionally made by human-driven bureaucratic processes, therefore, the report notes concerns about how to ensure justice, fairness, and accountability.  For example, if a machine learning model is used to screen job applicants, and if the data used to train the model reflects past decisions that are biased, the result could perpetuate past bias (see the sketch after this list).
  • Safety and Control. The report details the safety and control issues inherent in building AI systems.  It underscores the importance of identifying methods to safely transition AI from the “closed world” of the government laboratory into the outside “open world” where a system is likely to encounter unpredictable objects and situations.  The report directs AI practitioners to learn more about verification and validation, managing risk, and communicating with stakeholders about that risk.
  • Jobs and the Economy. The report indicates that AI’s chief short-term economic effect will be the automation of tasks that could not previously be automated.  While this will increase productivity and contribute to economic growth in the long run, it may also reduce demand for certain lower-wage jobs that require automatable skills.
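
As a minimal, hypothetical illustration of the job-screening example above (synthetic data and invented numbers, not drawn from the report):

```python
# Synthetic illustration: historical hiring labels penalize "group B"
# even at equal skill; a model trained on those labels learns the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)    # true qualification, identical across groups

# Past decisions: driven by skill, but with an arbitrary penalty for group B.
hired = (skill - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two equally skilled applicants who differ only in group membership:
probs = model.predict_proba([[0.0, 0], [0.0, 1]])[:, 1]
print(probs)  # the group-B applicant gets a markedly lower hiring score,
              # i.e., the model perpetuates the historical bias
```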

The report offers a number of recommendations for policymakers and players in the AI industry.  It broadly encourages private and public institutions to examine whether and how they can leverage AI and machine learning in order to benefit society.  It also urges academic institutions to include the privacy, ethics, security, and safety implications of AI as an integral part of their AI, machine learning, and computer science curricula.

The White House also released a companion report entitled National Artificial Intelligence Research and Development Strategic Plan, which sets forth a plan for federally funded research and development of AI.  In the coming months, the White House intends to release a follow-up report that will explain in further detail the effect of AI-driven automation on jobs and the economy.

DoD Finalizes Rule on Policies for Cyber Incident Reporting

Today, our colleagues Susan Cassidy, Ashden Fein, and John Sorrenti posted an article on Inside Government Contracts about the Department of Defense (DoD) issuing a Final Rule implementing mandatory cyber incident reporting requirements for DoD contractors and subcontractors. The article can be read here.

Inherited Infrastructure, Outdated Software, And Other Failings That Led To TalkTalk’s Record Fine

On October 5, 2016, the UK Information Commissioner’s Office (“ICO”) fined telecoms company TalkTalk a record £400,000 for failing to put in place appropriate data security measures and allowing a cyber-attacker to access TalkTalk customer data “with ease.”  The ICO highlighted several technical and organizational deficiencies as justification for issuing its largest fine to date.  Many of these failings are unlikely to be unique to TalkTalk; organizations across all sectors should take note.

Between October 15 and 21, 2015, a cyber-attacker took advantage of technical weaknesses in three of TalkTalk’s webpages.  As is often the case with weaknesses in cyber defences, the relevant infrastructure had been inherited as part of a previous acquisition.
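
According to the ICO’s penalty notice, the weakness exploited was SQL injection.  As a minimal, hypothetical sketch of that class of flaw and its standard mitigation (parameterized queries), with invented table and column names:

```python
# Hypothetical sketch of SQL injection and its standard mitigation;
# the table, columns, and data below are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (email TEXT, phone TEXT)")
conn.execute("INSERT INTO customers VALUES ('a@example.com', '555-0100')")

def lookup_vulnerable(email: str):
    # BAD: user input is spliced directly into the SQL string, so an
    # input like "' OR '1'='1" returns every row in the table.
    return conn.execute(
        f"SELECT * FROM customers WHERE email = '{email}'"
    ).fetchall()

def lookup_parameterized(email: str):
    # GOOD: a bound parameter is treated as data, never as SQL syntax.
    return conn.execute(
        "SELECT * FROM customers WHERE email = ?", (email,)
    ).fetchall()

print(lookup_vulnerable("' OR '1'='1"))     # leaks all rows
print(lookup_parameterized("' OR '1'='1"))  # returns nothing
```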

The attacker accessed the personal data of over 150,000 customers, including their names, addresses, dates of birth, phone numbers and email addresses.  The attacker also accessed bank account details and sort codes in over 15,000 cases.

The attack was subject to widespread media coverage and even led to a Parliamentary inquiry and report.  TalkTalk decided to go public early.  Its CEO, Baroness Dido Harding, appeared on major news outlets globally, including the BBC’s flagship evening program, to warn customers about the potential attack.  (This was a risky strategy: Baroness Harding initially suggested the attack may have impacted over 4,000,000 customers, a figure that turned out to be a 95% over-estimation, and she came under fire for not knowing whether the data had been encrypted.)

UK Telco Loses Appeal; Should Have Reported Data Breach Within 24 Hours Of Customer Complaint, Not After Fuller Investigation

By Phil Bradley-Schmieg and Gemma Nash

On August 30, 2016, a major UK telecoms company (TalkTalk) lost its appeal against a fine imposed on it for failing to report a personal data breach to the UK national data protection authority (the Information Commissioner) within 24 hours of its receipt of a customer’s complaint.

Commission Regulation No 611/2013 (“the Notification Regulation”) and the UK’s Privacy and Electronic Communications (EC Directive) Regulations 2003 (“PECR”) require telecommunication service providers to report personal data breaches within 24 hours of their “detection.”  TalkTalk’s appeal focused on the extent to which a provider may conduct an internal investigation before it is deemed to have “detected” a breach.
