FTC Enters Into COPPA Settlement With Online Talent Search Company

On Monday, the Federal Trade Commission (FTC) entered into a settlement with Nevada-based Prime Sites, Inc., doing business as Explore Talent, related to charges that Explore Talent violated the Children’s Online Privacy Protection Act (COPPA).  Explore Talent, an online talent search company, will pay $235,000 in civil penalties.

According to the FTC’s complaint, Explore Talent violated COPPA by collecting and disclosing children’s personal information without obtaining parental consent and by misrepresenting its collection, use, and disclosure practices.  Specifically, the FTC alleged that Explore Talent required users—including children under 13—to submit personal information, such as their names, email addresses, and telephone numbers, and further requested that users provide mailing addresses and photographs.  Much of this personal information became publicly available on users’ profiles on ExploreTalent.com.  Explore Talent did not provide notice to parents or obtain verifiable parental consent prior to this collection and disclosure.  In addition, Explore Talent did not place any restrictions on users who indicated they were under 13, nor did it take any steps to verify whether a profile was being created by a legal guardian, notwithstanding its Privacy Policy’s instruction that users under 13 must have a parent or legal guardian create their account.  As a result, the complaint alleged, Explore Talent falsely stated in its Privacy Policy that it did not knowingly collect personal information from children under the age of 13.

Separately, the complaint also alleged violations of the FTC Act related to Explore Talent’s claims regarding its premium services.

President Trump Nominates Four New Commissioners to FTC

Last week, President Trump nominated four new commissioners to the Federal Trade Commission (“FTC”):  Joseph J. Simons, an antitrust attorney, as Chairman; Noah Joshua Phillips, chief counsel for Senate Majority Whip John Cornyn (R-Texas), for the second Republican seat; Christine Wilson, an executive for Delta Air Lines, for the third Republican seat; and Rohit Chopra, a senior fellow at the Consumer Federation of America, for a Democratic seat.  By statute, no more than three commissioners may be members of the same political party.  The fifth spot on the Commission would remain vacant pending an additional nomination by the President.

If confirmed by the Senate, these four nominees would establish a Republican majority at the FTC.  Since early last year, the agency has been operating with just one Commissioner from each party – Acting Chairman Maureen Ohlhausen and Democratic Commissioner Terrell McSweeny.  Earlier in the week, President Trump also announced his intent to nominate Acting Chairman Ohlhausen for a seat on the U.S. Court of Federal Claims.  Therefore, these new nominations would completely change the composition of the Commission.


CJEU Rejects Consumer Privacy Class Action

By Dan Cooper, Joseph Jones, and Ruth Scoles Mitchell

On January 25, 2018, the Court of Justice of the European Union (“CJEU”) handed down a ruling holding that a consumer may bring a privacy action in his or her home jurisdiction, rather than in the jurisdiction where the defendant data controller has its main establishment, but may not bring a privacy class action on behalf of other consumers in that home jurisdiction.

Background

Maximilian Schrems (“Schrems”) — an Austrian resident, lawyer and privacy activist (best known for his involvement in litigation relating to the EU-U.S. Safe Harbor and the EU Model Clauses) — brought a class action against Facebook’s Irish-registered office, before the Austrian courts.  Schrems’ action alleges various breaches of Austrian, Irish, and EU data privacy rules, and includes claims for damages arising from these alleged breaches.

Schrems, a Facebook user of ten years, initially registered with Facebook under a false name for personal purposes only, engaging in typical private uses of the site, such as sharing photos and posts with his roughly 250 Facebook Friends.  Then, in 2011, Schrems created a Facebook page to report on his legal proceedings against Facebook Ireland, reference his lectures and media appearances, advertise his books, and solicit public donations.

The Austrian Supreme Court sought a preliminary ruling from the CJEU on two points.

  • Whether Schrems is a “consumer” as defined and interpreted under EU law (namely Article 15 of Regulation No. 44/2001 on jurisdiction and the recognition and enforcement of judgments in civil and commercial matters), in relation to his Facebook account, specifically the use of his Facebook page (“the Consumer Issue”).
  • Whether Schrems could bring his action alongside and on behalf of other consumers in contractual relationships with Facebook, those consumers numbering more than 25,000 and residing in Austria, other Member States, and outside the EU (“the Class Action Issue”).


China Issues New Personal Information Protection Standard

On January 2, 2018, the Standardization Administration of China (“SAC”) released the final version of the national standard on personal information protection, officially entitled GB/T 35273-2017 Information Security Technology – Personal Information Security Specification (GB/T 35273-2017 信息安全技术 个人信息安全规范) (hereinafter “the Standard”).  The Standard will come into effect on May 1, 2018.

As highlighted in our previous coverage of drafts of the Standard (see here and here), although it is nominally a voluntary framework, the Standard effectively sets out the best practices that will be expected by regulators auditing companies and enforcing China’s existing (but typically more generally worded) data protection rules, most notably the 2016 Cybersecurity Law.  Drafts of the Standard — even prior to its finalization — have also in some cases been the basis for non-compliance remediation plans and undertakings agreed between companies and the Cyberspace Administration of China (“CAC”) following CAC audits, as we reported here.

The Standard applies to “personal information controllers,” namely any private or public organization that has “the power to decide the purpose and method” of processing personal information.  This is seemingly modelled on European law’s “data controller” concept.

The Standard regulates the use of “personal information” by these controllers, a term largely aligned with strict conceptualizations of “personal data” under the EU’s General Data Protection Regulation (“GDPR”).  Examples of “personal information” listed in an annex to the Standard include device hardware serial codes, IP addresses, website tracking records, and unique device identifiers, among other things.  The definition of “sensitive personal information,” however, takes a different approach from the GDPR: rather than applying only to specific types of data, the Standard takes a risk-based approach, defining “sensitive” personal information as any personal information which, if lost or misused, is capable of endangering persons or property, easily harming personal reputation and mental and physical health, or leading to discriminatory treatment.  According to the Standard, this could include, for example, national identification card numbers, login credentials, banking and credit details, a person’s precise location, information on a person’s real estate holdings, and information about a minor (under 14 years old).

Similar to general principles of most data protection laws, the Standard requires transparency, specificity and fairness of processing purpose, proportionality (use and retention of only the minimum information necessary to achieve the stated purpose), security, risk assessment, and the respect of individuals’ rights to control the processing of information about them.  It also requires either consent from individuals, or reliance on a limited range of exceptions set out in the Standard, for the purpose of collection and processing of personal information.

This article looks at some of these aspects in more detail, including key divergences from European data protection law, notably the GDPR.  (Please note that this is not an exhaustive description of the Standard, nor a detailed comparison with the GDPR.)

Voice Technologies, Meet the EU E-Privacy Regulation

On January 12, the International Consumer Electronics Show (CES) in Las Vegas closed its doors for another year.  Each CES raises a new set of technology themes, ranging from robots to smart fridges — and this year, the winner was voice technologies.  Such technologies, while not entirely new, are now becoming mainstream:  sales of smart speakers like Amazon’s Echo more than tripled in 2017, and it is now estimated that one in six Americans owns a smart speaker.  It is always difficult to predict the future, but voice-enabled cars, home appliances, and other devices are either on the way or already on the market, and the potential for voice interfaces to become new “platforms” — supporting third-party services just as smartphones supported apps — is now clear to us all.

On the other side of the Atlantic, however, policymakers are going in another direction.  The European Union’s Council, made up of representatives of the 28 EU Member State governments, has been hard at work negotiating its preferred version of the next EU privacy law beyond the General Data Protection Regulation, known as the E-Privacy Regulation (EPR).  Just as the GDPR is built upon a predecessor law, the Data Protection Directive, so too is the EPR envisaged as an update to an existing law, the E-Privacy Directive.

The EPR is still a draft, and subject to further revision.  However, many of its key features are already largely clear.  One of the main purposes of the EPR is to “level the playing field” between traditional (e.g., copper, fibre, mobile, and satellite-based) telecommunications providers and upstart technology providers (for example, those offering instant messaging or VoIP communication services), so that all market players are bound by the same privacy rules.  In practice, this likely means that rules previously limiting how telecommunications companies can use certain types of communication data will be expanded to cover a much greater range of technology providers.  As a result, many technology providers previously outside the scope of the legacy E-Privacy Directive may well find themselves regulated by the EPR.

Where the EPR applies, it is likely to significantly limit how voice communications data can be used.

  • The EPR proposes a broad prohibition on the processing of electronic communications data in Article 5, except for use by end-users of communication systems, or where otherwise permitted in the EPR.
  • The EPR then sets out grounds for processing in Article 6.  These grounds are far more limited, however, than the array of options provided in Article 6 of the GDPR.  For example, the legitimate interests ground, the ground of processing necessary for the performance of a contract, and even the ground of processing necessary to protect the vital interests of an individual are all absent.  Instead, in many cases, the only ground available to providers to process voice communications will be consent — of either one end-user or, in some cases, both.  Even where consent applies, additional requirements — such as prior consultation with a data protection authority — may also apply.
  • The EPR also sets out strict limits on the retention of electronic communications data in Article 7 (although deletion of covered data does not appear to be required where grounds under Article 6 continue to apply).

The upshot is that the EPR may, if adopted as drafted, set out significant limits on the ability of providers to collect and/or use electronic communications data — including many voice communications — for purposes such as product research, design, refinement, and development.  Providers, hard at work generating new products and features, should look up and take note.

FTC Releases 2017 Privacy and Data Security Report

On January 18, the Federal Trade Commission released its annual Privacy and Data Security Update, highlighting its enforcement efforts in 2017.  The report discusses significant enforcement efforts in the areas of privacy, data security, credit reporting and financial privacy, international enforcement, children’s privacy, and telemarketing.  The report also highlights the FTC’s efforts in advocacy, rulemaking, guidance, reports, workshops, and international engagement.

The report indicates that the FTC remains extremely active in its enforcement efforts across various sectors.  The report highlights enforcement actions against large high-profile companies, small businesses, and individuals.  As for workshops and guidance, the FTC seemed most active in the areas of the internet of things, financial privacy, and artificial intelligence.

Finally, the report notes that, in 2017, the FTC brought its first actions to enforce the EU-U.S. Privacy Shield framework, which became operational in August 2016.

California Bill Would Mandate Expedient Software Updates for Credit Bureaus

Following the Equifax data breach in 2017, there has been heightened awareness surrounding how credit reporting agencies handle consumers’ personal information. At the same time, recent high-profile attacks, such as the “WannaCry” ransomware attacks, have focused media and regulatory attention on vulnerabilities associated with unpatched systems. In response to these two concerns, on January 10, a bill was introduced in the California legislature that would amend existing law regulating the cybersecurity practices of consumer credit reporting agencies (CRAs) specifically as they relate to vulnerability patching.

AB 1859 would add provisions requiring CRAs to update software vulnerabilities in certain circumstances.  Namely, if the CRA knows or reasonably should know that one of its computer systems is subject to a vulnerability and knows or reasonably should know that a software update is available to address that vulnerability, the CRA must apply the software update expediently, “in keeping with industry best practices,” but in any case within 10 days after becoming aware of the vulnerability and the available software update.

The bill would also create a private right of action for California residents whose personal information was acquired by a breach caused, in whole or in part, by a violation of the software update provisions described above.  Moreover, it would allow residents to recover civil penalties for “willful, intentional, or reckless” violations of the software update provisions.

At first blush, by mandating a particular security practice in one specific industry sector, the language of AB 1859 appears to be a departure from the traditional risk-based regulatory approach that encourages organizations to adopt “reasonable” security best practices tailored to their cyber risks without mandating more specific cybersecurity requirements.

Notably, however, in 2016 the California Office of the Attorney General adopted a more prescriptive approach to regulating the cybersecurity practices of companies doing business in California. Specifically, in its 2016 Data Breach Report, the Attorney General stated that the list of twenty Critical Security Controls (“CSC”) developed by the Center for Internet Security (“CIS”) “define a minimum level of information security” that all organizations that collect or maintain personal information about California residents should meet.  Most importantly, in light of the requirement under California law to implement and maintain reasonable security practices, the report stated that a “failure to implement all the [c]ontrols that apply to an organization’s environment constitutes a lack of reasonable security.”  See 2016 Data Breach Report (emphasis added). Included among the CSC controls is Control 4, “Continuous Vulnerability Assessment and Remediation,” which requires regular scanning for vulnerabilities and the adoption of proactive patching processes. Moreover, California law already provides a private cause of action for damages by customers injured by a company’s failure to “implement and maintain reasonable security procedures and practices” in violation of California Civil Code Section 1798.81.5.

The bill is currently set for hearing in committee on February 10.


House Passes Cyber Vulnerability Disclosure Reporting Act

On January 9, the House of Representatives passed the Cyber Vulnerability Disclosure Reporting Act by voice vote.  The Act directs the Secretary of the U.S. Department of Homeland Security (“DHS”) to prepare a report describing the policies and procedures that DHS developed to coordinate cyber vulnerability disclosures.  Under the Homeland Security Act of 2002 and the Cybersecurity Information Sharing Act of 2015 (“CISA”), DHS is responsible for working with industry to develop policies and procedures for coordinating the disclosure of cyber vulnerabilities.


CBP Revises Rules for Border Searches of Electronic Devices

Last week, U.S. Customs and Border Protection (“CBP”) released a revised Directive governing searches of electronic devices at the border.  These are the first official revisions CBP has made to its guidelines and procedures for devices since its 2009 Directive.  The new Directive is intended to reflect the evolution of technology over the intervening decade, and CBP’s corresponding need to update its investigative techniques.

Notably (and as in previous CBP Directives), the new Directive does not require officials to obtain a warrant before conducting searches of travelers’ devices—even if the traveler being searched is an American—based on CBP’s position that searches and seizures at the border are exempt from the Fourth Amendment’s “probable cause” requirement.  CBP nevertheless acknowledges that its searches must still meet the Fourth Amendment’s “reasonableness” requirement, which the self-imposed restrictions contained in the Directive are meant to achieve.

UK Government Consults on EU Cybersecurity Plans

As we summarized last fall, the EU Commission published a new Cybersecurity Communication in September that, among other things, sets out proposals for an EU cybersecurity certification framework as part of an EU “Cybersecurity Act” (see our post here and a more detailed summary here).  Just before the holidays, on December 20, 2017, the UK Government published a consultation on these proposals, which the UK Government will use to help develop its position.  Key elements of the proposals that the UK Government is consulting on include:

  • Harmonizing the existing cybersecurity certification landscape to reduce costs and administrative burdens for companies by establishing a common “European Cybersecurity Certification Framework for ICT products and services.”
  • Further specifying and publishing best practices relating to incident reporting and security obligations for some digital service providers under the NIS Directive (see our reports here and here).
  • Changes to the tasks and functions of ENISA, including providing ENISA with a strengthened and permanent mandate.

The UK Government also welcomes views from stakeholders on the impact of the proposals with respect to the UK’s exit from the EU.  The consultation closes on February 13, 2018.  Before then, by January 20, 2018, the UK Parliament has asked the UK Government to clarify issues relating to the proposals, including the “Cybersecurity Act” and cybersecurity certification.
