Is The Hutchins Indictment Over Malware Unconstitutional?

By Alex Berengaut

[This article also was published in Law360.]

In May 2017, the “WannaCry” malware was used to launch a worldwide ransomware cyberattack. WannaCry encrypted files on victim computers and demanded a ransom, payable in bitcoin, in exchange for the decryption key. The attack was stopped when a British security researcher, Marcus Hutchins, accidentally discovered and activated a “kill switch” in the malware.

In a dramatic turn of events, Hutchins was arrested earlier this month by the FBI in Las Vegas as he was returning home from a cybersecurity conference. He was not charged with anything related to WannaCry; rather, the government alleged that he had created and conspired to sell a different piece of malware, the “Kronos” banking Trojan, software that recorded and stole user credentials and other personal identifying information. On Aug. 14, 2017, he pleaded not guilty to the charges against him.

Since Hutchins’ indictment, commentators have questioned whether the creation and selling of malware—without actually using the malware—violates the two statutes under which Hutchins was charged: the Computer Fraud and Abuse Act and the Wiretap Act.[1] It is likely that these issues will be litigated as the case unfolds.

But there is another question raised by the indictment: whether it violates Hutchins’ constitutional rights to charge him for his alleged conduct under any statute in this country. Several circuits—including the Seventh Circuit, where Hutchins’ case will be heard—have recognized that the federal government cannot charge just anyone, anywhere in the world, without regard to that person’s connections to the United States.[2] As the Second Circuit has put it, “[i]n order to apply extraterritorially a federal criminal statute to a defendant consistently with due process, there must be a sufficient nexus between the defendant and the United States so that such application would not be arbitrary and fundamentally unfair.”[3]

NIST Releases Fifth Revision of Special Publication 800-53

By Susan Cassidy, Jenny Martin, and Catlin Meade

The National Institute of Standards and Technology (“NIST”) released on August 15, 2017 its proposed update to Special Publication (“SP”) 800-53.  NIST SP 800-53, which was last revised in 2014, provides information security standards and guidelines, including baseline control requirements, for implementation on federal information systems under the Federal Information Security Management Act of 2002 (“FISMA”).  The revised version will still apply only to federal systems when finalized, but one of its stated objectives is to make the cybersecurity and privacy standards and guidelines accessible to non-federal and private sector organizations for voluntary use on their systems.

In its announcement of the draft revision, NIST explains that the update “responds to the need by embarking on a proactive and systemic approach to develop and make available to a broad base of public and private sector organizations, a comprehensive set of safeguarding measures for all types of computing platforms, including general purpose computing systems, cyber-physical systems, cloud and mobile systems, industrial/process control systems, and Internet of Things (IoT) devices.”  In particular, a key purpose of the update process was to assess the relevance and appropriateness of the current security controls and control enhancements designated for each baseline (low, moderate, and high) to ensure that protections are commensurate with the harm that would result from a compromise of applicable government data and systems.  In addition, the revised guidelines recognize the need to secure a much broader universe of “systems,” including industrial control systems, IoT devices, and other cyber-physical systems, than the “information systems” that were the focus of prior iterations of SP 800-53.  Relatedly, the revised publication also identifies those controls that are both security and privacy controls, as well as those controls that are the primary responsibility of privacy programs.

New ECPA Reform Legislation Introduced in the Senate

By Lauren Moxley

In late July, three bipartisan bills to reform the Electronic Communications Privacy Act of 1986 (“ECPA”) were introduced in the Senate. Each of the bills proposes different updates to ECPA, which governs law enforcement access to consumer information stored with service providers. As we have discussed here, here, here, and here, the 1986 law has been criticized for being outdated in today’s digital world.

Senator Mike Lee (R-UT) introduced the Email Privacy Act (S.B. 1654), along with a coalition of seven co-sponsors: Senator Patrick Leahy (D-VT), Senator Steve Daines (R-MT), Senator Richard Blumenthal (D-CT), Senator Dean Heller (R-NV), Senator Jeanne Shaheen (D-NH), Senator Cory Gardner (R-CO), and Senator Al Franken (D-MN). As ECPA currently stands, law enforcement needs only a subpoena to access emails that have been stored for more than 180 days. Among other changes, this bill would require law enforcement to obtain a warrant before accessing the contents of communications, regardless of how long those communications have been stored.

Senator Lee also introduced the ECPA Modernization Act of 2017 (S.B. 1657), along with co-sponsors Senator Leahy and Senator Daines. Like the Email Privacy Act, this bill includes a warrant requirement for access to consumer communications. It also includes a number of more comprehensive reforms, including a particularity requirement, rules governing law enforcement access to geolocation information, and a new set of obligations to notify a customer whose material has been accessed.

Senator Orrin Hatch (R-UT) re-introduced the International Communications Privacy Act (S.B. 1671), which is designed to address law enforcement access to data stored abroad. The bill has two co-sponsors, Senator Heller and Senator Christopher Coons (D-DE). More information about this bill is expected soon.


A Summary of the Recently Introduced “Internet of Things (IoT) Cybersecurity Improvement Act of 2017”

On August 1, 2017, a bipartisan group of Senators introduced legislation (fact sheet) that would establish minimum cybersecurity standards for Internet of Things (“IoT”) devices sold to the U.S. Government.  As Internet-connected devices become increasingly ubiquitous and susceptible to evolving and complex cyber threats, the proposed bill attempts to safeguard the security of executive agencies’ IoT devices by directing executive agencies to include specified clauses in contracts for the acquisition of Internet-connected devices.

The bill’s provisions leverage federal purchasing power to improve the security of IoT devices by requiring, among other things, IoT device, software, and firmware providers to certify compliance with specified security controls and requirements relating to vulnerability patching and notification, unless such contractors otherwise satisfy one of three waiver requirements.

The bill also directs the Department of Homeland Security (“DHS”) to issue vulnerability disclosure guidance for government contractors; amends federal statutes, specifically the Computer Fraud and Abuse Act (“CFAA”) and the Digital Millennium Copyright Act (“DMCA”), to exempt certain “good faith” activities by cybersecurity researchers; and requires all executive branch agencies to maintain an inventory of IoT devices active on their networks.

In addition, the statute would require the Director of the Office of Management and Budget (“OMB”) to issue guidelines to federal agencies consistent with the bill within 180 days of enactment.

The bill is summarized below.

Obligations for Contractors

If passed, the bill would require the OMB Director, in consultation with other executive departments and agencies, to issue guidelines requiring each agency to include the following key clauses in future “contract[s] . . . for the acquisition of Internet-connected devices.”

– Written certification from the contractor that its devices:

  • Do not contain components with any known security vulnerabilities or defects listed in the National Institute of Standards and Technology’s (“NIST”) National Vulnerability Database or a similar database identified by the OMB Director;
  • Include components that are capable of receiving “properly authenticated and trusted” patches from vendors;
  • Utilize industry-standard technology and components for communication, encryption, and interconnection with peripherals; and
  • Do not include “fixed or hard-coded passwords” to receive updates or enable remote access.

– A requirement to notify the purchasing agency of any “known security vulnerabilities or defects subsequently disclosed to the vendor by a security researcher” or when a vendor becomes aware of such a vulnerability during the life of a federal contract.

– A requirement to update, replace, or remove, in a timely manner, vulnerable software and firmware components in a properly authenticated and secure manner.  This includes a requirement to provide information to the purchasing agency regarding the manner of such updates, as well as a timeline and formal notice when ending security support.

One potential issue for contractors is how broadly the proposed bill defines an “Internet-connected device,” specifically, as a “physical object that is capable of connecting to and is in regular connection with the Internet; and has computer processing capabilities that can collect, send, or receive data.”  Given this expansive definition, contractors should consider the bill’s (and its implementing regulations’) impact on, among other things, end items and components that are connected to an Internet-connected device.

Waiver of Contract Clause Requirements

The measure also includes several exceptions.  Contractors may submit an application for a waiver from certain prescribed contract clause requirements if they disclose known vulnerabilities in IoT devices marketed to the government.  Executive agencies may also seek a waiver if procurement of IoT devices in compliance with required contract certification clauses would be “unfeasible or economically impractical.”

The proposed statute also permits executive agencies to procure IoT devices compliant with other existing security standards.  Specifically, executive agencies would be permitted to purchase devices that comply with existing security standards set by a third party or the purchasing agency if the standard provides an equivalent or greater level of security than those prescribed by the bill’s required contract clauses.  For these purposes, NIST would develop third-party accreditation standards and ensure that an agency’s existing standards provide appropriate security protections.

Disclosure of Security Vulnerabilities and Defects

The legislation would require DHS’s National Protection and Programs Directorate to issue guidelines regarding “cybersecurity coordinated disclosure requirements” with which contractors selling IoT devices to the government must comply.  The guidelines would outline:

– Policies and procedures for research relating to the security of an IoT device based on Standard 29147 of the International Organization for Standardization or any comparable standard; and

– Requirements for researching and testing the security of an IoT device, including a provision that the same class, model, and type of device be used for research and testing purposes.

Amendments to Federal Statutes

The legislation would amend the CFAA and DMCA to exempt from liability cybersecurity researchers and experts who (1) “in good faith” engaged in researching the security of an IoT device of the same “class, model, or type” procured by a federal agency, and (2) complied with future DHS-issued guidelines for vulnerability disclosure that the contractor adopted.

IoT Device Inventory

The bill would require each executive agency to establish an inventory of Internet-connected devices within 180 days of enactment.  In support of this effort, the OMB Director, in consultation with the DHS Secretary, would issue guidelines within 30 days of enactment detailing the organization and management of agency IoT device databases.  The legislation would also require the OMB Director to create publicly accessible databases listing manufacturers and IoT devices that are afforded liability protections, as well as manufacturers that have formally notified the government that support services for a particular device have been terminated.  In addition to maintaining the databases, the OMB Director must also ensure the databases are updated at least once every 30 days.

Finally, the bill directs NIST to ensure that it establishes and uses best practices in identifying and tracking vulnerabilities for purposes of maintaining the NIST National Vulnerability Database.


Contractors should keep an eye on this proposed bill because, if it becomes law, it would impose new, potentially onerous obligations on them.

D.C. Circuit: Data Breach Plaintiffs Plausibly Allege ‘Substantial Risk’ of ID Theft Sufficient to Support Standing

Customers’ allegations that they face a substantial risk of identity theft as a result of a 2014 data breach are sufficiently plausible to allow their suit against health insurer CareFirst to proceed, the U.S. Court of Appeals for the D.C. Circuit held in an August 1 decision.

CareFirst discovered in April 2015 — and announced a month later — that an unknown intruder had gained access in June 2014 to a database containing personal information about CareFirst’s customers.  Seven customers then brought a class-action lawsuit against CareFirst in the federal district court in Washington, D.C., alleging among other things that CareFirst was negligent in protecting customer data, and that customers as a result faced an increased risk of identity theft.

The district court dismissed the suit, finding that the plaintiffs had not alleged that hackers had accessed the plaintiffs’ social security numbers or credit card information, and that the risk of hackers stealing the plaintiffs’ identities without such information was too speculative to satisfy the requirements of Article III of the U.S. Constitution, which requires that federal courts hear only actual “cases or controversies.”  The Supreme Court has held that this requirement bars lawsuits where the plaintiffs have not alleged that they have suffered or imminently will suffer a concrete injury.

Department of Justice Releases Guidance for Vulnerability Disclosure Programs

Last week, the U.S. Department of Justice (“DOJ”) released a voluntary framework for organizations to use in developing a formal program to receive reports of network, software, and system vulnerabilities, and to disclose vulnerabilities identified in other organizations’ environments.  This framework provides private entities with a series of steps to establish a formal program that balances the need to enhance organizations’ cybersecurity with the potential legal risks associated with identifying, testing, and disclosing vulnerabilities.  While the framework does not prescribe specific requirements, it does provide guidance that an organization should consider, whether it is developing a new disclosure program or already has an established one.  The framework also appears consistent with previous U.S. Government guidance on vulnerability disclosure, such as the policies and guidance published by the U.S. Department of Defense, the General Services Administration’s 18F Office, and the National Telecommunications & Information Administration.

In sum, the four-step framework recommends that an organization consider the following:

Step 1: Design the vulnerability disclosure program.

  • Decide whether to apply the disclosure program across the entire enterprise or to focus on certain portions of its network, applications, or data types.
  • When choosing to include sensitive data (or systems that process or store sensitive data), an organization should “seriously weigh the risks and consequences of exposing [sensitive] information that it has a legal duty to protect and . . . consider consulting with legal counsel when making its scoping decisions.”
  • Establish a program that focuses on certain types of vulnerabilities rather than all vulnerabilities — for example, a program may focus on software flaws, weak password management practices, outdated and poorly configured systems that are susceptible to exploitation, and/or inadequate security training.
  • Assess whether any third-party interests may be involved (such as a cloud service provider storing the organization’s data or hosting its infrastructure) and account for those interests; otherwise, the program may lack the appropriate authorization to access the third-party’s systems and subject the organization to heightened legal risk.

Step 2: Plan for administering the vulnerability disclosure program.

  • Establish a process for vulnerability reporting that includes authenticating the accuracy of the vulnerability.
  • If the program includes sensitive data, limit access, processing, and retention of sensitive data by testing and reporting entities.
  • Identify key points-of-contact to receive and process vulnerability reports, and “[i]dentify personnel who can authoritatively answer questions about conduct that the [program] does and does not authorize.”
  • Decide how to handle “accidental, good faith violations” and “intentional, malicious violations” of the program.

Step 3: Draft a vulnerability disclosure policy that accurately and unambiguously captures the organization’s intent.

  • Describe what type of conduct is authorized and unauthorized, including, but not limited to, specific techniques, use of the organization’s data, deletion or alteration of data, and denying access to systems.
  • Identify what portions of an organization’s network, applications, or data types are in scope.
  • Establish program controls to protect sensitive data and systems that process or store sensitive data.
  • Outline the potential consequences for complying (and not complying) with the disclosure program.

Step 4: Implement the vulnerability disclosure program.

  • Ensure an organization’s vulnerability disclosure policy is “easily accessible and widely available.”  Some examples include advertising the program and prominently displaying the policy on an organization’s website.
  • Consider requiring anyone who performs related activities to do so under the established program.


California Bill Poised to Change Regime Governing the Internet of Things

A bill pending in the California legislature, if passed, would create new obligations for manufacturers of “connected devices.” S.B. 327 (also known as the “Teddy Bear and Toaster Act”) would operate somewhat differently than existing laws, such as the California Online Privacy Protection Act (“CalOPPA”).

Security obligations. Manufacturers of connected devices that sell those devices in California would be required to equip the device with “reasonable security features appropriate to the nature of the device and the information it may collect, contain, or transmit, that protect the device and any information contained therein from unauthorized access, destruction, use, modification, or disclosure.”

Notice obligations. Connected devices would be required to provide notice about information the device is capable of collecting “through the use of words or icons on the device’s packaging, or on the product’s, or on the manufacturer’s Internet Web site.” The notice itself would describe whether the device is capable of collecting certain information (compared to CalOPPA, which requires notice of what personally identifiable information the operator “collects”). It also would describe the process for collecting that information, the frequency of the collection, and if and how the consumer can obtain information about security patches and feature updates. The notice requirement contrasts with a prior version of the bill, which would have required devices to indicate “through visual, auditory, or other means” when they are collecting information.

Consent obligations. Manufacturers that sell connected devices to California consumers would be required to “obtain consumer consent” before collecting or transmitting “information beyond what is necessary in order to fulfill a user transaction or for the stated functionality of the connected device.” The bill does not specify whether this consent is opt-in or opt-out consent, but it does note that the consent shall remain in effect until the consumer revokes it.

Exceptions. The bill seems to exempt from consent requirements manufacturers’ collection or use of “deidentified information” collected from a connected device for certain purposes, such as developing, diagnosing, or improving the device, among others. Notably, “deidentified information” is defined as information that does not contain “any link or connection to the consumer or user of the device.” And, in order for information to be deidentified, the bill sets forth a three-part test that must be satisfied, including among other things that deidentification procedures occur locally on the device.

The bill’s author, Senator Hannah-Beth Jackson, tabled the bill until the next legislative year. Thus, as a so-called “two-year bill,” consideration and debate will resume in January 2018.

Chinese Agencies Announce Plan to Audit Privacy Policies of Ten Popular Online Services

On July 26, four Chinese agencies, the Cyberspace Administration of China (“CAC”), the Ministry of Industry and Information Technology (“MIIT”), the Ministry of Public Security (“MoPS”), and the National Standards Committee, announced their plan to begin the government’s campaign to improve the protection of personal information, according to Xinhua News Agency (link is in Chinese).  The campaign, called the “Action Plan to Improve Personal Information Protection,” will start with an audit of the privacy policies of the ten most popular online services in China.

Officials from CAC’s Cybersecurity Coordination Bureau indicated that the privacy policy audit is an important step to implement China’s new Cybersecurity Law, which took effect on June 1, 2017.  Through this process, the regulators will balance the protection of personal information with the use of data to improve services for Chinese users.

This development signals the government agencies’ increased focus on companies’ data protection practices.  Companies operating in China should consider reviewing their in-country privacy policies and data practices to ensure they conform with legal requirements and best practices.

FCC Fines Calling Platform $2.88 Million for TCPA Violations

Last week, the FCC issued a forfeiture order imposing a $2,880,000 penalty on Dialing Services, LLC (“Dialing Services”), finding that Dialing Services made automated calls to wireless phones without prior express consent, in violation of the Telephone Consumer Protection Act (“TCPA”).  Dialing Services is a platform that offers automated calling services to its customers, and this Order is the culmination of the FCC’s investigation of the company dating back to 2012.

In 2012, FCC staff determined that Dialing Services had made more than 4.7 million calls to wireless phones in violation of the TCPA during a three-month period.  The Enforcement Bureau (“Bureau”) issued a citation in March 2013, directing the company to certify that it had stopped making calls in violation of the TCPA.  During a follow-up investigation, the staff determined that Dialing Services had continued placing calls after the citation, including 184 additional unauthorized calls to wireless phones in May 2013.  As a result, the FCC issued a Notice of Apparent Liability (“NAL”) in May 2014, proposing a $2.94 million fine.  (The ultimate forfeiture order reduced this amount to $2.88 million based on evidence that some of the calls were made with consent.)

In response to the NAL, Dialing Services asserted (among other things) that unlike its customers, it was merely a platform and therefore did not “make” or “initiate” the calls at issue under the TCPA.  The FCC applied its test for determining whether a party “initiated” or “made” a call for TCPA purposes from the 2013 Dish Network declaratory ruling:  whether the party “takes the steps necessary to physically place a telephone call” or, alternatively, is “so involved in the placing of a specific telephone call as to be directly liable for making it.”

CJEU: EU-Canada proposed agreement on the transfer of Passenger Name Record data does not conform to EU data protection law standards

By Dan Cooper and Rosie Klement

On July 26, 2017, the Court of Justice of the EU (CJEU) published Opinion 1/15 (the “Opinion”) on the proposed agreement between the European Union and Canada on the transfer and processing of passenger name record (“PNR”) data (the “Agreement”).  The Agreement was signed in 2014, but the CJEU was asked to determine whether it was compatible with EU data protection law before it is approved by the European Parliament.

The Opinion concluded that a number of provisions relating to the transfer of PNR data – particularly sensitive data – are incompatible with the EU Data Protection Directive (Directive 95/46) and with the fundamental rights to privacy and data protection, and the protection against discrimination, under Articles 7, 8 and 21 of the EU Charter of Fundamental Rights (the “Charter”).  As a result, the Agreement must be renegotiated before it can enter into force.

Notably, the CJEU’s opinion was consistent with its recent judgments concerning data transfers to “third countries” (outside the EEA) in Schrems and Tele2/Watson.