Today, 34 global technology and security companies announced that they have signed a Cybersecurity Tech Accord, which publicly commits them “to protect and empower civilians online and to improve the security, stability and resilience of cyberspace.” The signatories include Cisco, Dell, Facebook, HP, Intuit, and Microsoft.
IAB Europe opened the registration process for vendors and consent management providers (“CMPs”) to apply for approved status under IAB Europe’s Transparency and Consent Framework (“Framework”).
The Framework is intended to give publishers that have determined that the interest-based advertising products on their platforms require user consent a standardized way to (1) disclose to visitors the companies the publisher wishes to allow to access visitors’ browsers and devices, and (2) communicate visitors’ choices to other parties in the online advertising ecosystem. In essence, it aims to serve as a “common language” for communicating consumer consent for delivery of relevant online ads and content, where required under the GDPR or other applicable law. A document setting out answers to commonly asked questions about the Framework is available here.
Vendors (including SSPs, DSPs, ad exchanges, etc.) can now apply for the Global Vendor List (“List”). The List enables publishers using the Framework to disclose those vendors; it also enables vendors to receive consent signals from publishers. The registration form for the List is available here.
CMPs are companies that capture and store a publisher’s preferred vendors and purposes, together with each user’s consent status, and transmit that data throughout the online advertising ecosystem. With the new registration process, CMPs may apply for “approved CMP” status, which allows a company to operate as a CMP within the Framework. Because publishers can choose which CMPs they wish to work with – and because CMPs can develop their own APIs and user interfaces – the model potentially allows for greater flexibility. The application for approved CMP status is available here.
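To illustrate the “common language” idea, below is a minimal Python sketch of the per-vendor, per-purpose consent state a CMP might capture and pass downstream. The class name and the vendor/purpose IDs are hypothetical; the Framework’s actual consent signal is a compact encoded string, and this sketch does not implement that format.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """Illustrative model of the consent state a CMP might store.
    Vendor and purpose IDs here are hypothetical, not drawn from
    the actual Global Vendor List."""
    consented_vendors: set = field(default_factory=set)
    consented_purposes: set = field(default_factory=set)

    def grant(self, vendor_id: int, purpose_ids: set) -> None:
        # Record that the user consented to this vendor for these purposes.
        self.consented_vendors.add(vendor_id)
        self.consented_purposes |= purpose_ids

    def has_consent(self, vendor_id: int, purpose_id: int) -> bool:
        # A vendor may act only if the user consented to both the vendor
        # and the specific processing purpose.
        return (vendor_id in self.consented_vendors
                and purpose_id in self.consented_purposes)

record = ConsentRecord()
record.grant(vendor_id=12, purpose_ids={1, 3})
print(record.has_consent(12, 1))  # True
print(record.has_consent(12, 2))  # False
```

A real CMP would additionally persist this state and expose it via its API so that vendors on the List can read the consent signal.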
Once companies have submitted their application and received approval, they must pay an annual fee. Vendors will be issued an ID and published in the Framework; CMPs will receive an ID and sub-domain, and they will be listed on advertisingconsent.eu.
By Alyson Sandler
On April 10, Senators Richard Blumenthal (D-CT) and Ed Markey (D-MA) introduced new privacy legislation titled the Customer Online Notification for Stopping Edge-provider Network Transgressions (CONSENT) Act. In a statement published on his website, Senator Markey referred to the legislation as a “privacy bill of rights” and explained that “[t]he avalanche of privacy violations by Facebook and other online companies has reached a critical threshold, and we need legislation that makes consent the law of the land.”
The CONSENT Act directs the Federal Trade Commission (FTC) to “establish privacy protections for customers of online edge providers.” These protections include requiring edge providers to notify customers about the collection and use of “sensitive customer proprietary information,” which the Act defines to include, among other things, financial and health information, the content of communications, and web browsing and application usage history. Customers must also be notified about the types of sensitive customer proprietary information that the edge provider collects, how the information will be used and shared, and the types of entities the edge provider will share the information with.
The centerpiece of the CONSENT Act is its “opt-in” requirement for edge providers to obtain consent from customers for the use of “sensitive information.” This differs from the model currently employed by most online companies, under which customers may opt out of data collection. The Act also prohibits an edge provider from refusing to serve customers who do not consent to the use and sharing of their sensitive proprietary information for commercial purposes.
Last August, the Department of Justice arrested and indicted Marcus Hutchins, the security researcher who accidentally discovered the “kill switch” that stopped the “WannaCry” malware attack. Hutchins was not charged for anything to do with WannaCry, but rather for creating and conspiring to sell a different piece of malware, the “Kronos Banking trojan.” Apart from the dramatic circumstances surrounding his arrest, Hutchins’ indictment was notable for its lack of allegations connecting him to the United States. As previously discussed on this blog, the indictment raised the question whether it would violate Hutchins’ constitutional rights to charge him in this country.
Last week, Hutchins’ legal team filed a motion formally challenging the constitutionality of the indictment. The motion relies on the “sufficient nexus” doctrine — the principle that there must be a meaningful connection between a foreign defendant and the United States to satisfy due process. Since the government has not alleged that Hutchins aimed his conduct at the United States, he argues, no “sufficient nexus” has been shown.
The government has until April 18 to respond to Hutchins’ motion. Now that the sufficient nexus issue has been raised, one option would be to file a superseding indictment containing additional allegations connecting Hutchins to the United States. Indeed, Hutchins has also filed a motion for a bill of particulars — a request for additional information on the government’s charges — which might invite the government to supplement its allegations.
If the government does not address the motion factually, it will likely need to join issue legally. In that circumstance, the government is likely to argue that the alleged effects of Hutchins’ conduct on the United States satisfy the “sufficient nexus” test. This argument faces difficulties and remains largely untested. Most fundamentally, it requires the court to disregard caselaw from the analogous context of civil personal jurisdiction, where the diffuse effects of conduct aimed “worldwide” are insufficient to satisfy due process. The government will be hard-pressed to explain why the due process standard in the criminal context should be any less demanding.
[This article was originally published in Law360]
Last week, South Dakota became the 49th U.S. state to enact a data breach notification law with the passage of S.B. 62, which sets forth requirements for notifying state residents, the state attorney general, and major consumer reporting agencies in the event of a breach. The law, which will take effect on July 1, 2018, parallels many recently passed or amended state data breach notification laws through its inclusion of an expansive definition of “personally identifiable information” and an explicit deadline for notifying affected residents. However, a few elements of the law push further than comparable laws from other states and have the potential to shift companies’ data breach notification practices.
Under the new law, any person or business conducting business in South Dakota that owns or licenses computerized “personal or protected information” of South Dakota residents must provide notice of a breach unless certain exceptions apply. A “breach” occurs when personal or protected information was, or is reasonably believed to have been, acquired by an unauthorized person. Notably, the law defines an “unauthorized person” to include not only individuals who are not authorized to acquire or disclose personal information, but also individuals who are authorized to do so but have acquired or disclosed personal information “outside the guidelines for access o[r] disclosure established by the information holder.” This addition could affect how businesses evaluate potential data security incidents: access or disclosure by an otherwise-authorized person in violation of internal policies could now qualify as a breach under the statute.
Henriette Tielemans, co-chair of Covington’s global Data Privacy and Cybersecurity practice, today received the IAPP Privacy Vanguard Award, the industry’s top honor, in recognition of her long service to the data privacy community.
The International Association of Privacy Professionals (IAPP) is the world’s largest and most comprehensive global information privacy community. Each year, the IAPP names the people and organizations making a difference in the world of privacy. “We are proud to present Jetty with the coveted Vanguard Award, honoring her fearless leadership and deep expertise in the field of privacy,” said J. Trevor Hughes, CIPP, President and CEO of the IAPP.
Ms. Tielemans has been practicing data protection law for over 15 years. She focuses on international data transfers, binding corporate rules, big data, cloud computing, GDPR-related issues, and e-discovery. She previously served on the IAPP’s Board of Directors and Executive Committee, and on a five-member expert group designated by the European Commission to discuss revision of the 1995 Data Protection Directive.
“Jetty was present at the very inception of the EU privacy laws, and has remained at the leading edge of new developments over the course of her career,” said Kurt Wimmer, co-chair of the firm’s global Data Privacy and Cybersecurity practice. “She is respected by policymakers and clients alike, and is a generous and creative colleague. I think everyone at Covington will agree that the global growth of our group is a testament to Jetty’s leadership, vision, and character.”
On March 23, 2018, Congress passed, and President Trump signed into law, the Clarifying Lawful Overseas Use of Data (“CLOUD”) Act, which creates a new framework for government access to data held by technology companies worldwide.
The CLOUD Act, enacted as part of the Consolidated Appropriations Act, has two components.
Part I: Extraterritorial Reach of U.S. Orders and Comity Rights for Providers
The first part of the CLOUD Act provides that orders issued pursuant to the Electronic Communications Privacy Act (“ECPA”) can reach data regardless of where that data is stored. This portion of the law addresses the question at the heart of United States v. Microsoft, the Supreme Court case that was argued on February 27.*
Part I of the Act also creates a new statutory mechanism by which technology companies can challenge warrants based on the material risk of a conflict with the laws of qualified foreign countries—specifically, those countries that enter into bilateral agreements of the type contemplated in Part II of the Act and that afford reciprocal comity rights to the United States (referred to as “qualifying foreign governments”). The CLOUD Act also preserves the common law rights of providers to bring comity challenges based on conflicts of laws with other countries (i.e., those that are not “qualifying foreign governments” under the Act).
Under this new statutory comity framework, a provider may file a motion to modify or quash U.S. legal process if it reasonably believes: (1) the customer or subscriber is not a U.S. person and does not reside in the United States, and (2) the required disclosure would create a material risk of violating the laws of a qualifying foreign government.
In any such challenge, a court may modify or quash the legal process upon finding that: (1) the required disclosure would violate the qualifying foreign government’s law, and (2) the interests of justice dictate that the legal process should be modified or quashed. In conducting this second inquiry, courts are to consider a series of comity factors set out in the statute. During the pendency of such a challenge, the provider may notify the qualifying foreign government of the existence of the legal process and thereby allow the foreign government to raise any concerns directly with the U.S. Government.
Part II: Framework for Bilateral Agreements on Cross-Border Data Requests
The second part of the CLOUD Act creates a framework for new bilateral agreements with foreign governments for cross-border data requests. Under these bilateral agreements, the United States and participating foreign governments would remove legal restrictions that otherwise prohibit technology providers from complying with the other country’s legal requests.
Previously, governments had to invoke mutual legal assistance treaties (“MLATs”) to obtain evidence stored in another country. Under the MLAT process, a foreign government seeking information from a U.S. provider would ask the U.S. Department of Justice to obtain a U.S. court order for that information. Part II of the CLOUD Act creates a new framework that instead allows foreign governments to serve legal process directly on U.S. providers, without the necessity of first making an MLAT request to the U.S. Department of Justice.
Because the CLOUD Act has no effect on a foreign government’s jurisdiction over U.S. companies, any obligation by a provider to comply with a foreign order issued pursuant to such an agreement must arise under the foreign law. In other words, the CLOUD Act removes barriers that might otherwise prohibit a U.S. provider from complying with a foreign government’s order, but the CLOUD Act does not compel a U.S. provider to comply with any foreign order.
Not all governments can enter into bilateral agreements under the CLOUD Act. Before a country may do so, the Attorney General must submit written certifications to Congress attesting that the country meets specific criteria establishing that its domestic law affords robust substantive and procedural protections for privacy and civil liberties. Additionally, the foreign government must adopt procedures to minimize the acquisition and retention of information about U.S. persons, and the agreement may not impose a decryption obligation on providers.
Bilateral agreements must also contain a number of limits on the types of orders that may be submitted by the foreign government directly to a U.S. provider, including:
- Orders must be for the purpose of obtaining information relating to a serious crime, including terrorism.
- Orders must identify a specific person, account, address, device, or other identifier.
- Orders must comply with the foreign government’s domestic law.
- Orders must be based on requirements for a reasonable justification based on articulable and credible facts.
- Orders must be subject to judicial review prior to, or in enforcement proceedings regarding, enforcement of the order.
- Orders for interceptions must be for a fixed and limited time, may not last longer than reasonably necessary to accomplish the order’s purposes, and may only be issued if the same information could not be obtained by a less obtrusive method.
- Orders may not be used to infringe freedom of speech.
Foreign governments that enter such bilateral agreements must also agree to periodic compliance reviews by the U.S. Government.
Finally, the CLOUD Act contains specific provisions addressing how these bilateral agreements will be entered into and renewed. Under those provisions, once the Attorney General certifies a new agreement, it is to be considered by Congress. The agreement will enter into force unless Congress enacts a joint resolution of disapproval within 180 days. Every five years, the Attorney General is to review the determination that a foreign country meets the requirements for entering into a bilateral agreement. If the determination is renewed, the Attorney General is to submit a report to Congress explaining the reasons for the renewal, any substantive changes to the agreement or to the foreign law, how the agreement has been implemented, and what problems or controversies, if any, have arisen.
* Covington represents Microsoft Corporation in United States v. Microsoft, No. 17-2.
Artificial intelligence promises to be a paradigm shift for many applications from manufacturing to finance, and from defense to education. Given the vast potential, focus on AI has sharpened around the world, including in China. Decision makers in Beijing and around the country are paying attention and have begun shaping a legal and policy regime that favors the development of AI.
Research and investment in AI on both sides of the Pacific have led to cross-border collaboration – both in terms of talent and capital. Last December, Google announced that it will open an AI research center in Beijing, in part to leverage AI talent there. A month earlier, San Diego-based Qualcomm announced a strategic investment in SenseTime, a Chinese company specializing in facial-recognition software. China’s technology giants, including Tencent and Baidu, already have AI research labs in the US. And Didi Chuxing, China’s leader in ride-hailing technology, which has a lab in Silicon Valley, officially launched its “AI Labs” research initiative on January 26, boasting a team of over 200 AI scientists and engineers.
But how does the Chinese legal and regulatory environment affect the development of these technologies?
Last summer, the State Council released “A Next Generation Artificial Intelligence Development Plan” (“Plan”), which sets the goal of having China become the world leader in AI by 2030. The Plan divides China’s AI goals into three “Strategic Objectives” to be met by 2020, 2025, and 2030, respectively. By 2020, the Plan aims to bring China’s AI up to global standards, with important achievements in AI applications and theory, as well as a “core AI industry” of at least 150 billion RMB. By 2025, it aims to begin the establishment of AI laws and regulations, as well as a core AI industry of at least 400 billion RMB, including sectors such as intelligent manufacturing, medicine, agriculture, and urban planning. Finally, by 2030, the Plan aims for China to become the world’s leading AI developer, with AI deeply embedded in daily life and a core industry exceeding one trillion RMB.
To accomplish these quantitative goals, the Plan outlines a number of “focus tasks” that touch on the application of AI to social, economic, and national security challenges. The Plan also lays out several “guarantee measures” intended to support and guide the development and application of AI, such as necessary laws and regulations, ethical frameworks, and resource allocation principles. While the Plan is scant on concrete details, its ambitious agenda and discrete policy tasks point toward significant industry, legal, and regulatory developments in the near future.
Building on the State Council’s Plan, on December 13, 2017 the Ministry of Industry and Information Technology (“MIIT”) released the “Three-Year Action Plan to Promote the Development of a New Generation of the Artificial Intelligence Industry (2018-2020)” (“Action Plan”). The Action Plan encourages efforts in key areas, including autonomous vehicles, intelligent service robots, intelligent unmanned aerial vehicles, medical image diagnosis assistance systems, video and imaging identification systems, intelligent voice interactive systems, intelligent translation systems, and smart home products. It also calls for making breakthroughs in “core foundational” technologies, including intelligent sensors, neural network chips, and open source platforms. Finally, the Action Plan calls on the government and the financial industry to support AI initiatives.
Even at this early stage, there are signs that these initiatives are moving forward. Bloomberg reported last October that Megvii Inc., a Chinese facial recognition company, had set a new record for the largest single-round investment in an AI company, raising $460 million from investors, including one of China’s largest state-backed venture funds. In early January, the city of Beijing announced plans to build a $2.12 billion (13.8 billion RMB) AI development park and also released plans for a dedicated zone to test autonomous vehicles. And the Nieman Foundation reported that China’s state news agency, Xinhua, will be rebuilding its newsroom to integrate AI into the newsmaking process.
At the same time, the Government is attempting to reconcile an apparent tension between citizens’ increased privacy awareness and flexible policy frameworks that allow AI to flourish. Because access to data is a critical resource for developing AI, including through machine learning, efforts on data protection and cross-border transfers bear on these developments. See our post here on recently issued legal frameworks to protect citizens’ information.
With official encouragement like this, China’s AI prowess is advancing. Upcoming posts in this series will cover how the Chinese Government develops local regulations and national “standards” that serve as experiments for policy innovation in China.
The U.S. Court of Appeals for the D.C. Circuit on Friday issued a long-awaited ruling in a lawsuit challenging the Federal Communications Commission’s interpretations of key terms under the Telephone Consumer Protection Act of 1991 (“TCPA”). The court held that the FCC in 2015 had adopted an unreasonably broad definition of the type of calling equipment subject to special restrictions under the TCPA — a definition so broad it would include any modern smartphone — and had failed to adequately justify its approach regarding liability for calls placed to cell phone numbers that have been reassigned to a new user.
The court upheld the FCC’s ruling that a party who has consented to receive calls may revoke that consent “through any reasonable means clearly expressing a desire to receive no further messages from the caller.” The court also upheld the FCC’s decision to exempt from the TCPA’s consent requirements certain calls communicating urgent healthcare messages.
The D.C. Circuit’s unanimous decision addresses a consolidated set of petitions by various companies and trade associations — first filed in the summer and fall of 2015 and argued before the D.C. Circuit in 2016 — seeking review of a declaratory ruling released by the FCC in July 2015 (the “Omnibus Ruling”). In the Omnibus Ruling, the FCC ruled on a total of 21 petitions seeking “clarification or other actions” regarding the TCPA, principally in connection with automated calls and text messages.
Petitioners sought court review of four aspects of the Omnibus Ruling.
By Bruce Bennett, Carlo Kostka, Charlotte Hill, Craig Pollack, Dan Cooper, Gemma Nash, Kristof Van Quathem, Mark Young, and Sophie Bertin
The EU Payment Services Directive (PSD2), which took effect on January 13, 2018, obliges banks to give Third Party Providers (TPPs) access to a customer’s payment account data, provided the customer expressly consents to such disclosure. The new legislation is intended to improve competition and innovation in the EU market for payment services. The General Data Protection Regulation (GDPR), which is due to take effect on May 25, 2018, enhances individuals’ rights when it comes to protecting their personal data. The interaction between PSD2, aimed at increasing the seamless sharing of data, and the GDPR, aimed at regulating such sharing, raises complicated compliance concerns.
For example, where banks refrain from providing TPPs access to customer payment data for fear of breaching the privacy rights of their customers under the GDPR, competition authorities may consider this a breach of competition law. This concern is already becoming a reality for banks – on October 3, 2017, the European Commission carried out dawn raids on banking associations in Poland and the Netherlands following complaints from fintech rivals that the associations were not providing them with what they considered legitimate access to customer payment data.
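The PSD2/GDPR tension described above reduces to a gatekeeping question: may the bank release this customer’s payment account data to this TPP? The following Python sketch shows that check under simplified, assumed data structures; the identifiers and consent model are illustrative and are not drawn from PSD2’s regulatory technical standards.

```python
# Hypothetical registry of (customer, TPP) pairs for which the customer
# has given express consent to share payment account data — PSD2's
# precondition for TPP access. The layout is illustrative only.
consents = {("customer-42", "tpp-acme")}

def may_release_account_data(customer_id: str, tpp_id: str) -> bool:
    """Release data only where express consent exists: withholding it
    otherwise respects the GDPR, while honoring it satisfies PSD2."""
    return (customer_id, tpp_id) in consents

print(may_release_account_data("customer-42", "tpp-acme"))   # True
print(may_release_account_data("customer-42", "tpp-other"))  # False
```

In practice a bank would also have to verify the TPP’s regulatory authorization and the scope and currency of the consent, which is precisely where the two regimes can pull in opposite directions.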