On the International Association of Privacy Professionals’ Privacy Perspectives blog, Covington privacy associate Jeff Kosseff shares his thoughts about the potential newsgathering uses of drones, and the dangers of overregulating them.
By Gabriel Slater and Kurt Wimmer
As reported in the press, President Obama plans to issue an Executive Order authorizing the Commerce Department’s National Telecommunications and Information Administration (“NTIA”) to coordinate the development of privacy guidelines for commercial drone operations. More specifically, we understand that NTIA would coordinate a “multi-stakeholder process” — a procedure previously used to address privacy issues, such as mobile apps and facial recognition technology, that were not being sufficiently addressed elsewhere in the government.
It is not clear when the Executive Order will be issued. It is also not known whether it will be coordinated with the FAA’s separate development of rules for integrating commercial drone operations into U.S. airspace.
We understand that the NTIA process will be intended to develop an industry code of conduct, rather than binding rules. However, adherence to any resulting code of conduct could be enforced by the Federal Trade Commission under Section 5 of the FTC Act, which permits it to take action against deceptive trade practices.
For this reason, industry stakeholders must approach these processes and any resulting agreements very carefully.
As an indicator of the continuing focus of government authorities on cybersecurity breaches and potential notification requirements, certain contractors for the federal government may soon face new rapid reporting requirements for successful network penetrations. Specifically, President Obama signed the 2014 Intelligence Authorization Act (“2014 IAA”) into law on July 7, 2014, starting a 90-day clock under Section 325 of the Act for the Director of National Intelligence (“DNI”) to promulgate regulations for “cleared intelligence contractors” to report the successful penetration of their networks and information systems.
Section 325 defines a cleared intelligence community (“IC”) contractor as “a private entity granted clearance . . . to access, receive, or store classified information for the purpose of bidding for a contract or conducting activities in support of [the IC].” The new regulations will apply to “covered” networks and information systems that “contain or process information created by or for an element of the [IC] with respect to which such contractor is required to apply enhanced protection.”
At last week’s second annual South by Southwest (“SXSW”) V2V conference, described as “an extension and re-imagining of the legendary SXSW experience with an emphasis on the creative spark that drives entrepreneurial innovation,” three days of educational programming covered various topics of relevance to the legal/privacy community, including net neutrality, data privacy, competitive design, and emerging technologies using big data. Although the conference was geared toward startups and innovators seeking to take their ideas to the next level, an entire workshop was dedicated to privacy.
The general topics of privacy and big data dominated the larger SXSW Interactive conference held this past March in Austin, TX, but the intimate setting offered by SXSW V2V allowed for a deeper dive. Specifically, the bootcamp-style privacy workshop explored key principles and specific strategies and metrics that companies big and small can use to improve their privacy practices. Starting from the basic premises that privacy is “far from dead” and “the right thing to do,” Privacy Identity Innovation co-founder Natalie Fonseca presented the “Lean Privacy” approach to improving data-privacy management for startups. Lean Privacy subscribes to the theory that being a responsible data steward is good for business — “privacy is the new green.” It is primarily intended to help entrepreneurs distinguish their products and services at conception by using privacy as a mechanism to build trust with users. The following core principles were presented as the four main tenets of Lean Privacy:
- Respect Matters. Lean Privacy is about ethics. Respect users, and “do the right thing.”
- Personal Data Means People. While it may be widely promoted today that “big data is the new oil,” thus making data “our competitive advantage,” personal data is still about consumers’ individual information. Big data usually ties back to small data. It requires empathy.
- Users Have Expectations. Don’t be the “Jack in the Box” who shocks users for the first time. Avoid surprising users in negative ways.
- Trust Is Built Over Time. “Say what you do; do what you say.” Trust takes time, but it can be lost in an instant.
Ms. Fonseca noted that there are big risks associated with not caring about privacy, whereas companies can find tremendous business value in demonstrating an understanding that privacy matters. Entrepreneurs weighing whether to make privacy a primary part of their startup’s business model were especially reminded that privacy is not just the right thing to do, but usually “the easy thing to do” as well. Moreover, although data is an asset, it can also become a liability. Lean Privacy therefore helps businesses strategically consider the risks rather than become consumed with collecting everything they possibly can.
The workshop concluded with clear steps that companies can take to improve their privacy practices. Ms. Fonseca stressed that the one critical thing that every startup must do to begin improving privacy is to establish accountability. To do this, she suggested empowering a “Privacy Champion” or “Privacy Master,” ideally a founder, who is responsible for thinking about the company’s data-privacy practices. “That person doesn’t need to be a legal expert,” she said. “What matters is that the Privacy Champion have the authority and the support to be an effective advocate for building privacy into the company from the ground up.”
The FTC staff has posted revisions to three Frequently Asked Questions (“FAQs”) related to obtaining verifiable parental consent under its COPPA Rule. For a comparison of the old and new FAQs, click here.
Although the changes (which include a new FAQ H.16) may appear substantial, they mostly reaffirm the FTC’s longstanding position that the agency’s list of approved verifiable parental consent mechanisms is not exhaustive and that companies can implement different methods as long as they meet the statutory standard of amounting to a “reasonable effort (taking into consideration available technology) . . . to ensure that a parent of a child receives notice of the operator’s personal information collection, use, and disclosure practices, and authorizes the collection, use, and disclosure, as applicable, of personal information and the subsequent use of that information before that information is collected from that child.” 15 U.S.C. § 6501(9).
Specifically, the revisions:
- Confirm that a credit or debit card need not be charged to obtain parental consent if the collection of the card number is combined with “other safeguards.” In the revised COPPA Rule, the FTC reaffirmed its informal policy of requiring that, under the approved verifiable parental consent method for credit cards, the credit or debit card be charged so that the parent has a record of the transaction through the monthly credit card statement. This policy previously had been embodied in the informal COPPA FAQs. The update to COPPA FAQ H.5 does not change the FTC’s position that the collection of a credit or debit card number alone is insufficient under COPPA unless the credit card is charged. But it clarifies that the collection of a credit card number in connection with a transaction is not the only way in which credit or debit cards can be used to obtain verifiable parental consent. While there are a variety of other safeguards that should meet the statutory verifiable parental consent standard, the FTC staff lists as one option “supplement[ing] the request for credit card information with special questions to which only parents would know the answer and find[ing] supplemental ways to contact the parent.”
- Reiterate that a mobile app developer can rely on an app store to obtain parental consent on its behalf. The new COPPA FAQ retains the staff’s prior guidance that the entry of a parent’s app store account number or password is not itself sufficient to meet the verifiable parental consent standard, but that a parent’s app store account can be used as a COPPA-compliant parental consent method when coupled with other indicia of reliability, provided the method meets COPPA’s other requirements (such as the direct notice requirement). The revisions make it clearer that, in such circumstances, a third party (i.e., the app store) obtains consent on the mobile app developer’s behalf.
- Reiterate that third-party platforms, such as app stores, can develop “multiple-operator” parental consent solutions for the applications that run on top of the platform, while clarifying that such offerings do not expose platforms to legal liability under COPPA. In its revised COPPA Rule, the FTC declined to add “platform” or “multiple-operator” methods to the list of approved parental consent methods, but spoke favorably of these types of common consent mechanisms and concluded that “nothing forecloses operators from using a common consent mechanism so long as it meets the Rule’s basic notice and consent requirements.” 78 Fed. Reg. 3972, 3990 (2013). The revised COPPA Rule also made clear that “marketplace platforms” do not become subject to COPPA solely by enabling app developers to offer child-directed apps on the platform. Id. at 3976. New COPPA FAQ H.16 clarifies that, similarly, third-party platforms will not be exposed to legal liability under COPPA solely for developing and offering “platform” or “multiple-operator” parental consent solutions.
By: Nora Diamond
The Federal Trade Commission (“FTC”) brought suit last week against Amazon.com for allegedly billing unauthorized in-app charges in connection with children’s apps. The FTC alleges that, by failing to require the account holder to enter a password before allowing a charge, Amazon unfairly billed parents for millions of dollars in unauthorized purchases.
Although Amazon changed its system in June 2014 to require a password for any in-app purchase, it previously employed a variety of approaches to in-app purchases. Prior to March 2012, there was no password requirement for in-app purchases; Amazon then responded to customer complaints by requiring a password for individual charges over $20. Customers complained again, however, and in early 2013 Amazon instituted a system that requested initial parental approval of a purchase but then allowed additional purchases without approval during an undisclosed window lasting between 15 minutes and an hour.
Under Section 5 of the FTC Act, the FTC can investigate and challenge unfair practices. To show that Amazon’s billing practices were unfair, the FTC must prove that they caused, or were likely to cause, substantial injury that consumers could not reasonably avoid and that was not outweighed by countervailing benefits to consumers or competition.
As we previously reported, the FTC brought a similar complaint against Apple in January. Unlike Amazon, Apple settled and signed a consent order with the FTC. The Apple system, like Amazon’s most recent system, required a password initially but then allowed additional purchases without parental consent during a 15-minute window that followed. Apple agreed to pay $32.5 million; payments are to be made first to customers whose accounts were charged for unauthorized purchases, with any remaining amount paid to the FTC. The FTC is seeking a similar resolution with Amazon.
On 2 July 2014, the European Commission issued a Communication titled “Towards a thriving data-driven economy”, which describes the features of such an economy and sets out some operational conclusions. The Communication responds to the European Council’s conclusions of October 2013, which called for EU action to provide the right framework conditions for a single market for big data and cloud computing.
The Communication follows in the wake of the White House’s comprehensive review of big-data and privacy issues and the parallel report of the President’s Council of Advisors on Science and Technology (“PCAST”) in May (see here) as well as the European Data Protection Supervisor’s preliminary opinion on Privacy and Competitiveness in the age of Big Data (see here) published in April.
The European Commission recognizes that “the European digital economy has been slow in embracing the data revolution compared to the USA” and has identified various obstacles as reasons, including:
- a lack of appropriate funding for research and innovation;
- a shortage of experts;
- the complexity of the legal environment;
- the concerns and reduced trust in the digital economy among individuals and organisations; and
- data location requirements limiting the cross-border flow of information and creating a barrier to a single market for cloud computing and big data.
Aware of the many opportunities that big data creates, and recognizing that “data is at the centre of the future knowledge and economy”, the Commission proposes an action plan to bring about the data-driven EU economy of the future.
From a data protection law perspective the following points in particular are noteworthy:
- The European Commission emphasizes that big data processing must comply with applicable data protection rules when it involves personal data. At the same time, the Commission signals its intention that data protection and network security rules be effective, and that the legal framework and accompanying policies be data-friendly. The European Commission calls for the legislative processes on the reform of EU data protection rules and on network and information security to be rapidly concluded, hoping that this will foster trust and confidence and increase legal certainty. The reform should be complemented by adequate guidance on issues such as data anonymization and pseudonymization, data minimization, personal data risk analysis, and tools and initiatives enhancing consumer awareness.
- The Communication addresses the actions taken under the European Cloud Strategy, including the work on trusted cloud computing, standards, certification and fair contract terms and conditions for cloud users. These are described in more detail in the Report on the Implementation of the Communication ‘Unleashing the Potential of Cloud Computing in Europe’, a Commission Staff Working Document, which was also published on July 2 and can be downloaded here. The Commission will also launch a consultation on “personal data spaces” (basically user-controlled cloud-based technologies for storage and use of personal data) and study barriers created through data location requirements.
- The Commission envisages funding for a series of projects related to the Internet of Things (essentially data gathered through smart connected objects) as well as support for R&I for privacy-enhancing ‘by design’ technical solutions, such as tools to assist users in selecting appropriate data sharing policies or reducing personal data breaches.
- The Commission also wants to explore the security risks relating to big data and intends to propose risk management and mitigation measures, including guidelines on good practices.
The Commission will further consult with Parliament, Council, Member States and relevant stakeholders to draw up a more detailed action plan.
A New York state trial court last week rejected a publicly traded company’s request to obtain the identity of an individual who anonymously wrote negative comments about the company on an online financial bulletin board.
In February 2014, an individual with the pseudonym “Pump Terminator” posted an article about Nanoviricides, Inc. on www.seekingalpha.com, a financial website. The article was titled “NanoViricides: House of Cards with -80% Downside, ‘Strong Sell’ Recommendation,” and directly below the article, the author wrote “Disclosure: I am short NNVC. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it. I have no business relationship with any company whose stock is mentioned in this article.”
The article provides a lengthy critique of the company’s business practices, calling it “the worst US reverse merger we have ever seen” and comparing it to “the China RTO frauds.” The article links to a shareholder complaint filed against the company’s CEO and president.
Nanoviricides filed a pre-action discovery proceeding, seeking the disclosure of the identity of “Pump Terminator” so that it could bring a libel claim against the author. Among the statements that the company alleged are defamatory:
- With multiple questionable stock promoters NNVC has pumped the stock +330% while heavily diluting shareholders and stealing NNVC out from under public investors as insiders siphoned off millions of dollars.
- Anil hires his wife as CFO while Auditor and Internal Financial Controls are failing.
On June 26, Judge Cynthia S. Kern of the Supreme Court of New York denied the discovery request, concluding that the company failed to demonstrate that it has a meritorious cause of action for defamation. Federal and New York courts have long held that statements of pure opinion — rather than factual assertions — cannot be the basis for a defamation claim. Judge Kern concluded that, when considered as a whole, the article conveys the author’s opinion.
Important to her conclusion were both the disclaimer that the article is the author’s opinion, and phrases such as “we believe” or “it seems to us” that appear in the article more than 15 times. Moreover, Judge Kern concluded, the financial news website’s tagline, “Read. Decide. Invest” clearly gives the impression “that the website is designed to give people a place to express their opinions and for the reader to then form his or her own assumptions based on the posted articles.”
Particularly noteworthy is Judge Kern’s finding that New York courts should protect against the use of subpoenas that stifle the free exchange of ideas online. “Clearly the article herein at issue does not cast the petitioner in a positive light and the court can sympathize with the filing of the instant petition,” Judge Kern wrote. “However, it is paramount in an open and free society that we protect the anonymity of those whose ‘publication is prompted by the desire to question, challenge and criticize the practices of those in power without incurring adverse consequences.’”
Discovery requests for the identities of anonymous Internet commenters often arise in defamation cases that involve negative comments that were posted on websites and online bulletin boards. Judge Kern’s decision is noteworthy for its fairly broad interpretation of what constitutes “opinions” that are protected from defamation claims and discovery.
Last Friday, Florida’s governor signed into law the Florida Information Protection Act of 2014 (“FIPA”), a bill repealing Florida’s existing data security breach notice law and replacing it with what will be one of the nation’s most stringent breach notice laws. This post summarizes the key aspects of the new law, which becomes effective July 1, 2014.
The Definition of “Personal Information” Now Includes Online Account Credentials
FIPA broadly defines the type of information that, if breached, could require a company to provide notice to consumers and (as discussed below) regulators (“personal information”). Going beyond the narrow scope of information protected by most state data breach laws, FIPA’s definition of personal information includes “a user name or e-mail address, in combination with a password or security question and answer that would permit access to an online account.” (California’s breach notice law also defines covered information to include online account credentials.)
Notice to Individuals Must Now Be Provided Within 30 Days of the Incident
The new law states that any required notices to individuals generally must be provided “no later than 30 days after the determination of a breach or reason to believe a breach occurred.” This represents a shortening of Florida’s existing 45-day notice requirement.
U.S. District Court Judge Esther Salas ruled on Monday that the U.S. Court of Appeals for the Third Circuit can review her conclusion that Section 5 of the Federal Trade Commission Act provides the FTC with authority to bring actions arising from companies’ data security violations.
In April of this year, Judge Salas denied Wyndham Hotels and Resorts’ motion to dismiss an FTC lawsuit alleging that Wyndham violated the FTC Act’s prohibition against “unfair practices” by failing to provide reasonable security for its customers’ personal information. Although her order is not a final ruling and is not binding on any other judge, it received considerable attention because it was the first time a court had weighed in on the scope of the FTC’s authority over data security and privacy matters.
Denials of motions to dismiss ordinarily are not immediately appealable, absent permission from both the district court and the court of appeals. In her ruling on Monday, Judge Salas granted Wyndham’s motion to appeal her order to the Third Circuit, reasoning that there are substantial grounds for difference of opinion on two issues: (1) whether the FTC can bring a Section 5 unfairness claim involving data security; and (2) whether the FTC must formally promulgate regulations before bringing such a claim.
If the Third Circuit grants Wyndham’s Petition to Appeal, the appellate court will review the legal conclusions in Judge Salas’s April order. If the Third Circuit denies the petition, the case will proceed in the district court. Even if the Third Circuit denies this petition for review, it ultimately may hear an appeal of the outcome of summary judgment proceedings or a trial in this case.