On the heels of the Federal Trade Commission’s (“FTC”) third annual “PrivacyCon,” the Future of Privacy Forum hosted its eighth annual “Privacy Papers for Policymakers” event on Capitol Hill—a gathering in which academics present their original scholarly works on privacy-related topics to D.C. policy wonks who may have a hand in shaping laws and regulations at the local, federal, and international levels. The goal of the event is, in part, to foster academic-industry collaboration in addressing the world’s current and emerging privacy issues.

FTC Commissioner Terrell McSweeny kicked off the program with a reminder of the unique challenge that has always faced the world of tech policy: the rapid acceleration of the Digital Age and the need for consumer rights to catch up. Commissioner McSweeny opined that the challenge may require some solutions that go beyond privacy—such as individual control over personal data, data portability, and governance by design—and pointed out several ways in which the honored papers may help spur the evolution of existing privacy frameworks:

  • “Artificial Intelligence Policy: A Primer and Roadmap” — Former Covington attorney Ryan Calo’s paper aims to advance current debates surrounding artificial intelligence (“AI”) by pointing out something that many policymakers may not realize: concerns surrounding AI, the ethics of algorithms, and the possibility that robots could steal all of our jobs have been around for at least half a century (in fact, as his paper mentions, in 1960 President John F. Kennedy was called upon to “regulate automation” through a conference on robots and labor). By putting the current debate into context, the paper provides a roadmap to the major policy questions that AI raises and the many complicating factors that should be considered as AI takes on an ever-increasing presence in consumers’ daily lives.
  • “The Public Information Fallacy” — Unlike the many scholarly works that have already debated the plethora of definitions of “sensitive” or “private” information, Woodrow Hartzog’s paper asks what should be considered “public.” In researching his paper, Hartzog discovered that there are surprisingly few definitions of the term “public information,” despite the fact that the term is often used as a carte blanche for government surveillance and the personal data practices of private companies. Given the consequences that come with the term, the paper proposes that before labeling something as “public” we should first consider the values we want to serve, the outcomes we want to achieve, and the unspoken expectations that people realistically have when they “release” their information.
  • “The Undue Influence of Surveillance Technology Companies on Policing” — Elizabeth Joh’s paper discusses the ways in which private companies that produce surveillance technologies (such as “StingRays,” body cameras, or big data software) exert an undue, secretive influence on the traditionally public policy decisions made by police departments. This influence may have enormous consequences for civil liberties, yet very little oversight and control is exercised over these companies, which typically act out of their own private self-interest. The paper argues that although some of these technologies may produce fairer outcomes, the private companies that supply them should be part of the calculus when exercising oversight over police practices.
  • “Health Information Equity” — Many would not dispute the numerous health benefits that may result from collecting and using health information to discover new treatments and overall trends. However, Craig Konnoth’s paper highlights the ways in which data featured in such studies disproportionately belongs to low-income, unwell, and elderly people, and as a result such people bear a disproportionate amount of the security and autonomy harms stemming from health data being used in the public sphere. The paper argues that secondary health research must take into account whether data practices are promoting greater equity in burden distribution, and proposes a variety of ways to achieve this goal.
  • “Designing Against Discrimination in Online Markets” — Karen Levy and Solon Barocas emphasize that with platforms’ great power comes great responsibility: platform providers must consider the moral and ethical impact of their products, including the ways their design choices can enable or mitigate discrimination among users. The paper provides a helpful list of ten categories of design and policy choices that are particularly susceptible to facilitating discrimination.
  • “Transatlantic Data Privacy Law” — With the May 2018 General Data Protection Regulation (“GDPR”) deadline looming, Paul Schwartz and Karl-Nikolaus Peifer’s discussion of the United States’ and European Union’s divergent approaches to data privacy is timely. The paper points out that the EU traditionally has viewed data privacy as an issue of personal freedom and human rights, whereas the U.S. has approached the subject from a more marketplace-focused discourse. Despite these differences, the divergent data protection regimes will inevitably come to a head given the borderless, “transatlantic” nature of digital data. The paper proposes that new institutions and processes (such as GDPR and Privacy Shield) require the EU and the U.S. to reconcile their divergent approaches, and as a result will create opportunities for harmonization and cooperation.