The International Association of Privacy Professionals hosted its annual Privacy Academy, at which one panel, “Data Brokers Demystified,” specifically focused on regulation of the data-broker industry. The panelists included Janis Kestenbaum from the Federal Trade Commission, Jennifer Glasgow from Acxiom, and Pam Dixon from the World Privacy Forum. Emilio Cividanes from Venable also participated.
Major Conclusions of the FTC Report (Janis Kestenbaum)
- Data brokers operate with a fundamental lack of transparency. They collect extensive information about nearly every US consumer, building profiles composed of billions of data elements.
- Much data collection occurs without consumer awareness and uses a wide variety of online and offline sources, such as social networks, blogs, individual purchases and transactions with retailers, state and federal governments, events requiring registration, and magazine subscriptions.
- The practice of “onboarding,” in which offline data is matched to an online cookie and used to market to consumers online, is increasingly common.
- Some data collected is sensitive, but even non-sensitive data is sometimes used to make “sensitive inferences” about (for example) health status, income, education, ethnicity, religion, and political ideology. Consumers are often segmented into “clusters” based on these inferred characteristics.
- For regulators, some of these clusters are concerning. For example, one cluster is entitled “Urban Scramble” and contains high concentrations of low-income ethnic minorities.
- Congress should create a centralized portal where consumers can go online to reach individual data brokers’ websites, opt out, and access and correct their information. For consumer-facing entities, like retailers, consumers must be given some kind of choice before data is sold to a data broker, and when that data is sensitive, the choice should take the form of an opt in.
Concerns of Consumer Advocates (Pam Dixon)
- Statistical Parity and Fairness. The World Privacy Forum published a report in April 2014, “The Scoring of America: How Secret Consumer Scores Threaten Your Privacy and Your Future,” which examined examples of predictive analytics as they apply to consumers and found hundreds of types of scores, such as consumer-loyalty and health-risk scores. The report illustrated how data has become a commodity that can be bought and sold. Moreover, the price of data continues to drop, and availability is growing, creating a data-rich market. It is important to examine what is being done with data. For example, does race factor into a score? Some scores include ethnic, gender, or marital factors used to make decisions about consumers that would not be allowed in other decision-making contexts, such as mortgage lending.
- Gaps in Regulations. Information originally intended for marketing purposes can also be used to impact individuals’ marketplace opportunities. Medical data is an area of great concern, for example, where lists of multiple diseases are readily available, complete with personally identifiable information. HIPAA doesn’t usually address these issues, so it presents a regulatory gap worth examining. Another example is the Equal Credit Opportunity Act, which is very narrow in its application, creating civil rights gaps. The bottom line is that for big data and statistical analysis, it is very hard to keep race and ethnicity out of the equation, even if measures are taken to take them out. The role of ethical standards, such as the Common Rule, should be explored in addressing these issues.
- Consumer Rights. Consumers should have the right to shape their digital “exhaust” by exercising access, correction, and opt-out rights. Something must be done, for example, about prejudicial data like HIV/AIDS status, which can impact insurance rates. Currently, Acxiom is the only data broker that offers granular control for consumers: beyond a simple opt out, it allows consumers to see their segments and very specific categories of information held, so they can decide whether to keep or delete those categories.
Recommendations from Acxiom (Jennifer Glasgow)
- Self-regulation. As the ability to collect data expands, self-regulation such as the DMA Code becomes more important. If buying or selling data, be sure to follow industry codes.
- Know Data Sources. Create a data-source screening process designed to ensure quality and that data is collected with appropriate notice and choice. Make sure data is accurate enough for its purpose. Identity-verification data, for example, must be highly accurate, whereas for marketing data, accuracy is less critical.
- Classify Data. Sensitive data should be held to a higher standard. Acxiom has different standards for handling different types of data, depending on sensitivity, such as “generally available,” “restricted,” or “prohibited.”
- Assess Privacy Impact. Conduct privacy-impact assessments for all products. If a product uses sensitive data, for instance, devise a way to use it in a limited fashion so as not to preclude economic value while still employing appropriate standards.
- Data-use Restrictions. Make sure that data is used only for the purposes intended and that it is not passed on without controls. Only provide data actually needed by a downstream user; Acxiom receives from sources a lot of data that it doesn’t need, doesn’t want to use, and ultimately throws away. Make sure to pass on data-use restrictions by contract to vetted, legitimate clients.
- AboutTheData.com. There currently are a lot of incorrect assumptions and rhetoric about what data brokers do, so make clear what you do and don’t do. Of the 750,000 visitors to AboutTheData.com so far, about 10% actually changed their data, and 2% chose to opt out altogether. Transparency is critically important to the industry, and the hope is that other companies will follow suit.