At the International Conference of Data Protection and Privacy Commissioners in Mauritius this week, representatives of the private sector and academia joined together to discuss the positive changes and attendant risks that the internet of things and big data may bring to daily life. Attendees memorialized the observations and conclusions of their discussions in a Declaration on the Internet of Things and a Resolution on Big Data. The documents are not, of course, binding. But the fact that the Declaration and Resolution drew the consensus of a large gathering of international data protection regulators renders them relevant indicators of the direction of data privacy policies and trends.
The Mauritius Declaration on the Internet of Things and the Resolution on Big Data set out principles and recommendations designed to help players in the connected-device and big-data ecosystems reduce the risks associated with the collection and use of data. The Declaration and Resolution both begin by acknowledging that connected devices and big data have the capacity to make our lives easier, including by providing benefits such as predicting the spread of epidemics and combating pollution. But the documents also acknowledge that the internet of things and big data raise “important concerns with regard to the privacy of the individuals and civil rights, protections against discriminatory outcomes and infringements of the right to equal treatment.” Against this backdrop, the Declaration and Resolution make the following key observations and recommendations:
Mauritius Declaration on the Internet of Things
- Self-determination is an inalienable right for all human beings.
- Data obtained from connected devices is “high in quantity, quality and sensitivity” and, as such, “should be regarded and treated as personal data.”
- Those offering connected devices “should be clear about what data they collect, for what purposes and how long this data is retained.”
- Privacy by design should become a key selling point of innovative technologies.
- Data should be processed locally, on the connected device itself. Where it is not possible to process data locally, companies should ensure end-to-end encryption.
- Data protection and privacy authorities should seek appropriate enforcement action when the law has been breached.
- All actors in the internet of things ecosystem “should engage in a strong, active and constructive debate” on the implications of the internet of things and the choices to be made.
Mauritius Resolution on Big Data
- Implement privacy by design.
- Be transparent about what data is collected, how data is processed, for what purposes data will be used, and whether data will be distributed to third parties.
- Define the purpose of collection at the time of collection and, at all times, limit use of the data to the defined purpose.
- Obtain consent.
- Collect and store only the amount of data necessary for the intended lawful purpose.
- Allow individuals access to data maintained about them, information on the source of the data, key inputs into their profile, and any algorithms used to develop their profile.
- Allow individuals to correct and control their information.
- Conduct a privacy impact assessment.
- Consider data anonymization.
- Limit and carefully control access to personal data.
- Conduct regular reviews to verify whether results from profiling are “responsible, fair and ethical and compatible with and proportionate to the purpose for which the profiles are being used.”
- Allow for manual assessments of any algorithmic profiling outcomes with “significant effects to individuals.”
The concerns discussed at the Conference and reflected in the principles outlined in the Mauritius Declaration and Resolution echo those of the White House’s May 2014 Big Data Report, which similarly focused on the potential use of big data to discriminate against certain groups. Among other things, the Report cautioned that increased personalization allows for “discrimination in pricing, services, and opportunities,” that “serving up different kinds of information to different groups, ha[s] the potential to cause real harm to individuals,” and that categorization “effectively prevent[s] [people] from encountering information that challenges their biases or assumptions,” thereby cementing and potentially exacerbating existing ideological or cultural segregation.
Thus, while the sentiments underlying the documents, and voiced by the data protection officials at the Conference, are by no means new, their articulation in the Mauritius Declaration and Resolution reinforces their importance. The adoption of the Declaration and Resolution—beyond the practicalities of their concrete recommendations—serves as a reminder of the importance of ethics in tackling issues of data privacy, and of the potential risks of a world of big data-driven digital predestination.