On Episode 17 of Covington’s Inside Privacy Audiocast, Dan Cooper, Sam Choi, Danielle Kehl and Nick Shepherd discuss developments in children’s privacy, looking at relevant legislation, standards, and guidelines in the UK, the EU, and the U.S., and zooming in on child-specific topics such as age thresholds and age verification.
Sam Jungyun Choi is an associate in the technology regulatory group in the London office. Her practice focuses on European data protection law and new policies and legislation relating to innovative technologies such as artificial intelligence, online platforms, digital health products and autonomous vehicles. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.
Sam advises leading technology, software and life sciences companies on a wide range of matters relating to data protection and cybersecurity issues. Her work in this area has involved advising global companies on compliance with European data protection legislation, such as the General Data Protection Regulation (GDPR), the UK Data Protection Act, the ePrivacy Directive, and related EU and global legislation. She also advises on a variety of policy developments in Europe, including providing strategic advice on EU and national initiatives relating to artificial intelligence, data sharing, digital health, and online platforms.
There has been a substantial increase in Internet use across the African continent, aided by ongoing investment in local digital infrastructure, falling costs, and improved user access. As a result, individuals and private and public entities can access, collect, process and/or disseminate personal data more easily than before. Recognizing the need to protect their citizens’ personal data and to regulate how public and private entities use it, a number of African countries have enacted comprehensive data protection laws and established data protection authorities tasked with enforcing them.
While countries like Kenya, Rwanda and South Africa now have comprehensive data protection laws that share some elements of the European Union’s General Data Protection Regulation (“GDPR”), many of the laws proposed elsewhere on the continent contain rules that differ from those of other African countries. Consequently, technology companies conducting business in Africa will need to keep abreast of the evolving data protection landscape on the continent.
On 6 October 2021, the European Parliament (“EP”) voted in favor of a resolution banning the use of facial recognition technology (“FRT”) by law enforcement in public spaces. The resolution forms part of a non-legislative report on the use of artificial intelligence (“AI”) by the police and judicial authorities in criminal matters (“AI Report”) published by the EP’s Committee on Civil Liberties, Justice and Home Affairs (“LIBE”) in July 2021. The AI Report will now be sent to the European Commission, which has three months to either (i) submit, or indicate it will submit, a legislative proposal on the use of AI by the police and judicial authorities as set out in the AI Report; or (ii) if it chooses not to submit a proposal, explain why.
Continue Reading European Parliament Votes in Favor of Banning the Use of Facial Recognition in Law Enforcement
On 22 September 2021, the UK Government published its 10-year strategy on artificial intelligence (“AI”; the “UK AI Strategy”).
The UK AI Strategy has three main pillars: (1) investing and planning for the long-term requirements of the UK’s AI ecosystem; (2) supporting the transition to an AI-enabled economy across all sectors and regions of the UK; and (3) ensuring that the UK gets the national and international governance of AI technologies “right”.
The approach to AI regulation as set out in the UK AI Strategy is largely pro-innovation, in line with the UK Government’s Plan for Digital Regulation published in July 2021.
On 2 September 2021, the transition period for the Children’s code (or Age Appropriate Design Code) published by the UK Information Commissioner’s Office (“ICO”) ended. The Children’s code was first published in September 2020 with a 12-month transition period. In an accompanying blog post, the ICO stated that it will be “proactive in requiring social media platforms, video and music streaming sites and the gaming industry to tell [the ICO] how their services are designed in line with the code.”
Over the summer, the ICO also approved two certification schemes under the UK GDPR. The certification schemes provide organizations with a mechanism to demonstrate their commitment to data protection compliance.
On February 11, 2021, the European Commission launched a public consultation on its initiative to fight child sexual abuse online (the “Initiative”), which aims to impose obligations on online service providers to detect child sexual abuse online and to report it to public authorities. The consultation is part of the data collection activities announced in the Initiative’s inception impact assessment issued in December last year. The consultation runs until April 15, 2021, and the Commission intends to propose the necessary legislation by the end of the second quarter of 2021.
Continue Reading European Commission Launches Consultation on Initiative to Fight Child Sexual Abuse
On January 6, 2021, the UK’s AI Council (an independent government advisory body) published its AI Roadmap (“Roadmap”). In addition to calling for a Public Interest Data Bill to ‘protect against automation and collective harms’, the Roadmap acknowledges the need to counteract public suspicion of AI and makes 16 recommendations, based on three main pillars, to guide the UK Government’s AI strategy.
Continue Reading AI Update: The Future of AI Policy in the UK
On December 23, 2020, the European Commission (the “Commission”) published its inception impact assessment (“Inception Impact Assessment”) of policy options for establishing a European Health Data Space (“EHDS”). The Inception Impact Assessment is open for consultation until February 3, 2021, encouraging “citizens and stakeholders” to “provide views on the Commission’s understanding of the current situation, problem and possible solutions”.
Continue Reading European Commission Conducts Open Consultation on the European Health Data Space Initiative
On December 18, 2020, the Irish Data Protection Commission (“DPC”) published its draft Fundamentals for a Child-Oriented Approach to Data Processing (the “Fundamentals”). The Fundamentals introduce child-specific data protection principles and measures, which are designed to protect children against data processing risks when they access services, both online and off-line. The DPC notes that all organizations collecting and processing children’s data should comply with the Fundamentals. The Fundamentals are open for public consultation until March 31, 2021.
Continue Reading Irish DPC publishes draft Fundamentals for a Child-Oriented Approach to Data Processing
On 17 December 2020, the Council of Europe’s Ad hoc Committee on Artificial Intelligence (CAHAI) published a Feasibility Study (the “Study”) on Artificial Intelligence (AI) legal standards. The Study examines the feasibility and potential elements of a legal framework for the development and deployment of AI, based on the Council of Europe’s human rights standards. Its main conclusion is that current regulations do not provide the legal certainty, trust, and level playing field needed to guide the development of AI. Accordingly, it proposes a new legal framework for AI consisting of both binding and non-binding Council of Europe instruments.
The Study recognizes the major opportunities of AI systems to promote societal development and human rights. Alongside these opportunities, it also identifies the risks that AI could endanger rights protected by the European Convention on Human Rights (ECHR), as well as democracy and the rule of law. Examples of the risks to human rights cited in the Study include AI systems that undermine the right to equality and non-discrimination by perpetuating biases and stereotypes (e.g., in employment), and AI-driven surveillance and tracking applications that jeopardise individuals’ right to freedom of assembly and expression.