
Mark Young, an experienced tech regulatory lawyer, advises major global companies on their most challenging data privacy compliance matters and investigations.

Mark also leads on EMEA cybersecurity matters at the firm. He advises on evolving cyber-related regulations, and helps clients respond to incidents, including personal data breaches, IP and trade secret theft, ransomware, insider threats, and state-sponsored attacks.

Mark has been recognized in Chambers UK for several years as "a trusted adviser - practical, results-oriented and an expert in the field;" "fast, thorough and responsive;" "extremely pragmatic in advice on risk;" and having "great insight into the regulators."

Drawing on over 15 years of experience advising global companies on a variety of tech regulatory matters, Mark specializes in:

  • Advising on potential exposure under GDPR and international data privacy laws in relation to innovative products and services that involve cutting-edge technology (e.g., AI, biometric data, and Internet-enabled devices).
  • Providing practical guidance on novel uses of personal data, responding to individuals exercising rights, and data transfers, including advising on Binding Corporate Rules (BCRs) and compliance challenges following Brexit and Schrems II.
  • Helping clients respond to investigations by data protection regulators in the UK, EU, and globally, and advising on potential follow-on litigation risks.
  • GDPR and international data privacy compliance for life sciences companies in relation to:
    • clinical trials and pharmacovigilance;
    • digital health products and services; and
    • marketing programs.
  • International conflict of law issues relating to white collar investigations and data privacy compliance.
  • Cybersecurity issues, including:
    • best practices to protect business-critical information and comply with national and sector-specific regulation;
    • preparing for and responding to cyber-based attacks and internal threats to networks and information, including training for board members;
    • supervising technical investigations; advising on PR, engagement with law enforcement and government agencies, notification obligations and other legal risks; and representing clients before regulators around the world; and
    • advising on emerging regulations, including during the legislative process.
  • Advising clients on risks and potential liabilities in relation to corporate transactions, especially those involving companies that process significant volumes of personal data (e.g., in the adtech, digital identity/anti-fraud, and social network sectors).
  • Providing strategic advice and advocacy on a range of EU technology law reform issues including data privacy, cybersecurity, ecommerce, eID and trust services, and software-related proposals.
  • Representing clients in connection with references to the Court of Justice of the EU.

In addition to releasing the new EU Cybersecurity Strategy before the holidays (see our post here), the Commission published a revised Directive on measures for a high common level of cybersecurity across the Union (“NIS2”) and a Directive on the resilience of critical entities (“Critical Entities Resilience Directive”). In this blog post, we summarize key points relating to NIS2, including more onerous security and incident reporting requirements; the extension of those requirements to companies in the food, pharma, medical device, and chemical sectors, among others; and increased powers for regulators, including the ability to impose multi-million euro fines.

The Commission is seeking feedback on NIS2 and the Critical Entities Resilience Directive, and recently extended its original deadline of early February to March 11, 2021 (responses can be submitted here and here).

Continue Reading Proposed New EU Cyber Rules Introduce More Onerous Requirements and Extend to More Sectors

On December 15, 2020, the Irish Data Protection Commission (“DPC”) fined Twitter International Company (“TIC”) EUR 450,000 (USD 500,000) following a narrow investigation into TIC’s compliance with obligations to (a) notify a personal data breach within 72 hours under Article 33(1) GDPR; and (b) document the facts of the breach under Article 33(5) GDPR. The process to investigate these points took a little under two years, and resulted in a decision of nearly 200 pages.

This is the first time that the DPC has issued a GDPR fine as a lead supervisory authority (“LSA”) after going through the “cooperation” and “consistency” mechanisms, which enable other authorities to raise objections and the EDPB to resolve disagreements. The length of the process and the details of the EDPB’s binding decision suggest that this was a somewhat arduous process. Several authorities raised objections in response to the DPC’s draft decision, regarding the identity of the controller (the Irish entity and/or its U.S. parent), the competence of the DPC to act as LSA, the scope of the investigation, the size of the fine, and other matters. After some back and forth, during which most authorities maintained their objections despite the DPC’s explanations, the DPC referred the matter to the EDPB under the GDPR’s dispute resolution procedure. The EDPB considered the objections and dismissed nearly all of them as not being “relevant and reasoned,” but did require the DPC to reassess the level of the proposed fine.

Process aside, the DPC’s decision contains some interesting points on when a controller is deemed to be “aware” of a personal data breach for the purpose of notifying a breach to a supervisory authority. This may be particularly relevant for companies based in Europe that rely on parent companies in the US and elsewhere to process data on their behalf. The decision also underlines the importance of documenting breaches and what details organizations should include in these internal reports.
Continue Reading Twitter Fine: a View into the Consistency Mechanism, and “Constructive Awareness” of Breaches

On 25 November 2020, the European Commission published a proposal for a Regulation on European Data Governance (“Data Governance Act”).  The proposed Act aims to facilitate data sharing across the EU and between sectors, and is one of the deliverables included in the European Strategy for Data, adopted in February 2020.  (See our previous blog here for a summary of the Commission’s European Strategy for Data.)  The press release accompanying the proposed Act states that more specific proposals on European data spaces are expected to follow in 2021, and will be complemented by a Data Act to foster business-to-business and business-to-government data sharing.

The proposed Data Governance Act sets out rules relating to the following:

  • Conditions for reuse of public sector data that is subject to existing protections, such as commercial confidentiality, intellectual property, or data protection;
  • Obligations on “providers of data sharing services,” defined as entities that provide various types of data intermediary services;
  • Introduction of the concept of “data altruism” and the possibility for organisations to register as a “Data Altruism Organisation recognised in the Union”; and
  • Establishment of a “European Data Innovation Board,” a new formal expert group chaired by the Commission.


Continue Reading The European Commission publishes a proposal for a Regulation on European Data Governance (the Data Governance Act)

On October 1, 2020, the Hamburg Data Protection Authority (“Hamburg DPA”) fined H&M, the Swedish clothing company, over €35 million for illegally surveilling employees at its service center in Nuremberg.  This fine is the largest financial penalty issued by a German DPA to date for a violation of the European General Data Protection Regulation (“GDPR”), and the second highest issued by any DPA in Europe (although other DPAs have announced their intention to issue even larger fines).
Continue Reading H&M Receives Record-Breaking Fine for Employee Surveillance in Violation of the GDPR

On 10 September 2020, the UK Information Commissioner’s Office (“ICO”) published its beta-phase “Accountability Framework” (“Framework”).  The Framework is designed to assist organisations, of any size and across all sectors, in complying with the accountability principle under the GDPR and in meeting the expectations of the ICO.

The Framework will help those within organisations who are responsible for implementing data protection compliance strategies.  The ICO envisages that organisations will use the Framework in conjunction with other relevant guidance and materials available from the ICO.  The ICO emphasises that each organisation must be mindful of its own circumstances when managing data protection risks, and that a “one size fits all” approach should not be adopted.
Continue Reading UK Information Commissioner’s Office Publishes Draft Accountability Framework Tool

On May 4, 2020, the European Data Protection Board (“EDPB”) updated its guidelines on consent under the GDPR.  An initial version of these guidelines was adopted by the Article 29 Working Party prior to the GDPR coming into effect, and was endorsed by the EDPB on May 25, 2018.

Continue Reading Updated EDPB Guidelines on Consent and Implications for Cookies

On 1 April 2020, the UK Supreme Court handed down its ruling in WM Morrison Supermarkets plc v Various Claimants [2020] UKSC 12.  The Court ruled that Morrisons was not vicariously liable for a data breach deliberately perpetrated by an employee.  The judgment is significant in that it overturned the decisions of the two lower courts (the High Court and Court of Appeal) and provides guidance for employers on when they may be held vicariously liable for data breaches and other GDPR violations committed by employees acting as independent controllers in their own right.

Continue Reading UK Supreme Court Rules That Supermarket Is Not Vicariously Liable For Data Breach Committed By Employee

The UK’s Information Commissioner’s Office (“ICO”) has issued and is consulting on draft guidance about explaining decisions made by AI.  The ICO prepared the guidance with The Alan Turing Institute, which is the UK’s national institute for data science and artificial intelligence.  Among other things, the guidance sets out key principles to follow and steps to take when explaining AI-assisted decisions — including in relation to different types of AI algorithms — and the policies and procedures that organizations should consider putting in place.

The draft guidance builds upon the ICO’s previous work in this area, including its AI Auditing Framework, June 2019 Project ExplAIN interim report, and September 2017 paper ‘Big data, artificial intelligence, machine learning and data protection’.  (Previous blog posts that track this issue are available here.)  Elements of the new draft guidance touch on points that go beyond narrow GDPR requirements, such as AI ethics (see, in particular, the recommendation to provide explanations of the fairness or societal impacts of AI systems).  Other sections of the guidance are quite technical; for example, the ICO provides its own analysis of the possible uses and interpretability of eleven specific types of AI algorithms.

Organizations that develop, test, or deploy AI decision-making systems should review the draft guidance and consider responding to the consultation, which is open until January 24, 2020.  A final version is expected to be published later in 2020.

Continue Reading UK ICO and The Alan Turing Institute Issue Draft Guidance on Explaining Decisions Made by AI

On July 25, 2019, the UK’s Information Commissioner’s Office (“ICO”) published a blog on the trade-offs between different data protection principles when using Artificial Intelligence (“AI”).  The ICO recognizes that AI systems must comply with several data protection principles and requirements, which at times may pull organizations in different directions.  The blog identifies notable trade-offs that may arise, provides some practical tips for resolving these trade-offs, and offers worked examples on visualizing and mathematically minimizing trade-offs.

The ICO invites organizations with experience of considering these complex issues to provide their views.  This recent blog post on trade-offs is part of its on-going Call for Input on developing a new framework for auditing AI.  See also our earlier blog on the ICO’s call for input on bias and discrimination in AI systems here.

Continue Reading ICO publishes blog post on AI and trade-offs between data protection principles

Back in 2013, we published a blog post entitled, “European Regulators and the Eternal Cookie Debate” about what constitutes “consent” for purposes of complying with the EU’s cookie rules.  The debate continues…  Yesterday, the ICO published new guidance on the use of cookies and a related “myth-busting” blog post.  Some of the