On October 16, 2024, the New York Department of Financial Services (“NYDFS”) issued an industry letter (the “Guidance”) highlighting the cybersecurity risks arising from the use of artificial intelligence (“AI”) and providing strategies to address these risks.  While the Guidance “does not impose any new requirements,” it clarifies how Covered Entities should address AI-related risks as part of NYDFS’s landmark cybersecurity regulation, codified at 23 NYCRR Part 500 (“Cybersecurity Regulation”).  The Cybersecurity Regulation, as revised in November 2023, requires Covered Entities to implement certain detailed cybersecurity controls, including governance and board oversight requirements.  Covered Entities subject to the Cybersecurity Regulation should pay close attention to the new Guidance not only if they are using or planning to use AI, but also if they could be subject to any of the AI-related risks or attacks described below.

AI-Related Risks:  The Guidance notes that threat actors have a “lower barrier to entry” to conduct cyber attacks as a result of AI and identifies four (non-exhaustive) cybersecurity risks related to the use of AI.  Two concern the use of AI by threat actors against Covered Entities, and two concern Covered Entities’ own use of, or reliance on, AI:

  • AI-Enabled Social Engineering – The Guidance highlights that “AI-enabled social engineering presents one of the most significant threats to the financial services sector.”  For example, the Guidance observes that “threat actors are increasingly using AI to create realistic and interactive audio, video, and text (‘deepfakes’) that allow them to target specific individuals via email (phishing), telephone (vishing), text (SMiShing), videoconferencing, and online postings.”  AI-generated audio, video, and text can be used to convince employees to divulge sensitive information about themselves or their employer, to wire funds to fraudulent accounts, or to circumvent biometric verification technology.
  • AI-Enhanced Cybersecurity Attacks – The Guidance also notes that AI can be used by threat actors to amplify the potency, scale, and speed of existing types of cyberattacks by quickly and efficiently identifying and exploiting security vulnerabilities.
  • Risks Related to Vast Amounts of Non-public Information – Covered Entities might maintain large quantities of non-public information, including biometric data, in connection with their deployment or use of AI.  The Guidance notes that “maintaining non-public information in large quantities poses additional risks for Covered Entities that develop or deploy AI because they need to protect substantially more data, and threat actors have a greater incentive to target these entities in an attempt to extract non-public information for financial gain or other malicious purposes.”
  • Vulnerabilities due to Third-Party, Vendor, and Other Supply Chain Dependencies – Finally, the Guidance flags that acquiring the data needed to power AI tools might require the use of vendors or other third parties, which expands an entity’s supply chain and could introduce potential security vulnerabilities that could be exploited by threat actors.

Controls and Measures: The Guidance notes that the “Cybersecurity Regulation requires Covered Entities to assess risks and implement minimum cybersecurity standards designed to mitigate cybersecurity threats relevant to their businesses – including those posed by AI” (emphasis added).  In other words, the Guidance takes the position that the assessment and management of AI-related cyber risks are already required by the Cybersecurity Regulation.  The Guidance then sets out “examples of controls and measures that, especially when used together, help entities to combat AI-related risks.”  Specifically, the Guidance provides recommendations to Covered Entities on how to address AI-related risks in the context of implementing measures to address existing NYDFS requirements under the Cybersecurity Regulation.

  • Risk Assessments and Risk-Based Programs, Policies, Procedures, and Plans – Covered Entities should consider the risks posed by AI when developing the risk assessments and risk-based programs, policies, procedures, and plans required by the Cybersecurity Regulation.  While the Cybersecurity Regulation already requires annual updates to Risk Assessments, the Guidance notes that these updates must ensure that new risks posed by AI are assessed.  In addition, the Guidance specifies that the incident response, business continuity, and disaster recovery plans required by the Cybersecurity Regulation “should be reasonably designed to address all types of Cybersecurity Events and other disruptions, including those relating to AI.”  Further, the Guidance notes that the “Cybersecurity Regulation requires the Senior Governing Body to have sufficient understanding of cybersecurity risk management, and regularly receive and review management reports about cybersecurity matters,” which should include “reports related to AI.”
  • Third-Party Service Provider and Vendor Management – The Guidance emphasizes that “one of the most important requirements for combatting AI-related risks” is to ensure that all third-party service provider and vendor policies (including those required to comply with the Cybersecurity Regulation) account for the threats posed by the use of AI products and services, require reporting of cybersecurity events related to AI, and consider additional representations and warranties for securing a Covered Entity’s non-public information if a third-party service provider is using AI.
  • Access Controls – Building on the access control requirements in the Cybersecurity Regulation, the Guidance recommends that “Covered Entities should consider using authentication factors that can withstand AI-manipulated deepfakes and other AI-enhanced attacks, by avoiding authentication via SMS text, voice, or video, and using forms of authentication that AI deepfakes cannot impersonate, such as digital-based certificates and physical security keys,” among other steps to defend against AI-related threats.  The Guidance also advises Covered Entities to “consider using an authentication factor that employs technology with liveness detection or texture analysis to verify that a print or other biometric factor comes from a live person.”  Notably, the Guidance recommends, but does not require, that Covered Entities employ “zero trust” principles and, where possible, require authentication to verify the identities of authorized users for all access requests.
  • Cybersecurity Training – As part of the annual cybersecurity training requirements under the Cybersecurity Regulation, the Guidance suggests that the required training should address AI-related topics, such as the risks posed by AI, the procedures adopted by the entity to mitigate these risks, and responding to social engineering attacks that use AI, including the use of deepfakes in phishing attacks.  As part of the social engineering training required under the Cybersecurity Regulation, entities should cover procedures for handling unusual requests, such as urgent money transfers, and the need to verify the legitimacy of requests by telephone, video, or email.  Entities that deploy AI directly (or through third-party service providers) should also train relevant personnel on how to design, develop, and deploy AI systems securely, while personnel using AI-powered applications should be trained on drafting queries to avoid disclosing non-public information.
  • Monitoring – Building on the requirements in the Cybersecurity Regulation to implement certain monitoring processes, the Guidance notes that Covered Entities that use AI-enabled products or services “should also consider monitoring for unusual query behaviors that might indicate an attempt to extract [non-public information] and blocking queries from personnel that might expose [non-public information] to a public AI product or system.” 
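The query-screening control described above can be illustrated with a purely hypothetical sketch.  The Guidance does not prescribe any particular technical mechanism, and a real deployment would rely on an enterprise data loss prevention tool rather than simple pattern matching; the patterns and function names below are assumptions for illustration only.

```python
import re

# Illustrative (and deliberately simplistic) patterns for non-public
# information ("NPI"); a production control would use a vetted DLP engine.
NPI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # U.S. Social Security number format
    re.compile(r"\b\d{13,19}\b"),          # possible payment card or account number
]

def screen_query(query: str) -> bool:
    """Return True if the query may be sent to a public AI tool,
    or False if it appears to contain NPI and should be blocked
    (and, consistent with the Guidance, logged for review)."""
    return not any(p.search(query) for p in NPI_PATTERNS)
```

Blocked queries could also be logged and aggregated, supporting the “unusual query behaviors” monitoring the Guidance describes.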
  • Data Management – The Guidance notes that the Cybersecurity Regulation’s data minimization requirements, which require implementation of procedures to dispose of non-public information that is no longer necessary for business purposes, also apply to non-public information used for AI purposes.  Furthermore, while recent amendments to the Cybersecurity Regulation will require Covered Entities to “maintain and update data inventories,” the Guidance recommends that Covered Entities using AI implement data inventories immediately.  Finally, Covered Entities that use or rely on AI should have controls “in place to prevent threat actors from accessing the vast amounts of data maintained for the accurate functioning of the AI.”

Although AI presents some cybersecurity risks, the Guidance notes that there are also substantial benefits “that can be gained by integrating AI into cybersecurity tools, controls, and strategies.”  The Guidance concludes by noting that “it is vital for Covered Entities to review and reevaluate their cybersecurity programs and controls at regular intervals, as required by Part 500.” 

Micaela McMurrough

Micaela McMurrough serves as co-chair of Covington’s global and multi-disciplinary Technology Group, as co-chair of the Artificial Intelligence and Internet of Things (IoT) initiative. In her practice, she has represented clients in high-stakes antitrust, patent, trade secrets, contract, and securities litigation, and other complex commercial litigation matters, and she regularly represents and advises domestic and international clients on cybersecurity and data privacy issues, including cybersecurity investigations and cyber incident response. Micaela has advised clients on data breaches and other network intrusions, conducted cybersecurity investigations, and advised clients regarding evolving cybersecurity regulations and cybersecurity norms in the context of international law.

In 2016, Micaela was selected as one of thirteen Madison Policy Forum Military-Business Cybersecurity Fellows. She regularly engages with government, military, and business leaders in the cybersecurity industry in an effort to develop national strategies for complex cyber issues and policy challenges. Micaela previously served as a United States Presidential Leadership Scholar, principally responsible for launching a program to familiarize federal judges with various aspects of the U.S. national security structure and national intelligence community.

Prior to her legal career, Micaela served in the Military Intelligence Branch of the United States Army. She served as Intelligence Officer of a 1,200-member maneuver unit conducting combat operations in Afghanistan and was awarded the Bronze Star.

Ashden Fein

Ashden Fein is co-chair of Covington’s Data Privacy and Cybersecurity Practice. He advises clients on cybersecurity and national security matters, including crisis management and incident response, risk management and governance, government and internal investigations, and regulatory compliance. Ashden also serves as lead counsel in criminal, civil, and internal investigations involving cybersecurity, insider risk, and U.S. national security issues.

Ashden regularly counsels clients on preparing for and responding to cyber-based attacks, assessing security controls and practices for the protection of data and systems, developing and implementing cybersecurity risk management and governance programs, and complying with federal and state regulatory requirements. Ashden frequently supports clients as the lead investigator and crisis manager for global cyber and data security incidents, including data breaches involving personal data, advanced persistent threats targeting intellectual property across industries, state-sponsored theft of sensitive U.S. government information, extortion and ransomware, and destructive attacks.

Ashden also assists clients from across industries with leading internal investigations and responding to government inquiries related to U.S. national security and insider risks. He frequently represents government contractors in False Claims Act matters involving cybersecurity and national security. Additionally, he advises aerospace, defense, and intelligence contractors on security compliance under U.S. national security laws and regulations including, among others, the National Industrial Security Program (NISPOM), U.S. government cybersecurity regulations, FedRAMP, and requirements related to supply chain security.

Before joining Covington, Ashden served on active duty in the U.S. Army as a Military Intelligence officer and prosecutor specializing in cybercrime and national security investigations and prosecutions, including serving as the lead trial lawyer in the prosecution of Private Chelsea (Bradley) Manning for the unlawful disclosure of classified information to WikiLeaks. Ashden is a retired U.S. Army officer.

Caleb Skeath

Caleb Skeath helps companies manage their most complex and high‑stakes cybersecurity and data security challenges, combining deep regulatory insight, technical fluency, and practical judgment informed by leading incident response matters.

Caleb Skeath advises in‑house legal and security teams on the full lifecycle of cybersecurity and privacy risk—from governance and preparedness through incident response, regulatory engagement, and follow‑on litigation. A Certified Information Systems Security Professional (CISSP), he is trusted by clients across highly regulated and technology‑driven sectors to provide clear, practical guidance at moments when legal judgment, technical understanding, and business realities must be aligned.

Caleb has deep experience leading and overseeing responses to complex cybersecurity incidents, including ransomware, data theft and extortion, business email compromise, advanced persistent threats and state-sponsored threat actors, insider threats, and inadvertent data loss. He regularly helps in‑house counsel structure and manage investigations under attorney‑client privilege; coordinate with internal IT, information security, and executive stakeholders; and engage with forensic firms, crisis communications providers, insurers, and law enforcement. A central focus of his practice is advising on notification obligations and strategy, including the application of U.S. federal and state data breach notification laws and requirements along with contractual notification obligations, and helping companies make defensible, risk‑informed decisions about timing, scope, and messaging.

In addition to his work responding to cybersecurity incidents, Caleb works closely with clients’ legal, technical, and compliance teams on cybersecurity governance, regulatory compliance, and pre‑incident planning. He has extensive experience drafting and reviewing cybersecurity policies, incident response plans, and vendor contract provisions; supervising cybersecurity assessments under privilege; and advising on training and tabletop exercises designed to prepare organizations for real‑world incidents. His work frequently involves translating evolving regulatory expectations into actionable guidance for in‑house counsel, including in highly regulated sectors such as the financial sector (including compliance with NYDFS cybersecurity regulations, the Computer Security Incident Notification Rule, and GLBA guidelines and guidance) and the pharmaceutical and healthcare sector (including compliance with GxP standards, FDA medical device guidance, and HIPAA).

Caleb’s practice also addresses evolving and emerging areas of cybersecurity and data security law, including advising clients on compliance with the Department of Justice’s Data Security Program, CISA‑related security requirements for restricted transactions, and preparation for new regulatory regimes such as the CCPA cybersecurity audit requirements and federal incident reporting obligations. He regularly counsels clients on how artificial intelligence and connected devices intersect with cybersecurity, privacy, and consumer protection risk, and how to support innovation while managing regulatory exposure.

Caleb also has extensive experience helping clients navigate high-stakes cybersecurity-related inquiries from the Federal Trade Commission, state Attorneys General, and other sector-specific regulators, including incident-specific inquiries as well as broader inquiries related to an entity’s cybersecurity practices and the security of product or service offerings. For companies that have entered into cybersecurity-related settlement agreements with regulators, Caleb has helped guide them through compliance with settlement agreement obligations, including navigating required third-party assessments and strategically responding to cybersecurity incidents that can arise while a company is subject to a settlement agreement. Caleb also routinely works hand-in-hand with colleagues in Covington’s class action litigation, commercial litigation, and insurance recovery practices to prepare for and successfully navigate incident-related disputes that can devolve into litigation.

Matthew Harden

Matthew Harden is a cybersecurity and litigation associate in the firm’s New York office. He advises on a broad range of cybersecurity, data privacy, and national security matters, including cybersecurity incident response, cybersecurity and privacy compliance obligations, internal investigations, and regulatory inquiries. He works with clients across industries, including in the technology, financial services, defense, entertainment and media, life sciences, and healthcare industries.

As part of his cybersecurity practice, Matthew provides strategic advice on cybersecurity and data privacy issues, including cybersecurity investigations, cybersecurity incident response, artificial intelligence, and Internet of Things (IoT). He also assists clients with drafting, designing, and assessing enterprise cybersecurity and information security policies, procedures, and plans.

As part of his litigation and investigations practice, Matthew leverages his cybersecurity experience to advise clients on high-stakes litigation matters and investigations. He also maintains an active pro bono practice focused on veterans’ rights.

Matthew currently serves as a Judge Advocate in the U.S. Coast Guard Reserve.