On October 16, 2024, the New York Department of Financial Services (“NYDFS”) issued an industry letter (the “Guidance”) highlighting the cybersecurity risks arising from the use of artificial intelligence (“AI”) and providing strategies to address these risks.  While the Guidance “does not impose any new requirements,” it clarifies how Covered Entities should address AI-related risks as part of NYDFS’s landmark cybersecurity regulation, codified at 23 NYCRR Part 500 (“Cybersecurity Regulation”).  The Cybersecurity Regulation, as revised in November 2023, requires Covered Entities to implement certain detailed cybersecurity controls, including governance and board oversight requirements.  Covered Entities subject to the Cybersecurity Regulation should pay close attention to the new Guidance not only if they are using or planning to use AI, but also if they could be subject to any of the AI-related risks or attacks described below. 

AI-Related Risks:  The Guidance notes that AI gives threat actors a “lower barrier to entry” for conducting cyberattacks and identifies four (non-exhaustive) cybersecurity risks related to the use of AI: two arising from threat actors’ use of AI against Covered Entities and two arising from Covered Entities’ own use of, or reliance on, AI: 

  • AI-Enabled Social Engineering – The Guidance highlights that “AI-enabled social engineering presents one of the most significant threats to the financial services sector.”  For example, the Guidance observes that “threat actors are increasingly using AI to create realistic and interactive audio, video, and text (‘deepfakes’) that allow them to target specific individuals via email (phishing), telephone (vishing), text (SMiShing), videoconferencing, and online postings.”  AI-generated audio, video, and text can be used to convince employees to divulge sensitive information about themselves or their employer, wire funds to fraudulent accounts, or circumvent biometric verification technology.
  • AI-Enhanced Cybersecurity Attacks – The Guidance also notes that AI can be used by threat actors to amplify the potency, scale, and speed of existing types of cyberattacks by quickly and efficiently identifying and exploiting security vulnerabilities.
  • Risks Related to Vast Amounts of Non-public Information – Covered Entities might maintain large quantities of non-public information, including biometric data, in connection with their deployment or use of AI.  The Guidance notes that “maintaining non-public information in large quantities poses additional risks for Covered Entities that develop or deploy AI because they need to protect substantially more data, and threat actors have a greater incentive to target these entities in an attempt to extract non-public information for financial gain or other malicious purposes.”
  • Vulnerabilities due to Third-Party, Vendor, and Other Supply Chain Dependencies – Finally, the Guidance flags that acquiring the data needed to power AI tools might require the use of vendors or other third parties, which expands an entity’s supply chain and could introduce potential security vulnerabilities that could be exploited by threat actors.

Controls and Measures: The Guidance notes that the “Cybersecurity Regulation requires Covered Entities to assess risks and implement minimum cybersecurity standards designed to mitigate cybersecurity threats relevant to their businesses – including those posed by AI” (emphasis added).  In other words, the Guidance takes the position that assessment and management of cyber risks related to AI are already required by the Cybersecurity Regulation.  The Guidance then sets out “examples of controls and measures that, especially when used together, help entities to combat AI-related risks.”  Specifically, the Guidance provides recommendations to Covered Entities on how to address AI-related risks in the context of implementing measures to address existing NYDFS requirements under the Cybersecurity Regulation.

  • Risk Assessments and Risk-Based Programs, Policies, Procedures, and Plans – Covered Entities should consider the risks posed by AI when developing risk assessments and risk-based programs, policies, procedures, and plans as required in the Cybersecurity Regulation.  While the Cybersecurity Regulation already requires annual updates to Risk Assessments, the Guidance notes that these updates must ensure new risks posed by AI are assessed.  In addition, the Guidance specifies that the incident response, business continuity, and disaster recovery plans required by the Cybersecurity Regulation “should be reasonably designed to address all types of Cybersecurity Events and other disruptions, including those relating to AI.”  Further, the Guidance notes that the “Cybersecurity Regulation requires the Senior Governing Body to have sufficient understanding of cybersecurity risk management, and regularly receive and review management reports about cybersecurity matters,” which should include “reports related to AI.”
  • Third-Party Service Provider and Vendor Management – The Guidance emphasizes that “one of the most important requirements for combatting AI-related risks” is to ensure that all third-party service provider and vendor policies (including those required to comply with the Cybersecurity Regulation) account for the threats faced from the use of AI products and services, require reporting for cybersecurity events related to AI, and consider additional representations and warranties for securing a Covered Entity’s non-public information if a third-party service provider is using AI.  
  • Access Controls – Building on the access control requirements in the Cybersecurity Regulation, the Guidance recommends that “Covered Entities should consider using authentication factors that can withstand AI-manipulated deepfakes and other AI-enhanced attacks, by avoiding authentication via SMS text, voice, or video, and using forms of authentication that AI deepfakes cannot impersonate, such as digital-based certificates and physical security keys,” among other steps to defend against AI-related threats.  The Guidance also advises Covered Entities to “consider using an authentication factor that employs technology with liveness detection or texture analysis to verify that a print or other biometric factor comes from a live person.”  Notably, the Guidance recommends, but does not require, that Covered Entities employ “zero trust” principles and, where possible, require authentication to verify the identities of authorized users for all access requests.
  • Cybersecurity Training – As part of the annual cybersecurity training requirements under the Cybersecurity Regulation, the Guidance suggests that the required training should address AI-related topics, such as the risks posed by AI, procedures adopted by the entity to mitigate these risks, and responding to social engineering attacks using AI, including the use of deepfakes in phishing attacks.  As part of the social engineering training required under the Cybersecurity Regulation, entities should cover procedures for unusual requests, such as urgent money transfers, and the need to verify the legitimacy of requests by telephone, video, or email.  Entities that deploy AI directly (or through third-party service providers) should also train relevant personnel on how to design, develop, and deploy AI systems securely, while personnel using AI-powered applications should be trained on drafting queries to avoid disclosing non-public information.
  • Monitoring – Building on the requirements in the Cybersecurity Regulation to implement certain monitoring processes, the Guidance notes that Covered Entities that use AI-enabled products or services “should also consider monitoring for unusual query behaviors that might indicate an attempt to extract [non-public information] and blocking queries from personnel that might expose [non-public information] to a public AI product or system.” 
  • Data Management – The Guidance notes that the Cybersecurity Regulation’s data minimization requirements, which require implementation of procedures to dispose of non-public information that is no longer necessary for business purposes, also apply to non-public information used for AI purposes.  Furthermore, while recent amendments to the Cybersecurity Regulation will require Covered Entities to “maintain and update data inventories,” the Guidance recommends that Covered Entities using AI should implement data inventories immediately.  Finally, Covered Entities that use or rely on AI should have controls “in place to prevent threat actors from accessing the vast amounts of data maintained for the accurate functioning of the AI.” 
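To make the access-control recommendation concrete, the following is a minimal, hypothetical sketch of how an entity might screen its authentication factors against the Guidance's suggestion to avoid SMS, voice, and video verification in favor of deepfake-resistant factors. The factor names and categories below are illustrative assumptions, not NYDFS-defined terms.

```python
# Illustrative sketch only: factor names and categories are assumptions,
# not terminology from the NYDFS Guidance or the Cybersecurity Regulation.

# Factors the Guidance suggests avoiding, since AI deepfakes can imitate them.
DEEPFAKE_SUSCEPTIBLE = {"sms_otp", "voice_verification", "video_verification"}

# Factors the Guidance cites as resistant to AI impersonation.
DEEPFAKE_RESISTANT = {"digital_certificate", "physical_security_key"}

def review_auth_factors(factors: list[str]) -> dict[str, list[str]]:
    """Flag factors the Guidance suggests replacing; list those it endorses."""
    return {
        "flagged": [f for f in factors if f in DEEPFAKE_SUSCEPTIBLE],
        "acceptable": [f for f in factors if f in DEEPFAKE_RESISTANT],
    }

report = review_auth_factors(["sms_otp", "physical_security_key"])
```

A review like this would simply surface which login flows still rely on factors the Guidance flags; the substantive fix (deploying certificates or hardware keys) happens outside the code.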
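The monitoring bullet above describes blocking queries that might expose non-public information to a public AI product. A minimal sketch of that control, assuming a simple pattern-based screen, might look like the following; the patterns are illustrative, and a real deployment would use the entity's own data-classification rules rather than these stand-ins.

```python
import re

# Hypothetical query screen for a public AI tool: block prompts that appear
# to contain non-public information before they leave the organization.
# These patterns are illustrative assumptions, not a complete NPI taxonomy.
NPI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # SSN-like identifier
    re.compile(r"\b\d{13,16}\b"),                   # payment-card-like number
    re.compile(r"account\s*number", re.IGNORECASE), # explicit NPI reference
]

def screen_query(query: str) -> bool:
    """Return True if the query may be sent; False if it should be blocked."""
    return not any(p.search(query) for p in NPI_PATTERNS)
```

Logging blocked queries would also support the Guidance's related suggestion to watch for unusual query behaviors that might indicate an extraction attempt.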
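The data-management bullet ties the Regulation's data-minimization requirement to AI: dispose of non-public information once it is no longer needed for a business purpose. A minimal sketch of such a retention sweep, assuming a simple record inventory and an illustrative one-year retention window (the field names and window are this sketch's assumptions, not values from the Regulation), could look like this:

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention window; the Regulation does not prescribe a period.
RETENTION = timedelta(days=365)

def sweep(inventory: list[dict], now: datetime) -> list[dict]:
    """Keep records still in active use or inside their retention window."""
    return [
        r for r in inventory
        if r["in_use"] or now - r["last_needed"] <= RETENTION
    ]

now = datetime(2024, 10, 16, tzinfo=timezone.utc)
inventory = [
    {"id": "a", "in_use": True,  "last_needed": now - timedelta(days=900)},
    {"id": "b", "in_use": False, "last_needed": now - timedelta(days=30)},
    {"id": "c", "in_use": False, "last_needed": now - timedelta(days=900)},
]
kept = sweep(inventory, now)  # record "c" is disposed of
```

The same inventory structure would also serve the amended Regulation's forthcoming data-inventory requirement, which the Guidance recommends implementing immediately for entities using AI.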

Although AI presents some cybersecurity risks, the Guidance notes that there are also substantial benefits “that can be gained by integrating AI into cybersecurity tools, controls, and strategies.”  The Guidance concludes by noting that “it is vital for Covered Entities to review and reevaluate their cybersecurity programs and controls at regular intervals, as required by Part 500.” 

Micaela McMurrough

Micaela McMurrough serves as co-chair of Covington’s global and multi-disciplinary Technology Group, as co-chair of the Artificial Intelligence and Internet of Things (IoT) initiative. In her practice, she has represented clients in high-stakes antitrust, patent, trade secrets, contract, and securities litigation, and other complex commercial litigation matters, and she regularly represents and advises domestic and international clients on cybersecurity and data privacy issues, including cybersecurity investigations and cyber incident response. Micaela has advised clients on data breaches and other network intrusions, conducted cybersecurity investigations, and advised clients regarding evolving cybersecurity regulations and cybersecurity norms in the context of international law.

In 2016, Micaela was selected as one of thirteen Madison Policy Forum Military-Business Cybersecurity Fellows. She regularly engages with government, military, and business leaders in the cybersecurity industry in an effort to develop national strategies for complex cyber issues and policy challenges. Micaela previously served as a United States Presidential Leadership Scholar, principally responsible for launching a program to familiarize federal judges with various aspects of the U.S. national security structure and national intelligence community.

Prior to her legal career, Micaela served in the Military Intelligence Branch of the United States Army. She served as Intelligence Officer of a 1,200-member maneuver unit conducting combat operations in Afghanistan and was awarded the Bronze Star.

Ashden Fein

Ashden Fein is a vice chair of the firm’s global Cybersecurity practice. He advises clients on cybersecurity and national security matters, including crisis management and incident response, risk management and governance, government and internal investigations, and regulatory compliance.

For cybersecurity matters, Ashden counsels clients on preparing for and responding to cyber-based attacks, assessing security controls and practices for the protection of data and systems, developing and implementing cybersecurity risk management and governance programs, and complying with federal and state regulatory requirements. Ashden frequently supports clients as the lead investigator and crisis manager for global cyber and data security incidents, including data breaches involving personal data, advanced persistent threats targeting intellectual property across industries, state-sponsored theft of sensitive U.S. government information, extortion and ransomware, and destructive attacks.

Additionally, Ashden assists clients from across industries with leading internal investigations and responding to government inquiries related to U.S. national security and insider risks. He also advises aerospace, defense, and intelligence contractors on security compliance under U.S. national security laws and regulations including, among others, the National Industrial Security Program (NISPOM), U.S. government cybersecurity regulations, FedRAMP, and requirements related to supply chain security.

Before joining Covington, Ashden served on active duty in the U.S. Army as a Military Intelligence officer and prosecutor specializing in cybercrime and national security investigations and prosecutions, including serving as the lead trial lawyer in the prosecution of Private Chelsea (Bradley) Manning for the unlawful disclosure of classified information to WikiLeaks.

Ashden currently serves as a Judge Advocate in the U.S. Army Reserve.

Caleb Skeath

Caleb Skeath advises clients on a broad range of cybersecurity and privacy issues, including cybersecurity incident response, cybersecurity and privacy compliance obligations, internal investigations, regulatory inquiries, and defending against class-action litigation. Caleb holds a Certified Information Systems Security Professional (CISSP) certification.

Caleb specializes in assisting clients in responding to a wide variety of cybersecurity incidents, ranging from advanced persistent threats to theft or misuse of personal information or attacks utilizing destructive malware. Such assistance may include protecting the response to, and investigation of an incident under the attorney-client privilege, supervising response or investigation activities and interfacing with IT or information security personnel, and advising on engagement with internal stakeholders, vendors, and other third parties to maximize privilege protections, including the negotiation of appropriate contractual terms. Caleb has also advised numerous clients on assessing post-incident notification obligations under applicable state and federal law, developing communications strategies for internal and external stakeholders, and assessing and protecting against potential litigation or regulatory risk following an incident. In addition, he has advised several clients on responding to post-incident regulatory inquiries, including inquiries from the Federal Trade Commission and state Attorneys General.

In addition to advising clients following cybersecurity incidents, Caleb also assists clients with pre-incident cybersecurity compliance and preparation activities. He reviews and drafts cybersecurity policies and procedures on behalf of clients, including drafting incident response plans and advising on training and tabletop exercises for such plans. Caleb also routinely advises clients on compliance with cybersecurity guidance and best practices, including “reasonable” security practices.

Caleb also maintains an active privacy practice, focusing on advising technology, education, financial, and other clients on compliance with generally applicable and sector-specific federal and state privacy laws, including FERPA, FCRA, GLBA, TCPA, and COPPA. He has assisted clients in drafting and reviewing privacy policies and terms of service, designing products and services to comply with applicable privacy laws while maximizing utility and user experience, and drafting and reviewing contracts or other agreements for potential privacy issues.

Matthew Harden

Matthew Harden is a cybersecurity and litigation associate in the firm’s New York office. He advises on a broad range of cybersecurity, data privacy, and national security matters, including cybersecurity incident response, cybersecurity and privacy compliance obligations, internal investigations, and regulatory inquiries. He works with clients across industries, including in the technology, financial services, defense, entertainment and media, life sciences, and healthcare industries.

As part of his cybersecurity practice, Matthew provides strategic advice on cybersecurity and data privacy issues, including cybersecurity investigations, cybersecurity incident response, artificial intelligence, and Internet of Things (IoT). He also assists clients with drafting, designing, and assessing enterprise cybersecurity and information security policies, procedures, and plans.

As part of his litigation and investigations practice, Matthew leverages his cybersecurity experience to advise clients on high-stakes litigation matters and investigations. He also maintains an active pro bono practice focused on veterans’ rights.

Matthew currently serves as a Judge Advocate in the U.S. Coast Guard Reserve.