Keypoint: The New York Department of Financial Services (NYDFS) circulated an industry letter offering guidance to NYDFS “Covered Entities” on assessing and managing AI-related cybersecurity risks, including threats posed by malicious actors using AI and the risks associated with a Covered Entity’s own AI systems.
The NYDFS industry letter (“Letter”) recognizes that Covered Entities can leverage AI to enhance their cybersecurity posture. The department contends that doing so would bolster entities’ compliance with NYDFS cybersecurity regulation 23 NYCRR Part 500 (“Part 500”).
The Letter does not revise Part 500, but notes that integrating AI into cybersecurity frameworks can improve Covered Entities’ risk assessments, incident response strategies, and overall security action plans. However, the Letter warns that deploying AI systems requires careful evaluation of the risks they pose to the business and of the security controls used to manage those risks.
As a reminder to our readers, portions of the NYDFS amendments to Part 500 published in November 2023 take effect on November 1, 2024. These amendments introduce enhanced reporting requirements for Chief Information Security Officers, new responsibilities for senior governing bodies, mandatory encryption of all nonpublic information in transit, and updated incident response and disaster recovery plans. The amendments also introduce new exemption categories targeting small businesses. Covered Entities should review the guidance in the Letter while aligning their cybersecurity frameworks to the amended requirements. The Letter describes the following risks and mitigation strategies:
Cybersecurity Risks of AI
- AI-Enabled Social Engineering: AI has significantly improved the ability of threat actors to create personalized and convincing social engineering attacks. These include deepfakes—realistic audio, video, and text content that can deceive individuals into divulging sensitive information or taking unauthorized actions. These sophisticated attacks can lead to significant financial losses and damage to an organization’s reputation.
- AI-Enhanced Cybersecurity Attacks: AI enables threat actors to amplify the potency and speed of cyberattacks. By scanning and analyzing vast amounts of information quickly, AI can identify and exploit vulnerabilities, and/or find sensitive data in less time. This increased efficiency lowers the barrier to entry for less-skilled cybercriminals, potentially leading to a surge in cyberattacks. This is particularly dangerous for the financial services sector, where sensitive nonpublic information (“NPI”) is a prime target.
- Exposure or Theft of NPI: AI systems often require substantial amounts of data, including NPI, which increases the risk of data breaches. Additionally, the storage of biometric data poses further risk, as stolen biometric data can be used to bypass security measures.
- Supply Chain Vulnerabilities: AI-powered tools depend on vast amounts of data, often involving third-party service providers. Each link in this supply chain introduces potential vulnerabilities that can be exploited.
Strategies for Mitigating Cybersecurity Risks of AI
The Letter provides Covered Entities with guidance for understanding and managing AI-enabled cybersecurity risks. This guidance emphasizes the importance of conducting thorough risk assessments and implementing robust cybersecurity programs, policies, and procedures based on those assessments. The guidance states that Part 500 requires Covered Entities to implement:
- Risk Assessments and Risk-Based Programs: Covered Entities must conduct regular risk assessments to identify and mitigate AI-related threats. These assessments should address the use of AI within the organization and by third-party service providers. Organizations should also develop comprehensive incident response, business continuity, and disaster recovery plans that account for such threats. The guidance underscores that senior leadership must prioritize cybersecurity and ensure that the organization’s cybersecurity strategy aligns with overall business objectives.
- Third-Party Service Provider Management: Covered Entities should maintain robust policies for third-party service providers, as AI systems frequently depend on external vendors and service providers. Covered Entities should conduct due diligence on these third parties to confirm that they adhere to security standards and can protect against AI-related threats.
- Access Controls: Covered Entities need to implement multi-factor authentication and other access controls that can prevent unauthorized access to information systems. Given the risks posed by AI-manipulated deepfakes, organizations should additionally consider using authentication methods that are resilient to such attacks (a minimal MFA sketch appears after this list).
- Cybersecurity Training: Covered Entities must provide regular training for all personnel, including senior executives, to raise awareness of AI-related risks. Training should include simulated exercises that prepare employees for potential AI-driven social engineering attacks.
- Monitoring and Data Management: Covered Entities should adopt effective data management practices, such as data minimization and maintaining accurate data inventories, which can limit the impact of data breaches (see the second sketch after this list). Organizations must ensure the protection of AI systems and of any data used for AI purposes.
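To make the access-control point concrete, the following is a minimal sketch of verifying a time-based one-time password (TOTP), one common second factor for multi-factor authentication. It implements RFC 6238 using only the Python standard library; the function names are illustrative, the Letter does not prescribe any particular MFA mechanism, and a production deployment would also handle clock skew, rate limiting, and secure secret storage.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password.

    secret_b32 is the base32-encoded shared secret provisioned to the
    user's authenticator app at enrollment.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify_totp(secret_b32: str, submitted: str) -> bool:
    """Compare codes in constant time to avoid timing side channels."""
    return hmac.compare_digest(totp(secret_b32), submitted)
```

Because a one-time code can still be phished in real time, entities concerned about AI-driven social engineering may prefer phishing-resistant factors such as FIDO2/WebAuthn hardware keys.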
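The data-minimization guidance can likewise be illustrated with a short sketch that retains only the fields a task actually needs and masks any NPI that must pass through. The field names, masking rule, and helper are hypothetical; a real implementation would be driven by the Covered Entity’s own data inventory and classification scheme.

```python
import re

# Hypothetical NPI field names; a real list would come from the
# entity's data inventory and classification scheme.
NPI_FIELDS = {"ssn", "account_number", "dob", "biometric_id"}

def minimize_record(record: dict, needed: set) -> dict:
    """Keep only the fields the current task needs (data minimization)
    and mask any NPI that must be retained, leaving the last 4 chars."""
    out = {}
    for field, value in record.items():
        if field not in needed:
            continue  # collect/retain only what the task requires
        if field in NPI_FIELDS and isinstance(value, str):
            out[field] = re.sub(r".(?=.{4})", "*", value)
        else:
            out[field] = value
    return out

record = {"name": "A. Jones", "ssn": "123-45-6789", "balance": 1000}
print(minimize_record(record, needed={"name", "ssn"}))
# {'name': 'A. Jones', 'ssn': '*******6789'}
```

Dropping unneeded fields at the point of collection, rather than after storage, limits what a breach of the AI system or its data pipeline can expose.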
Takeaway
While the mitigation strategies are guidance rather than binding requirements, the Letter highlights the importance of proactive measures in AI cybersecurity. NYDFS has historically been at the forefront of cybersecurity regulation for the financial sector. In the absence of federal regulation, whether comprehensive or sector-specific, it is reasonable to expect that proactive state legislatures and agencies will fill that void.