Last week, the New York Department of Financial Services (“DFS”) issued guidance addressed to executives and information security personnel of entities regulated by DFS to assist them in understanding and assessing cybersecurity risks associated with the use of artificial intelligence (“AI”) and in implementing appropriate controls to mitigate such risks (the “Guidance”).[1] In particular, and in response to inquiries DFS has received regarding AI’s impact on cyber risk, the Guidance explains how the framework set forth in DFS’ Cybersecurity Regulation (23 NYCRR Part 500) should be used to assess and address such risks.
Below, we provide a high-level overview of the cyber risks identified by DFS related to the use of AI as well as the mitigating controls DFS recommends covered entities adopt to minimize the likelihood and impact of such risks. Even for entities that are not regulated by DFS, the Guidance provides a roadmap for how other regulators may view AI-related cyber risks.
Cybersecurity Risks Related to the Use of AI. The Guidance identifies two categories of cybersecurity risks arising from the use of AI:
- Risks caused by threat actors’ use of AI (e.g., AI-enabled social engineering and AI-enhanced cybersecurity attacks):
AI has enabled threat actors to create highly personalized and sophisticated social engineering attacks that are more convincing, and therefore more successful. In particular, threat actors are using AI to create audio, video and text “deepfakes” that target specific individuals, convincing employees to disclose sensitive information about themselves and their employers or share credentials enabling access to their organization’s information systems and nonpublic information. Deepfakes have also been used to mimic an individual’s appearance or voice to circumvent IT verification procedures as well as biometric verification technology.
AI has also allowed threat actors to amplify the “potency, scale, and speed of existing types of cyberattacks.” For example, AI can be used to more efficiently identify and exploit security vulnerabilities, allowing broader access to protected information and systems at a faster rate. It can also accelerate the development of new malware variants and enhance ransomware such that it can bypass defensive security controls, evading detection. Even threat actors who are not technically skilled may now be able to launch attacks using AI products and services, resulting in a potential increase in the number and severity of cyberattacks.
- Risks caused by a covered entity’s use or reliance upon AI.
Products that use AI require the collection and processing of substantial amounts of data, including non-public information (“NPI”). Covered entities that develop or deploy AI are at heightened risk because threat actors have a greater incentive to target these entities to extract NPI for malicious purposes and/or financial gain. AI tools that require the storage of biometric data, such as facial and fingerprint recognition data, pose an especially great risk because stolen biometric data can be used to generate deepfakes, imitate authorized users, bypass multi-factor authentication (“MFA”) and gain access to NPI.
Working with third-party vendors to gather data for AI-powered tools exposes organizations to additional vulnerabilities. For example, if a covered entity’s vendors or suppliers are compromised in a cybersecurity incident, the covered entity’s NPI could be exposed, and the compromised vendor could become a gateway for broader attacks on the covered entity’s network.
Measures to Mitigate AI-related Threats
Using its Cybersecurity Regulation as a framework, DFS suggests a number of controls and measures to help entities combat the aforementioned AI-related cybersecurity risks. Such controls include:
- Designing cybersecurity risk assessments that account for AI-related risks arising from the covered entity’s own use of AI and the use of AI by its vendors and suppliers;
- Applying robust access controls to combat deepfakes and other AI-enhanced social engineering attacks;[2]
- Maintaining defensive cybersecurity programs to protect against deepfakes and other AI threats;
- Implementing third party vendor and supplier policies and management procedures that include due diligence on threats facing such vendors and suppliers from the use of AI and how such threats, if exploited, could impact the covered entity;
- Enforcing data minimization policies to limit the NPI a threat actor can access in the event MFA fails; and
- Training personnel responsible for AI development on securing and defending AI systems, and training other personnel on drafting queries that avoid disclosing NPI.
Conclusion
As AI continues to evolve, so too will AI-related cybersecurity risks, making it critically important that all companies proactively identify, assess and mitigate the risks applicable to their businesses. To ensure speedy detection of, and response to, such threats, and to avoid regulatory scrutiny or enforcement, covered entities should review, and where necessary update, their existing cybersecurity policies and procedures and implement mitigating controls using the Cybersecurity Regulation as a framework, in line with DFS’ Guidance.
[1] A copy of the DFS Guidance can be found here.
[2] Notably, DFS encourages entities to consider using authentication factors that can withstand AI-manipulated deepfakes and other AI-enhanced attacks, for example by avoiding authentication via SMS text, voice or video and instead using forms of authentication that AI deepfakes cannot impersonate, such as digital-based certificates and physical security keys. Additionally, DFS recommends using technology with liveness detection or texture analysis, or requiring authentication via more than one biometric modality at the same time, to protect against AI impersonation.