On December 16, 2025, the U.S. National Institute of Standards and Technology (“NIST”) published a preliminary draft of the Cybersecurity Framework Profile for Artificial Intelligence (“Cyber AI Profile” or “Profile”). According to the draft, the Cyber AI Profile is intended to “provide guidelines for managing cybersecurity risk related to AI systems [and] identify[] opportunities for using AI to enhance cybersecurity capabilities.” The draft Profile builds on the existing voluntary NIST Cybersecurity Framework (“CSF”) 2.0 — which “provides guidance to industry, government agencies, and other organizations to manage cybersecurity risks” — and overlays three AI Focus Areas (Secure, Defend, Thwart) on top of the CSF’s outcomes (Functions, Categories, and Subcategories) to suggest considerations for organizations to prioritize when securing AI implementations, using AI to enhance cybersecurity defenses, or defending against adversarial uses of AI. This draft guidance will likely be familiar to organizations that already leverage the CSF 2.0 in their cybersecurity programs and may be complementary to frameworks those organizations already have in place. Even so, the outcomes are designed to be flexible enough that a range of organizations (whether their programs are mature or still developing) can leverage the guidance to help manage AI-related cybersecurity risk.
For entities or stakeholders interested in offering feedback, NIST plans to host a workshop on January 14, 2026, to discuss the preliminary draft, and the Profile is open for public comment until January 30, 2026. Below, we briefly summarize the Profile’s organizational structure, as well as the areas on which NIST is seeking public comment.
Focus Areas
The Cyber AI Profile is organized into three Focus Areas that address AI-related cybersecurity risk from different but overlapping angles.
- Secure – “[F]ocuses on managing cybersecurity challenges when” organizations integrate an AI system into their environment. Examples within the scope of Secure include the use of AI by “[p]ower grids to balance loads” and by “[c]ustomer service organizations to perform initial interactions with customers.”
- Defend – Aims to identify opportunities for uses of AI that support cybersecurity processes and activities. For example, AI can enhance cyber defense capabilities related to mission assurance, proactive risk management, predictive maintenance and risk forecasting, “[a]dvanced threat detection and analysis,” adversarial training and simulation, and automated incident response.
- Thwart – Emphasizes building resilience against AI-enabled threats. For example, AI-enabled spear-phishing attacks exploit users through more realistic manipulation using deepfakes and generative AI. These AI-enabled attacks underscore the need to update personnel training and to deploy automated defenses that bolster security measures.

Cybersecurity Framework 2.0 Core
As indicated above, the Cyber AI Profile is organized around the six CSF Functions: Govern, Identify, Protect, Detect, Respond, and Recover. Within each CSF Function, the Cyber AI Profile offers sample Focus Area considerations. Like the CSF 2.0, the Cyber AI Profile does not offer specific instructions for how to achieve the recommended outcomes but instead provides informative references that organizations can consult in considering how to achieve them — examples include technical papers, the European Union Agency for Cybersecurity’s (“ENISA”) Threat Landscape 2025 report, NIST Special Publication (“SP”) 800-218, and the Databricks AI Security Framework Version 2.0. The Cyber AI Profile also assigns recommended priorities to help organizations determine which Subcategories to prioritize: “1” for High Priority, “2” for Moderate Priority, and “3” for Foundational Priority.
Govern
The Govern profile captures the establishment, communication, and monitoring of “[t]he organization’s cybersecurity risk management strategy, expectations, and policy.” This profile is further divided into the CSF categories of (1) organizational context, (2) risk management strategy, (3) roles, responsibilities, and authorities, (4) policy, (5) oversight, and (6) cybersecurity supply chain risk management. AI considerations under these categories include, for example:
- Secure/Defend: Identifying and communicating AI dependencies.
- Secure/Defend: “[I]ntegrat[ing] AI-specific risks into the organization’s formal risk appetite and tolerance statements.”
- Defend/Thwart: Developing AI-specific threat information sharing channels.
- Defend: Augmenting cybersecurity teams with AI agents.
- Defend: Using AI to conduct governance checks to identify operational conflicts.
- Defend: Conducting threat detection of supplier-provided AI models.
Identify
The Identify profile focuses on ensuring that “[t]he organization’s cybersecurity risks are understood.” This profile includes the CSF categories of (1) asset management, (2) risk assessment, and (3) improvement. AI considerations under these categories include, for example:
- Defend: Inventorying software and systems to include “AI models, APIs, keys, agents, data . . . and their integrations and permissions.”
- Defend/Thwart: Incorporating AI-specific attacks as part of one’s vulnerability management program.
- Defend: “Defin[ing] conditions for disabling AI autonomy during risk response.”
- Defend: Integrating “AI-specific procedures for containment” during incident response.
Protect
The Protect profile serves to ensure that “[s]afeguards to manage the organization’s cybersecurity risks are used” and is divided into the CSF categories of (1) identity management, authentication, and access control, (2) awareness and training, (3) data security, (4) platform security, and (5) technology infrastructure resilience. AI considerations under these categories include, for example:
- Secure: Issuing AI systems unique identities and credentials.
- Secure/Defend/Thwart: Developing AI-related awareness and training for personnel.
- Secure/Defend: “Maintain[ing] protected, regularly tested backups of critical AI assets.”
- Secure: Restricting the execution of arbitrary code by AI agent systems.
- Defend: “Implement[ing] AI-specific resilience mechanisms.”
Detect
The Detect profile aims to ensure that “[p]ossible cybersecurity attacks and compromises are found and analyzed.” This profile is divided into two CSF categories: (1) continuous monitoring, and (2) adverse event analysis. AI considerations under these categories include, for example:
- Defend: “[F]lagging anomalies, correlating suspicious behaviors, and spotting unusual patterns faster than humans and other automated tools.”
- Thwart: “Personnel may be subject to AI-enabled phishing or deepfake attacks.”
- Thwart: “AI-enabled cyber attacks could identify and exploit” vulnerabilities introduced by “[t]hird-part[ies] . . . [through] updates and/or patches to software and systems.”
- Secure: Determining what “new monitoring is needed to track actions taken by AI.”
Respond
The Respond profile provides guidance to ensure that “[a]ctions regarding a detected cybersecurity incident are taken” and includes the CSF categories of (1) incident management, (2) incident analysis, (3) incident response reporting and communication, and (4) incident mitigation. AI considerations under these categories include, for example:
- Secure: “Establish[ing] criteria for triaging and validating AI-related incidents.”
- Defend: “Integrat[ing] AI-driven analytics into incident categorization and prioritization to identify and flag AI-influenced events.”
- Secure: Diagnosing complex attacks with new tools and methods.
- Thwart: “[S]earch[ing] for indicators of adversary AI usage in the incident.”
Recover
The Recover profile covers restoring “[a]ssets and operations affected by a cybersecurity incident” and includes the CSF categories of (1) incident recovery plan execution, and (2) incident recovery communication. AI considerations under these categories include, for example:
- Defend: Using AI to “accelerate[] recovery by calculating which systems to restore first, track[] progress, and draft[] clear updates to keep stakeholders informed.”
- Defend: Using AI to “forecast hardware failures and system degradation.”
- Defend: Evaluating “how AI defense systems performed” after an incident.
Although still in preliminary draft form, the Cyber AI Profile joins a growing body of AI-related guidance, such as the NIST AI Risk Management Framework, as well as guidance still under development, such as the NIST SP 800-53 Control Overlays for Securing AI Systems (“COSAiS”).
Request for Public Comment
As noted above, NIST is accepting comments on the draft until January 30, 2026, in addition to planning a workshop on January 14, 2026, to discuss the draft. The preliminary draft states that NIST is specifically seeking public comment on the draft in the following areas:
- Document structure and topics:
  - How do you envision using this publication? What changes would you like to see to increase/improve that use?
  - How do you expect this publication to influence your future practices and processes?
  - Are the proposed topics in this document sufficient to help your organization prioritize cybersecurity outcomes for AI?
- Focus Area descriptions (Section 2.1):
  - How well do the Focus Area descriptions reflect the scope and characteristics of AI usage? Are any characteristics missing, and if so, what are they and how should we describe them?
- Profile content (Sections 2.3–2.8):
  - When thinking about applying the Cyber AI Profile, how useful (or not) is it for all three Focus Areas to be shown alongside each other (as they are currently reflected)? What value might there be in providing Profile content for each Focus Area separately?
  - What format(s) would be useful for providing the information in the Cyber AI Profile (e.g., a spreadsheet/workbook, the NIST Cybersecurity and Privacy Reference Tool (CPRT))?
  - How well do the priorities and considerations discussed in Sections 2.3–2.8 relate to existing practices and standards leveraged by your organization? Are there significant gaps between current practices and those that are necessary to address unique characteristics of AI in each Focus Area that this publication should address? How should the AI-specific considerations inform the prioritization of each Subcategory?
  - NIST published the Cybersecurity Framework (CSF) 2.0 Informative References and Implementation Examples to show potential ways to achieve the outcome in each Subcategory. This preliminary draft includes examples of Informative References for the Cyber AI Profile. Further literature review is in progress and NIST is seeking more input on Informative References to include. Which additional AI cybersecurity guidelines, standards, best practices, or mappings are you using that you recommend adding as Informative References for the Cyber AI Profile? For any Informative References you recommend, please share with us why you recommend them as well as how and why you would prioritize them for this document.
- Glossary (Appendix B):
  - NIST welcomes requests and suggestions for terms that should be added to this document’s Glossary.