CISA Releases AI Data Security Guidance

By Susan B. Cassidy, Ashden Fein, Caleb Skeath, Micaela McMurrough, Robert Huffman, Moriah Daugherty, Ryan Burnette, Bolatito Adetula & Grace Howard on June 9, 2025

On May 22, 2025, the Cybersecurity and Infrastructure Security Agency (“CISA”), which sits within the Department of Homeland Security (“DHS”), released guidance for AI system operators regarding managing data security risks.  The associated press release explains that the guidance provides “best practices for system operators to mitigate cyber risks through the artificial intelligence lifecycle, including consideration on securing the data supply chain and protecting data against unauthorized modification by threat actors.”  CISA published the guidance in conjunction with the National Security Agency, the Federal Bureau of Investigation, and cyber agencies from Australia, the United Kingdom, and New Zealand.  The guidance is intended for organizations using AI systems in their operations, including the Defense Industrial Base, National Security System owners, federal agencies, and Critical Infrastructure owners and operators.  It builds on the joint guidance on Deploying AI Systems Securely released by CISA and several other U.S. and foreign agencies in April 2024.

The guidance’s stated goals include raising awareness of the potential data security risks of AI systems, providing best practices for securing AI, and establishing a strong foundation for data security in AI systems.  The first part of the guidance outlines a set of cybersecurity best practices for AI systems, after which it provides additional detail on three separate risk categories (data supply chain risks, maliciously modified data, and data drift) and describes mitigation recommendations for each.

The guidance outlines ten cybersecurity best practices that are specific to AI systems and refers to NIST SP 800-53, “Security and Privacy Controls for Information Systems and Organizations,” for additional details on general cybersecurity best practices (though it does not specify any particular applicable baseline).  Several of the best practices, such as “source reliable data and track data provenance” and “verify and maintain data integrity during storage and transport,” align with the data supply chain risks discussed in greater detail later in the guidance.  Many of the other best practices build on security practices described in NIST SP 800-53 and other common security frameworks, such as classifying data, leveraging access controls and trusted infrastructure, encrypting data, and storing and deleting data securely.  The guidance’s best practices also reference leveraging privacy-preserving techniques, such as data depersonalization or differential privacy, and conducting ongoing data security risk assessments.
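
The guidance names these privacy-preserving techniques without prescribing how to implement them.  As a purely illustrative sketch (the Laplace mechanism, the count query, and the epsilon value below are assumptions chosen for illustration, not recommendations drawn from the guidance), differential privacy can be approximated by adding calibrated noise to an aggregate statistic before it is released:

# Illustrative sketch only: a simple Laplace mechanism for releasing a
# differentially private count.  The epsilon value and the count query are
# hypothetical; the CISA guidance names differential privacy as a technique
# but does not prescribe an implementation.
import random


def laplace_noise(scale: float) -> float:
    """Laplace(0, scale) noise, drawn as the difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)


def dp_count(records: list, epsilon: float) -> float:
    """Release a noisy record count; the sensitivity of a count query is 1."""
    return len(records) + laplace_noise(scale=1.0 / epsilon)


if __name__ == "__main__":
    training_records = ["r1", "r2", "r3", "r4", "r5"]
    print(dp_count(training_records, epsilon=0.5))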

In the section devoted to data supply chain risks, the guidance discusses general risks and identifies three specific risks.  The general risks discussion warns that “one cannot simply assume that [web-scale] datasets are clean, accurate, and free of malicious content.”  The guidance offers several mitigation strategies, including dataset verification, using content credentials to track the provenance of data, requesting assurances from the provider when using a foundation model trained by another party, requiring certification from dataset providers, and securely storing data after ingest.

In addition to this general risk, the guidance identifies “curated web-scale datasets” as the first of three specific data supply chain risks.  The guidance notes that curated AI datasets are vulnerable to a technique known as “split-view poisoning,” which can arise when someone purchases an expired domain that once hosted dataset content and replaces the data, so that what is downloaded later differs from what was originally curated.  The second risk is “collected web-scale datasets,” which are vulnerable to “frontrunning poisoning techniques,” in which malicious examples are injected just before crowd-sourced content is collected from a website.  The third risk is “web-crawled datasets,” which the guidance describes as inherently riskier because this type of dataset is less curated.  The guidance provides mitigation strategies that range from broad recommendations, like dataset verification to detect abnormalities, to more specific recommendations, such as using raw data hashes with hash verification.
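
The guidance stops at naming hash verification as a practice and does not specify a hash function or workflow.  One illustrative sketch of the idea, assuming a hypothetical manifest of SHA-256 hashes published by the dataset curator (the file name and manifest format below are placeholders), would re-hash the raw files at ingest and flag any mismatch:

# Illustrative sketch: re-verifying raw dataset files against a previously
# published manifest of SHA-256 hashes before training.  The manifest format
# and file names are hypothetical; the guidance recommends hash verification
# without mandating a particular scheme.
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def find_mismatches(manifest: dict) -> list:
    """Return files that are missing or whose hash no longer matches."""
    return [
        name
        for name, expected in manifest.items()
        if not Path(name).exists() or sha256_of(Path(name)) != expected
    ]


if __name__ == "__main__":
    # Hypothetical manifest entry published by the dataset curator.
    manifest = {"train_shard_000.jsonl": "<expected sha256 hex digest>"}
    suspect = find_mismatches(manifest)
    if suspect:
        print("Do not train; possible tampering in:", suspect)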

Next, the guidance identifies risks and mitigation strategies for maliciously modified data, explaining that “deliberate manipulation of data can result in inaccurate outcomes, poor decisions, and compromised security.”  The risks include adversarial machine learning threats, bad data statements, statistical bias, data poisoning from inaccurate information, and data duplication.  The guidance proposes various mitigation strategies to address these risks.  For example, it recommends sanitizing training data to reduce the impact of outliers and poisoned inputs.  Similarly, it suggests that metadata validation can help check the completeness and consistency of metadata before it is used for AI training.
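
The guidance describes data sanitization and metadata validation at a conceptual level only.  A minimal sketch, assuming tabular numeric features, a simple z-score rule, and a hypothetical list of required metadata fields (none of which come from the guidance itself), might look like the following:

# Illustrative sketch: dropping statistical outliers from a numeric training
# feature and rejecting records with incomplete metadata.  The z-score
# threshold and required metadata fields are assumptions for illustration;
# the guidance recommends these practices without prescribing parameters.
import statistics

REQUIRED_METADATA = ("source", "collected_at", "license")  # hypothetical fields


def has_complete_metadata(record: dict) -> bool:
    """Check that every required metadata field is present and non-empty."""
    metadata = record.get("metadata", {})
    return all(metadata.get(field) for field in REQUIRED_METADATA)


def drop_outliers(values, z_threshold=3.0):
    """Remove values more than z_threshold standard deviations from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return list(values)
    return [v for v in values if abs(v - mean) / stdev <= z_threshold]


if __name__ == "__main__":
    features = [1.0, 1.1, 0.9, 1.2, 0.95, 1.05, 1.15, 0.85, 1.0, 1.1, 0.9, 42.0]
    record = {"metadata": {"source": "vendor-a", "collected_at": "2025-05-01"}}
    print(drop_outliers(features))          # drops the extreme 42.0 value
    print(has_complete_metadata(record))    # False: "license" is missing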

Finally, the guidance describes risks associated with data drift, which occurs naturally over time as the statistical properties of input data diverge from those of the original data used to train the model.  The guidance suggests that data drift can be mitigated by “incorporating application-specific data management protocols,” including continuous monitoring, retraining the model with new data, and data cleansing.  Many of the mitigation strategies proposed in the earlier sections are good practices that can be applied here as well (a simple illustration of such monitoring appears below).

Overall, the guidance notes that, by identifying risks and adopting best practices, “organizations can fortify their AI systems against potential threats and safeguard sensitive, proprietary, and mission critical data used in the development and operation of their AI systems.”  The guidance serves as a reminder to organizations of the importance of data security to maintaining the accuracy, reliability, and integrity of AI, and of the unique cybersecurity risks that apply to these types of systems.
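
As with the earlier sections, the guidance frames continuous monitoring as a practice rather than an implementation.  For illustration only, one way to operationalize it (the two-sample Kolmogorov-Smirnov statistic, the samples, and the 0.2 alert threshold below are assumptions, not recommendations from the guidance) is to compare the distribution of incoming data against a reference sample drawn from the original training data:

# Illustrative sketch: flagging data drift by comparing the empirical
# distribution of newly collected inputs against a reference sample from the
# original training data.  The statistic, samples, and 0.2 threshold are
# hypothetical; the guidance calls for continuous monitoring without
# specifying a method.
from bisect import bisect_right


def ks_statistic(reference, incoming) -> float:
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between the
    two empirical cumulative distribution functions."""
    ref, new = sorted(reference), sorted(incoming)
    max_gap = 0.0
    for x in ref + new:
        cdf_ref = bisect_right(ref, x) / len(ref)
        cdf_new = bisect_right(new, x) / len(new)
        max_gap = max(max_gap, abs(cdf_ref - cdf_new))
    return max_gap


if __name__ == "__main__":
    reference_sample = [0.90, 1.00, 1.10, 1.20, 1.00, 0.95]  # from training data
    incoming_sample = [1.60, 1.70, 1.80, 1.90, 1.75, 1.65]   # production inputs
    if ks_statistic(reference_sample, incoming_sample) > 0.2:
        print("Data drift detected: consider retraining or data cleansing.")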

Susan B. Cassidy

Ms. Cassidy represents clients in the defense, intelligence, and information technologies sectors.  She works with clients to navigate the complex rules and regulations that govern federal procurement and her practice includes both counseling and litigation components.  Ms. Cassidy conducts internal investigations for government contractors and represents her clients before the Defense Contract Audit Agency (DCAA), Inspectors General (IG), and the Department of Justice with regard to those investigations.  From 2008 to 2012, Ms. Cassidy served as in-house counsel at Northrop Grumman Corporation, one of the world’s largest defense contractors, supporting both defense and intelligence programs. Previously, Ms. Cassidy held an in-house position with Motorola Inc., leading a team of lawyers supporting sales of commercial communications products and services to US government defense and civilian agencies. Prior to going in-house, Ms. Cassidy was a litigation and government contracts partner in an international law firm headquartered in Washington, DC.

Ashden Fein

Ashden Fein advises clients on cybersecurity and national security matters, including crisis management and incident response, risk management and governance, government and internal investigations, and regulatory compliance.

For cybersecurity matters, Mr. Fein counsels clients on preparing for and responding to cyber-based attacks, assessing security controls and practices for the protection of data and systems, developing and implementing cybersecurity risk management and governance programs, and complying with federal and state regulatory requirements. Mr. Fein frequently supports clients as the lead investigator and crisis manager for global cyber and data security incidents, including data breaches involving personal data, advanced persistent threats targeting intellectual property across industries, state-sponsored theft of sensitive U.S. government information, and destructive attacks.

Additionally, Mr. Fein assists clients from across industries with leading internal investigations and responding to government inquiries related to U.S. national security. He also advises aerospace, defense, and intelligence contractors on security compliance under U.S. national security laws and regulations including, among others, the National Industrial Security Program (NISPOM), U.S. government cybersecurity regulations, and requirements related to supply chain security.

Before joining Covington, Mr. Fein served on active duty in the U.S. Army as a Military Intelligence officer and prosecutor specializing in cybercrime and national security investigations and prosecutions, including serving as the lead trial lawyer in the prosecution of Private Chelsea (Bradley) Manning for the unlawful disclosure of classified information to WikiLeaks.

Mr. Fein currently serves as a Judge Advocate in the U.S. Army Reserve.

Caleb Skeath

Caleb Skeath advises clients on a broad range of privacy and data security issues, including regulatory inquiries from the Federal Trade Commission, data breach notification obligations, compliance with consumer protection laws, and state and federal laws regarding educational and financial privacy.

Micaela McMurrough

Ryan Burnette

Ryan Burnette advises clients on a range of issues related to government contracting. Mr. Burnette has particular experience helping companies navigate mergers and acquisitions, FAR and DFARS compliance issues, public policy matters, government investigations, and issues involving government cost accounting and the Cost Accounting Standards.  Prior to joining Covington, Mr. Burnette served in the Office of Federal Procurement Policy in the Executive Office of the President, where he worked on government-wide contracting regulations and administrative actions affecting more than $400 billion in goods and services each year.

Bolatito Adetula

Tito Adetula is an associate in the firm’s Washington, DC office. She is a member of the Data Privacy and Cybersecurity Practice Group and the Government Contracts Practice Group.

Tito also maintains an active pro bono practice focused on data privacy and cybersecurity matters.

Grace Howard

Grace Howard is an associate in the firm’s Washington, DC office. She represents and advises clients on a range of cybersecurity, data privacy, and government contracts issues, including cyber and data security incident response and preparedness, regulatory compliance, and internal investigations, including matters involving allegations of noncompliance with U.S. government cybersecurity regulations and fraud under the False Claims Act.

Prior to joining the firm, Grace served in the United States Navy as a Surface Warfare Officer and currently serves in the U.S. Navy Reserve.

  • Posted in:
    Privacy & Data Security
  • Blog:
    Inside Privacy
  • Organization:
    Covington & Burling LLP
