August 2024 Developments Under President Biden’s AI Executive Order

By Robert Huffman, Susan B. Cassidy, Ashden Fein, Ryan Burnette & August Gweon on September 10, 2024

This is part of an ongoing series of Covington blogs on the implementation of Executive Order No. 14110 on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (the “AI EO”), issued by President Biden on October 30, 2023.  The first blog summarized the AI EO’s key provisions and related OMB guidance, and subsequent blogs described the actions taken by various government agencies to implement the AI EO from November 2023 through July 2024.  This blog describes key actions taken to implement the AI EO during August 2024.  It also describes key actions taken by NIST and the California legislature related to the goals and concepts set out by the AI EO.  We will discuss developments during August 2024 to implement President Biden’s 2021 Executive Order on Cybersecurity in a separate post. 

OMB Releases Finalized Guidance for Federal Agency AI Use Case Inventories

On August 14, the White House Office of Management and Budget (“OMB”) released the final version of its Guidance for 2024 Agency Artificial Intelligence Reporting Per EO 14110, following the release of a draft version in March 2024.  The Guidance implements Section 10.1(e) of the AI EO and various sections of the OMB’s March 28 Memorandum M-24-10, “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.”  The Guidance also supersedes the agency AI use case inventory requirements set out in Section 5 of 2020’s EO 13960, “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government.”   

The Guidance requires federal agencies (excluding the Department of Defense and Intelligence Community) to submit AI use case inventories for 2024 by December 16, 2024, and to post “publicly releasable” AI use cases on their agency websites.  Appendix A of the Guidance lists information agencies must provide for each AI use case, including information on the AI’s intended purpose, expected benefits, outputs, development stage, data and code, and enablement and infrastructure.  Agencies must also address a subset of questions for AI use cases that are determined to be rights- or safety-impacting, as defined in OMB Memo M-24-10, such as whether the agency has complied with OMB Memo M-24-10’s minimum risk management practices for such systems.  For AI use cases that are not subject to individual reporting (including DoD AI use cases and AI use cases whose sharing would be inconsistent with law and governmentwide policy), agencies must report certain “aggregate metrics.”

In addition to AI use case inventories, the Guidance provides mechanisms for agencies to report the following:

  • Agency CAIO determinations of whether agencies’ current and planned AI use cases are safety- or rights-impacting, as defined in Section 5(b) and Appendix I of OMB Memo M-24-10, by December 1, 2024. 
  • Agency CAIO waivers of one or more of OMB Memo M-24-10’s minimum risk management practices for particular AI use cases, including justifications of how the practice(s) would increase risks to rights or safety or unacceptably impede critical agency operations, by December 1, 2024.
  • Agency requests and justifications for one-year extensions to comply with the minimum risk management practices for particular AI use cases, by October 15, 2024.
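For agencies or contractors tracking these obligations, the reporting deadlines and the Appendix A categories can be captured in a simple record. The sketch below is purely illustrative: the field names are our own shorthand, not an official OMB schema.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative entry for an agency AI use case inventory.
# Field names are our shorthand for categories listed in Appendix A
# of the OMB Guidance; they are not an official reporting schema.
@dataclass
class AIUseCase:
    name: str
    intended_purpose: str
    expected_benefits: str
    development_stage: str
    rights_or_safety_impacting: bool = False
    publicly_releasable: bool = True

# 2024 reporting deadlines stated in the Guidance.
DEADLINES = {
    "caio_impact_determinations": date(2024, 12, 1),
    "risk_practice_waivers": date(2024, 12, 1),
    "extension_requests": date(2024, 10, 15),
    "use_case_inventories": date(2024, 12, 16),
}

def earliest_deadline() -> tuple[str, date]:
    """Return the first 2024 reporting deadline an agency faces."""
    return min(DEADLINES.items(), key=lambda kv: kv[1])
```

As the helper shows, the extension-request deadline (October 15, 2024) arrives before any of the December reporting dates.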

NIST Releases New Public Draft of Digital Identity Guidelines

As described in our parallel blog on cybersecurity developments, on August 21, the National Institute of Standards and Technology (“NIST”) released the second public draft of its updated Digital Identity Guidelines (Special Publication 800-63) for public comment, following an initial draft released in December 2022.  The requirements, which focus on Enrollment and Identity Proofing, Authentication and Lifecycle Management, and Federation and Assertions, also address “distinct risks and potential issues” from the use of AI and ML in identity systems, including disparate outcomes and biased outputs.  Section 3.8 on “AI and ML in Identity Systems” would impose the following requirements on government contractors that provide identity proofing services (“Credential Service Providers” or “CSPs”) to the federal government:

  • CSPs must document all uses of AI and ML and communicate those uses to organizations that rely on these systems.
  • CSPs that use AI/ML must provide, to any entities that use their technology, information regarding (1) their AI/ML model training methods and techniques, (2) their training datasets, (3) the frequency of model updates, and (4) results of all testing of their algorithms.
  • CSPs that use AI/ML systems or rely on services that use AI/ML must implement the NIST AI Risk Management Framework to evaluate risks that may arise from the use of AI/ML, and must consult NIST Special Publication 1270, “Towards a Standard for Managing Bias in Artificial Intelligence.”
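The second bullet's four disclosure items lend themselves to a structured record that a CSP might furnish to relying parties. The following is a minimal sketch under our own assumptions; the class and field names are hypothetical and do not appear in the draft Guidelines.

```python
from dataclasses import dataclass

# Hypothetical structure for the AI/ML disclosures a CSP would
# provide to relying entities under draft SP 800-63 Section 3.8.
# Field names are illustrative shorthand, not taken from the draft.
@dataclass
class AIMLDisclosure:
    training_methods: str    # (1) model training methods and techniques
    training_datasets: str   # (2) description of the training datasets
    update_frequency: str    # (3) how often the model is updated
    test_results: str        # (4) results of algorithm testing

    def is_complete(self) -> bool:
        """All four required disclosure items must be non-empty."""
        return all([self.training_methods, self.training_datasets,
                    self.update_frequency, self.test_results])
```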

Public comments on the second public Draft Guidelines are due by October 7, 2024.

U.S. AI Safety Institute Signs Collaboration Agreements with Developers for Pre-Release Access to AI Models

On August 29, the U.S. AI Safety Institute (“AISI”) announced “first-of-their-kind” Memoranda of Understanding with two U.S. AI companies regarding formal collaboration on AI safety research, testing, and evaluation.  According to the announcement, the agreements will allow AISI to “receive access to major new models from each company prior to and following their public release,” with the goal of enabling “collaborative research on how to evaluate capabilities and safety risks” and “methods to mitigate those risks.”  The U.S. AISI also intends to collaborate with the U.K. AI Safety Institute to provide feedback on model safety improvements.

These agreements build on the Voluntary AI Commitments that the White House has received from 16 U.S. AI companies since 2023.

California Legislature Passes First-in-Nation AI Safety Legislation Modeled on AI EO

On August 29, the California legislature passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047).  If signed into law, SB 1047 would impose an expansive set of requirements on developers of “covered [AI] models,” including cybersecurity protections prior to training and deployment, annual third-party audits, reporting of AI “safety incidents” to the California Attorney General, and internal safety and security protocols and testing procedures to prevent unauthorized access or misuse resulting in “critical harms.”  Echoing the AI EO’s definition of “dual-use foundation models,” SB 1047 defines “critical harms” as (1) the creation or use of CBRN weapons by covered models, (2) mass casualties or damages resulting from cyberattacks on critical infrastructure or other unsupervised conduct by an AI model, or (3) other grave and comparable harms to public safety and security caused by covered models.

Similar to the AI EO’s computational threshold for AI models subject to Section 4.2(a)’s reporting and AI red-team testing requirements, SB 1047 defines “covered models” in two phases.  First, prior to January 1, 2027, “covered models” are defined as AI models trained using more than 10²⁶ floating-point operations (“FLOPS”) of computing power (the cost of which exceeds $100 million), or AI models created by fine-tuning covered models using at least 3 x 10²⁵ FLOPS (the cost of which exceeds $10 million).  Second, after January 1, 2027, SB 1047 authorizes California’s Government Operations Agency to determine the threshold computing power for covered models.  For reference, Section 4.2 of the AI EO requires reporting and red-team testing for dual-use foundation models trained using more than 10²⁶ FLOPS and authorizes the Secretary of Commerce to define, and regularly update, the technical conditions for models subject to those requirements.
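The pre-2027 thresholds reduce to a simple numeric comparison. The sketch below illustrates that arithmetic using the figures summarized above; the function and constant names are our own, and this is not a substitute for the bill's definitions (which also turn on training cost).

```python
# Illustrative check of SB 1047's pre-January 1, 2027 "covered model"
# compute thresholds, as summarized above. Names are our own; the
# statute's actual definition also incorporates training-cost tests.
TRAIN_OP_THRESHOLD = 1e26       # base models: more than 10^26 operations
FINE_TUNE_OP_THRESHOLD = 3e25   # fine-tunes of covered models: at least 3 x 10^25

def is_covered_model(train_ops: float, fine_tune_of_covered: bool = False) -> bool:
    """Apply the pre-2027 compute thresholds to a model's training compute."""
    if fine_tune_of_covered:
        # Fine-tuning threshold is inclusive ("at least").
        return train_ops >= FINE_TUNE_OP_THRESHOLD
    # Base-model threshold is exclusive ("more than").
    return train_ops > TRAIN_OP_THRESHOLD
```

For example, a base model trained with 2 x 10²⁶ operations would exceed the threshold, while one trained with exactly 10²⁶ would not, since the statute's language is "more than."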

Susan B. Cassidy

Ms. Cassidy represents clients in the defense, intelligence, and information technologies sectors.  She works with clients to navigate the complex rules and regulations that govern federal procurement and her practice includes both counseling and litigation components.  Ms. Cassidy conducts internal investigations for government contractors and represents her clients before the Defense Contract Audit Agency (DCAA), Inspectors General (IG), and the Department of Justice with regard to those investigations.  From 2008 to 2012, Ms. Cassidy served as in-house counsel at Northrop Grumman Corporation, one of the world’s largest defense contractors, supporting both defense and intelligence programs. Previously, Ms. Cassidy held an in-house position with Motorola Inc., leading a team of lawyers supporting sales of commercial communications products and services to US government defense and civilian agencies. Prior to going in-house, Ms. Cassidy was a litigation and government contracts partner in an international law firm headquartered in Washington, DC.

Ashden Fein

Ashden Fein advises clients on cybersecurity and national security matters, including crisis management and incident response, risk management and governance, government and internal investigations, and regulatory compliance.

For cybersecurity matters, Mr. Fein counsels clients on preparing for and responding to cyber-based attacks, assessing security controls and practices for the protection of data and systems, developing and implementing cybersecurity risk management and governance programs, and complying with federal and state regulatory requirements. Mr. Fein frequently supports clients as the lead investigator and crisis manager for global cyber and data security incidents, including data breaches involving personal data, advanced persistent threats targeting intellectual property across industries, state-sponsored theft of sensitive U.S. government information, and destructive attacks.

Additionally, Mr. Fein assists clients from across industries with leading internal investigations and responding to government inquiries related to the U.S. national security. He also advises aerospace, defense, and intelligence contractors on security compliance under U.S. national security laws and regulations including, among others, the National Industrial Security Program (NISPOM), U.S. government cybersecurity regulations, and requirements related to supply chain security.

Before joining Covington, Mr. Fein served on active duty in the U.S. Army as a Military Intelligence officer and prosecutor specializing in cybercrime and national security investigations and prosecutions — to include serving as the lead trial lawyer in the prosecution of Private Chelsea (Bradley) Manning for the unlawful disclosure of classified information to Wikileaks.

Mr. Fein currently serves as a Judge Advocate in the U.S. Army Reserve.

Ryan Burnette

Ryan Burnette advises clients on a range of issues related to government contracting. Mr. Burnette has particular experience with helping companies navigate mergers and acquisitions, FAR and DFARS compliance issues, public policy matters, government investigations, and issues involving government cost accounting and the Cost Accounting Standards.  Prior to joining Covington, Mr. Burnette served in the Office of Federal Procurement Policy in the Executive Office of the President, where he worked on government-wide contracting regulations and administrative actions affecting more than $400 billion dollars’ worth of goods and services each year.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients for complying with federal, state, and global privacy and competition frameworks and AI regulations. He also assists clients in investigating compliance issues, preparing for federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.

  • Posted in:
    Administrative, Government
  • Blog:
    Inside Government Contracts
  • Organization:
    Covington & Burling LLP
