September 2024 Developments Under President Biden’s AI Executive Order

By Robert Huffman, Susan B. Cassidy, Ashden Fein, Nooree Lee, Ryan Burnette, August Gweon & Catherine Wettach on October 11, 2024

This is part of an ongoing series of Covington blogs on the implementation of Executive Order No. 14110 on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (the “AI EO”), issued by President Biden on October 30, 2023.  The first blog summarized the AI EO’s key provisions and related Office of Management and Budget (“OMB”) guidance, and subsequent blogs described the actions taken by various government agencies to implement the AI EO from November 2023 through August 2024.  This blog describes key actions taken to implement the AI EO during September 2024, as well as related developments in California that track the goals and concepts set out by the AI EO.  We will discuss September 2024 developments implementing President Biden’s 2021 Executive Order on Cybersecurity in a separate post.

Bureau of Industry and Security Proposes Updated Technical Thresholds for Dual-Use Foundation Model Reporting Requirements

On September 9, 2024, the Department of Commerce’s Bureau of Industry and Security (“BIS”) published a Notice of Proposed Rulemaking (“NPRM”) on updated technical thresholds that would trigger reporting requirements for AI models and computing clusters under the AI EO.  Under Section 4.2(a)(i) of the AI EO, certain developers of dual-use foundation models must regularly provide the Federal government with information related to (1) training, developing, or producing dual-use foundation models, (2) the ownership and possession of model weights and the physical and cybersecurity measures used to protect those model weights, and (3) the results of AI red-team testing based on upcoming AI red-teaming guidance from the U.S. AI Safety Institute (“U.S. AISI”) and associated safety measures.  Covington previously covered U.S. AISI’s draft guidance here.

BIS’s proposed rule, which implements Section 4.2(b)’s requirement that the Department of Commerce define and regularly update the technical thresholds for reporting, would require compliance with the AI EO’s reporting requirements for developers of any “dual-use” AI model trained using a quantity of computing power greater than 10^26 floating-point operations (“FLOPs”).  This threshold is the same as the interim threshold in Section 4.2(b)(i) of the AI EO.  That said, the proposed rule modifies the AI EO’s initial threshold for large-scale “computing clusters” by removing its physical co-location requirement.  Large-scale computing clusters subject to the reporting requirement are instead defined under the proposed rule as “clusters having a set of machines transitively connected by networking of over 300 Gbit/s and having a theoretical maximum performance greater than 10^20 computational operations (e.g., integer or floating-point operations) per second (OP/s) for AI training, without sparsity.”
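
For a rough sense of the scale these thresholds describe, the short Python sketch below estimates a training run’s total operations as chip count × per-chip throughput × utilization × training time, and compares the result against the proposed 10^26-operation model threshold and the 10^20 OP/s cluster threshold.  The hardware figures are hypothetical and purely illustrative, and the sketch does not model the rule’s 300 Gbit/s networking condition for computing clusters.

    # Illustrative arithmetic against the thresholds in BIS's proposed rule.
    # All hardware figures below are hypothetical, not values drawn from the NPRM.

    MODEL_THRESHOLD_OPS = 1e26             # total training operations (model reporting threshold)
    CLUSTER_THRESHOLD_OPS_PER_SEC = 1e20   # theoretical maximum OP/s (cluster threshold)

    def estimated_training_ops(num_chips, peak_ops_per_chip_per_sec, utilization, training_days):
        """Estimate total operations consumed by a training run."""
        seconds = training_days * 24 * 3600
        return num_chips * peak_ops_per_chip_per_sec * utilization * seconds

    def cluster_theoretical_peak(num_chips, peak_ops_per_chip_per_sec):
        """Theoretical maximum operations per second for a cluster (dense, without sparsity)."""
        return num_chips * peak_ops_per_chip_per_sec

    # Hypothetical example: 20,000 accelerators at 1e15 OP/s each,
    # running at 40% utilization for a 90-day training run.
    total_ops = estimated_training_ops(20_000, 1e15, 0.4, 90)
    peak_ops = cluster_theoretical_peak(20_000, 1e15)

    print(f"Estimated training ops: {total_ops:.2e} "
          f"-> model reporting threshold met: {total_ops > MODEL_THRESHOLD_OPS}")
    print(f"Cluster theoretical peak: {peak_ops:.2e} OP/s "
          f"-> cluster threshold met: {peak_ops > CLUSTER_THRESHOLD_OPS_PER_SEC}")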

GAO Releases Report on Agency Implementation of AI EO Management and Talent Requirements

On September 9, 2024, the U.S. Government Accountability Office (“GAO”) released a report detailing its evaluation of the extent to which several agencies have implemented selected AI management and talent requirements under the AI EO.  GAO found that each agency had fully implemented the relevant requirements.  The GAO report determined, among other things, that the:

  • Executive Office of the President organized the AI and Technology Talent Task Force and established the White House AI Council. 
  • Office of Management and Budget has convened and chaired the Chief AI Officer council, issued AI guidance and use case instructions to agencies, and established initial plans for AI talent recruitment.
  • Office of Personnel Management has reviewed hiring and workplace flexibility, considered excepted service appointments, coordinated AI hiring action across federal agencies, and issued related pay guidance.
  • Office of Science and Technology Policy and OMB have identified priority mission areas for increasing AI talent, established the types of talent that are the highest priority to recruit and develop, and identified accelerated hiring pathways.
  • General Services Administration finalized and issued its framework to enable consistent prioritization of critical emerging AI technologies for authorization in a secure cloud environment via the FedRAMP (Federal Risk and Authorization Management Program) process.  GSA’s prioritization framework included chat interfaces, code-generation and debugging tools, and prompt-based image generators.

Inaugural Meeting of International Network of AI Safety Institutes to be Hosted in San Francisco

On September 18, 2024, the U.S. Commerce and State Departments announced plans to host the inaugural meeting of the International Network of AI Safety Institutes.  The Network, announced in May by U.S. Secretary of Commerce Gina Raimondo, is intended to bring together technical AI experts from each member’s AI safety institute in order to begin advancing global collaboration and knowledge sharing on AI safety.

The meeting will take place on November 20-21, 2024, in San Francisco.  The initial members of the International Network of AI Safety Institutes are Australia, Canada, the European Union, France, Japan, Kenya, South Korea, Singapore, the United Kingdom, and the United States.

Agencies Publish Plans to Achieve Compliance with OMB AI Guidance

Under Section 3(a)(iii) of OMB Memorandum M-24-10 on “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence” (“OMB AI Memo”), agencies were required to submit and publicly post plans to achieve consistency with the OMB AI Memo’s AI use case inventory requirements, minimum risk management practices, and other agency obligations by September 24, 2024.  As of October 4, by our count, at least 31 federal agencies have published AI compliance plans, including GSA, SEC, EEOC, NASA, and the Departments of Defense, State, Homeland Security, Commerce, and Treasury. 

Although these plans generally contain similar objectives, agencies differ in their plans to assess the risks that may arise from agency AI use cases.  For example, the Department of Treasury’s AI Compliance Plan notes that, while “not required by M-24-10, Treasury will be using additional resources to identify and manage potential risks posed by AI use,” including audit recommendations and potential impacts from supply chain risks in AI software and system environments.

White House Hosts Task Force on AI Datacenter Infrastructure

On September 12, 2024, the White House hosted a roundtable with representatives from AI companies, datacenter operators, and utility companies to discuss strategies to meet clean energy, permitting, and workforce requirements for developing large-scale AI datacenters and power infrastructure needed for advanced AI operations.  After the roundtable, the White House announced that:

  • The White House is launching a new Task Force on AI Datacenter Infrastructure to coordinate policy across government.
  • The Administration will scale up technical assistance to federal, state, and local authorities handling datacenter permitting.
  • The Department of Energy (DOE) is creating an AI datacenter engagement team to leverage programs to support AI datacenter development.

According to the White House, these actions will enable the construction of more AI datacenters in the U.S.

California Governor Vetoes AI Safety Legislation Modeled on AI EO

On September 29, California Governor Gavin Newsom (D) vetoed the Safe and Secure Innovation for Frontier AI Models Act (SB 1047), an AI safety bill that would have imposed testing, reporting, and security requirements on developers of large AI models.  SB 1047’s computational threshold for “covered models” would have mirrored the AI EO’s threshold for dual-use foundation model reporting, i.e., AI models trained using more than 10^26 FLOPs of computing power (in addition to a training cost of more than $100 million).
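
As a simplified illustration of how SB 1047’s “covered model” test would have combined these two criteria, the short Python sketch below (using hypothetical figures; other elements of the bill’s definition are not modeled) flags a model only when both the compute threshold and the cost threshold are exceeded.

    # Simplified illustration of SB 1047's "covered model" thresholds as described
    # above: both the compute and the cost criteria must be exceeded.
    # The example figures are hypothetical.

    COMPUTE_THRESHOLD_OPS = 1e26       # training compute, floating-point operations
    COST_THRESHOLD_USD = 100_000_000   # training cost in dollars

    def would_have_been_covered(training_ops, training_cost_usd):
        """Return True if a model would have met both SB 1047 thresholds."""
        return training_ops > COMPUTE_THRESHOLD_OPS and training_cost_usd > COST_THRESHOLD_USD

    print(would_have_been_covered(2e26, 150_000_000))  # True: exceeds both thresholds
    print(would_have_been_covered(2e26, 50_000_000))   # False: compute alone is not enough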

Notably, Newsom cited SB 1047’s thresholds as the reason for rejecting the legislation.  In his veto message, Newsom noted that, while “[AI] safety protocols must be adopted,” SB 1047, by focusing on cost and computational thresholds, “applies stringent standards to even the most basic functions–so long as a large system deploys it,” rather than regulating based on “the system’s actual risks.”  Newsom added that SB 1047 could “give the public a false sense of security about controlling this fast-moving technology,” while “[s]maller, specialized models” could be “equally or even more dangerous than the models targeted by SB 1047.”  Covington previously covered Governor Newsom’s veto of SB 1047 here.

Susan B. Cassidy

Ms. Cassidy represents clients in the defense, intelligence, and information technologies sectors.  She works with clients to navigate the complex rules and regulations that govern federal procurement and her practice includes both counseling and litigation components.  Ms. Cassidy conducts internal investigations for government contractors and represents her clients before the Defense Contract Audit Agency (DCAA), Inspectors General (IG), and the Department of Justice with regard to those investigations.  From 2008 to 2012, Ms. Cassidy served as in-house counsel at Northrop Grumman Corporation, one of the world’s largest defense contractors, supporting both defense and intelligence programs. Previously, Ms. Cassidy held an in-house position with Motorola Inc., leading a team of lawyers supporting sales of commercial communications products and services to US government defense and civilian agencies. Prior to going in-house, Ms. Cassidy was a litigation and government contracts partner in an international law firm headquartered in Washington, DC.

Ashden Fein

Ashden Fein advises clients on cybersecurity and national security matters, including crisis management and incident response, risk management and governance, government and internal investigations, and regulatory compliance.

For cybersecurity matters, Mr. Fein counsels clients on preparing for and responding to cyber-based attacks, assessing security controls and practices for the protection of data and systems, developing and implementing cybersecurity risk management and governance programs, and complying with federal and state regulatory requirements. Mr. Fein frequently supports clients as the lead investigator and crisis manager for global cyber and data security incidents, including data breaches involving personal data, advanced persistent threats targeting intellectual property across industries, state-sponsored theft of sensitive U.S. government information, and destructive attacks.

Additionally, Mr. Fein assists clients from across industries with leading internal investigations and responding to government inquiries related to the U.S. national security. He also advises aerospace, defense, and intelligence contractors on security compliance under U.S. national security laws and regulations including, among others, the National Industrial Security Program (NISPOM), U.S. government cybersecurity regulations, and requirements related to supply chain security.

Before joining Covington, Mr. Fein served on active duty in the U.S. Army as a Military Intelligence officer and prosecutor specializing in cybercrime and national security investigations and prosecutions — to include serving as the lead trial lawyer in the prosecution of Private Chelsea (Bradley) Manning for the unlawful disclosure of classified information to Wikileaks.

Mr. Fein currently serves as a Judge Advocate in the U.S. Army Reserve.

Nooree Lee

Nooree Lee represents government contractors in a wide variety of transactional, litigation, and compliance matters. His primary areas of practice include corporate transactions involving contractors, international contracting and domestic sourcing matters, and grants and cooperative agreements.

Mr. Lee also advises clients in a wide range of industries on how to best safeguard and leverage their intellectual property. Relatedly, he represents companies seeking to protect their confidential data from disclosure under the federal Freedom of Information Act and state law equivalents.

Ryan Burnette

Ryan Burnette advises clients on a range of issues related to government contracting. Mr. Burnette has particular experience with helping companies navigate mergers and acquisitions, FAR and DFARS compliance issues, public policy matters, government investigations, and issues involving government cost accounting and the Cost Accounting Standards.  Prior to joining Covington, Mr. Burnette served in the Office of Federal Procurement Policy in the Executive Office of the President, where he worked on government-wide contracting regulations and administrative actions affecting more than $400 billion worth of goods and services each year.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients for complying with federal, state, and global privacy and competition frameworks and AI regulations. He also assists clients in investigating compliance issues, preparing for federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.

Catherine Wettach

Catherine Wettach is an associate in the firm’s Washington, DC office. She is a member of the Government Contracts and White Collar Defense and Investigation Practice Groups.

  • Posted in:
    Administrative, Featured, Government
  • Blog:
    Inside Government Contracts
  • Organization:
    Covington & Burling LLP
