Editor’s Note: Virginia is poised to become a key player in artificial intelligence (AI) regulation with the impending enactment of the Virginia High-Risk Artificial Intelligence Developer and Deployer Act (HB 2094). This legislation represents a growing movement among U.S. states to establish frameworks governing AI’s impact on critical consumer-related decisions. If signed into law, HB 2094 would introduce stringent requirements for AI system developers and deployers, aiming to mitigate algorithmic discrimination and enhance transparency in sectors such as employment, finance, healthcare, and legal services. While the Act aligns with existing regulations in states like Colorado, it introduces unique compliance thresholds that could set a precedent for future AI governance. This article examines the scope, enforcement, implications, and criticisms of the proposed legislation, providing insight into its potential influence on AI oversight at both the state and national levels.

Industry News – Artificial Intelligence Beat

AI Oversight in Virginia: Understanding the High-Risk AI Developer and Deployer Act

ComplexDiscovery Staff

Virginia stands on the cusp of establishing itself as a leader in the regulatory landscape governing high-risk artificial intelligence (AI) systems with the pending enactment of the Virginia High-Risk Artificial Intelligence Developer and Deployer Act (HB 2094). With the increasing deployment of AI in sectors that significantly impact consumers, Virginia’s approach reflects a growing trend of rigorous state-level AI oversight. The bill, passed by the Virginia legislature on February 20, 2025, positions Virginia as potentially the second U.S. state to implement comprehensive AI regulations, following Colorado’s AI legislation.

If Governor Glenn Youngkin signs HB 2094 into law, it will require AI system developers and deployers to adhere to stringent regulatory protocols to mitigate algorithmic discrimination and enhance transparency. High-risk AI systems, as defined by the Act, are those specifically intended to autonomously make, or be a substantial factor in making, consequential decisions that have a material legal or similarly significant effect on consumers in areas such as parole, education enrollment, employment opportunities, financial services, healthcare access, housing, insurance, marital status, or legal services. Importantly, these mandates exclude AI applications used in non-high-risk settings, such as systems performing narrow procedural tasks, systems that improve the result of previously completed human activities, anti-fraud technology without facial recognition, cybersecurity tools, and AI-enabled video games. Additionally, the bill's protections do not extend to individuals acting in a commercial or employment context, and broad exemptions apply to the healthcare and insurance sectors.

The Act’s primary focus is on protecting consumers from prejudicial algorithmic decisions in vital areas of life. Developers must disclose risks, limitations, and intended purposes of high-risk AI systems, along with performance evaluation summaries and measures to mitigate algorithmic discrimination. Deployers must exercise a “reasonable duty of care” and implement risk management policies to prevent algorithmic discrimination. Compliance with established standards like NIST’s AI risk management frameworks or ISO/IEC 42001 is deemed sufficient to meet these requirements.

Similar to the Colorado Act, enforcement falls solely within the purview of the Virginia Attorney General, prohibiting private litigation. Non-compliance with HB 2094 can yield penalties ranging from $1,000 for minor violations to $10,000 for willful infractions, with a discretionary 45-day cure period. Each violation is considered separate, allowing penalties to accumulate quickly if multiple individuals are affected.

While Virginia’s prospective law aligns closely with Colorado’s existing framework, it presents crucial adaptations. Notably, it introduces a “principal basis” criterion: an AI system’s output must be the principal basis for a consequential decision before compliance obligations attach, a stricter trigger than Colorado’s “substantial factor” threshold.

Beyond compliance mechanics, Virginia’s legislative move carries broader societal implications, particularly for employment practices. As AI continues to permeate hiring and day-to-day operations within firms, the reliability of these technologies becomes paramount. Employers using high-risk AI-driven hiring tools must integrate robust evaluation and oversight mechanisms. Transparency likewise becomes imperative: decisions heavily influenced by AI must be disclosed to affected employees, who gain opportunities to correct underlying data and appeal AI-driven determinations.

However, HB 2094 has faced criticism for containing loopholes that may allow companies to self-select out of compliance and for not fully addressing algorithmic discrimination. Despite these challenges, the legislation signals a broader national shift as more states pursue AI regulatory frameworks. As with Colorado’s CAIA, states are increasingly crafting regulatory directives to match the rapid pace of AI deployment. Despite their differences, these initiatives share a common objective: ensuring AI technologies are both beneficial and equitable.


Assisted by GAI and LLM Technologies

Source: ComplexDiscovery OÜ

Alan N. Sutin

Alan N. Sutin is Chair of the firm’s Technology, Media & Telecommunications Practice and Senior Chair of the Global Intellectual Property & Technology Practice. An experienced business lawyer with a principal focus on commercial transactions with intellectual property and technology issues and privacy and cybersecurity matters, he advises clients in connection with transactions involving the development, acquisition, disposition and commercial exploitation of intellectual property with an emphasis on technology-related products and services, and counsels companies on a wide range of issues relating to privacy and cybersecurity. Alan holds the CIPP/US certification from the International Association of Privacy Professionals.

Alan also represents a wide variety of companies in connection with IT and business process outsourcing arrangements, strategic alliance agreements, commercial joint ventures and licensing matters. He has particular experience in Internet and electronic commerce issues and has been involved in many of the major policy issues surrounding the commercial development of the Internet. Alan has advised foreign governments and multinational corporations in connection with these issues and is a frequent speaker at major industry conferences and events around the world.