Virginia governor vetoes AI bill targeting “algorithmic discrimination”
The veto of Virginia’s High-Risk Artificial Intelligence Developer and Deployer Act highlights challenges of state-level AI regulation.
On March 24, 2025, Virginia Governor Glenn Youngkin vetoed the High-Risk Artificial Intelligence Developer and Deployer Act. The bill sought to protect Virginia residents from “algorithmic discrimination” and to ensure accountability and transparency for certain “high-risk” AI systems. Although the bill was viewed as more business-friendly than Colorado’s Artificial Intelligence Act, Governor Youngkin concluded that it would establish a “burdensome” regulatory framework, particularly for smaller companies.
Background
The Virginia bill, which the legislature passed on February 20, 2025, would have imposed obligations on both developers and deployers of “high-risk” AI systems, defined as systems specifically intended to autonomously make, or be a substantial factor in making, a “consequential decision.” A “consequential decision,” in turn, is one with a material effect on the provision or denial to any consumer of access to key services like education, employment, financial services, health care and housing. Both developers and deployers would be subject to a duty of care to mitigate any known or reasonably foreseeable risks of “algorithmic discrimination,” or unlawful differential treatment based on legally protected characteristics.
The bill would have required developers of high-risk AI systems to disclose certain information regarding those systems, and to update these disclosures in the event of substantial modifications. This would have included documentation of the system’s intended benefits and uses, how it should be used (and not used), and any known or reasonably foreseeable limitations. The bill also would have required developers to ensure, with some caveats, that AI outputs are identifiable as such by consumers at the time the output is generated. Notably, developers complying with the National Institute of Standards and Technology’s AI Risk Management Framework would have been presumed to comply with the Virginia bill.
Under the bill, deployers would have been required to complete an AI impact assessment before deployment and to implement an AI risk management policy and program. The bill also would have required certain public disclosures regarding risk mitigation and consumer appeal rights.
The bill provided nuanced exemptions from its requirements for entities already subject to state or federal regulation, including (i) certain financial institutions, such as banks, credit unions, mortgage lenders and savings institutions; (ii) insurers; and (iii) health care entities. However, it did not establish any revenue thresholds or otherwise exempt small businesses.
Balancing regulation and innovation
In his veto message, Governor Youngkin noted his commitment to responsible AI governance and his view that the bill “risks turning back the clock on Virginia’s economic growth, stifling the AI industry as it is taking off.” In particular, he stated that the bill’s “rigid framework” did not account for rapid evolution in the AI space and imposed “an especially onerous burden on smaller firms and startups that lack large legal compliance departments.” Moreover, Governor Youngkin noted that laws relating to discriminatory practices, privacy, data use and libel are already in place to protect consumers.
Governor Youngkin’s reasoning mirrored aspects of the Executive Order President Trump issued during the first week of his administration (Removing Barriers to American Leadership in Artificial Intelligence) and the stated rationale for revoking President Biden’s October 2023 Executive Order 14110 (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence). As discussed in our previous update, President Trump’s AI executive order established that it is U.S. policy “to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” The accompanying fact sheet explained that the revocation of EO 14110 was necessary to eliminate “unnecessarily burdensome requirements for companies developing and deploying AI” and to promote “American development of AI systems … free from ideological bias or engineered social agendas.” More recently, at the Artificial Intelligence Action Summit in Paris, Vice President JD Vance noted the Trump administration’s concern that “excessive regulation of the AI sector could kill a transformative industry just as it’s taking off.”
Fundamentally, the desire to balance harm prevention with the rapid pace of technological innovation also informed California Governor Gavin Newsom’s veto of California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act in September 2024. As Governor Newsom explained, “any framework for effectively regulating AI needs to keep pace with the technology itself.”
Key takeaways
It remains to be seen whether state regulation of AI, particularly regulation focused on potential discrimination or bias, will become a partisan issue. Recently passed AI legislation and the dozens of AI bills introduced in state legislatures suggest alternative paths to establishing AI guardrails without duplicating, or at least appearing to duplicate, existing federal and state laws. Companies developing or deploying AI technology should continue to closely monitor activity at both the state and federal levels, as well as internationally, for emerging trends.