Senate Moves to Halt State AI Regulation Amid Concerns
Update: On July 1, the Senate voted 99-1 to strike the ban from the budget reconciliation bill.
The Senate is moving quickly toward a potential 10-year freeze on state laws governing artificial intelligence, with no federal regulatory framework to replace them. The proposal, tucked into a sweeping budget bill now before Congress, would bar states from enacting or enforcing laws governing AI models, AI systems, and automated decision-making technologies for a decade, with limited exceptions.
According to the Senate Commerce Committee’s draft of the provision, released Wednesday, states that disregard the freeze could forfeit billions in federal funding earmarked for AI infrastructure and rural broadband. The Senate parliamentarian, however, has ruled that the provision can apply only to the $500 million in new AI-related funding included in the reconciliation bill.
Senate Republicans aim to pass the budget bill by July 4, leaving senators little time to strip the AI provision. If the Senate passes the bill, it will return to the House for further consideration. If enacted, the provision could halt enforcement of nearly 150 existing laws, passed since 2019 in more than 40 states and the District of Columbia with support from both Republican and Democratic lawmakers.
Opposition and Concerns
The AI provision has drawn significant opposition. Advocacy organizations, state attorneys general, and legislators from both parties warn of the dangers of blocking state-level AI regulation, arguing that the provision would cripple states’ ability to protect their residents from emerging AI threats, particularly in the absence of robust federal legislation.
As the Brennan Center has previously warned, AI presents substantial risks to elections, including spreading misinformation and enabling attacks on election officials and infrastructure. Limiting AI regulation could leave elections vulnerable, posing significant risks to democratic processes.
The Current State of AI Regulation
Congress’s progress on AI regulation has been limited. It recently passed the Take It Down Act, addressing nonconsensual AI-generated images, but comprehensive federal AI legislation remains elusive.
In the absence of federal action, states have stepped in to fill the gap. Since 2019, states have enacted over 100 laws targeting AI-related challenges. In 2024 alone, nearly 100 AI-related bills were passed, and more than 1,000 have been introduced nationwide during the 2025 legislative sessions. Twenty-five states have also enacted laws addressing AI use in political campaigns, including measures against deceptive media and deepfakes aimed at voter suppression.
The proposed regulation ban threatens to stall state efforts, rendering even modest regulations unenforceable.
Challenging the Anti-Regulatory Stance
Proponents of the regulatory freeze argue it will prevent a “patchwork of laws” that burden AI developers, asserting it is necessary for the U.S. to “win the AI race” against China. These claims, however, lack strong evidence and ignore states’ historical role in technological regulation.
States have pioneered regulations in areas like data privacy with the California Consumer Privacy Act of 2018 and biometrics with Illinois’ Biometric Information Privacy Act of 2008. These initiatives demonstrate that state regulations can coexist with innovation and foster responsive policies without impeding technological progress.
Research indicates that most state AI laws aim to prevent the misuse of AI rather than to constrain AI development or design. Few state laws currently place significant constraints on AI companies, which suggests that a regulatory pause would do less to ease developers’ burdens than its proponents claim, and that states, if anything, need stronger rules for AI developers.
Despite these points, anti-regulatory advocates, such as venture capitalist and White House AI czar David Sacks, continue to promote deregulation as serving U.S. interests. Sacks argues against AI regulation and favors partnerships with nations such as the United Arab Emirates to keep those nations from aligning with China. During a Senate AI hearing, Sen. Ted Cruz insisted that “the way to beat China in the AI race is to outrace them in innovation, not saddle AI developers with European-style regulations,” a reference to the EU’s stringent AI policy.
Potential Legal Challenges
As drafted, the provision leaves considerable room for interpretation, inviting litigation over its implementation and scope.
While the budget bill’s AI provision seems to embody express preemption, where federal law supersedes state law, this particular approach lacks precedent. When Congress has preempted state regulation in other fields, it typically established new federal rules to replace state laws. The absence of comprehensive federal AI regulation means that preempting state laws would create a regulatory vacuum.
The breadth of the provision also remains unclear. Although it aims to broadly restrict state regulation of AI models and systems, it is uncertain whether it would primarily affect laws targeting AI developers or extend to laws governing how AI is used.
If interpreted broadly, the provision could confer extensive protections on big tech companies, AI developers, and others who misuse AI technology. This aligns with efforts by the Trump administration and its allies to undermine AI safety regulations.
Implications for the Future
Without a comprehensive federal regulatory framework, a ban on state AI regulation effectively hands control to private entities with minimal accountability. The consequences could be far-reaching, especially as AI becomes more embedded in election processes, from voter outreach to misinformation campaigns. States have not only the authority but also a duty to protect their elections, and restricting their ability to address AI threats could leave elections vulnerable at a critical moment.

