
Senate AI Moratorium Sparks Heated Regulatory Debate


“This is absurd.”

These were the immediate thoughts of Amba Kak, co-executive director of the AI Now Institute, upon learning of a proposed moratorium on state-level AI regulations embedded within President Donald Trump’s controversial funding bill. This bill has recently been the subject of online exchanges between Trump and tech figure Elon Musk. According to the text of the bill, states would be prohibited from enforcing any laws or regulations concerning artificial intelligence models or automated decision systems for a decade from the date of enactment.

Kak contends that the moratorium represents a step backward in regulating AI, undermining the limited protections currently in place. She argued it could jeopardize pending legislation on issues like data privacy and facial recognition technologies in various states.

“It’s effectively halting progress and reverting to an earlier state,” Kak noted. Shortly after learning of the provision, she testified on the matter before the House Committee on Energy and Commerce.

The moratorium was folded into the House bill and has carried over into the Senate version with slight revisions. It has sparked considerable debate among lawmakers and generated significant discussion within the AI sector, where stakeholders are trying to gauge the wider implications for the industry and society.

The core of the debate centers on differing perspectives. Many Republicans and technology leaders, including OpenAI’s CEO Sam Altman, argue that the moratorium would eliminate conflicting state laws that could impede American companies in their competition against international rivals like China. Meanwhile, many Democrats and AI experts contend that it would stifle essential regulatory frameworks and allow for the proliferation of unregulated AI systems.

Jutta Williams, a veteran of regulatory compliance in major tech firms and now advising socially responsible startups, believes that varied regulations create confusion and obstruction. She emphasizes the need for simplicity in compliance standards, pointing out that fragmented regulations can increase costs without yielding substantial progress.

Williams suggests that while the federal government has fallen short in addressing interstate regulatory issues, states should focus on the social issues within their purview rather than on restricting AI businesses.

OpenAI has openly advocated for a moratorium on state laws, citing the vetoed California AI safety bill, SB 1047. In contrast, major tech companies like Google, Microsoft, Meta, and Apple have remained relatively silent on the issue. Jesse Dwyer, a spokesperson for Perplexity AI, issued a statement suggesting that some organizations disregard regulations while claiming compliance, and contrasted that with Perplexity’s own commitment to responsible AI practices.

Dario Amodei, CEO of Anthropic, has emerged as a notable voice opposing the moratorium, penning an opinion piece in The New York Times where he critiques the moratorium as overly simplistic in light of rapid advancements in AI technology.

Amodei advocates for a federal transparency standard requiring AI companies to disclose their approaches to safety testing and risk mitigation on their websites.

Kak praised Amodei’s acknowledgment of the importance of regulation within the tech industry but remains skeptical about the prospects of establishing effective federal standards, citing a lack of progress over the past decade.

Kak remarks, “The industry perspective shouldn’t dominate the regulatory conversation, as those stakeholders have vested interests. Independent public perspectives are essential.”

As for the argument that state-level AI regulations might hinder startups, Kak dismisses it as a fallacy.

Despite claims that more than 1,000 state laws regulate AI, Chelsea Canada of the National Conference of State Legislatures indicates that while many AI-related bills were introduced in 2025, only around 75 have become law. Last year, only 84 of the 500 proposed AI bills were enacted.

Kak describes the current regulatory landscape as a focused set of rules that address the most problematic entities in the market, rather than a chaotic patchwork.

Kyle Morse, deputy executive director of the Tech Oversight Project, highlights concerns regarding the broad implications of the moratorium, suggesting it could hinder state-level protections beyond just AI regulations.

Morse warns that companies could exploit this moratorium by categorizing themselves as AI firms to evade regulation, even if AI is not central to their business model.

The future of the moratorium in the Senate remains in limbo. Senator Edward J. Markey (D-MA), a member of the Commerce, Science, and Transportation Committee, has announced intentions to introduce an amendment to block the moratorium.

“The effort to preempt states from regulating AI for the next decade is reckless and short-sighted,” he remarked.

Last Tuesday, Markey described the moratorium as a “backdoor AI ban” during Senate debates, asserting it prioritizes corporate interests over the welfare of communities.

“We must not let corporate interests undermine protections for kids, families, and marginalized groups,” Markey asserted, vowing to oppose the moratorium vigorously. A coalition of 260 bipartisan state lawmakers has echoed these sentiments in a letter to Congress, expressing their disapproval of the moratorium.

In addition, advocacy groups such as Americans for Responsible Innovation have launched a campaign against the moratorium, gathering 25,000 petitions in just two weeks.

Rep. Marjorie Taylor Greene (R-GA) has expressed opposition to the bill, reversing her earlier support, while some Republican senators have signaled they may reject the current wording.

The Senate Commerce, Science, and Transportation Committee has proposed modifications to the moratorium, suggesting a shift from a blanket ban on state AI regulations to a stipulation that states cannot regulate AI if they wish to secure federal broadband funding.

Should the Senate approve revisions, the House would need to revisit the bill. Proponents must also navigate the Byrd Rule, which restricts non-budget-related clauses in fiscal legislation. Kak noted that the primary focus now appears to be on ensuring the sweeping rollback of state regulation aligns with the Byrd Rule.

If enacted, the moratorium would place comprehensive oversight of the burgeoning AI industry—expected to exceed $1 trillion in revenue within seven years—within a Congress that has been historically hesitant to regulate Big Tech adequately.

“Based on past experiences,” Morse cautioned, “Congress has struggled to create effective regulations and passing a moratorium is not likely to resolve these issues.”
