A $150 Million AI Lobbying War is Brewing Over Preemption
The highly anticipated debate over federal preemption in artificial intelligence (AI) policy has sparked a heated lobbying battle between Silicon Valley-backed companies and safety-focused donor networks. The National Defense Authorization Act, which must pass before the end of the year, presents a critical juncture for these competing interests.
The White House has floated an executive order on preemption that could override state rules, further fueling the controversy. Both sides are now mobilizing their networks of Super PACs, donors, and advocacy groups to shape the outcome. The stakes are high, with the fate of AI regulation hanging in the balance.
On one side is Public First, a bipartisan initiative backed by former Representatives Chris Stewart and Brad Carson, which aims to ensure "meaningful oversight" of the most powerful technology ever created. This group has launched two affiliated Super PACs and expects to raise at least $50 million for the 2026 cycle. They argue that states are operating as laboratories that reveal what works, provide early enforcement, and supply evidence needed to shape a future federal law with meaningful protections.
On the opposing side is Leading the Future (LTF), a coalition backed by GOP strategist Zac Moffatt and Democratic operative Josh Vlasto. This group operates through a multi-layered structure, including federal and state Super PACs, nonprofit advocacy arms, and grassroots organizing efforts. LTF's message is that a patchwork of state laws will cost American jobs and cede AI leadership to China.
The two coalitions diverge sharply on the role of federal preemption. LTF argues that a single national standard is essential to maintain competitiveness, while Public First frames preemption as an attempt to wipe out state protections before any federal framework with meaningful safeguards exists. The scale of spending reveals how quickly AI has moved to the center of American politics.
As the window for resolving this fight narrows, Congress faces a critical decision point about including preemption language in the National Defense Authorization Act. With AI's economic impact and labor displacement rising as voter concerns, the outcome will have far-reaching implications for the future of artificial intelligence regulation in the United States.
The irony is that both sides are funded by powerful interests, with Silicon Valley investors reportedly pouring roughly $100 million into LTF's efforts, while safety-focused donor networks support Public First. The real question is not who has more money or influence but what kind of regulatory framework will be put in place to govern this rapidly evolving technology.
Ultimately, the preemption showdown pits competing visions for AI governance against each other. One side seeks to establish a unified federal law that would preempt state regulations, while the other argues that states have filled a policy vacuum and should continue to play a role in regulating AI.
The outcome will set the tone for the nation's approach to artificial intelligence regulation, with significant implications for workers, consumers, and the economy. As the battle rages on, one thing is clear: the future of AI governance in America hangs precariously in the balance.