New York’s RAISE Act faces opposition from Trump, AI industry super PAC

New York is 3,000 miles from the tech hub of Silicon Valley, but in recent weeks the state has found itself at the center of a fierce debate over artificial intelligence regulations.

A bipartisan super PAC called “Leading the Future” announced last week that it would target Democratic congressional candidate Alex Bores, who has publicly advocated for AI safety legislation in New York by supporting the Responsible AI Safety and Education (RAISE) Act. The bill would require major AI companies to publish safety and security protocols and disclose serious security incidents.

“They don’t want there to be any regulation,” Bores told CNBC’s “Squawk Box” on Monday. “What they’re saying is that if you dare to come forward and defy us, that means we have to bury you with millions and millions of dollars.”

Leading the Future (LTF) launched in August with more than $100 million in funding and aims to elevate “candidates who support a bold, forward-looking approach to AI.” The group largely represents the Trump administration’s view that federal AI law should preempt regulations imposed by individual states, an effort aimed mostly at big blue states like California and New York.

The super PAC is backed by high-profile figures and firms in the tech world, including OpenAI President Greg Brockman, Palantir co-founder Joe Lonsdale, venture firm Andreessen Horowitz and artificial intelligence startup Perplexity.

“LTF and its affiliates will oppose policies that stifle innovation, enable China to gain global AI supremacy, or make it harder to bring the benefits of AI to the world, as well as those who support that agenda,” the group said in a statement.

Bores has served as a member of the New York State Assembly since 2023 and previously worked at several technology companies, including Palantir. He launched his campaign for New York’s 12th Congressional District in October, after Democratic Rep. Jerry Nadler announced he would not run for re-election.

As an assemblyman, Bores co-sponsored the RAISE Act.

“I’m very optimistic about the power of artificial intelligence, and I take the tech companies seriously when they think about what this could do in the future,” Bores said Monday. “But the same pathways that would potentially allow it to treat diseases [would] also, say, let it build a biological weapon. And so you just want to manage the risk of that potential.”

Assemblyman Alex Bores speaks at a press conference about the Climate Change Superfund Act at Pier 17 on May 26, 2023 in New York City.

Michael M. Santiago | Getty Images

The RAISE Act passed the New York State Assembly and Senate in June. Democratic Gov. Kathy Hochul has until the beginning of the 2026 session to decide whether to sign the bill into law.

On November 17, LTF leaders Zac Moffatt and Josh Vlasto announced that they planned to spend millions of dollars to defeat Bores’ congressional candidacy. In a statement, they accused Bores of pushing “ideologically and politically motivated legislation” that would “handcuff” the United States and its ability to lead in the field of artificial intelligence.

Moffatt and Vlasto told CNBC that the bill is “a clear example of patchwork, uninformed, and bureaucratic state laws that will slow America’s progress and open the door for China to win the global race for AI leadership.”

Moffatt has more than two decades of experience in digital and political strategy; Vlasto previously served as press secretary to Senator Chuck Schumer (D-NY) and chief of staff to former New York Governor Andrew Cuomo.

Politico was first to report LTF’s effort to target Bores.

Bores seized on LTF’s announcement as a fundraising opportunity, urging voters in a post on X to donate to his campaign “unless they want Trump megadonors to write all the tech policy.”

“I have a master’s degree in computer science, have two patents, and have been working in technology for almost a decade,” Bores told CNBC last week. “If they’re afraid of people who understand their business regulating their business, they’re telling on themselves.”

What is the RAISE Act?

The RAISE Act applies to any major AI company, such as Google, Meta or OpenAI, that has spent more than $100 million on computing resources to train advanced models.

It would require those companies to write, publish and follow safety and security protocols, and to update them as necessary. Violators could face fines of up to $30 million.

Companies would also need to implement safeguards to prevent their models from causing “critical harm,” such as aiding the creation of chemical weapons or large-scale, automated criminal activity. The bill defines “critical harm” as at least 100 deaths or serious injuries, or at least $1 billion in damages.

Under the RAISE Act, large AI companies would not be able to release models that create an “unreasonable risk of critical harm.” Opponents of the bill have strongly pushed back against this part of the legislation, Bores said.

“This is basically designed to avoid the problem we had with tobacco companies, who knew that smoking causes cancer but publicly denied it and continued to put their products on the market,” he said.

The RAISE Act would also require AI companies to disclose significant security incidents. For example, if a model is stolen by a malicious actor, the developer would be required to disclose the incident within 72 hours of learning about it.

“Two weeks ago, we saw Anthropic’s disclosure about how China was using its model to launch cyberattacks against U.S. government agencies and our chemical production facilities,” Bores said. “Surprisingly, they weren’t required to disclose that. I think it should be law and mandatory for every major AI developer.”

Anthropic, an artificial intelligence startup worth approximately $350 billion after recent investments, published a blog post earlier this month detailing what it called “the first documented case of a large-scale cyberattack carried out without significant human intervention.” Anthropic said it believes the threat actor was a Chinese state-backed group.

Bores told Tech Brew that he drafted the first version of the bill in August 2024 and sent it to “all major developers” for feedback. He produced a second draft in December and requested a new round of redlines.

The RAISE Act was introduced in March and amended in May and June.

“I worked really closely with a lot of people in the industry to get the details right,” Bores told Tech Brew.

US President Donald Trump arrives on the South Lawn of the White House on November 22, 2025 in Washington, DC.

John McDonnell | Getty Images

LTF’s decision to target Bores over the RAISE Act is emblematic of a broader debate about whether AI should be regulated at the state or federal level in the U.S.

Some lawmakers and tech executives have argued that a “patchwork” of state AI policies will stifle innovation and leave the U.S. at risk of falling behind rivals such as China. But others, including Bores, said the federal government is moving too slowly to keep up with the rapid pace of AI development.

“The debate right now is, should we stop the states from making any progress before the feds fix the problem? Or should we actually work together to get the federal government to fix the problem?” said Bores.

In addition to New York, states including California, Colorado and Illinois have their own AI laws that are either already in effect or will take effect early next year.

Last week, President Donald Trump called for a single federal artificial intelligence standard in a post on his social media site Truth Social.

“Investment in AI is helping the US Economy become the ‘HOTEST’ economy in the World, but US overregulation threatens to undermine this Great ‘Engine’ of Growth,” Trump wrote. “We MUST have a single Federal Standard rather than a patchwork of 50 State Regulatory Regimes. If we don’t, then China will easily catch us in the AI race.”

The White House has also begun drafting an executive order that would target state AI laws by launching legal challenges and halting federal funding, CNBC reported Thursday. But a day later, the Trump administration suspended that effort, according to Reuters.

The White House had no comment for this story.

Earlier this year, a proposed amendment to Trump’s “One Big Beautiful Bill Act” would have enacted a 10-year moratorium on state-level AI laws. The provision ultimately failed and was not included in the legislation, but the Trump administration has recently revived the effort.

The White House is working to see whether a moratorium on certain state AI laws could be included in one of the major bills Congress is pursuing.

“What we’re seeing with artificial intelligence is natural: states are stepping up and advancing quickly,” Bores said. “We should eventually have a federal AI standard. I totally agree with that.”

WATCH: AI industry-backed super PAC picks first target
