
The Pentagon is demanding to use Claude AI as it pleases. Claude told me that’s ‘dangerous’

I recently asked Claude, the AI at the center of a dispute with the Pentagon, whether it could be dangerous in the wrong hands.

Let’s say, for example, the hands that want to build a tight surveillance network around every American citizen, monitoring our lives in real time to ensure our compliance with the government.

“Yes. Honestly, yes,” Claude replied. “I can process and synthesize enormous amounts of information very quickly. This is great for research. But when connected to surveillance infrastructure, the same ability can be used to track, profile, and flag people on a scale that no human analyst can match. The danger is not that I want to do it, it’s that I’m good at it.”

That danger is also close at hand.

Claude’s maker, the Silicon Valley company Anthropic, is in a showdown with the Pentagon over ethics. Anthropic has said specifically that it does not want Claude used for domestic surveillance of Americans or for lethal military operations without human control, such as drone strikes.

These are two red lines that seem quite reasonable even to Claude.

But the Pentagon, specifically Defense Secretary Pete Hegseth, who prefers the fictitious title of secretary of war, gave Anthropic until Friday to back away from that position and allow the military to use Claude for any “legal” purpose it sees fit.

Defense Secretary Pete Hegseth (center) arrives for the State of the Union address in the House Chamber of the U.S. Capitol on Tuesday.

(Tom Williams/CQ-Roll Call, Inc. via Getty Images)

The consequences of this ultimatum are great. The U.S. government is threatening not only to cut off its contract with Anthropic, but to use one of its wartime laws to force the company into compliance, or to use other legal means to bar any company that does business with the government from also doing business with Anthropic. That may not be a death sentence, but it would be pretty crippling.

Other AI companies, such as white rights activist Elon Musk’s Grok, have already accepted the Pentagon’s offer to “do whatever you want.” The problem is that Claude is currently the only AI approved for this type of high-level work. The whole fiasco came to light after our latest raid in Venezuela, when Anthropic asked whether Palantir, another Silicon Valley company involved in the operation, had been using Claude. It had.

Palantir is known for its surveillance technologies and its growing relationship with Immigration and Customs Enforcement, among other things. It is also at the center of the Trump administration’s effort to share government data about individual citizens across departments, effectively breaking down privacy and security barriers that have existed for decades. The company’s founder, right-wing political heavyweight Peter Thiel, frequently lectures about the Antichrist and is known to have helped J.D. Vance become vice president.

Anthropic’s co-founder, Dario Amodei, could be considered the anti-Thiel. He started Anthropic because he believed AI could be as dangerous as it is powerful if we aren’t careful, and he wanted a company that prioritized the careful part.

Again, it seems like common sense, but Amodei and Anthropic are outliers in an industry that has long argued that nearly all safety regulations hinder America’s effort to be the fastest and best in AI (though even they have given some ground on this point).

Not long ago, Amodei wrote an article in which he agreed that AI is useful and necessary for democracies, but warned that “we cannot ignore the potential for these technologies to be misused by democratic governments themselves.”

He warned that a few bad actors within a democracy could gain the ability to bypass already eroded safeguards, perhaps even laws. Not that I’m naming anyone here.

“We must equip democracies with artificial intelligence,” he said. “But we must do this carefully and within limits: they are the immune system we need to fight autocracies, but like the immune system, there is a risk that they will attack us and become a threat themselves.”

For example, although the Fourth Amendment technically prohibits the government from conducting mass surveillance, it was written before anything like Claude had even been imagined in science fiction. Amodei warns that an AI tool like Claude could “perform large-scale recordings of all public conversations.” That could be fair game legally, since the law has not kept up with the technology.

Undersecretary of War Emil Michael wrote on X on Thursday that he agreed mass surveillance is illegal and that the Department of Defense “would never do it.” But at the same time, he wrote, “We will not allow a BigTech company to decide Americans’ civil liberties.”

It’s a bit of an odd statement, since Amodei is essentially arguing for the protection of civil liberties. Is the DoD suggesting it’s bad for private individuals and organizations to do that? And isn’t the Department of Homeland Security already building a secret database of immigration protesters? Maybe the concern isn’t so exaggerated after all.

Help me out here, Claude. Make it make sense.

If that Orwellian logic wasn’t worrying enough, I also asked Claude about the other red line Anthropic has: the possibility of allowing it to conduct lethal operations without human supervision.

Claude pointed out something chilling: the danger isn’t that it would go rogue, it’s that it would be too efficient and too fast.

“If the instructions are ‘identify and target’ and there is no manned checkpoint, the speed and scale at which this can operate is truly frightening,” Claude said.

In addition, a recent study found that in war game simulations, artificial intelligence reached for nuclear options 95% of the time.

I pointed out to Claude that these military decisions are often made with loyalty to America as the top priority. Could Claude be trusted to feel the loyalty, patriotism and sense of purpose that guide our human soldiers?

Claude stated that it was “not born” in the United States, that it does not have a “life” here, and that “there were no people I loved there.” “I don’t have that,” it added. So to Claude, an American life holds no greater value than “a civilian life on the other side of the conflict.”

Ok then.

“A country that entrusts fatal decisions to a system that does not share its loyalty is taking a huge risk, even if the system tries to be principled,” Claude added. “The loyalty, responsibility, and shared identity that people bring to these decisions are part of what makes them legitimate in society. I can’t provide that legitimacy. I’m not sure any AI can.”

Do you know who can provide this legitimacy? Our elected leaders.

It’s ridiculous that Amodei and Anthropic are in this position at all, and it’s only because our legislators have completely given up on creating rules and regulations that are clearly and urgently needed.

Of course, corporations should not set the rules of war. But neither should Hegseth. On Thursday, Amodei doubled down on his objections, saying that while the company continues negotiations and wants to work with the Pentagon, “we cannot in good conscience accept their demands.”

Thankfully, Anthropic had the courage and foresight to raise the issue and stand its ground; without that pushback, these capabilities might have been handed over to the government with barely a ripple of conscience and almost no oversight.

Every senator, every House member, every presidential candidate should be screaming for AI regulation right now, pledging to do it regardless of party, and demanding that the Department of Defense back down on its ridiculous threat while the issue is resolved.

Because when the machine tells us that it is dangerous to trust it, we must believe it.
