Trump orders military to stop using Claude chatbot in clash over AI safety
Matt O’Brien and Konstantin Toropin
Washington: The Trump administration ordered all US agencies to stop using Anthropic’s AI technology and imposed other major penalties in an unusually public conflict between the US government and the company over AI security.
President Donald Trump, Defense Secretary Pete Hegseth and other officials took to social media on Friday (US time) to accuse Anthropic of endangering national security and to punish it for refusing to allow the military unrestricted use of its artificial intelligence technology.
The Pentagon wants to use Anthropic’s Claude chatbot for any purpose within legal limits – without any usage restrictions imposed by Anthropic. The firm insisted that Claude not be used for mass surveillance of Americans or in fully autonomous weapons operations.
Trump’s order came after Anthropic CEO Dario Amodei refused to back down, citing concerns that the company’s products could be used in ways that violate safety measures.
“We don’t need it, we don’t want it and we won’t do business with them again!” Trump said on social media.
Hegseth branded the company a “supply chain risk”, a designation typically stamped on foreign adversaries and one that could derail the company’s critical partnerships with other businesses.
Anthropic said it would challenge any official designation in court, describing it as “an unprecedented action – an action historically reserved for enemies of the United States, never before publicly applied to any American company.”
The government’s push to dictate Anthropic’s internal decision-making comes amid a broader conflict over the role of artificial intelligence in national security and concerns about how increasingly capable machines could be used in high-risk situations involving lethal force, sensitive information or government surveillance.
According to The New York Times, while many military uses of AI are still in development, models are already being actively used for intelligence analysis.
The article suggests that removing Claude from government computers could harm analysts at the National Security Agency examining foreign communications intercepts and prevent CIA analysts from looking for patterns in intelligence reports.
Citing former officials, the Times reported that CIA leaders were anxious to find a way to keep using Claude, which had sped up their work and deepened their analysis. Before Trump’s comments, officials had warned that any executive order could force the agency to seek other solutions.
Trump lashed out
Trump said Anthropic had made a mistake in trying to strong-arm the Pentagon. Writing on Truth Social, he said most agencies should immediately stop using Anthropic’s AI, but gave the Pentagon a six-month deadline to phase out the technology already deployed on military platforms.
“The United States will never allow the radical left, a woke corporation, to dictate how our great military fights and wins wars!” he wrote in all capital letters.
After months of private talks turned into a public debate this week, Anthropic said Thursday that the government’s new contract language would allow “safety measures to be ignored at will,” and Amodei said his company “cannot in good conscience” agree to that.
Anthropic can afford to lose the contract. But the government’s actions pose greater risks at the height of the company’s meteoric rise from a little-known computer science research lab in San Francisco to one of the world’s most valuable start-ups.
Before the president’s decision, senior Trump appointees at the Pentagon and State Department had spent hours on social media criticising Anthropic, though their complaints were contradictory.
Senior Pentagon spokesman Sean Parnell said on social media Thursday that Anthropic’s conduct “endangers critical military operations and potentially puts our warfighters at risk.”
Meanwhile, Hegseth said in a statement Friday that the Pentagon “must have full and unrestricted access to Anthropic’s models for every LEGAL purpose in defense of the Republic.”
Hegseth’s choice to designate Anthropic as a supply chain risk uses an administrative tool designed to prevent companies owned by U.S. adversaries from selling products harmful to American interests.
That dynamic, “combined with incendiary rhetoric attacking the company, raises serious concerns about whether national security decisions are driven by careful analysis or political considerations,” said Sen. Mark Warner, the top Democrat on the Senate intelligence committee.
Anthropic did not immediately respond to a request for comment on the Trump administration’s actions.
Dispute shakes Silicon Valley
The dispute stunned developers in Silicon Valley, where venture capitalists, leading AI scientists and many employees from Anthropic’s biggest rivals, OpenAI and Google, supported Amodei’s stance in open letters and other forums.
The move is likely to benefit Elon Musk’s rival chatbot Grok, which the Pentagon plans to grant access to secret military networks, and could serve as a warning to Google and OpenAI, whose contracts to supply artificial intelligence tools to the military are still in development.
Musk sided with the Trump administration, saying “Anthropic hates Western Civilization” on the social media platform X.
But OpenAI chief executive Sam Altman, one of Amodei’s fiercest rivals, sided with Anthropic and questioned the Pentagon’s “threatening” move in a CNBC interview and in a letter to employees, saying OpenAI shared the same red lines. Amodei once worked for OpenAI before he and other OpenAI leaders left to found Anthropic in 2021.
“Despite all the differences I have with Anthropic, I mostly trust them as a company and think they really care about safety,” Altman told CNBC, hours before gathering employees for an all-employee meeting on Friday (US time).
Retired Air Force general Jack Shanahan, a former leader of the Pentagon’s AI initiatives, wrote on social media this week that “targeting Anthropic makes for spicy headlines, but in the end everyone loses.”
Shanahan said Claude was already widely used across the government, including in classified environments, and that Anthropic’s red lines were “reasonable.” The large language models that power chatbots like Grok and ChatGPT are also “not ready for prime time in national security environments,” especially for fully autonomous weapons, he said.
“Anthropic is not trying to be cute here,” Shanahan wrote on LinkedIn. “You won’t find a system with broader and deeper reach across the military.”