Microsoft to keep using Anthropic’s Claude for many clients despite Pentagon’s ‘supply chain risk’ label on AI firm

Microsoft said Thursday it will not give up the artificial intelligence technology from AI startup Anthropic that is built into its products for customers, except for work related to the U.S. Department of Defense.
This comes after the Department of Defense, which the Trump administration calls the War Department, sent Anthropic a formal notice severing ties with the company and labeling it a “supply chain risk.”
Microsoft became the first major company to say it would continue to work with Anthropic on non-government projects following the Pentagon’s notice.
“Our attorneys reviewed the designation and concluded that Anthropic products, including Claude, may be made available to our customers outside the War Department through platforms such as M365, GitHub, and Microsoft’s AI Foundry, and that we may continue to work with Anthropic on non-defense-related projects,” a Microsoft spokesperson said, as quoted by Reuters and CNBC.
Some defense technology companies have asked their employees to avoid using Anthropic’s Claude models and turn to alternatives.
Microsoft, meanwhile, provides its tools to a number of U.S. government agencies. The War Department makes extensive use of Microsoft 365 productivity software. The company announced in September that it would integrate Anthropic’s generative AI models into the Microsoft 365 Copilot add-on for Microsoft 365 subscriptions.
What other companies are using Anthropic’s AI models?
Amazon, Anthropic’s investor and a major customer of the company’s Claude model, did not immediately respond to a request for comment outside regular business hours.
Palantir’s Maven Intelligent Systems, a software platform that provides intelligence analysis and weapons targeting to militaries, uses prompts and workflows built with Anthropic’s Claude, Reuters previously reported.
Pentagon puts Anthropic on ban list
The Pentagon on Thursday imposed a formal supply chain risk designation on artificial intelligence lab Anthropic, limiting the use of a technology that Reuters reported was being used by the United States in its war with Iran.
The “supply chain risk” label, which Anthropic confirmed in a statement, takes effect immediately and prohibits government contractors from using Anthropic’s technology in their work for the U.S. military.
However, Anthropic CEO Dario Amodei wrote in his statement that companies can still use the Claude AI models in projects unrelated to the Pentagon. He said the designation had a “narrow scope” and that the restrictions apply only to the use of Anthropic’s AI in Pentagon contracts.
“This expressly applies only to use of Claude by customers as a direct part of contracts with the War Department; it does not apply to all uses of Claude by customers with such contracts.”
The designation came after a months-long dispute over the company’s insistence on security measures that the Defense Department said went too far. Amodei reiterated in his statement that the company would challenge the designation in court.
The action represented an extraordinary U.S. rebuke of an American technology company that began working with the Pentagon before its rivals did. It comes as the department continues to rely on Anthropic’s technology to support military operations, including in Iran, Reuters reported, citing a person familiar with the matter.