US military used Anthropic’s AI model Claude in Venezuela raid, report says

Claude, the artificial intelligence model developed by Anthropic, was used during the US military’s operation to capture Nicolás Maduro in Venezuela, the Wall Street Journal reported on Saturday, in a high-profile example of how the US defense department is deploying artificial intelligence in its operations.
The US raid on Venezuela included the bombing of the capital Caracas and the killing of 83 people, according to the Venezuelan defense ministry. Anthropic’s terms of use prohibit the use of Claude for violent purposes, weapons development, or surveillance.
Claude is the first AI model known to have been used by the US department of defense in a covert operation. It was unclear how the model, which has capabilities ranging from processing PDFs to writing code, was deployed.
A spokesperson for Anthropic declined to comment on whether Claude was used in the operation, but said any use of the AI tool must comply with its usage policies. The US defense department did not comment on the report.
The WSJ cited unnamed sources who said Claude was accessed through Anthropic’s partnership with Palantir Technologies, a contractor that works with the US department of defense and federal law enforcement agencies. Palantir declined to comment.
The US and other militaries are increasingly incorporating AI into their arsenals. The Israeli military has used drones with autonomous capabilities in Gaza and has made extensive use of artificial intelligence to generate targets there. The US military has used AI-assisted targeting in strikes in Iraq and Syria in recent years.
Critics have warned against the use of artificial intelligence in weapons technologies and the deployment of autonomous weapons systems, pointing to targeting errors that arise when computers help determine who should and should not be killed.
AI companies are grappling with how their technologies should interact with the defense sector. Dario Amodei, Anthropic’s CEO, has called for regulation to prevent harm from the deployment of AI, and has expressed caution about the use of artificial intelligence in autonomous lethal operations and in domestic surveillance in the United States.
This more cautious stance has reportedly irritated the US defense department; secretary of war Pete Hegseth said in January that the department “will not be using artificial intelligence models that will not allow you to engage in wars.”
The Pentagon announced in January that it would work with xAI, Elon Musk’s AI company. The defense department also uses a custom version of Google’s Gemini, as well as OpenAI systems, to support research.




