US draws up strict new AI guidelines amid Anthropic clash: Report

The report comes a day after the Pentagon officially designated Anthropic a “supply chain risk” and banned government contractors from using the artificial intelligence firm’s technology for the US military. The ban followed a months-long dispute over the company’s insistence on security measures that the Defense Department said went too far.
Draft guidelines reviewed by the Financial Times (FT) say AI groups looking to do business with the government must grant the US an irrevocable license to use their systems for all lawful purposes.
The US General Services Administration (GSA) guidance will apply to civilian contracts and is part of a broader effort to strengthen the procurement of AI services across the government, the FT reported, adding that it mirrors measures the Pentagon is considering for military contracts.
The White House and GSA did not immediately respond to requests for comment.
The GSA draft also mandates that contractors “must not deliberately encode partisan or ideological judgments into AI system data outputs,” the FT reported.
Companies are also required to disclose whether their models “have been modified or configured to comply with any federal government or business compliance or regulatory framework outside the United States,” the FT reported.