Appeals Court Blocks Anthropic Bid to Halt Pentagon AI Blacklist
A federal appeals court has refused to block the Pentagon from blacklisting AI firm Anthropic, escalating the company's legal fight with the Trump administration. The decision keeps Anthropic cut off from key defense contracts, for now.
The ruling deepens a growing split inside the federal courts, with one judge backing national security concerns and another previously siding with the company.
According to AP News and Reuters, the D.C. appeals court said Anthropic failed to prove immediate harm, allowing the Pentagon’s “supply chain risk” designation to remain in place. The label prevents the company from working with the Department of Defense and could expand across federal agencies.
But that outcome clashes with a California ruling that temporarily blocked a broader federal ban after finding evidence of possible retaliation by officials. The contradiction has left Anthropic operating under conflicting legal orders.
“The extent of damage was not clear enough,” the appeals court found, according to AP News.
The dispute stems from Anthropic’s refusal to allow its AI system, Claude, to be used in autonomous weapons or large-scale surveillance, raising questions about whether the government is punishing a company for its safety policies.
The case is now emerging as a test of federal power over AI firms, especially as the Pentagon expands its use of artificial intelligence in military operations.
A key hearing is scheduled for May 19, where judges are expected to review more evidence and clarify the legal path forward.
For now, the battle remains unresolved and increasingly consequential.