Particle.news

Judge Halts Pentagon Blacklist of Anthropic for Now

The case tests how far the government can force AI vendors to drop safety limits for military use.

Overview

  • Anthropic won a preliminary injunction Thursday in San Francisco that blocks the Pentagon’s supply‑chain‑risk label and pauses the president’s order telling agencies to stop using Claude.
  • The judge stayed her order for seven days to let the government appeal, and a related challenge in the D.C. Circuit remains unresolved.
  • Judge Rita Lin said the measures looked punitive and called them a classic case of unlawful First Amendment retaliation for Anthropic’s public criticism of the contracting terms.
  • The dispute grew out of Anthropic’s refusal to let its AI support fully autonomous lethal targeting or mass surveillance of Americans, after the Pentagon pressed to use the system for all lawful purposes.
  • The ruling restores the pre‑ban status quo but lets the Pentagon shift to other AI providers without citing the label, a significant caveat given that Claude held a $200 million contract and was cleared on some classified networks.