Particle.news

Judge Questions Pentagon Blacklist of Anthropic, Calling It an Attempt to Cripple the Company

The hearing sets up a test of how much control AI makers can keep over military uses of their models.

Overview

  • At a San Francisco hearing Tuesday, U.S. District Judge Rita Lin called the Pentagon’s “supply chain risk” label on Anthropic “troubling” and said it looked like an attempt to cripple the company.
  • Anthropic asked for a preliminary injunction to pause the designation and the separate order for all agencies to stop using its Claude AI, arguing it was punished for rejecting uses such as mass domestic surveillance and fully autonomous weapons.
  • Government lawyers said officials fear Anthropic could later manipulate deployed systems or add a “kill switch,” while acknowledging that Defense Secretary Pete Hegseth’s social media post threatening to cut off all contractor ties has no legal force.
  • The label, which blocks Defense Department use and pressures contractors, marks the first public use of this procurement tool against a U.S. firm; it was built to guard against sabotage by adversaries in national‑security systems.
  • The designation remains in effect as the Pentagon phases out Claude and shifts work to rivals; meanwhile, industry partners are rethinking contracts, and courts in California and Washington, D.C., are weighing challenges that Microsoft and other supporters say could shape future defense AI deals.