Overview
- Anthropic, which on Wednesday lost its bid for a stay in the D.C. Circuit, remains labeled a Pentagon supply‑chain risk while the appeal moves forward.
- In a separate case, U.S. District Judge Rita Lin in San Francisco on March 26 blocked a different Defense Department order after finding likely unlawful retaliation for the company’s AI‑safety stance.
- The designation bars Anthropic from new Pentagon work and directs defense contractors not to use Claude on military projects. Because it was issued under 41 U.S.C. §4713 with a parallel order under 10 U.S.C. §3252, it marks the first public use of those authorities against a U.S. AI firm and could extend to civilian agencies.
- The government argues that Anthropic’s usage limits and model updates create operational risk and could constrain lawful missions; the company says it was punished for refusing to support mass surveillance and fully autonomous weapons.
- The D.C. Circuit panel set an expedited schedule with oral argument on May 19, a timeline that could shape future defense AI contracting and how vendors set guardrails on their tools.