Overview
- Anthropic filed two challenges on Monday, one in federal court in California and one in the D.C. Circuit, seeking to vacate and block the Pentagon's supply‑chain risk designation.
- The Defense Department kept the designation in place with a six‑month phase‑out of Claude from military use and required vendors to certify non‑use in defense work, prompting rapid shifts to alternative AI providers.
- At the center of the dispute, Anthropic refused to drop guardrails that bar fully autonomous lethal targeting and mass domestic surveillance, while the Pentagon demanded access for all lawful military purposes.
- Legal scholars say the statute invoked has rarely been used, has not been tested against a U.S. company, and may be vulnerable on statutory, due‑process, and First Amendment grounds given officials’ public statements.
- Anthropic warns the action could cut 2026 revenue by multiple billions of dollars and disrupt operations, even as Claude reportedly remained in use in recent Iran‑related workflows and OpenAI moved quickly to secure a Pentagon deal of its own.