Particle.news

Pentagon Blacklists Anthropic as Supply-Chain Risk After Guardrails Clash

The move asserts the Pentagon’s position that lawful military uses cannot be contractually limited by private AI vendors’ ethics policies.

Overview

  • On March 6, the Defense Department formally designated Anthropic a supply-chain risk under Section 3252, a rare action against a U.S. company that immediately bars defense contractors from using its technology for military work.
  • Anthropic said it will sue to overturn the designation, calling it legally unsupportable and a response to the company’s refusal to permit uses such as mass domestic surveillance or autonomous weapons.
  • Following a White House directive, federal agencies began dropping Anthropic tools, with the State and Health departments instructing staff to shift to alternatives like OpenAI’s ChatGPT or Google’s Gemini.
  • OpenAI signed a Pentagon agreement after Anthropic’s refusal, and CEO Sam Altman told employees the company cannot control how the military ultimately uses its models, prompting internal and public criticism.
  • Industry and market effects mounted: reports said Claude-assisted tools were used in recent Iran operations, a roughly $200 million Pentagon contract for Anthropic hung in the balance, and Sensor Tower data showed U.S. ChatGPT app uninstalls spiking about 295% while Claude downloads rose.