Particle.news

Pentagon Weighs Supply-Chain Risk Label for Anthropic as Relationship Review Intensifies

The Defense Department is pressing for unrestricted AI use, putting it in direct conflict with Anthropic’s usage limits.

Overview

  • The Pentagon has confirmed its relationship with Anthropic is under review, and multiple outlets report senior officials are considering a rare supply‑chain risk designation that would force contractors to cut ties with the company.
  • Anthropic refuses to allow mass domestic surveillance or fully autonomous weapons, while defense leaders want models available for “all lawful uses” without vendor guardrails.
  • Rival labs OpenAI, Google, and xAI have shown greater flexibility, with those companies allowing all lawful uses on unclassified systems and one reportedly accepting that standard across all systems, according to a senior DoD official.
  • Claude is reported to be the only AI model operating on Pentagon classified networks, under a contract valued at up to $200 million, and it was reportedly used via Palantir in January’s operation targeting Nicolás Maduro, complicating any disentanglement.
  • Palantir’s role as an intermediary is central to the dispute: a reported inquiry about Claude’s operational use prompted Pentagon concerns that Anthropic might resist certain missions, a characterization the company denies even as it touts ongoing “productive” talks.