Pentagon Weighs Supply-Chain Risk Label for Anthropic as AI Use Standoff Deepens

A formal review follows a clash over the Pentagon’s push for “all lawful” military uses and Anthropic’s refusal to drop bans on mass domestic surveillance and fully autonomous weapons.

Overview

  • Chief Pentagon spokesman Sean Parnell confirmed the department is reviewing its relationship with Anthropic; multiple reports say officials are considering a rare “supply chain risk” designation that could force contractors to cut ties with the company.
  • Anthropic’s Claude is the only frontier AI model deployed on classified Pentagon networks under a contract worth up to $200 million, and senior officials acknowledge replacing it would be difficult.
  • Negotiations hinge on usage terms: the Pentagon seeks models usable for all lawful purposes, while Anthropic maintains hard limits on mass surveillance of Americans and on fully autonomous weapons.
  • Reporting says Claude was used via Palantir during January’s operation to seize Nicolás Maduro in Caracas; Anthropic won’t comment on specific missions and disputes an account of a tense post-raid exchange with Palantir.
  • Talks continue as rivals OpenAI, Google, and xAI show more flexibility in unclassified settings, with one reportedly agreeing to broader terms across all its systems; officials warn that a supply-chain label could ripple through Palantir and other private-sector users of Claude.