Particle.news

Pentagon Labels Anthropic a Supply-Chain Risk and Expands OpenAI Deal as Backlash Mounts

The dispute now tests whether the Pentagon or AI vendors will define the limits on military uses of AI.

Overview

  • After an ultimatum and a presidential order halting federal use of its tools, Anthropic was formally designated a supply‑chain risk as OpenAI signed an expanded Pentagon contract on February 28.
  • OpenAI robotics lead Caitlin Kalinowski resigned, saying the deal was rushed and warning against domestic surveillance without judicial oversight and lethal autonomy without human authorization.
  • Anthropic CEO Dario Amodei says the company will challenge the designation in court, while the Pentagon plans to keep using Anthropic during a transition of up to six months, reportedly including current operations against Iran.
  • OpenAI moved to clarify contract limits, with Sam Altman stating its systems should not be used for domestic mass surveillance of U.S. persons or by Defense Department intelligence agencies, though analysts note enforcement gaps.
  • The agreement triggered reputational and commercial fallout, including a spike in ChatGPT app uninstalls and a surge in downloads of Anthropic’s Claude, as debate intensifies over AI’s role in targeting and force decisions.