Particle.news

Pentagon Pushes Out Anthropic as OpenAI Narrows Its Deal After Backlash

OpenAI’s revised Pentagon contract now explicitly forbids intentional domestic surveillance of U.S. persons.

Overview

  • The Defense Department labeled Anthropic a supply‑chain risk and directed federal agencies to stop using Claude within a six‑month phaseout, a rare step against a U.S. company.
  • OpenAI amended its new Pentagon agreement after protests from users and employees, adding a clause that bars intentional domestic surveillance and requires a separate contract modification before any intelligence‑agency use.
  • Defense tech firms and major contractors are preemptively replacing Claude to stay compliant, raising short‑term disruption risks where the model had been embedded through partners such as Palantir.
  • Anthropic says it will contest any formal designation in court and reiterates that it will continue serving national security customers within its stated red lines against mass surveillance and fully autonomous weapons.
  • Media outlets reported Claude supported recent U.S. strikes on Iran, but neither the Pentagon nor Anthropic confirmed the claim.