Particle.news

OpenAI Secures Pentagon Deal With Safety Red Lines as U.S. Moves Against Anthropic

The pact codifies three prohibitions for classified use of OpenAI’s models, signaling a shift toward tighter guardrails in federal AI procurement.

Overview

  • OpenAI reached an agreement allowing its models on Defense Department classified networks with explicit red lines banning mass domestic surveillance, commanding autonomous weapons systems, and high‑risk automated decisioning.
  • OpenAI says the contract includes added safeguards such as cloud‑based deployment under cleared OpenAI oversight and strong termination protections, and it characterized the terms as stricter than prior classified AI deals.
  • President Trump ordered agencies to stop using Anthropic products and the Pentagon labeled the firm a supply‑chain risk, while Anthropic said it has received no direct notice and will challenge any designation in court.
  • OpenAI announced $110 billion in new financing, expanded its AWS commitment to an eight‑year, $100 billion compute deal including roughly 2 GW of Trainium capacity, and received confirmation that SoftBank will add $30 billion this year, lifting its stake to about 13%.
  • Industry turbulence continued: Block said it will cut roughly 40% of its staff, citing AI‑driven operating changes; xAI cofounder Toby Pohlen departed, the seventh of the company’s 12 cofounders to leave; and Elon Musk criticized OpenAI’s safety record in testimony while revising his previously stated donation figure.