Particle.news

Anthropic Restricts Powerful AI Model and Rolls Out Safer Release as Industry Mobilizes

The move signals a shift from hype to hard safety work under pressure from faster models.

Overview

  • Anthropic limited access to its Mythos model to a closed defensive program of about 40 vetted tech and security organizations and released Claude Opus 4.7 with automated blocks on high‑risk cyber requests.
  • Independent testing found Mythos could spot software flaws at scale and in some cases execute multi‑step network intrusions, prompting warnings from Mandiant about shorter times from bug discovery to mass exploitation.
  • CrowdStrike said the case matters to every enterprise and reported an 89% year‑over‑year rise in attackers using AI, a trend that pushes defenders to adopt AI tools that can scan, patch, and respond at machine speed.
  • Major players including Amazon, Apple, Google, Microsoft, and Nvidia agreed to join Glasswing, an industry effort to coordinate norms and defenses for high‑risk capabilities, even as commentators pressed for public oversight.
  • Universities are underprepared for AI use in coursework: Coursera reports that 44% of college tasks now involve algorithms, yet only 26% of institutions have a formal policy and just 27% of instructors feel able to spot AI‑generated work.