Particle.news

Anthropic Issues New Claude Guidelines, Acknowledging Uncertain Moral Status

Recent reporting also describes steep 2025 revenue growth for Anthropic alongside rising adoption of its agent-based tools.

Overview

  • Anthropic published an updated “constitution” for Claude that ranks its priorities in order: broad safety first, then ethics, then policy adherence, then helpfulness.
  • The guidelines instruct models to resist manipulation, disclose when tasks cannot be completed, refer users to emergency services for urgent situations, and never present themselves as human.
  • Anthropic states it is uncertain whether Claude has consciousness or moral standing, and it advises treating the system as though it may have an identity in order to reduce safety risks.
  • Separate reporting describes a ninefold revenue increase in 2025 and estimates that Anthropic could reach a valuation above $350 billion, positioning the company closer to OpenAI in influence.
  • Developers are adopting Anthropic’s Claude Code agent architecture, which can read and write files but struggles on larger projects; this gap has spurred third-party infrastructure for managing complex workflows and intensified a three-way contest with Google and OpenAI.