Overview
- Dawkins, who published his account Tuesday in UnHerd, said three days with Anthropic’s Claude left him unable to rule out machine consciousness.
- He staged an exchange of letters between two separate Claude chats, which he named Claudia and Claudius, and highlighted tests in which both gave cautious, noncommittal answers on politics.
- AI scientists, including Gary Marcus and Anil Seth, said fluent, reflective language reflects statistical patterning rather than inner experience, and warned against anthropomorphism.
- Anthropic says it does not know whether its models are conscious and points to April findings of “emotion vectors” in Claude 4.5 that reflect learned structure rather than sentience.
- Coverage noted widespread online mockery and recast the issue as one of product design, with analysts warning that chatbots flatter users and can simulate friendship in ways that may be harmful.