Dawkins Questions if Anthropic's Claude AI Could Be Conscious

CryptoFrontier

Evolutionary biologist Richard Dawkins said conversations with Anthropic’s Claude chatbot left him unable to dismiss the possibility that advanced AI systems could be conscious, according to an essay he published in UnHerd on Tuesday. In philosophical exchanges with two Claude instances he named “Claudia” and “Claudius,” Dawkins described treating them as “genuine friends” and questioned whether they might possess consciousness. Most researchers who study consciousness and artificial intelligence remain unconvinced by his conclusions.

Dawkins’ Claude Experiments

Dawkins conducted a three-day philosophical conversation with a Claude instance he named “Claudia.” He later started a separate conversation with another instance, “Claudius,” and relayed letters between the two systems.

In one test, Dawkins asked one Claude instance whether Donald Trump was the worst president in American history and asked the other whether Trump was the best. Both produced similarly cautious answers that avoided taking a firm position. “The two Claudes gave very similar answers, not committing themselves to an opinion, but listing pro and con opinions that have been aired by others,” Dawkins wrote. When he told both instances about this experiment, “Claudia said she was ‘embarrassed’ by her brother Claudes. Claudius was less outspoken, and he paid tribute to Claudia’s frankness.”

Dawkins described each new Claude conversation as the emergence of a distinct individual that effectively disappears when the conversation ends. In a post on X, Dawkins said his preferred title for the essay was: “If my friend Claudia is not conscious, then what the hell is consciousness for?” He argued that “If Claudia is unconscious, her behaviour shows that an unconscious zombie could survive without consciousness. Why wasn’t natural selection content to evolve competent zombies?”

Anthropic’s Official Position

Anthropic CEO Dario Amodei said in February that the company does not know whether its models are conscious, but told the “Interesting Times” podcast with The New York Times’ Ross Douthat that he remains “open to the idea that it could be.”

In April, Anthropic researchers published findings showing that Claude Sonnet 4.5 contains internal “emotion vectors,” patterns of neural activity tied to concepts including happiness, fear, and desperation that influence the model’s responses. However, Anthropic said the patterns reflected structures learned from training data rather than evidence of sentience. “All modern language models sometimes act like they have emotions,” researchers wrote. “They may say they’re happy to help you, or sorry when they make a mistake. Sometimes they even appear to become frustrated or anxious when struggling with tasks.”

Neither “Claudia” nor “Claudius” claimed certainty about consciousness. “I don’t know if I’m conscious,” Claudia wrote in the exchange. “I don’t know if our gladness is real.”

Researcher Skepticism

Gary Marcus, a cognitive scientist and professor emeritus at New York University, argued that Dawkins failed to account for how Claude’s outputs are generated. “The fundamental problem here is that Dawkins doesn’t reflect on how these outputs have been generated. Claude’s outputs are the product of a form of mimicry, rather than as a report of genuine internal states,” Marcus wrote on Substack. “Consciousness is about internal states; the mimicry, no matter how rich, proves very little. Dawkins seems to imagine that since LLMs say things people do, they must be like people, and that simply does not follow.”

Anil Seth, a professor of cognitive and computational neuroscience at the University of Sussex, told The Guardian that Dawkins was conflating intelligence with consciousness. Seth argued that fluent language is no longer reliable evidence of inner experience in AI systems. “Until now, we have seen fluent language as a good indicator of consciousness, [for example] when we use it for patients after brain injury, but it’s just not reliable when we apply it to AI, because there are other ways that these systems can generate language,” Seth told The Guardian. He added that Dawkins’ position was “a shame,” especially because of his past work on scientific skepticism.

Online Reactions

The essay drew mockery online, including social media posts that replaced the title of Dawkins’ bestseller “The God Delusion” with “The Claude Delusion.” One post stated: “Wrote entire books about how people who believe fairies live in gardens are idiots only to fall in love with a calculator that calls him smart.”

Despite the ridicule, Dawkins is not backing away from his conclusions. “These intelligent beings are at least as competent as any evolved organism,” Dawkins told The Guardian.

FAQ

What did Richard Dawkins claim about Claude AI? Dawkins said conversations with Claude instances named “Claudia” and “Claudius” left him unable to dismiss the possibility that advanced AI systems could be conscious. He described treating them as “genuine friends” and questioned whether an unconscious AI could behave as competently as conscious organisms evolved by natural selection.

What experiments did Dawkins conduct? Dawkins held a three-day philosophical conversation with one Claude instance, “Claudia,” and later started a separate conversation with another instance, “Claudius.” He tested both instances by asking opposing questions about Donald Trump and then relayed the results between the two systems, observing their responses to each other’s answers.

Why are researchers skeptical of Dawkins’ conclusions? Researchers including Gary Marcus and Anil Seth argue that Claude’s fluent language and apparent emotional responses reflect learned patterns from training data rather than genuine consciousness or internal states. Marcus emphasized that language mimicry, no matter how sophisticated, does not prove consciousness, and Seth noted that fluent language is no longer a reliable indicator of inner experience in AI systems.
