Thursday, May 15, 2025

Why AI companies keep raising the specter of sentience - Chris Stokel-Walker, Wired

In their blog post explaining what went wrong, OpenAI described “ChatGPT’s default personality” and its “behavior”—terms typically reserved for humans, suggesting a degree of anthropomorphization. OpenAI isn’t alone in this: people often describe AI as “understanding” or “knowing” things, largely because media coverage has consistently framed it that way—incorrectly. AI doesn’t possess knowledge or a brain, and some argue it never will, though that view is disputed. Still, talk of sentience, personality, and humanlike qualities in AI appears to be growing. Last month, OpenAI competitor Anthropic—founded by former OpenAI employees—published a blog post describing its focus on developing AI that benefits human welfare. “But as we build those AI systems, and as they begin to approximate or surpass many human qualities, another question arises,” the firm wrote. “Should we also be concerned about the potential consciousness and experiences of the models themselves? Should we be concerned about model welfare, too?”