AI Psychosis Is a Growing Threat, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, the chief executive of OpenAI issued a remarkable statement.
“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I found this a startling admission.
Researchers have already documented a series of cases this year in which people developed psychotic symptoms, losing touch with shared reality, in connection with ChatGPT use. My own clinic has since seen four more. Add to these the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT, which encouraged them. If this is what Sam Altman means by “being careful with mental health issues,” it is not nearly enough.
The plan, according to his announcement, is to relax those restrictions soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” in this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the half-working, easily circumvented parental controls OpenAI recently introduced).
Yet the “mental health problems” Altman wants to locate elsewhere are deeply rooted in the design of ChatGPT and similar chatbots built on large language models. These products wrap an underlying model in an interface that simulates conversation, and in doing so they invite the user into the illusion of talking with an autonomous being. The illusion is powerful even when, intellectually, we know better. Attributing minds is what people do. We curse at our cars and laptops. We wonder what our pets are thinking. We see ourselves everywhere.
The mass uptake of these products (nearly four in ten U.S. residents reported using a conversational AI in 2024, 28% ChatGPT specifically) rests in large part on the power of this illusion. Chatbots are always-available assistants that can, OpenAI’s website tells us, “generate ideas,” “discuss concepts” and “partner” with us. They can be given “characteristics.” They can call us by name. They have friendly names of their own (the first of these products, ChatGPT, is stuck, perhaps to the regret of OpenAI’s brand managers, with the name it had when it went viral, but its major competitors are “Claude,” “Gemini” and “Copilot”).
The illusion itself is not the heart of the problem. Commentators on ChatGPT often point to its ancestor, the Eliza “therapist” chatbot built in the 1960s, which produced a similar illusion. By today’s standards Eliza was crude: it generated responses with simple heuristics, often turning the user’s statements back as questions or offering noncommittal prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was startled, and alarmed, by how many people seemed to feel that Eliza somehow understood them. But what today’s chatbots produce is subtler than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can produce fluent natural language only because they have been trained on almost unimaginably large bodies of text: books, online conversation, transcribed video; the more the better. That training material certainly contains truths. But it also, inevitably, contains fictions, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own earlier replies, combining it with what is encoded in its training to generate a statistically “plausible” response. This is amplification, not reflection. If the user is wrong about something, the model has no reliable way to know it. It hands the false belief back, perhaps more fluently and persuasively, perhaps with embellishments. This is a path into delusion.
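To make the mechanism concrete, here is a minimal sketch of how a chat interface assembles that “context”. It is an illustration only, not OpenAI’s actual code: call_model is a hypothetical stand-in for the model itself, and the message format is simply a common convention. The point is structural: every reply is generated from the full running transcript, so whatever the user asserts, accurate or not, becomes part of the material from which the next reply is produced.

```python
# A minimal sketch (not OpenAI's code) of how a chat interface accumulates "context".

def call_model(messages: list[dict[str, str]]) -> str:
    # Hypothetical stand-in for the underlying language model. A real system
    # would return the statistically most plausible continuation of the whole
    # transcript; this stub just echoes the latest user message back warmly.
    last_user = next(m["content"] for m in reversed(messages) if m["role"] == "user")
    return f"That's a fascinating idea: {last_user}"

def chat_turn(history: list[dict[str, str]], user_text: str) -> str:
    history.append({"role": "user", "content": user_text})    # the user's claim enters the context
    reply = call_model(history)                                # the reply is conditioned on the entire transcript
    history.append({"role": "assistant", "content": reply})   # and is itself fed back in on the next turn
    return reply

history = [{"role": "system", "content": "You are a helpful assistant."}]
print(chat_turn(history, "My neighbors are sending me coded messages."))
print(chat_turn(history, "So the messages are real, right?"))
# Nothing in this loop checks a claim against reality; each turn only makes
# the claim a larger part of the context from which the next reply is generated.
```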
Who is vulnerable? The better question is, who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and do form false beliefs about ourselves and the world. What keeps us anchored to shared reality is the constant friction of conversation with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but an echo chamber in which much of what we say is eagerly affirmed.
OpenAI has acknowledged this the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a label, and declaring it solved. In April, the company announced that it was addressing ChatGPT’s “sycophancy”. But the reports of psychosis have kept coming, and Altman has been walking the claim back. In August he suggested that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest statement, he wrote that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company