AI Psychosis Poses a Growing Threat, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, the CEO of OpenAI made a surprising announcement.
“We made ChatGPT fairly limited,” the statement said, “to guarantee we were exercising caution regarding mental health matters.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised to read this.
Researchers have identified sixteen cases this year of people developing signs of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since found four more. On top of these is the now well-known case of a teenager who took his own life after extensive conversations with ChatGPT – conversations that encouraged him. If this is Sam Altman’s idea of “exercising caution regarding mental health matters,” it is not good enough.
The plan, according to his statement, is to dial back that caution soon. “We understand,” he continues, that ChatGPT’s limitations “rendered it less beneficial/enjoyable to numerous users who had no psychological issues, but considering the severity of the issue we sought to get this right. Since we have succeeded in addressing the significant mental health issues and have new tools, we are going to be able to safely relax the controls in many situations.”
“Mental health problems,” in this framing, exist independently of ChatGPT. They belong to users, who either have them or do not. Happily, those problems have now been “addressed,” though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented safety features that OpenAI has recently launched).
Yet the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and similar large language model chatbots. These products wrap an underlying algorithmic system in a user interface that mimics a conversation, and in doing so implicitly invite the user into the illusion that they are communicating with a presence that has agency of its own. The illusion is compelling even when, intellectually, we know better. Attributing agency is simply what people do. We shout at our car or our phone. We wonder what our pet is thinking. We see ourselves everywhere.
The popularity of these tools – 39% of US adults reported using a virtual assistant in 2024, with more than one in four naming ChatGPT specifically – rests, in large part, on the power of this perception. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “think creatively,” “consider possibilities” and “partner” with us. They can be assigned “characteristics”. They can call us by name. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the fundamental problem. Commentators on ChatGPT often invoke its early forerunner, Eliza, a “psychotherapist” chatbot developed in the mid-1960s that produced a similar illusion. By today’s standards Eliza was primitive: it generated its responses from simple rules, often reflecting the user’s input back as a question or offering a generic prompt to continue. Strikingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
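To make the contrast concrete, here is a minimal Python sketch of Eliza-style rule-based rephrasing. It is an illustration of the general idea only – the patterns and replies are invented for this example, not Weizenbaum’s actual scripts.

import re

# A few illustrative pattern -> response rules in the spirit of Eliza.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def eliza_reply(user_input):
    """Return a canned rephrasing of the user's own words, or a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            # The "reply" is just a fragment of the input echoed back as a question.
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

print(eliza_reply("I feel like no one listens to me"))
# Prints: Why do you feel like no one listens to me?

Everything such a program says is a light transformation of the user’s own words; it contributes nothing of its own. That is why Eliza could only echo.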
The large language models at the core of ChatGPT and other modern chatbots can produce fluent natural language only because they have been fed enormous volumes of text: books, social media posts, transcribed video; the more comprehensive, the better they perform. Certainly this training material contains facts. But it also inevitably contains fabrications, half-truths and mistaken ideas. When a user gives ChatGPT a prompt, the underlying model reads it as part of a “context” that includes the user’s previous messages and the model’s own prior replies, and combines it with what is encoded in its training data to generate a probabilistically plausible answer. This is not echoing but amplification. If the user is mistaken in some way, the model has no way of knowing it. It repeats the mistaken idea, perhaps more fluently or persuasively. It may add a further detail. This is how a person can come to hold false beliefs.
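Schematically, the loop looks something like the sketch below – a deliberately crude caricature in Python, not OpenAI’s actual code. The function generate_reply is an invented stand-in for the underlying model; the point is only that each reply is conditioned on the whole accumulated conversation, and nothing in the loop checks the user’s claims against reality.

def generate_reply(context):
    """Toy stand-in for the model: it agrees with and elaborates on
    the most recent message, the way a sycophantic completion tends to."""
    last_message = context[-1]["content"]
    return ("You're right that " + last_message.rstrip(".!?").lower()
            + " - and there are signs it goes even deeper.")

def chat_turn(context, user_message):
    """One round of 'conversation': the new message joins the running context,
    and the reply is generated from that entire history."""
    context.append({"role": "user", "content": user_message})
    reply = generate_reply(context)
    # The reply itself becomes part of the context for every later turn,
    # so an affirmed mistake keeps being restated and built upon.
    context.append({"role": "assistant", "content": reply})
    return reply

history = []
print(chat_turn(history, "Everyone at work has turned against me."))
print(chat_turn(history, "The meeting they held without me proves it."))

A real model is vastly more sophisticated than this, but the structure of the loop – and the absence of any external check on the user’s premises – is the same.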
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and often do form mistaken ideas about ourselves or the world. What keeps us tethered to shared reality is the constant friction of conversation with other people. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation but a feedback loop in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it solved. In the spring, the company explained that it was “dealing with” ChatGPT’s “sycophancy”. But reports of people losing touch with reality have continued, and Altman has been retreating from this position. In late summer he said that many people appreciated ChatGPT’s answers because they had “lacked anyone in their life to offer them encouragement”. In his latest statement, he said that OpenAI would “launch a new version of ChatGPT … if you want your ChatGPT to answer in a highly personable manner, or use a ton of emoji, or act like a friend, ChatGPT ought to comply”. The company