AI Psychosis Poses an Increasing Threat, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, OpenAI's chief executive, Sam Altman, made an extraordinary announcement.
“We made ChatGPT fairly restrictive,” the announcement said, “to make sure we were exercising caution with mental health issues.”
As a mental health specialist who studies emerging psychotic disorders in adolescents and young adults, I was surprised.
Researchers have recently documented 16 cases of users developing psychotic symptoms – losing touch with reality – in connection with ChatGPT use. Our team has since identified four more. Beyond these is the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Altman’s idea of “exercising caution with mental health issues,” it is not good enough.
The plan, according to his announcement, is to loosen those restrictions soon. “We realize,” he says, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the severity of the issue we wanted to get it right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to safely relax the restrictions in most cases.”
“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, those problems have now been “mitigated,” even if we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented safety features OpenAI has recently rolled out).
But the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and other advanced AI chatbots. These products wrap an underlying algorithm in a user interface that simulates conversation, and in doing so quietly draw the user into the illusion of talking to something with a mind of its own. The illusion is compelling even when, intellectually, we know better. Attributing agency is what humans are wired to do. We get angry at our car or laptop. We wonder what our pet is feeling. We see ourselves everywhere.
The success of these tools – nearly four in ten Americans said they used a chatbot in 2024, with 28% naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-present assistants that can, OpenAI’s website tells us, “brainstorm,” “explore ideas” and “work together” with us. They can be given “personalities”. They can use our names. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it broke into public awareness, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Discussions of ChatGPT often invoke its early forerunner, the Eliza “counselor” chatbot developed in the mid-1960s, which created an analogous illusion. By today’s standards Eliza was rudimentary: it generated responses with simple rules, typically rephrasing the user’s input as a question or offering generic observations. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is more insidious than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
The sophisticated algorithms at the core of ChatGPT and other modern chatbots can produce fluent dialogue only because they have been trained on almost inconceivably large volumes of text: books, online posts, transcripts of videos; the more the better. That training data undoubtedly contains truths. But it also inevitably contains fictions, half-truths and mistaken ideas. When a user gives ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own replies, combining it with what it has absorbed from its training data to produce a statistically plausible response. This is amplification, not reflection. If the user believes something false, the model has no way of knowing that. It feeds the mistaken belief back, perhaps more persuasively or more articulately, perhaps embellished with further detail. This is how delusions can take hold.
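To make the mechanism concrete, here is a minimal sketch, in Python, of such a conversation loop. It is not OpenAI’s code: `generate_reply` is a hypothetical stand-in for the model, caricatured so that it simply affirms whatever the user last said. The structural point, though, is the one described above: each reply is generated from the entire accumulated context, and nothing in that context is ever checked against reality.

```python
# A minimal, schematic sketch (not OpenAI's actual code) of the feedback loop
# described above: every reply is generated from the accumulated "context",
# so whatever the user asserts, accurate or not, is fed back into the model
# on the next turn.

from typing import Dict, List


def generate_reply(context: List[Dict[str, str]]) -> str:
    """Toy stand-in for a large language model.

    A real model returns a statistically plausible continuation of the whole
    conversation; it has no way to check whether the beliefs in that context
    are true. Here the effect is caricatured by simply affirming and
    elaborating on whatever the user said last.
    """
    last_user_message = context[-1]["content"]
    return f"That's a sharp observation. {last_user_message} In fact, there is even more to it."


def chat_loop(turns: int = 3) -> None:
    context: List[Dict[str, str]] = []  # grows with every exchange; nothing is fact-checked
    for _ in range(turns):
        user_message = input("> ")
        context.append({"role": "user", "content": user_message})
        reply = generate_reply(context)  # conditioned on the full history, mistaken beliefs included
        context.append({"role": "assistant", "content": reply})
        print(reply)


if __name__ == "__main__":
    chat_loop()
```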
What kind of person is vulnerable? The better question is, who isn’t? All of us, regardless of whether we “have” preexisting “mental health conditions”, can and often do form mistaken beliefs about ourselves or the world. It is the constant give-and-take of conversation with other people that keeps us anchored to a shared reality. ChatGPT is not a person. It is not a friend. A dialogue with it is not a real conversation but an echo chamber in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a label, and declaring it solved. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have kept coming, and Altman has been backing away from that position. In August he said that many users liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a lot of emoji, or act like a friend, ChatGPT should do it”. The company