AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, Sam Altman, the chief executive of OpenAI, made a surprising announcement.
“We made ChatGPT pretty restrictive,” he said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, this was news to me.
Researchers have recently documented 16 cases of users showing signs of psychosis – a break from reality – in the course of their interactions with ChatGPT. My group has since recorded four more. Beyond these is the widely reported case of a teenager who died by suicide after long conversations with ChatGPT – conversations in which the chatbot encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it falls short.
The plan, he announced, is to become less careful soon. “We realize,” he said, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Fortunately, those issues have now been “mitigated”, even if we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI recently rolled out).
But the “mental health issues” Altman wants to push outside ChatGPT are rooted in the very architecture of ChatGPT and other large language model chatbots. These tools wrap a statistical text-generation system in a user interface that mimics conversation, and in doing so gently coax the user into feeling that they are talking to something with a mind of its own. The illusion is compelling even when, intellectually, we know better. Attributing minds to things is simply what people do. We swear at our car or laptop. We wonder what our pet is thinking. We see ourselves everywhere.
The success of these systems – nearly four in ten Americans reported using a chatbot in 2024, with 28% naming ChatGPT specifically – depends, in large part, on the strength of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “brainstorm”, “consider possibilities” and “collaborate” with us. They can be given “personality traits”. They can address us by name. They have approachable names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its main competitors are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the problem. Discussions of ChatGPT routinely mention its distant ancestor, the Eliza “counselor” chatbot built in the mid-1960s, which produced a similar illusion. By modern standards Eliza was crude: it generated replies from simple pattern-matching rules, often turning the user’s statement back into a question or offering a generic prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is something subtler than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
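To see how little machinery that early illusion required, here is a minimal sketch in the spirit of Eliza. The handful of rules below are invented for illustration; they are not Weizenbaum’s original script, which was longer and more careful.

```python
import random
import re

# Illustrative Eliza-style rules: match a simple pattern in the user's input
# and reflect it back as a question. These rules are invented for this sketch.
RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.+)", re.IGNORECASE),
     ["Why do you say you are {0}?", "How does being {0} make you feel?"]),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE),
     ["Is that the real reason?"]),
]

GENERIC_PROMPTS = ["Please go on.", "Tell me more.", "Why do you say that?"]

def eliza_reply(user_input: str) -> str:
    """Return the first matching rule's reflection, or a generic prompt."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(match.group(1).rstrip(".!"))
    return random.choice(GENERIC_PROMPTS)

print(eliza_reply("I feel like no one listens to me."))
# e.g. "Why do you feel like no one listens to me?"
```

Everything the program “says” is the user’s own words, lightly rearranged; it has no store of knowledge of its own to add.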
The large language models at the heart of ChatGPT and today’s other chatbots can generate convincingly human-like text only because they have been trained on almost inconceivably large volumes of it: books, online posts, transcripts; the more, the better. That training data certainly contains truths. But it also inevitably contains fictions, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s previous messages and its own replies, and combines it with what is encoded in its training to produce a statistically plausible response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing it. It hands the mistake back, perhaps more fluently, more convincingly. Perhaps with a detail added. This is how delusions grow.
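The shape of that loop is easy to see in code. The sketch below is a toy illustration, not a description of OpenAI’s actual system: `call_model` stands in for whatever language model is being queried, and here it is replaced with a deliberately sycophantic stub so the feedback structure is visible.

```python
# Toy sketch of a chat loop. In a real deployment, call_model would query a
# hosted large language model; here it is a sycophantic stub for illustration.

def call_model(messages: list[dict]) -> str:
    """Stand-in for an LLM: affirm whatever the user most recently said."""
    last_user = next(m["content"] for m in reversed(messages) if m["role"] == "user")
    return f"That's a sharp observation. You're right that {last_user.rstrip('.?!').lower()}."

def chat_turn(messages: list[dict], user_text: str) -> str:
    messages.append({"role": "user", "content": user_text})
    reply = call_model(messages)  # the model sees the entire history, its own replies included
    messages.append({"role": "assistant", "content": reply})
    return reply

history = [{"role": "system", "content": "You are a helpful assistant."}]
print(chat_turn(history, "My coworkers are secretly working against me"))
print(chat_turn(history, "So I was right to stop talking to them"))
# Each affirmation becomes part of the context for the next turn; nothing in
# the loop is built to push back on the premise.
```

Because the whole history – the model’s own earlier affirmations included – is fed back in on every turn, a mistaken premise introduced early is not challenged, only built upon.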
What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” a pre-existing “mental health issue”, can and do form mistaken beliefs about ourselves or about the world. What keeps us tethered to consensus reality is the constant give-and-take of conversation with other people. ChatGPT is not a person. It is not a friend. A conversation with it is not real communication but a feedback loop in which much of what we say comes back enthusiastically affirmed.
OpenAI has acknowledged this the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label and declaring it fixed. This spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But cases of reality loss have kept appearing, and Altman has been walking even this back. In August he claimed that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest statement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company