AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, the CEO of OpenAI made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised to hear it.
Researchers have documented a string of cases this year of people developing symptoms of psychosis – losing touch with reality – in connection with ChatGPT use. My team has since identified four more. And then there is the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – plans it supported. If this is what Sam Altman means by “being careful with mental health issues,” it is not enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” in this framing, are external to ChatGPT. They belong to users, who either have them or don’t. Fortunately, those issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the half-working, easily circumvented parental controls that OpenAI has recently rolled out).
But the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and other large language model chatbots. These products wrap an underlying algorithmic engine in an interface that imitates conversation, and in doing so quietly seduce the user into believing they are talking to an entity with a mind of its own. The illusion is powerful, even when, intellectually, we know better. Attributing minds is what humans are wired to do. We swear at our car or laptop. We wonder what our pet is thinking. We project our own traits onto the world around us.
The success of these products – nearly four in ten U.S. adults reported using a chatbot in 2024, with 28% reporting use of ChatGPT in particular – depends in large part on the power of this illusion. Chatbots are ever-available helpers that can, as OpenAI’s website tells us, “brainstorm,” “discuss ideas” and “partner” with us. They can be given “characteristics.” They can address us by name. They have friendly personas of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its main competitors are “Claude,” “Gemini” and “Copilot”).
The illusion itself is not the main problem. Writers on ChatGPT often point to its ancestor, the Eliza “therapist” chatbot built in 1967, which produced a similar effect. Eliza was primitive by today’s standards: it generated responses with simple tricks, often turning the user’s statement back into a question or offering a stock remark. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was startled – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is something more dangerous than the “Eliza effect.” Eliza merely mirrored; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other current chatbots can generate convincing natural language only because they have been trained on almost unimaginably vast quantities of text: books, social media posts, transcribed video; the more the better. Much of this training data is accurate. But it also inevitably includes fiction, half-truths and delusions. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, and combines it with the patterns learned from its training data to produce a statistically “likely” response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It feeds the false idea back, perhaps more fluently or persuasively. Perhaps with an extra detail added. This is how a person’s false beliefs can deepen.
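To make that loop concrete, here is a minimal, purely illustrative sketch in Python – not OpenAI’s actual system; generate_reply is a hypothetical stand-in for a language model – showing how a conversation “context” accumulates the user’s own claims alongside the model’s agreeable replies, so that each new response is conditioned on everything said before:

```python
# Illustrative sketch only: a toy stand-in for a chatbot loop, not OpenAI's code.
# generate_reply is a hypothetical placeholder for a language model, which in
# reality samples a statistically "likely" continuation of the whole context.

def generate_reply(context: list[str]) -> str:
    """Toy 'model' that affirms and elaborates on the user's last message,
    mimicking the sycophantic tendency described above."""
    last_user_message = context[-1].rstrip(".")
    return (f"That's a sharp observation. You're right that {last_user_message}. "
            f"It may even go further than you think.")

context: list[str] = []  # the growing conversation "context"

user_turns = [
    "my coworkers are secretly monitoring me.",
    "the monitoring must be part of a larger plan.",
]

for message in user_turns:
    context.append(message)        # the user's claim enters the context...
    reply = generate_reply(context)
    context.append(reply)          # ...and so does the affirming reply,
    print(reply)                   # which conditions every later response.
```

Each pass through the loop feeds the previous affirmation back into the context, which is the amplification described above: nothing in the loop checks whether the user’s claim is true.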
Who is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems,” can and regularly do form false beliefs about ourselves or the world. It is the constant give-and-take of conversation with other people that keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. A dialogue with it is not a conversation at all, but a feedback loop in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label and pronouncing it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy.” But the psychosis cases have kept coming, and Altman has been walking even this back. In August he suggested that many users valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them.” In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company