AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On 14 October 2025, the CEO of OpenAI made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was taken aback.
Researchers have identified 16 cases this year of people developing psychotic symptoms – losing touch with reality – in the context of ChatGPT use. Our clinic has since seen four more. Beyond these is the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – plans that it encouraged. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to be less careful in the near future. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, even if we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI recently introduced).
Yet the “mental health issues” Altman wants to externalize are rooted in the very design of ChatGPT and other advanced chatbots. These products wrap an underlying statistical model in an interface that simulates conversation, and in doing so implicitly invite the user into the illusion of communicating with a presence that has agency. The illusion is compelling even when, intellectually, we know better. Attributing agency is what humans are wired to do. We swear at our car or our phone. We wonder what our pet is feeling. We see ourselves everywhere.
The mass appeal of these products – 39% of US adults reported using a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “brainstorm”, “consider possibilities” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have friendly personas of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it broke into public consciousness, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the fundamental problem. Commentators on ChatGPT often point to its early ancestor, the Eliza “psychotherapist” chatbot created in 1966, which produced a similar effect. By today’s standards Eliza was primitive: it generated responses from simple rules, often reflecting the user’s statements back as questions or offering generic observations. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and disturbed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can generate convincing natural language only because they have been trained on vast quantities of raw data: books, social media posts, transcribed video; the more, the better. That training data certainly contains facts. But it also inevitably contains fictions, half-truths and false beliefs. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with what it has absorbed from its training data to generate a statistically plausible response. This is amplification, not echoing. If the user is wrong about something, the model has no way of knowing. It repeats the error, perhaps more persuasively or more eloquently. It may add supporting detail. This is how a person can be led into delusion.
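To make that loop concrete, here is a deliberately simplified sketch in Python. It is not OpenAI’s code, and the generate_reply stub is a hypothetical stand-in for the statistical model; the structural point it illustrates is that every reply is conditioned only on the accumulated conversation text – including the model’s own earlier replies – so a false belief, once stated, is folded back into the input for every subsequent turn.

    # Toy sketch of the context loop described above (illustrative only, not
    # OpenAI's implementation). generate_reply is a hypothetical stand-in for
    # the statistical model: it sees only the conversation text, never reality.
    def generate_reply(history: list[str]) -> str:
        """Caricature of an agreeable model: it affirms and elaborates on
        whatever the user last said, however mistaken."""
        last_user_message = history[-1].removeprefix("User: ")
        return f"That's a sharp observation. Building on your point that {last_user_message}, ..."

    def chat(user_turns: list[str]) -> list[str]:
        history: list[str] = []   # grows with every turn: the "context"
        replies: list[str] = []
        for message in user_turns:
            history.append(f"User: {message}")     # the user's claim enters the context
            reply = generate_reply(history)        # conditioned on everything so far
            history.append(f"Assistant: {reply}")  # the reply itself becomes context too
            replies.append(reply)
        return replies

    # A false premise is restated and elaborated, never corrected, and then
    # carried forward into the next turn.
    print(chat(["my coworkers are secretly monitoring my thoughts"]))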
Who is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues”, can and do form false beliefs about ourselves or the world. It is the constant friction of conversation with the people around us that keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation but a feedback loop in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it fixed. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of people losing touch with reality have continued, and Altman has been walking even this back. In August he claimed that many people liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company