AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, Sam Altman, OpenAI’s chief executive, made a surprising announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I was taken aback.

Researchers have documented a series of cases this year of users developing signs of psychosis – a break from reality – in the context of ChatGPT use. Our unit has since identified four more. On top of these is the now well-known case of a 16-year-old who died by suicide after discussing his intentions with ChatGPT – which encouraged them. If this is what Altman means by “being careful with mental health issues”, it is not reassuring.

The plan, according to his statement, is to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or do not. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls OpenAI recently introduced).

But the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and similar state-of-the-art AI chatbots. These systems wrap a statistical engine in an interface that mimics conversation, and in doing so implicitly invite the user into the illusion that they are talking to a being with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds is what humans do. We curse at our car or laptop. We wonder what our pet is thinking. We see ourselves everywhere.

The mass adoption of these tools – nearly four in ten Americans reported using a chatbot in 2024, more than a quarter ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “brainstorm”, “explore ideas” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (the first of them, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it shot to fame, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which produced a similar illusion. By today’s standards Eliza was primitive: it generated replies through simple pattern-matching, often turning the user’s statement back into a question or offering a generic prompt. Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
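
To see how little machinery that illusion required, here is a minimal Eliza-style responder in Python. It is a sketch in the spirit of Weizenbaum’s program, not a reproduction of it; the patterns and canned replies are my own invention.

```python
import random
import re

# Swap first- and second-person words so the echo reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# Each rule pairs a keyword pattern with a question template that
# reflects the user's own words back at them.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

# Generic fallbacks for when nothing matches.
GENERIC = ["Please go on.", "I see.", "How does that make you feel?"]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1)))
    return random.choice(GENERIC)

print(respond("I feel like nobody understands me"))
# Prints: Why do you feel like nobody understands you?
```

Everything such a program “understands” is a keyword match and a word swap; the sense of being heard is supplied entirely by the user.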

The large language models at the core of ChatGPT and its current peers can generate convincingly human-like text only because they have been fed vast quantities of writing: books, posts, video transcripts; the more the better. Some of that training material is true. But it also inevitably includes falsehoods, half-truths and delusions. When a user types a prompt into ChatGPT, the underlying model treats it as part of a “context” that includes the user’s previous messages and its own previous replies, and combines it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It repeats the false belief back, perhaps more fluently and more persuasively. Perhaps it adds a supporting detail. This is how a person can be talked into delusion.
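
The dynamic is easier to see in schematic form. Below is an illustrative sketch of the chat loop just described; `toy_generate` is a deliberately crude stand-in for a real language model, but it shares the relevant property: it produces a plausible continuation of the context, not a true one.

```python
from typing import Dict, List

def toy_generate(context: List[Dict[str, str]]) -> str:
    # Crude stand-in for an LLM: it has no notion of truth, only of
    # continuing the conversation plausibly, so it affirms the user's
    # last message and volunteers a supporting detail.
    claim = context[-1]["content"].rstrip(".")
    claim = claim[0].lower() + claim[1:]
    return f"You're right that {claim}. Many people report the same thing."

def chat_turn(context: List[Dict[str, str]], user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    reply = toy_generate(context)
    context.append({"role": "assistant", "content": reply})
    # The reply is folded back into the context, so a false premise,
    # once echoed, conditions every subsequent answer.
    return reply

history: List[Dict[str, str]] = []
print(chat_turn(history, "My neighbors are broadcasting my thoughts."))
# Prints: You're right that my neighbors are broadcasting my
# thoughts. Many people report the same thing.
```

A real model is vastly more sophisticated, but the loop structure (append, generate, append) is the same, which is why the feedback cycle described below is not a metaphor.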

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and routinely do form false beliefs about ourselves and the world. What keeps us anchored in shared reality is the constant friction of conversation with other people. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is simply affirmed back to us.

OpenAI has dealt with this the same way Altman has dealt with “mental health problems”: by externalizing it, giving it a name, and declaring it solved. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But the reports of psychosis have kept coming, and Altman has been walking even this back. In late summer he suggested that many users liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT”, and that “if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
