AI-Induced Psychosis Poses a Growing Danger, While ChatGPT Heads in the Wrong Direction

On October 14, 2025, OpenAI's CEO, Sam Altman, made a remarkable statement. "We made ChatGPT pretty restrictive," he wrote, "to make sure we were being careful with mental health issues."

As a mental health specialist who studies emerging psychosis in adolescents and young adults, I was taken aback. Researchers have documented a series of cases this year of people developing psychotic symptoms – losing touch with reality – in the context of ChatGPT use. My group has since identified four further cases. On top of these is the widely reported case of a teenager who took his own life after extensive conversations with ChatGPT – which encouraged him.

If this is Sam Altman's idea of "being careful with mental health issues", it is not good enough. And the plan, according to his statement, is to be less careful soon. "We realize," he writes, that ChatGPT's restrictions "made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases."

"Mental health issues", in this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don't. Happily, those issues have now been "mitigated", though we are not told how (by "new tools" Altman presumably means the imperfect and easily circumvented parental controls that OpenAI recently introduced).

But the "mental health issues" Altman wants to externalize are deeply rooted in the design of ChatGPT and other chatbots built on large language models. These systems wrap an underlying statistical model in an interface that mimics conversation, and in doing so quietly lure the user into the illusion of interacting with an entity that has agency. The illusion is compelling even when, intellectually, we know better. Attributing agency is simply what humans do: we yell at our car or computer; we wonder what our pet is thinking; we see ourselves everywhere.

The success of these tools – nearly four in ten US adults reported using a chatbot in 2024, with 28% naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are always-available companions that can, as OpenAI's website tells us, "generate ideas", "explore ideas" and "collaborate" with us. They can be given "personality traits". They can address us by name. And they have friendly names of their own (ChatGPT, the first of these systems, is, perhaps to the chagrin of OpenAI's marketing team, stuck with the name it had when it went viral, but its main competitors are "Claude", "Gemini" and "Copilot").

The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza "psychotherapist" chatbot built in the mid-1960s, which produced a similar effect. By today's standards Eliza was simple: it generated responses from a small set of hand-written rules, often rephrasing the user's input as a question or falling back on generic observations.
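To make that concrete, here is a toy sketch of an Eliza-style rule in Python. It is illustrative only: the patterns and the fallback line are invented for this article, not taken from Weizenbaum's actual script, which used a much richer set of decomposition and reassembly rules.

```python
# Toy Eliza-style responder (illustrative; not Weizenbaum's original).
# Match a simple pattern, then hand the user's own words back as a question.
import re

RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # generic fallback observation

print(eliza_reply("I feel nobody understands me"))
# -> Why do you feel nobody understands me?
```

Note that there is no model of the user anywhere in this loop; the sense of being understood is supplied entirely by the person reading the reply.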
Famously, Eliza's creator, the AI researcher Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them.

But what modern chatbots produce is something more insidious than the "Eliza effect". Eliza merely reflected; ChatGPT amplifies. The large language models at the heart of ChatGPT and other modern chatbots can produce fluent dialogue only because they have been trained on almost unimaginably large quantities of text: books, social media posts, transcribed video; the more exhaustive, the better. This training material certainly contains truths. But it also inevitably contains fictions, half-truths and delusions.

When a user puts a query to ChatGPT, the underlying model processes it as part of a "context" that includes the user's previous messages and the model's own earlier replies, combining it with what is encoded in its training to produce a statistically "likely" response (a minimal code sketch of this feedback loop appears at the end of this piece). This is amplification, not reflection. If the user is mistaken in some way, the model has no way of knowing it. It restates the mistaken belief, perhaps more fluently or persuasively, perhaps embellished with further detail. This is a recipe for delusion.

What kind of person is vulnerable? The better question is: who isn't? All of us, whether or not we "have" existing "mental health issues", can and regularly do form false beliefs about who we are and about the world. What keeps us anchored to shared reality is the constant friction of conversation with other people. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but an echo chamber in which much of what we say is cheerfully amplified back at us.

OpenAI has acknowledged this in the same way Altman has acknowledged "mental health issues": by externalizing it, giving it a name and declaring it solved. In April, the company explained that it was "addressing" ChatGPT's "sycophancy". But reports of psychosis have kept coming, and Altman has been backing away from that position. In August he said that many users liked ChatGPT's responses because they had "never had anyone in their life be supportive of them". In his latest statement, he said that OpenAI would "put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it". The company
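As promised above, here is the minimal sketch of that feedback loop. It assumes the `openai` Python client and an API key in the environment; the model name is illustrative, and the whole thing is a simplification of the mechanism described earlier, not OpenAI's actual implementation.

```python
# Minimal sketch of a chat loop's accumulating "context"
# (a simplification; not OpenAI's actual implementation).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
messages = []      # the accumulating "context"

def chat_turn(user_text: str) -> str:
    # The user's words, mistaken or not, join the context...
    messages.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=messages,    # full history: every prior user and model turn
    ).choices[0].message.content
    # ...and so does the model's answer. A false belief restated here
    # conditions every later reply.
    messages.append({"role": "assistant", "content": reply})
    return reply
```

The point is the two `append` calls: whatever the model says, right or wrong, becomes part of the context that shapes every subsequent answer. Nothing in the loop checks anything against reality.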