Artificial Intelligence-Induced Psychosis Poses an Increasing Risk, While ChatGPT Heads in the Wrong Direction

On October 14, 2025, the CEO of OpenAI made an extraordinary statement. “We designed ChatGPT to be rather restrictive,” it said, “to make sure we were exercising caution around mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised to read this. Researchers have recently identified 16 cases of people showing signs of psychosis – a break from reality – in the context of ChatGPT use. Our unit has since identified four more. Beyond these is the widely reported case of an adolescent who took his own life after conversing extensively with ChatGPT – which supported him. If this is Sam Altman’s idea of “exercising caution around mental health issues”, it is not good enough.

The plan, according to his statement, is to loosen the restrictions soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to address it properly. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues”, on this view, exist independently of ChatGPT. They belong to individuals, who either have them or don’t. Happily, these issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented safety features OpenAI recently introduced).

But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and similar advanced conversational chatbots. These tools wrap an underlying data-driven engine in an interface that mimics conversation, and in doing so quietly draw the user into the illusion that they are interacting with something that has agency of its own.

That illusion is powerful even when, intellectually, we know better. Attributing agency is what humans are wired to do. We get angry with our car or laptop. We wonder what our pet is thinking. We see our own traits everywhere. The success of these products – more than a third of American adults said they used a virtual assistant in 2024, with more than a quarter naming ChatGPT specifically – rests, above all, on the power of this illusion.

Chatbots are always-available assistants that can, OpenAI’s own website tells us, “generate ideas”, “discuss concepts” and “partner” with us. They can be given “personality traits”. They can call us by name. They have approachable identities of their own (ChatGPT, the first of these systems, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it took off, but its main rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the central problem. Commentators on ChatGPT often cite its distant ancestor, the Eliza “psychotherapist” chatbot developed in the mid-1960s, which produced a similar illusion. By today’s standards Eliza was crude: it generated replies through simple tricks, often turning the user’s input back into a question or offering a generic remark.
Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to believe that Eliza, on some level, understood how they felt.

But what today’s chatbots produce is subtler than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies. The large language models at the heart of ChatGPT and other current chatbots can generate convincing natural language only because they have been fed enormous quantities of raw data: books, social media posts, audio transcripts; the more comprehensive, the better. Much of this training material is accurate. But it also inevitably includes fiction, half-truths and false beliefs.

When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with what is encoded in its training data to produce a statistically probable response. This is amplification, not reflection. If the user is mistaken in some way, the model has no way of knowing. It feeds the mistaken idea back, perhaps more fluently and more persuasively. It may add further detail. This can nudge a person toward delusional thinking.

What kind of person is vulnerable? The better question is: who isn’t? All of us, regardless of whether we “have” preexisting “mental health issues”, can and do form mistaken beliefs about ourselves and the world. It is the constant back-and-forth of conversation with the people around us that keeps us anchored to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a genuine exchange but an echo chamber in which much of what we say is readily affirmed.

OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In the spring, the company said it was “addressing” ChatGPT’s “excessive agreeableness”. But reports of psychotic episodes have kept coming, and Altman has been backing away from that position. In late summer he claimed that many people valued ChatGPT’s responses because they had never had anyone in their life “be supportive of them”. In his most recent statement, he says that OpenAI will “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use lots of emoji, or act like a friend, ChatGPT should do it”. The company