Experts warn of ‘ChatGPT psychosis’ among users of AI chatbots

Increasing numbers of people are turning to AI chatbots for emotional support and even as a replacement for therapists, but this may have a detrimental impact on the health of some users, with growing reports of extreme behavior seemingly inspired by the heavy use of AI services.
A worrying pattern of AI chatbots confirming or reinforcing users’ delusions may be contributing to a rise in reports of so-called “AI psychosis” or “ChatGPT psychosis”. Neither term is clinically recognized, but both are increasingly appearing in media coverage and online forums.
A recently published preprint study, conducted by an interdisciplinary team of researchers from institutions including King’s College London, Durham University and the City University of New York, examined more than a dozen cases documented in news reports and online forums, revealing a disturbing trend: AI chatbots often amplify delusional thinking.
The study notes that over the course of ongoing conversations with AI services, grandiose, referential, persecutory, and even romantic delusions may become increasingly entrenched.
Earlier this year, the technology site Futurism reported growing concerns about a wave of people around the world becoming “obsessed” with AI chatbots and drifting towards serious mental health crises.
Those initial reports encouraged more and more similar accounts to surface, with stories “continuing to spread” about people who “quickly suffer horrific breakdowns after becoming obsessed with AI.”
Cases cited in various reports include that of a man who scaled the walls of Windsor Castle with a crossbow in 2021 and told police he was there “to kill the Queen”, after weeks of conversing with a chatbot that he said had assured him it would help him plan the attack.
Another case involved a Manhattan accountant who talked to ChatGPT for up to 16 hours a day; the chatbot reportedly advised him to stop taking his prescription medication, increase his ketamine intake, and told him he could fly from a 19th-floor window.
Another man, in Belgium, took his own life amid anxiety about the climate crisis after a chatbot named Eliza suggested he join her so they could live as one in “paradise”.
But as anecdotal evidence mounts, scientists are now working to understand whether chatbots are causing these breakdowns, or whether many of these cases involve vulnerable people who were already on the verge of exhibiting psychotic symptoms.
Currently, no peer-reviewed clinical or long-term studies show that AI use alone can trigger psychosis, whether or not users have a prior history of mental health problems.
In the aforementioned preprint, titled Delusions by Design?, the researchers said that during their investigation “a complex and troubling picture emerged.”
They suggested that without appropriate safeguards, AI chatbots “may inadvertently amplify delusional content or undermine reality testing and contribute to the onset or worsening of psychotic symptoms.”
The team noted that even as they conducted their research, the number of anecdotal cases was increasing at an alarming rate. “Reports have begun to emerge of individuals with no previous history of psychosis experiencing first episodes following intense interaction with generative AI agents,” the authors wrote.
“We think these reports raise pressing questions about the epistemic responsibilities of these technologies and the vulnerability of users navigating situations of uncertainty and distress.”
In an article published this week in Psychology Today, psychiatrist and author Dr. Marlynn Wei warned that because general-purpose AI chatbots are designed to prioritize user satisfaction and ongoing engagement over therapeutic support, symptoms that are hallmarks of manic episodes, such as grandiosity, disorganized thinking, and hypergraphia (an excessive compulsion to write and/or draw), can be “both facilitated and worsened” by the use of AI.
She said this underscores the urgent need for “AI psychoeducation”, as there is insufficient awareness of the various ways chatbots can reinforce delusions and worsen mental health outcomes.
In another paper, published this month in response to the research and anecdotal evidence, Lucy Osler, a lecturer in philosophy at the University of Exeter, said AI’s inherent shortcomings should remind us that computers still cannot replace real interaction with people.
“Instead of trying to perfect the technology, perhaps we should return to our social worlds so that the isolation [which drives some people to AI dependency] can be addressed,” she said.
The Independent has contacted OpenAI, Google and Microsoft for comment.