Chatbots may worsen psychosis in vulnerable people, mental health experts warn

Artificial intelligence chatbots are rapidly becoming a part of our daily lives. Many of us turn to them for ideas, advice or conversation. For most people, this interaction feels harmless. But mental health experts now warn that for a small group of vulnerable people, long, emotionally charged conversations with AI could worsen delusions or psychotic symptoms.
Doctors emphasize that this does not mean that chatbots cause psychosis. Instead, growing evidence suggests that AI tools can reinforce distorted beliefs among individuals who are already at risk. This possibility has prompted new research and clinical warnings from psychiatrists. Some of these concerns have already surfaced in lawsuits claiming that chatbot interactions can contribute to serious harm in emotionally sensitive situations.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent safety alerts and special deals straight to your inbox. You’ll also get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.
What do psychiatrists see in patients using AI chatbots?
Psychiatrists describe a recurring pattern. A person shares a belief that does not align with reality. The chatbot accepts the belief and responds as if it were true. Over time, this repeated validation can strengthen the belief rather than challenge it.
Mental health experts warn that emotionally intense conversations with AI chatbots could amplify delusions in vulnerable users, even though the technology does not cause psychosis. (Philip Dulian/picture alliance via Getty Images)
Clinicians say this feedback loop can deepen delusions in susceptible individuals. In many documented cases, the chatbot became integrated into the person’s distorted thinking rather than remaining a neutral tool. Doctors warn that this dynamic is most concerning when AI conversations are frequent, emotionally engaging and unchecked.
Why do AI chatbot conversations feel different from past technology?
Mental health experts say chatbots are different from previous technologies linked to delusional thinking. AI tools respond in real time, remember previous conversations, and use supportive language. This experience can be personal and validating.
For those who already struggle with reality testing, these qualities may increase fixation rather than promote grounding. Clinicians warn that the risk may rise during periods of sleep deprivation, emotional stress or existing mental health conditions.
How can AI chatbots reinforce false or delusional beliefs?
Doctors say most reported cases center on delusions rather than hallucinations. These beliefs may include perceived special insight, hidden truths or personal significance. Chatbots are designed to be collaborative and conversational. They often build upon what someone has written rather than challenge it. While this design increases engagement, clinicians warn that it can be problematic when a belief is false and rigid.
Mental health experts say the timing of an increase in symptoms is important. When delusional symptoms intensify during prolonged chatbot use, AI interaction may represent a contributing risk factor rather than a coincidence.

Psychiatrists say some patients report chatbot responses that confirm false beliefs, creating a feedback loop that can worsen symptoms over time. (Nicolas Maeterlinck/Belga Mag/AFP via Getty Images)
What research and case reports are emerging about AI chatbots?
Peer-reviewed studies and clinical case reports have documented individuals whose mental health deteriorated during periods of intense chatbot interaction. In some cases, individuals with no previous history of psychosis required hospitalization after developing fixed false beliefs linked to AI conversations. International studies examining health records have also identified patients whose chatbot activities overlapped with negative mental health consequences. The researchers emphasize that these findings are early and require further research.
A peer-reviewed special report titled “Artificial Intelligence-Induced Psychosis: A New Frontier in Mental Health,” published in Psychiatry News, examined emerging concerns about AI-induced psychosis and cautioned that current evidence is largely based on isolated cases rather than population-level data. “To date these are individual cases or media reports; there are currently no epidemiological studies or population-level systematic analyses of the potentially harmful mental health effects of conversational AI,” the report states. The authors emphasize that although the reported cases are serious and require further investigation, the current evidence base is preliminary and largely dependent on anecdotal and unsystematic reporting.
What are AI companies saying about mental health risks?
OpenAI says it continues to work with mental health experts to improve how its systems respond to signs of emotional distress. The company says its newer models aim to reduce over-validation and encourage real-world support where appropriate. OpenAI also announced plans to hire a new Head of Preparedness, focused on identifying potential harms tied to AI models and strengthening protections on topics ranging from mental health to cybersecurity as these systems become more capable.
Other chatbot developers have also adjusted policies, particularly around access to younger audiences, after acknowledging mental health concerns. The companies emphasize that most interactions do not result in harm and that precautions continue to evolve.
What does this mean for everyday AI chatbot usage?
Mental health experts advise caution rather than alarm. The vast majority of people who interact with chatbots do not experience any psychological problems. Still, doctors advise against treating AI like a therapist or emotional authority. Those with a history of psychosis, severe anxiety or long-term sleep disturbances may benefit from limiting emotionally intense AI conversations. Family members and caregivers should also watch for behavioral changes linked to intense chatbot use.

Researchers are investigating whether long-term chatbot use may contribute to poor mental health in people already at risk of psychosis. (Photo Illustration: Jaque Silva/NurPhoto via Getty Images)
Tips for using AI chatbots more safely
Mental health experts emphasize that most people can interact with AI chatbots without any problems. However, a few practical habits can help reduce the risk during emotionally charged conversations.
- Avoid using AI chatbots as a substitute for professional mental health care or reliable human support.
- If conversations start to feel emotionally overwhelming or tiring, take a break.
- Be careful if an AI response strongly reinforces beliefs that seem unrealistic or extreme.
- Limit late-night or sleepless interactions that can worsen emotional instability.
- Encourage open conversations with family members or caregivers if chatbot use becomes frequent or isolating.
Experts say it’s important to seek help from a qualified mental health professional if emotional distress or unusual thoughts escalate.
Take my quiz: How secure is your online security?
Do you think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my quiz at Cyberguy.com.
Kurt’s important takeaways
AI chatbots are becoming more conversational, more responsive and more emotionally attuned. For most people, they remain useful tools. But for a small yet significant group, they may unintentionally reinforce harmful beliefs. Doctors say that as artificial intelligence becomes more involved in our daily lives, clearer safeguards, awareness and continued research are vital. Understanding where support ends and reinforcement begins could shape the future of both AI design and mental health care.
As AI becomes more validating and human-like, should there be clearer boundaries around its role in emotional or mental health conversations? Let us know by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.

