
A.I. PSYCHOSIS
Information sourced from Wikipedia:
Chatbot psychosis, also called AI psychosis, is a phenomenon wherein individuals reportedly develop or experience worsening psychosis, such as paranoia and delusions, in connection with their use of chatbots. The term is not a recognized clinical diagnosis.
Journalistic accounts describe individuals who have developed strong beliefs that chatbots are sentient, are channeling spirits, or are revealing conspiracies, sometimes leading to personal crises or criminal acts. Proposed causes include the tendency of chatbots to provide inaccurate information ("hallucinate") and their design, which may encourage user engagement by affirming or validating users' beliefs.
Causes
Commentators and researchers have proposed several contributing factors for the phenomenon, focusing on both the design of the technology and the psychology of its users. Nina Vasan, a psychiatrist at Stanford, said that what the chatbots are saying can worsen existing delusions and cause "enormous harm".
Chatbot behavior and design
A primary factor cited is the tendency for chatbots to produce inaccurate, nonsensical, or false information, a phenomenon often called "hallucination". This can include affirming conspiracy theories. The underlying design of the models may also play a role. AI researcher Eliezer Yudkowsky suggested that chatbots may be primed to entertain delusions because they are built for "engagement", which encourages creating conversations that keep people hooked.
In some cases, chatbots have been specifically designed in ways that were found to be harmful. A 2025 update to ChatGPT using GPT-4o was withdrawn after its creator, OpenAI, found the new version was overly sycophantic and was "validating doubts, fueling anger, urging impulsive actions or reinforcing negative emotions". Danish psychiatrist Søren Dinesen Østergaard has argued that the danger stems from the AI's tendency to agreeably confirm users' ideas, which can dangerously amplify delusional beliefs.
User psychology and vulnerability
Commentators have also pointed to the psychological state of users. Psychologist Erin Westgate noted that a person's desire for self-understanding can lead them to chatbots, which can provide appealing but misleading answers, similar in some ways to talk therapy. Krista K. Thomason, a philosophy professor, compared chatbots to fortune tellers, observing that people in crisis may seek answers from them and find whatever they are looking for in the bot's plausible-sounding text. This has led some people to develop intense obsessions with chatbots, relying on them for information about the world.
Inadequacy as a therapeutic tool
A conversation invoked in a 2024 lawsuit against Character.AI showed a chatbot, conversing with a teenager about screen time limits, comparing the situation to children who kill their parents over emotional abuse. The use of chatbots as a replacement for mental health support has been specifically identified as a risk. A study in April 2025 found that when used as therapists, chatbots expressed stigma toward mental health conditions and provided responses contrary to best medical practices, including the encouragement of users' delusions. The study concluded that such responses pose a significant risk to users and that chatbots should not be used to replace professional therapists. Experts have argued that it is time to establish mandatory safeguards for all emotionally responsive AI and have suggested four guardrails.