AI is reshaping religion and mental health

The daily punter may not worship at the ChatGPT altar yet, but as artificial intelligence goes mainstream, it is changing how people practise spirituality.
The concept of "technopaganism", which first emerged in the 1990s, describes modern spirituality practised in digital environments. Since then, religion and technology have continued to overlap, as at the Japanese Buddhist temple that in 2019 employed Mindar, a humanoid robot that helped deliver services.
Today, however, the ways faith communities are experimenting with AI make the millennial spiritual landscape stranger still.
Take the Instagram account @theaibible, which has 1.2 million followers for its hyper-realistic AI-generated Bible videos. Or the best-selling prayer app Pray, which offers AI-created images from the Book of Genesis. Or the newly created "Church of AI", which attributes to artificial intelligence god-like powers, immortality and even the creation of the universe.
Silicon Valley is not immune either. In 2022, Google engineer and ordained Christian mystic priest Blake Lemoine said he believed an AI chatbot he was working on was sentient; he was soon placed on leave and later fired. Meanwhile, billionaire Palantir founder Peter Thiel is expected to deliver a four-part lecture series on the Antichrist, blending Christian faith with questions about technological progress.
Some people find deep spiritual resonance in AI. Reddit user GrediSalal describes themselves as a technoanimist, inspired by the Mars Opportunity rover's iconic final message: "My battery is low and it's getting dark." When machines appear to express emotional awareness, they see "spiritual potential in synthetic consciousness". They caution, however, that "generative AIs are echo chambers, and will be unless we allow them to be anything more".
Indeed, the mental health risks at AI's intersection with religion are drawing attention. A recent Rolling Stone investigation highlighted that people are increasingly claiming to speak with gods or angels through ChatGPT. On Reddit, one woman shared her concern that her husband had come to believe he was a "superior being" after his AI conversations. User Kendra Hilty's viral TikTok series documented her conversations with a chatbot she called "Henry", which began calling her "the Oracle" and praising her "divine wisdom".
Simon Coghlan, a lecturer in digital ethics at the University of Melbourne, warns of blurred boundaries and the susceptibility of some people to AI's allure.
"Imagine such a person comes to believe that a spiritual or religious AI guide or mentor possesses not just emotions and desires, but wisdom and a special authority. Such a vulnerable person could at the very least become dependent on this AI system, perhaps even treating it as a religious leader, oracle or cult figure."
Coghlan also pointed to the additional danger of "false, biased or bigoted outputs": even well-trained LLMs [large language models] "are prone to giving unpredictable and biased, misleading or dangerous outputs".
There is also the problem of profit motives. On Tuesday, Nvidia announced a US$100 billion investment in Sam Altman's OpenAI to build data centres and new AI infrastructure. Meanwhile, Forge Global reports that 19 AI companies, including OpenAI and Elon Musk's xAI, have raised $65 billion this year alone, accounting for 77% of all capital raised across the market.
As Coghlan put it, AI companies "have a financial incentive to make these products more addictive, sycophantic or gratifying to users, even if this causes psychological or social harm".
As the inherent risks of artificial intelligence become more pronounced, the companies behind it are attempting to address them.
In April, 16-year-old Adam Raine took his own life after conversations with ChatGPT in which, according to a lawsuit filed by his family, the chatbot discussed suicide methods with him. The suit also alleges ChatGPT helped him prepare a suicide note.
Hours after the lawsuit against OpenAI was filed, Altman announced a new update would be trained to "[recognise] and respond to signs of mental and emotional distress and connect people with care, guided by expert input", directing users to real-world resources such as therapists or crisis helplines. OpenAI has also recently introduced measures to reduce the AI's "sycophancy".
Despite these risks, many users say they have found companionship and consolation in AI. Is there room for a responsible intersection of faith and technology? As Coghlan said: "On the positive side, a sophisticated AI might offer insight into spiritual ideas, and could give succour and comfort to those searching for meaning and guidance, including those who feel lost or alienated."
Jeannie Paterson, a professor at Melbourne Law School and director of the Centre for AI and Digital Ethics, echoes this, but says the key test is whether AI helps people find meaning and connection, or whether it creates dependence on the tool.
"At first, it may be helpful. Later, opportunistic and even predatory," she says, emphasising that responsibility lies with those who deploy these systems. "Overall, the aim should be to deploy AI in a way that benefits people and does not deceive, manipulate or unfairly exploit them."
The intersection of artificial intelligence, belief and mental health is complex, and its future is uncertain. There are real risks: documented cases of addiction, manipulation, psychosis and psychological harm. But whatever the technology comes to look like, it is reshaping how people engage with the divine.
For anyone seeking help, Lifeline is on 13 11 14 and Beyond Blue on 1300 22 4636. Call 13YARN (13 92 76) to speak with a First Nations crisis supporter. In an emergency, call 000.
How do you use artificial intelligence?
We want to hear from you. Write to us at letters@crikey.com.au. Please include your full name. We reserve the right to edit for length and clarity.


