Regulators struggle to keep up with the fast-moving and complicated landscape of AI therapy apps

In the absence of stronger federal regulation, some states have begun regulating apps that offer AI “therapy” as more people turn to artificial intelligence for mental health advice.
But the laws, all passed this year, don’t fully address the fast-changing landscape of AI software development. And app developers, policymakers and mental health advocates say the resulting patchwork of state laws isn’t enough to protect users or hold the creators of harmful technology accountable.
“The reality is millions of people are using these tools and they’re not going back,” said Karin Andrea Stephan, CEO and co-founder of the mental health chatbot app Earkick.
___
EDITOR’S NOTE – This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988. There is also an online chat at 988lifeline.org.
___
The state laws take different approaches. Illinois and Nevada have banned the use of AI to treat mental health. Utah placed certain limits on therapy chatbots, including requiring them to protect users’ health information and to clearly disclose that the chatbot isn’t human. Pennsylvania, New Jersey and California are also considering ways to regulate AI therapy.
The impact on users varies. Some apps have blocked access in states with bans. Others say they are making no changes as they wait for more legal clarity.
And most of the laws don’t cover generic chatbots like ChatGPT, which are not explicitly marketed for therapy but are used by untold numbers of people for it. Those bots have drawn lawsuits over horrific instances in which users lost their grip on reality or took their own lives after interacting with them.
Vaile Wright, who oversees health care innovation at the American Psychological Association, agreed that the apps could fill a need, pointing to a nationwide shortage of mental health providers, the high cost of care and uneven access for insured patients.
Mental health chatbots that are rooted in science, created with expert input and monitored by humans could change that landscape, Wright said.
“This could be something that helps people before they get to crisis,” she said. “That’s not what’s on the commercial market currently.”
That is why federal regulation and oversight are needed, she said.
Earlier this month, the Federal Trade Commission announced it was opening inquiries into seven AI chatbot companies – including the parent companies of Instagram and Facebook, Google, ChatGPT, Grok (the chatbot on X), Character.AI and Snapchat – on how they “measure, test and monitor potentially negative impacts of this technology on children and teens.” And the Food and Drug Administration is convening an advisory committee on Nov. 6 to review generative AI mental health devices.
Federal agencies could consider restrictions on how chatbots are marketed, limit addictive practices, require disclosures to users that they are not medical providers, require companies to track and report suicidal thoughts, and offer legal protections for people who report bad practices by companies.
Not all apps have blocked access
From “companion apps” to “AI therapists” to “mental wellness” apps, AI’s use in mental health care is varied and hard to define, let alone write laws around.
That has led to different regulatory approaches. Some states, for example, exempt companion apps that are designed only for friendship and don’t wade into mental health care. The laws in Illinois and Nevada ban products that claim to provide mental health treatment outright, with fines of up to $10,000 in Illinois and up to $15,000 in Nevada.
But even a single app can be tough to categorize.
Earkick’s Stephan said there is still a lot that is “very muddy” about Illinois’ law, for example, and the company has not restricted access there.
Stephan and her team initially held off on calling their chatbot, which looks like a cartoon panda, a therapist. But when users began using the word in reviews, they embraced the terminology so the app would show up in searches.
Last week, they backed away from using therapy and medical terms again. Earkick’s website had described its chatbot as an “empathetic AI counselor, equipped to support your mental health journey,” but it is now a “chatbot for self care.”
Still, Stephan maintained, “we are not diagnosing.”
Users can set up a “panic button” to call a trusted loved one if they are in crisis, and the chatbot will “nudge” them to seek out a therapist if their mental health worsens. But it was never designed to be a suicide prevention app, Stephan said, and police would not be called if someone told the bot about thoughts of self-harm.
Stephan said she is happy that people are looking at AI with a critical eye, but worried about states’ ability to keep up with the pace of innovation.
“The speed at which everything is evolving is massive,” she said.
Other apps blocked access right away. When Illinois users download the AI therapy app Ash, a message urges them to email their legislators, arguing that “misguided legislation” has banned apps like Ash while leaving unregulated chatbots untouched.
A spokesperson for Ash did not respond to multiple requests for an interview.
Mario Treto Jr., secretary of the Illinois Department of Financial and Professional Regulation, said the goal is ultimately to make sure that licensed therapists are the only ones doing therapy.
“Therapy is more than just word exchanges,” Treto said. “It requires empathy, it requires clinical judgment, it requires ethical responsibility, none of which AI can truly replicate right now.”
One chatbot company is trying to fully replicate therapy
In March, a Dartmouth University-based team published the first known randomized clinical trial of a generative AI chatbot for mental health treatment.
The goal was for the chatbot, called Therabot, to treat people diagnosed with anxiety, depression or eating disorders. It was trained on vignettes and transcripts written by the team to illustrate evidence-based responses.
The study found users rated Therabot similarly to a therapist and had meaningfully lower symptoms after eight weeks compared with people who didn’t use it. Every interaction was monitored by a human who intervened if the chatbot’s response was harmful or not evidence-based.
Nicholas Jacobson, the clinical psychologist whose lab is leading the research, said the results showed early promise, but that larger studies are needed to demonstrate whether Therabot works for large numbers of people.
“The space is so dramatically new that I think the field needs to proceed with much greater caution than is happening right now,” he said.
Many AI apps are optimized for engagement and are built to affirm everything users say, rather than challenging people’s thoughts the way therapists do. Many walk the line between companionship and therapy, blurring intimacy boundaries that therapists ethically would not.
The Therabot team sought to avoid those problems.
The app is still in testing and not widely available. But Jacobson worries about what strict bans will mean for developers taking a careful approach. He noted that Illinois offers no clear pathway for providing evidence that an app is safe and effective.
“They want to protect people, but the traditional system is really failing them,” he said. “So trying to stick with the status quo is really not the thing to do.”
Regulators and advocates of the laws say they are open to changes. But today’s chatbots are not a solution to the mental health provider shortage, said Kyle Hillman, who lobbied for the bills in Illinois and Nevada through his affiliation with the National Association of Social Workers.
“Not everybody who’s feeling sad needs a therapist,” he said. But for people with real mental health issues or suicidal thoughts, telling them “I know that there’s a workforce shortage, but here’s a bot” is, he said, “such a privileged position.”
___
The Associated Press Health and Science Department receives support from the Howard Hughes Medical Institute’s Department of Science Education and the Robert Wood Johnson Foundation. The AP is solely responsible for all content.



