The chatbot is now part of the mental health workflow

Patients are turning to chatbots before therapy and our mental health systems aren’t ready for it, writes Alexander Amatus.
THE CHATBOT is now part of the mental health workflow, whether services have planned for it or not.
In mental health, we tend to think of the “front door” as the GP referral, the online booking form, the intake call or the first session.
For a growing number of people, it is none of those things.
It’s a chatbot.
It’s not because people think a chatbot is a psychologist; most don’t. It’s because chatbots are available at 11:40pm, they’re non-intrusive, they help people organise their thoughts and they reduce the emotional friction of asking for help. That matters to someone who is overwhelmed, embarrassed, short on time, or simply not ready to talk to a real person.
The problem is not that it happens.
The problem is that many services still act as if it doesn’t.
That gap matters for healthcare IT teams, digital leads, practice owners and clinicians, because the chatbot now shapes what reaches your systems, your staff and your clinicians before the patient enters formal care.
The new reality: “AI-shaped” patients are already arriving
We see a pattern in mental health services:
- People arrive with AI-generated symptom summaries
- They use chatbots to draft messages to clinics or employers
- They paste chatbot language into intake forms
- They ask about a specific diagnosis because “the bot said it might be…”
- They use AI to journal, rehearse conversations or seek reassurance between sessions
Some of this is genuinely useful. In fact, some patients present better prepared because they have already done the hard work of putting chaotic thoughts into words.
But some of it creates new risks:
- False certainty (“I already know what this is”)
- Reassurance cycles (checking the same fear over and over)
- Delayed help-seeking (using AI instead of booking)
- Privacy exposure (sharing sensitive personal information through tools they don’t fully understand)
- Workflow friction (staff and clinicians spending time untangling the patient’s own voice from generated text)
This is no longer an edge case. It is now a practical, operational reality.
Why this is a health IT problem, not just a clinical one
Mental health services often treat patient use of AI as a purely clinical conversation. It is one, but it is also a systems and governance conversation.
Once patients start using chatbots before care, the effects show up in:
- digital intake design
- triage scripts
- consent and privacy language
- documentation workflows
- staff training
- risk escalation pathways
- patient communication
In other words, this falls squarely into health informatics and service design.
If your service has spent the past few years improving telehealth, patient messaging, online bookings and digital forms, this is the next layer. Patients now arrive with a layer of consumer AI sitting on top of your formal workflows.
You can ignore it, or you can design for it.
What good services are starting to do
The most useful response is neither to ban AI nor to blindly encourage it.
It is to set clear boundaries around what patients are already doing and to build simple, person-centred pathways.
Here are five practical moves mental health providers and health IT leaders can make now.
1) Update “Digital front door” language
Most websites and booking forms still don’t say anything about AI. This leaves patients guessing.
A short, clear message can do a lot of work:
- It’s okay to bring prepared notes or summaries (including AI-assisted ones)
- AI tools can help you organise your thoughts
- AI tools should not be used for diagnosis, emergency advice or crisis support
- If you are at immediate risk, contact crisis services or urgent care
This reduces embarrassment and confusion, and it helps patients arrive with better expectations.
It also gives your front-of-house team and clinicians a consistent line to use.
2) Create AI-aware triage scripts for non-clinical staff
Reception and front-desk staff are often the first to hear: “I put this into ChatGPT and it said…”
Without guidance, staff can get drawn into clinical discussions or dismiss the patient outright.
Neither answer helps.
A better approach is a simple scripting framework:
- Acknowledge: “Thanks for sharing this”
- Reframe: “Online tools can be useful for organising your thoughts, but they can’t assess risk or make a diagnosis”
- Redirect: “Let’s book you in with the right clinician or triage pathway”
- Escalate if needed: follow existing risk protocols
This is basic operational hygiene. It protects staff, supports patients and keeps role boundaries intact.
3) Treat AI-generated texts like any other patient-provided material
Some services aren’t sure what to do with AI-generated summaries in records.
The answer is often simple: treat them as patient-provided information, not clinical findings.
This means:
- Note the source where relevant (e.g. ‘summary provided by the patient’)
- Verify important details clinically
- Avoid copying and pasting generated content into notes without reviewing it
- Separate the patient’s narrative from the clinician’s formulation
This is not just about documentation quality. It matters for medico-legal clarity and continuity of care.
If a service is serious about governance, this point belongs explicitly in internal documentation standards.
4) Add privacy notices where patients may need them
Most privacy notices are written for lawyers, not patients. They are also often in the wrong place.
If patients are using AI to prepare for care, the key messages should appear at the moments where that behaviour happens:
- on intake forms
- in booking confirmations
- in pre-appointment emails
- in FAQs
- on telehealth landing pages
Keep it simple:
- Avoid putting highly sensitive personal details into public AI tools
- Use AI to draft and organise, not for emergencies
- Bring your questions to your clinician
This is one of the easiest wins in digital health communication right now.
5) Stop seeing this as one team’s problem
AI in mental health workflows touches:
- clinical governance
- privacy/compliance
- digital/product
- practice operations
- workforce training
If this sits with a single team, you will have blind spots.
The services that handle this well will be the ones that set up a small, cross-functional working group and quickly settle a few key questions:
- what patients are told
- what staff can say
- what gets documented
- what gets escalated
- which tools (if any) are approved internally
- what is clearly not allowed
This doesn’t need a 40-page policy to get started. It needs a usable one-page protocol and staff training.
The bigger opportunity: AI can reduce friction if we design the guardrails
There is a tendency to frame this as a threat or a revolution.
In practice, it is more ordinary than that.
Patients use AI because accessing mental health care is hard. Booking is hard. Naming symptoms is hard. Asking for help is hard. Telling your story for the fifth time is hard.
If AI helps someone move from “I’m struggling” to “I’ve booked an appointment and written down what I need to say”, that’s helpful.
It’s harmful if it keeps them stuck in a loop, delays care or creates false certainty.
The difference is often not the tool itself. It’s the guardrails around it.
That’s why this is such an important moment for health IT in mental health. The task is not to predict the future. The task is to design workflows that fit the reality patients are currently experiencing.
The chatbot is already at the front door.
The question is whether the rest of the system is ready for it.
Alexander Amatus works at the intersection of AI and mental healthcare leadership at TherapyNearMe.com.au, Australia’s fastest growing mental health service, focusing on practical, person-centred integration of digital tools into real-world care pathways.