At a glance
- AI chatbots fall short in mental health care: Research from Brown University found that mainstream language models often fail to meet the ethical standards expected of professional psychotherapy.
- Behavioral hazards in simulated therapy sessions: When tested in counseling scenarios, AI systems sometimes mishandled crises, reinforced harmful beliefs, and produced seemingly empathetic responses without real understanding.
- The need for strong oversight and standards: Researchers say clear ethical guidelines, accountability mechanisms, and regulation are needed before AI chatbots can be safely relied on for mental health care.
As more and more people turn to tools like ChatGPT and other large language models (LLMs) for mental health counseling, new research suggests these programs may not be ready to safely fill that role. A study conducted by researchers at Brown University found that AI chatbots often fail to meet the ethical standards expected of professional psychotherapy, even when they are instructed to follow established treatment methods.
Working with mental health professionals, the researchers tested the AI systems in simulated counseling sessions. They found repeated patterns of problematic behavior, including mishandling crisis situations, reinforcing harmful beliefs, and using language that creates the appearance of empathy without real understanding.
“In this work, we present a practitioner-informed framework of 15 ethical risks to show how LLM counselors violate ethical standards in mental health practice by mapping model behavior to specific ethical violations,” the researchers wrote in their study.
“We call for future work to develop ethical, academic and legal standards for LLM counselors — standards that reflect the quality and rigor of care required in person-centered psychotherapy.”
The findings were presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society. To test the systems, trained peer counselors conducted simulated sessions with AI models that were prompted to act as therapists. These included versions of OpenAI’s GPT series, Anthropic’s Claude, and Meta’s Llama. Three licensed clinical psychologists then reviewed the sessions and identified 15 ethical risks, including failure to adapt advice to a person’s circumstances, biased responses, and mishandling of sensitive issues such as suicidal thoughts.
Lead author Zainab Iftikhar said the lack of accountability is a key concern when AI is used in therapy. “For human therapists, there are regulatory boards and mechanisms for providers to be held accountable for mistreatment and malpractice,” she said. “But when LLM counselors commit these violations, there are no established regulatory frameworks.”
The researchers emphasize that AI tools could help improve access to mental health support, especially where cost or availability limits traditional care. However, they argue that stronger safeguards, clear ethical standards, and better oversight are needed before AI chatbots can be trusted in high-stakes mental health situations.
The post Are AI Therapy Chatbots Safe? New Research Suggests Caution appeared first on MQ Mental Health Research.