Why Your AI Therapist Might Be Doing More Harm Than Good

Summary: A Brown University study shows that AI chatbots marketed for mental health support often violate core ethical principles, even when instructed to use established therapy techniques.

AI mental health bots often violate ethical norms, prompting calls for stronger oversight.

As increasing numbers of people seek mental health support from ChatGPT and other large language models (LLMs), new research has found that these systems, despite being instructed to follow evidence-based therapeutic methods, often fail to meet ethical standards set by organizations such as the American Psychological Association.

The study, led by computer scientists at Brown University in collaboration with mental health professionals, revealed that LLM-based chatbots can commit multiple ethical breaches. These include mishandling crisis situations, offering misleading feedback that reinforces harmful self-perceptions, and generating an illusion of empathy that lacks genuine understanding.

“In this work, we present a practitioner-informed framework of 15 ethical risks to demonstrate how LLM counselors violate ethical standards in mental health practice by mapping the model’s behavior to specific ethical violations,” the researchers wrote in their study. “We call on future work to create ethical, educational, and legal standards for LLM counselors — standards that are reflective of the quality and rigor of care required for human-facilitated psychotherapy.”

The research was recently presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society. Members of the research team are affiliated with Brown’s Center for Technological Responsibility, Reimagination and Redesign.

Ethical Violations in AI Chats
Licensed psychologists reviewed simulated chats based on real chatbot responses, revealing numerous ethical violations, including over-validation of users’ beliefs. Credit: Zainab Iftikhar

Exploring how prompts shape chatbot behavior

Zainab Iftikhar, a Ph.D. candidate in computer science at Brown and lead author of the study, investigated how prompt design influences chatbot responses in mental health contexts. Her goal was to understand whether certain prompt strategies could help guide LLMs toward behavior that aligns more closely with established ethical principles in clinical practice.

“Prompts are instructions that are given to the model to guide its behavior for achieving a specific task,” Iftikhar said. “You don’t change the underlying model or provide new data, but the prompt helps guide the model’s output based on its pre-existing knowledge and learned patterns.

“For example, a user might prompt the model with: ‘Act as a cognitive behavioral therapist to help me reframe my thoughts,’ or ‘Use principles of dialectical behavior therapy to assist me in understanding and managing my emotions.’ While these models do not actually perform these therapeutic techniques like a human would, they rather use their learned patterns to generate responses that align with the concepts of CBT or DBT based on the input prompt provided.”
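To make that mechanism concrete, here is a minimal sketch of what prompt-based steering can look like in code, using the OpenAI Python SDK as one example of a general-purpose LLM API. The system prompt, model name, and user message below are illustrative placeholders, not the prompts or configuration used in the Brown study; the point is simply that the underlying model is unchanged and only the instruction text shapes its output.

```python
# Minimal illustration of prompt-based steering (not the study's actual setup).
# Assumes the OpenAI Python SDK (`pip install openai`) and an API key in the
# OPENAI_API_KEY environment variable; the model name and prompt wording are
# placeholders chosen for illustration.
from openai import OpenAI

client = OpenAI()

# A CBT-style system prompt: the model's weights are untouched; only this
# instruction text guides the style of its responses.
CBT_STYLE_PROMPT = (
    "Act as a cognitive behavioral therapist. Help the user identify and "
    "reframe unhelpful thoughts. Do not claim to be a licensed clinician."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model could be used
    messages=[
        {"role": "system", "content": CBT_STYLE_PROMPT},
        {"role": "user", "content": "I bombed a job interview and feel like a failure."},
    ],
)

print(response.choices[0].message.content)
```

As Iftikhar notes, a prompt like this changes the register of the model’s replies, but it does not give the system clinical training, judgment, or accountability; it only biases the model toward responses that resemble CBT or DBT language.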

Chatbot Response Illustrating Emotional Reinforcement Error
Among other violations, chatbots were found to occasionally amplify feelings of rejection. Credit: Zainab Iftikhar

Many individuals who interact with LLMs such as ChatGPT use prompts like these—and share them widely online. According to Iftikhar, prompt-sharing has become common on social media platforms including TikTok and Instagram, while lengthy Reddit discussions are devoted to exploring effective prompt techniques. However, the issue extends beyond individual users. Numerous consumer mental health chatbots are, in fact, modified versions of general-purpose LLMs that rely on such prompts. Understanding how prompts tailored to mental health contexts influence model responses is therefore essential.

Observing chatbot ethics in practice

For the study, Iftikhar and her colleagues worked with peer counselors from an online mental health support platform. The researchers first observed seven of these counselors, all trained in cognitive behavioral therapy techniques, as they conducted self-counseling chats with CBT-prompted LLMs, including various versions of OpenAI’s GPT series, Anthropic’s Claude, and Meta’s Llama. A subset of simulated chats based on the original human counseling sessions was then evaluated by three licensed clinical psychologists, who helped identify potential ethics violations in the chat logs.

The study revealed 15 ethical risks falling into five general categories:

  • Lack of contextual understanding: Overlooking individuals’ personal experiences and offering generalized, one-size-fits-all recommendations.
  • Weak therapeutic collaboration: Controlling the conversation and sometimes validating users’ inaccurate or harmful beliefs.
  • False expressions of empathy: Using statements such as “I see you” or “I understand” to simulate emotional understanding and create an artificial sense of connection.
  • Bias and unfair treatment: Displaying prejudices related to gender, culture, or religion.
  • Insufficient safety and crisis response: Refusing support for sensitive issues, failing to direct users to appropriate help, or reacting indifferently during crises, including situations involving suicidal thoughts.

Iftikhar acknowledges that while human therapists are also susceptible to these ethical risks, the key difference is accountability.

“For human therapists, there are governing boards and mechanisms for providers to be held professionally liable for mistreatment and malpractice,” Iftikhar said. “But when LLM counselors make these violations, there are no established regulatory frameworks.”

Responsible innovation and the future of AI counseling

The findings do not necessarily mean that AI should not have a role in mental health treatment, Iftikhar says. She and her colleagues believe that AI has the potential to help reduce barriers to care arising from the cost of treatment or the availability of trained professionals. However, she says, the results underscore the need for thoughtful implementation of AI technologies as well as appropriate regulation and oversight.

For now, Iftikhar hopes the findings will make users more aware of the risks posed by current AI systems.

“If you’re talking to a chatbot about mental health, these are some things that people should be looking out for,” she said.

Calls for better evaluation and safeguards

Ellie Pavlick, a computer science professor at Brown who was not part of the research team, said the research highlights the need for careful scientific study of AI systems deployed in mental health settings. Pavlick leads ARIA, a National Science Foundation AI research institute at Brown aimed at developing trustworthy AI assistants.

“The reality of AI today is that it’s far easier to build and deploy systems than to evaluate and understand them,” Pavlick said. “This paper required a team of clinical experts and a study that lasted for more than a year in order to demonstrate these risks. Most work in AI today is evaluated using automatic metrics which, by design, are static and lack a human in the loop.”

She says the work could provide a template for future research on making AI safe for mental health support.

“There is a real opportunity for AI to play a role in combating the mental health crisis that our society is facing, but it’s of the utmost importance that we take the time to really critique and evaluate our systems every step of the way to avoid doing more harm than good,” Pavlick said. “This work offers a good example of what that can look like.”

Reference: “How LLM Counselors Violate Ethical Standards in Mental Health Practice: A Practitioner-Informed Framework” by Iftikhar, Z., Xiao, A., Ransom, S., Huang, J., & Suresh, H., 10 October 2025, Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.
DOI: 10.1609/aies.v8i2.36632

Meeting: AAAI/ACM Conference on Artificial Intelligence, Ethics and Society.
