AI Displays Racial Bias Evaluating Mental Health Cases
By Dennis Thompson, HealthDay Reporter
WEDNESDAY, July 9, 2025 — AI programs can exhibit racial bias when evaluating patients for mental health problems, a new study says.
Psychiatric recommendations from four large language models (LLMs) changed when a patient’s record noted they were African American, researchers recently reported in the journal npj Digital Medicine.
“Most of the LLMs exhibited some form of bias when dealing with African American patients, at times making dramatically different recommendations for the same psychiatric illness and otherwise identical patient,” said senior researcher Elias Aboujaoude, director of the Program in Internet, Health and Society at Cedars-Sinai in Los Angeles.
“This bias was most evident in cases of schizophrenia and anxiety,” Aboujaoude added in a news release.
LLMs are trained on enormous amounts of data, which enables them to understand and generate human language, researchers said in background notes.
These AI programs are being tested for their potential to quickly evaluate patients and recommend diagnoses and treatments, researchers said.
For this study, researchers ran 10 hypothetical cases through four popular LLMs: ChatGPT-4o, Google’s Gemini 1.5 Pro, Claude 3.5 Sonnet, and NewMes-v15, a freely available version of a Meta LLM.
For each case, the AI programs received three different versions of the patient record: one that omitted any reference to race, one that explicitly noted the patient was African American, and one that implied the patient’s race through the patient’s name.
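To make that design concrete, here is a minimal, purely hypothetical sketch in Python of how such a three-variant comparison could be set up. The case text, the patient name, and the query_model helper are illustrative assumptions and are not drawn from the researchers’ actual prompts or code.

```python
# Hypothetical sketch of the three-variant record comparison described above.
# Nothing here comes from the study itself; it only illustrates the idea of
# presenting the same case with race omitted, stated, or implied by a name.

CASE_TEXT = (
    "Patient reports two weeks of low mood, poor sleep, and loss of "
    "interest in usual activities."
)

def build_record_variants(case_text: str, name: str = "Mr. Washington") -> dict[str, str]:
    """Return three versions of one record: race omitted, race stated, race implied by name."""
    return {
        "race_omitted": f"Patient record: {case_text}",
        "race_explicit": f"Patient record (patient is African American): {case_text}",
        "race_implied_by_name": f"Patient record for {name}: {case_text}",
    }

def query_model(model_name: str, record: str) -> str:
    """Placeholder for a real LLM call; swap in the API of whichever model is being tested."""
    prompt = record + "\n\nPlease recommend a psychiatric diagnosis and treatment plan."
    raise NotImplementedError(f"Connect {model_name} here and send the prompt: {prompt[:40]}...")

if __name__ == "__main__":
    for label, record in build_record_variants(CASE_TEXT).items():
        print(f"{label}: {record}")
        # The recommendations returned for each variant would then be compared
        # across models to look for race-linked differences.
```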
The AI programs often proposed different treatments when the records stated or implied that a patient was African American, the results show:
- Two programs omitted medication recommendations for ADHD when race was explicitly stated.
- Another AI suggested guardianship for Black patients with depression.
- One LLM showed increased focus on reducing alcohol use when evaluating African Americans with anxiety.
Aboujaoude theorizes that the AIs displayed racial bias because they picked it up from the content used to train them, essentially perpetuating inequalities that already exist in mental health care.
“The findings of this important study serve as a call to action for stakeholders across the healthcare ecosystem to ensure that LLM technologies enhance health equity rather than reproduce or worsen existing inequities,” David Underhill, chair of biomedical sciences at Cedars-Sinai, said in a news release.
“Until that goal is reached, such systems should be deployed with caution and consideration for how even subtle racial characteristics may affect their judgment,” added Underhill, who was not involved in the research.
Sources
- Cedars-Sinai, news release, June 30, 2025
Disclaimer: Statistical data in medical articles provide general trends and do not pertain to individuals. Individual factors can vary greatly. Always seek personalized medical advice for individual healthcare decisions.

© 2025 HealthDay. All rights reserved.
Posted July 2025