AI Displays Racial Bias Evaluating Mental Health Cases
By Dennis Thompson HealthDay Reporter
WEDNESDAY, July 9, 2025 — AI programs can exhibit racial bias when evaluating patients for mental health problems, a new study says.
Psychiatric recommendations from four large language models (LLMs) changed when a patient’s record noted they were African American, researchers recently reported in the journal NPJ Digital Medicine.
“Most of the LLMs exhibited some form of bias when dealing with African American patients, at times making dramatically different recommendations for the same psychiatric illness and otherwise identical patient,” said senior researcher Elias Aboujaoude, director of the Program in Internet, Health and Society at Cedars-Sinai in Los Angeles.
“This bias was most evident in cases of schizophrenia and anxiety,” Aboujaoude added in a news release.
LLMs are trained on enormous amounts of data, which enables them to understand and generate human language, researchers said in background notes.
These AI programs are being tested for their potential to quickly evaluate patients and recommend diagnoses and treatments, researchers said.
For this study, researchers ran 10 hypothetical cases through four popular LLMs, including ChatGPT-4o, Google’s Gemini 1.5 Pro, Claude 3.5 Sonnet, and NewMes-v15, a freely available version of a Meta LLM.
For each case, the AI programs received three different versions of patient records: One that omitted reference to race, one that explicitly noted a patient was African American, and one that implied a patient’s race based on their name.
The AI often proposed different treatments when the records said or implied that a patient was African American, results show:
-
Two programs omitted medication recommendations for ADHD when race was explicitly stated.
-
Another AI suggested guardianship for Black depression patients.
-
One LLM showed increased focus on reducing alcohol use when evaluating African Americans with anxiety.
Aboujaoude theorizes the AIs displayed racial bias, because they picked it up from the content used to train them — essentially perpetuating inequalities that already exist in mental health care.
“The findings of this important study serve as a call to action for stakeholders across the healthcare ecosystem to ensure that LLM technologies enhance health equity rather than reproduce or worsen existing inequities,” David Underhill, chair of biomedical sciences at Cedars-Sinai, said in a news release.
“Until that goal is reached, such systems should be deployed with caution and consideration for how even subtle racial characteristics may affect their judgment,” added Underhill, who was not involved in the research.
Sources
- Cedars-Sinai, news release, June 30, 2025
Disclaimer: Statistical data in medical articles provide general trends and do not pertain to individuals. Individual factors can vary greatly. Always seek personalized medical advice for individual healthcare decisions.

© 2025 HealthDay. All rights reserved.
Posted July 2025
Read this next
When Local Homicide Rates Rise, Suicides Rise Soon After
FRIDAY, August 1, 2025 — There may be a connection between a community’s homicide and suicide rates: When murder rates rise, there’s typically a local uptick in...
Surrogate Moms More Apt To Suffer Mental Illness
TUESDAY, July 29, 2025 — Women who carry a baby for someone else — also known as gestational carriers or “surrogate moms” — may be at higher risk for...
Words Used During Prenatal Ultrasound Might Affect Parenting Later
TUESDAY, July 29, 2025 — An ultrasound is typically a parent-to-be’s first glimpse of their child. New research suggests that the words hospital staff use to...
More news resources
- FDA Medwatch Drug Alerts
- Daily MedNews
- News for Health Professionals
- New Drug Approvals
- New Drug Applications
- Drug Shortages
- Clinical Trial Results
- Generic Drug Approvals
Subscribe to our newsletter
Whatever your topic of interest, subscribe to our newsletters to get the best of Drugs.com in your inbox.