Research

Peer-reviewed articles advancing AI in healthcare, clinical decision support, and nursing science.

224 Publications · 87+ Journals · 15 Years Active · Top 2% Stanford Scientists

Funded Research

Selected grants as PI, MPI, or Co-Investigator

$25M+ Total Funding · 5 NIH R01 Grants as PI/MPI · 30+ Total Awards
Years | Type | Project | Role | Title | Amount | Funder
2025–2030 | R01 | ID-STIGMA | MPI | NLP for Stigmatizing Language in Birthing Care | $2.85M | NIH / NICHD
2025–2027 | ANF | NurseAssist-AI | Contact PI | AI Documentation & Decision Support in Home Healthcare | $467K | American Nurses Foundation
2023–2027 | R01 | ENGAGE | Contact PI | Reducing Stigmatizing Language in Home Healthcare | $2.7M | NIH / NIMHD
2023–2027 | R01 | Speech Processing for Risk | Contact PI | Automated Speech to Predict Hospitalizations & ED Visits | $2.6M | NIH / NIA
2020–2024 | R01 | Homecare-CONCERN | PI | Risk Models for Preventable Hospitalizations | $1.5M | AHRQ
2019–2024 | R01 | PREVENT | PI | Patient Prioritization in Hospital-Homecare Transition | $3.4M | NIH / NINR
2024–2025 | R56 | M3AD Study | Co-I | Multimorbidity & Alzheimer's Disease Across 3 Cities | $5.4M | NIH / NIA
2020–2022 | R21 | NLP for Dementia | MPI | Improving Identification of Alzheimer's in Home Healthcare | $474K | NIH / NIA
2019–2023 | Pilot | AI for Child Safety | Contact PI | AI-Assisted Identification of Child Abuse in Hospitals | $200K | Columbia Data Science Institute
2020–2022 | Industry | Speech Recognition Feasibility | MPI | Audio-Recorded Nurse-Patient Communication for Risk | $174K | Amazon / Columbia CAIT

+ 20 additional grants including K-awards, pilot grants, and international collaborations

Publications

224+ peer-reviewed articles

2026

Accelerating real-world prediction and research in Alzheimer's: The M3AD study.

Alzheimers Dement

Desvarieux M, Rundek T, Ahsan H, Narvaez J, Diaz F, Malinsky D, Ruiz LM, Topaz M, Falconer T, Natarajan K, Noble J, Entwisle B, Puram D, Anand T, Chen HY, Jiang X, Gu Y, Cohen A, Terry MB, Pierce B, Andrews H, Rogalsky E, Farzana S, Gulotta G, Beard J, Landron D, Volchenboum SL, Ravaud P, Johnson J, Susser E, Rundle A, Wei Y, Tsinoremas N, Loewenstein D, Fried L, Aiello A, Mayeux R, Hripcsak G

2026

Leveraging patient and their surrogate caregiver communication with clinicians to predict palliative care decisions: A speech processing study.

Geriatr Nurs

Song J, Beigi H, Davoudi A, Moon R, McDonald MV, Sridharan S, Stanley J, Bowles KH, Shang J, Stone PW, Topaz M

Why it matters: When a loved one is seriously ill, families often struggle with the question: Is it time for comfort care? This study found that by listening carefully to how patients and families talk with nurses, we can better recognize when someone is ready for palliative care—helping more people spend their final days in peace rather than in hospitals.

2026

The association between higher body weight and stigmatizing language documented in hospital birth admission notes.

Int J Obes (Lond)

Harkins SE, Hazi AK, Hulchafo II, Kim Scroggins J, Topaz M, Barcelona V

Why it matters: New mothers deserve respect regardless of their weight. This study found that doctors and nurses sometimes write judgmental comments about heavier patients in their medical records—language that can follow women through their healthcare journey and affect how future providers treat them.

2026

Advancing healthcare with large language models: A scoping review of applications and future directions.

Int J Med Inform

Zhang Z, Momeni Nezhad MJ, Bagher Hosseini SM, Zolnour A, Zonour Z, Hosseini SM, Topaz M, Zolnoori M

Why it matters: ChatGPT is already in your doctor's office—helping write prescriptions, summarize visits, and answer patient questions. This comprehensive review maps out exactly how these AI tools are transforming healthcare and where the technology still falls short.

2025

Identifying and Reducing Stigmatizing Language in Home Health Care With a Natural Language Processing-Based System (ENGAGE): Protocol for a Mixed Methods Study.

JMIR Res Protoc

Zhang Z, Gupta P, Potts-Thompson S, Prescott L, Morrison M, Sittig S, McDonald MV, Raymond C, Taylor JY, Topaz M

Why it matters: What if software could catch the moment a nurse writes something hurtful in a patient's chart—and suggest kinder words instead? This project is building that tool, working to eliminate bias in medical records one note at a time.

2025

Patient Disability Status and the Use of Stigmatizing Language in Clinical Notes During Hospital Admission for Birth.

J Obstet Gynecol Neonatal Nurs

Harkins SE, Hulchafo II, Scroggins JK, Walsh C, Didier M, Topaz M, Barcelona V

Why it matters: Mothers with disabilities deserve the same respect as anyone else in the delivery room. But this study found their medical charts often contain more negative, judgmental language—words that can shape how nurses and doctors treat them for years to come.

2025

Understanding Gender-Specific Daily Care Preferences for Person-Centered Care: A Topic Modeling Study.

Stud Health Technol Inform

Woo K, Min SH, Kim A, Choi S, Alexander GL, O'Malley TA, Moen MD, Topaz M

Why it matters: Your grandfather and grandmother may need help with the same tasks—but they want that help delivered differently. By listening to what seniors actually say about their care, researchers discovered that men and women have surprisingly different preferences. One-size-fits-all care doesn't work.

2025

Voice for All: Evaluating the Accuracy and Equity of Automatic Speech Recognition Systems in Transcribing Patient Communications in Home Healthcare.

Stud Health Technol Inform

Xu Z, Vergez S, Esmaeili E, Zolnour A, Briggs KA, Scroggins JK, Hosseini Ebrahimabad SF, Noble JM, Topaz M, Bakken S, Bowles KH, Spens I, Onorato N, Sridharan S, McDonald MV, Zolnoori M

Why it matters: When Siri or Alexa mishears you, it's annoying. When medical AI mishears a patient, it could be dangerous. This study tested voice recognition across different patients and accents—finding troubling gaps that could put some people at risk.

2025

Toward equitable documentation: Evaluating ChatGPT's role in identifying and rephrasing stigmatizing language in electronic health records.

Nurs Outlook

Zhang Z, Scroggins JK, Harkins S, Hulchafo II, Moen H, Tadiello M, Barcelona V, Topaz M

Why it matters: Can ChatGPT spot bias that humans miss? Researchers put it to the test, asking the AI to find stigmatizing words in medical charts and suggest respectful replacements. The results show promise for making healthcare fairer—one chart at a time.

2025

Stigmatizing and Positive Language in Birth Clinical Notes Associated With Race and Ethnicity.

JAMA Netw Open

Hulchafo II, Scroggins JK, Harkins SE, Moen H, Tadiello M, Cato K, Davoudi A, Goffman D, Aubey JJ, Green C, Topaz M, Barcelona V

Why it matters: This study, published in JAMA Network Open, found stark differences in how doctors and nurses write about Black and Hispanic mothers versus White mothers during childbirth. The biased language, often invisible to those writing it, may help explain why maternal mortality rates differ so dramatically by race.

2025

Nonlinear Relationship Between Vital Signs and Hospitalization/Emergency Department Visits Among Older Home Healthcare Patients and Critical Vital Sign Cutoff for Adverse Outcomes: Application of Generalized Additive Model.

Clin Nurs Res

Min SH, Song J, Evans L, Bowles KH, McDonald MV, Chae S, Sridharan S, Barrón Y, Topaz M

Why it matters: A blood pressure of 140 might be fine for one patient and a red flag for another. This study identified critical vital sign cutoffs that signal danger for older adults at home, giving visiting nurses a better roadmap for when to act fast versus when to wait and watch.

2025

Comparing the influence of social risk factors on machine learning model performance across racial and ethnic groups in home healthcare.

Nurs Outlook

Hobensack M, Davoudi A, Song J, Cato K, Bowles KH, Topaz M

Why it matters: AI that predicts hospital risk doesn't work equally well for everyone—and that's a problem. This study revealed how social factors like unstable housing or low income throw off predictions for some patients, and offers solutions to make the technology fairer.

2025

The Overlooked Dark Side of Generative AI in Nursing: An International Think Tank's Perspective.

J Nurs Scholarsh

Topaz M, Peltonen LM, Michalowski M, Pruinelli L, Ronquillo CE, Zhang Z, Babic A

Why it matters: Everyone's talking about AI's promise in healthcare. But what about the risks nobody wants to mention? An international team of nursing AI experts pulls back the curtain on the dangers—from privacy violations to over-reliance on machines—and what we must do to protect patients.

2025

Symptom Burden: A Concept Analysis.

Nurs Sci Q

Scharp D, Harkins SE, Topaz M

Why it matters: Researchers clarified what 'symptom burden' means in healthcare: the combined weight of all symptoms a patient experiences. A clearer definition helps nurses better assess and address patient suffering.

2025

Building a Time-Series Model to Predict Hospitalization Risks in Home Health Care: Insights Into Development, Accuracy, and Fairness.

J Am Med Dir Assoc

Topaz M, Davoudi A, Evans L, Sridharan S, Song J, Chae S, Barrón Y, Hobensack M, Scharp D, Cato K, Rossetti SC, Kapela P, Xu Z, Gupta P, Zhang Z, McDonald MV, Bowles KH

Why it matters: What if your grandmother's home nurse could know—days in advance—that she was heading for the hospital? This AI does exactly that, tracking subtle changes over time to spot trouble brewing. And unlike many AI tools, this one works equally well for patients of every race and background.

2025

Beyond electronic health record data: leveraging natural language processing and machine learning to uncover cognitive insights from patient-nurse verbal communications.

J Am Med Inform Assoc

Zolnoori M, Zolnour A, Vergez S, Sridharan S, Spens I, Topaz M, Noble JM, Bakken S, Hirschberg J, Bowles K, Onorato N, McDonald MV

Why it matters: The earliest signs of dementia often hide in plain sight—in how someone answers a question, the words they choose, the stories they tell. AI can now detect these subtle clues in ordinary nurse-patient conversations, potentially catching memory decline years before traditional tests would.

2024

Social Determinants of Health in Digital Health Policies: an International Environmental Scan.

Yearb Med Inform

Song J, Hobensack M, Sequeira L, Shin HD, Davies S, Peltonen LM, Alhuwail D, Alnomasy N, Block LJ, Chae S, Cho H, von Gerich H, Lee J, Mitchell J, Ozbay I, Lozada-Perezmitre E, Ronquillo CE, You SB, Topaz M

Why it matters: An international team reviewed how different countries' digital health policies address social factors like poverty and housing that affect health, finding many gaps that need attention.

2024

Decoding disparities: evaluating automatic speech recognition system performance in transcribing Black and White patient verbal communication with nurses in home healthcare.

JAMIA Open

Zolnoori M, Vergez S, Xu Z, Esmaeili E, Zolnour A, Anne Briggs K, Scroggins JK, Hosseini Ebrahimabad SF, Noble JM, Topaz M, Bakken S, Bowles KH, Spens I, Onorato N, Sridharan S, McDonald MV

Why it matters: The AI transcribing your doctor's visit might work perfectly for some patients—and fail dangerously for others. This study uncovered significant accuracy gaps between how well voice recognition works for Black versus White patients, exposing a hidden bias baked into healthcare technology.