Research

217 peer-reviewed publications advancing AI in healthcare, clinical decision support, and nursing science.

Leveraging patient and their surrogate caregiver communication with clinicians to predict palliative care decisions: A speech processing study.

Geriatr Nurs

Song J, Beigi H, Davoudi A, Moon R, McDonald MV, Sridharan S, Stanley J, Bowles KH, Shang J, Stone PW, Topaz M

When a loved one is seriously ill, families often struggle with the question: Is it time for comfort care? This study found that by listening carefully to how patients and families talk with nurses, we can better recognize when someone is ready for palliative care—helping more people spend their final days in peace rather than in hospitals.

The association between higher body weight and stigmatizing language documented in hospital birth admission notes.

Int J Obes (Lond)

Harkins SE, Hazi AK, Hulchafo II, Kim Scroggins J, Topaz M, Barcelona V

New mothers deserve respect regardless of their weight. This study found that doctors and nurses sometimes write judgmental comments about heavier patients in their medical records—language that can follow women through their healthcare journey and affect how future providers treat them.

Identifying and Reducing Stigmatizing Language in Home Health Care With a Natural Language Processing-Based System (ENGAGE): Protocol for a Mixed Methods Study.

JMIR Res Protoc

Zhang Z, Gupta P, Potts-Thompson S, Prescott L, Morrison M, Sittig S, McDonald MV, Raymond C, Taylor JY, Topaz M

What if software could catch the moment a nurse writes something hurtful in a patient's chart—and suggest kinder words instead? This project is building that tool, working to eliminate bias in medical records one note at a time.

Understanding Gender-Specific Daily Care Preferences for Person-Centered Care: A Topic Modeling Study.

Stud Health Technol Inform

Woo K, Min SH, Kim A, Choi S, Alexander GL, O'Malley TA, Moen MD, Topaz M

Your grandfather and grandmother may need help with the same tasks—but they want that help delivered differently. By listening to what seniors actually say about their care, researchers discovered that men and women have surprisingly different preferences. One-size-fits-all care doesn't work.

Voice for All: Evaluating the Accuracy and Equity of Automatic Speech Recognition Systems in Transcribing Patient Communications in Home Healthcare.

Stud Health Technol Inform

Xu Z, Vergez S, Esmaeili E, Zolnour A, Briggs KA, Scroggins JK, Hosseini Ebrahimabad SF, Noble JM, Topaz M, Bakken S, Bowles KH, Spens I, Onorato N, Sridharan S, McDonald MV, Zolnoori M

When Siri or Alexa mishears you, it's annoying. When medical AI mishears a patient, it could be dangerous. This study tested voice recognition across different patients and accents—finding troubling gaps that could put some people at risk.

Toward equitable documentation: Evaluating ChatGPT's role in identifying and rephrasing stigmatizing language in electronic health records.

Nurs Outlook

Zhang Z, Scroggins JK, Harkins S, Hulchafo II, Moen H, Tadiello M, Barcelona V, Topaz M

Can ChatGPT spot bias that humans miss? Researchers put it to the test, asking the AI to find stigmatizing words in medical charts and suggest respectful replacements. The results show promise for making healthcare fairer—one chart at a time.

Stigmatizing and Positive Language in Birth Clinical Notes Associated With Race and Ethnicity.

JAMA Netw Open

Hulchafo II, Scroggins JK, Harkins SE, Moen H, Tadiello M, Cato K, Davoudi A, Goffman D, Aubey JJ, Green C, Topaz M, Barcelona V

A study published in JAMA Network Open found stark differences in how doctors and nurses write about Black and Hispanic mothers versus White mothers during childbirth. This biased language, often invisible to those writing it, may help explain why maternal mortality rates differ so dramatically by race.

Nonlinear Relationship Between Vital Signs and Hospitalization/Emergency Department Visits Among Older Home Healthcare Patients and Critical Vital Sign Cutoff for Adverse Outcomes: Application of Generalized Additive Model.

Clin Nurs Res

Min SH, Song J, Evans L, Bowles KH, McDonald MV, Chae S, Sridharan S, Barrón Y, Topaz M

A systolic blood pressure of 140 mm Hg might be fine for one patient and a red flag for another. This study identified the vital sign cutoffs that signal danger for older adults receiving care at home, giving visiting nurses a better roadmap for when to act fast and when to watch and wait.

Symptom Burden: A Concept Analysis.

Nurs Sci Q

Scharp D, Harkins SE, Topaz M

Researchers clarified what 'symptom burden' means in healthcare: the combined weight of all the symptoms a patient experiences. A clearer definition helps nurses better assess and address patient suffering.

Building a Time-Series Model to Predict Hospitalization Risks in Home Health Care: Insights Into Development, Accuracy, and Fairness.

J Am Med Dir Assoc

Topaz M, Davoudi A, Evans L, Sridharan S, Song J, Chae S, Barrón Y, Hobensack M, Scharp D, Cato K, Rossetti SC, Kapela P, Xu Z, Gupta P, Zhang Z, McDonald MV, Bowles KH

What if your grandmother's home nurse could know, days in advance, that she was heading for the hospital? This AI does exactly that, tracking subtle changes over time to spot trouble brewing. And unlike many AI tools, it was tested for fairness and performed consistently for patients across races and backgrounds.

Beyond electronic health record data: leveraging natural language processing and machine learning to uncover cognitive insights from patient-nurse verbal communications.

J Am Med Inform Assoc

Zolnoori M, Zolnour A, Vergez S, Sridharan S, Spens I, Topaz M, Noble JM, Bakken S, Hirschberg J, Bowles K, Onorato N, McDonald MV

The earliest signs of dementia often hide in plain sight—in how someone answers a question, the words they choose, the stories they tell. AI can now detect these subtle clues in ordinary nurse-patient conversations, potentially catching memory decline years before traditional tests would.

Social Determinants of Health in Digital Health Policies: An International Environmental Scan.

Yearb Med Inform

Song J, Hobensack M, Sequeira L, Shin HD, Davies S, Peltonen LM, Alhuwail D, Alnomasy N, Block LJ, Chae S, Cho H, von Gerich H, Lee J, Mitchell J, Ozbay I, Lozada-Perezmitre E, Ronquillo CE, You SB, Topaz M

An international team reviewed how countries' digital health policies address social factors that shape health, such as poverty and housing, and found many gaps that need attention.

Decoding disparities: evaluating automatic speech recognition system performance in transcribing Black and White patient verbal communication with nurses in home healthcare.

JAMIA Open

Zolnoori M, Vergez S, Xu Z, Esmaeili E, Zolnour A, Briggs KA, Scroggins JK, Hosseini Ebrahimabad SF, Noble JM, Topaz M, Bakken S, Bowles KH, Spens I, Onorato N, Sridharan S, McDonald MV

The AI transcribing your doctor's visit might work perfectly for some patients—and fail dangerously for others. This study uncovered significant accuracy gaps between how well voice recognition works for Black versus White patients, exposing a hidden bias baked into healthcare technology.