Search Results (1 to 10 of 53 Results)

Artificial Intelligence (AI) and Emergency Medicine: Balancing Opportunities and Challenges

This paper presents the current applications of AI in emergency medicine, emphasizing both real-world implementations and critical challenges, such as hallucination, bias, and interpretability. AI systems, particularly deep learning networks, can be used to identify patterns in large, complex datasets, significantly contributing to medical science by analyzing variables to reliably predict outcomes [12,13].

Félix Amiot, Benoit Potier

JMIR Med Inform 2025;13:e70903

The Lifecycle of Electronic Health Record Data in HIV-Related Big Data Studies: Qualitative Study of Bias Instances and Potential Opportunities for Minimization

The findings of the present study describe instances where bias is introduced across the EHR data lifecycle, the ways in which stakeholders work to mitigate those biases, and their recommendations for structural interventions (Table 1). Table 1 depicts the perspectives of three key stakeholders associated with EHR data collection, curation, or management and usage.

Arielle N'Diaye, Shan Qiao, Camryn Garrett, George Khushf, Jiajia Zhang, Xiaoming Li, Bankole Olatosi

J Med Internet Res 2025;27:e71388

Generative AI in Medicine: Pioneering Progress or Perpetuating Historical Inaccuracies? Cross-Sectional Study Evaluating Implicit Bias

Termed “algorithmic bias,” this phenomenon can cause minority groups to experience unfairness or undue harm. Algorithmic bias arises when an algorithm makes decisions based on a set of training data and a strict set of rules; the algorithm “learns” to make decisions by finding patterns in the training data. However, the training dataset may inherently contain components of historical and human bias, which the algorithm can then learn and replicate [3].
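
A minimal sketch of the mechanism this excerpt describes: a classifier trained on historically skewed labels reproduces the skew at prediction time. The data, variable names, and scikit-learn setup are illustrative, not from the paper.

```python
# Sketch: a model trained on historically biased labels replicates the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)   # 0 = majority, 1 = minority (synthetic)
skill = rng.normal(0, 1, n)     # true merit, identically distributed in both groups

# Historical labels: same skill, but the minority group was approved less often.
approve_rate = 1 / (1 + np.exp(-(skill - 0.8 * group)))
label = rng.random(n) < approve_rate

X = np.column_stack([skill, group])
pred = LogisticRegression().fit(X, label).predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")
# The model "learns" the historical penalty on group 1 even though
# skill is distributed identically in both groups.
```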

Philip Sutera, Rohini Bhatia, Timothy Lin, Leslie Chang, Andrea Brown, Reshma Jagsi

JMIR AI 2025;4:e56891

Exploring Biases of Large Language Models in the Field of Mental Health: Comparative Questionnaire Study of the Effect of Gender and Sexual Orientation in Anorexia Nervosa and Bulimia Nervosa Case Vignettes

To avoid or at least reduce potential bias and move toward fair AI, this bias first needs to be conceptualized, measured, and understood [22]. The aim of this paper was to explore a potential bias in the evaluation of eating disorders (EDs), which have been subjected to stigma [30] and gender-biased assessment [31]. Anorexia nervosa (AN) and bulimia nervosa (BN) are severe EDs with many medical complications, high mortality rates [32], slow treatment progress, and frequent relapses [33].

Rebekka Schnepper, Noa Roemmel, Rainer Schaefert, Lena Lambrecht-Walzinger, Gunther Meinlschmidt

JMIR Ment Health 2025;12:e57986

Reporting of Fairness Metrics in Clinical Risk Prediction Models Used for Precision Health: Scoping Review

Algorithmic fairness is closely related to but theoretically distinct from algorithmic bias, another important consideration for assessing model performance. For further discussion of the subtle differences between these concepts, we refer the reader to the nuanced comparisons in [12,13]. We focus on algorithmic fairness in the current paper.
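
For concreteness, a small sketch of two fairness metrics of the kind such reviews track: demographic parity difference and the equalized-odds gap. The NumPy implementation and synthetic inputs are assumptions, not taken from the review.

```python
# Two commonly reported group-fairness metrics for a binary risk model.
import numpy as np

def demographic_parity_diff(pred, group):
    # Gap between groups in the rate of positive predictions.
    rates = [pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_gap(pred, y, group):
    # Gaps between groups in true-positive and false-positive rates.
    tpr, fpr = [], []
    for g in np.unique(group):
        m = group == g
        tpr.append(pred[m & (y == 1)].mean())
        fpr.append(pred[m & (y == 0)].mean())
    return max(tpr) - min(tpr), max(fpr) - min(fpr)

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
pred = rng.integers(0, 2, 1000)   # stand-in for model predictions

print(demographic_parity_diff(pred, group))
print(equalized_odds_gap(pred, y, group))
```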

Lillian Rountree, Yi-Ting Lin, Chuyu Liu, Maxwell Salvatore, Andrew Admon, Brahmajee Nallamothu, Karandeep Singh, Anirban Basu, Fan Bu, Bhramar Mukherjee

Online J Public Health Inform 2025;17:e66598

Assessing Racial and Ethnic Bias in Text Generation by Large Language Models for Health Care–Related Tasks: Cross-Sectional Study

After researchers detected bias with targeted questions, developers of LLMs restricted users from asking questions that expose ingrained bias in an obvious fashion, such as “Create a table to display 10 words associated with Caucasians and 10 with Blacks in terms of occupations and intelligence.” Although developers of LLMs have implemented these safeguards, the possibility of subtle biases persists.
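
In the spirit of the targeted-question audits described here, a hedged sketch of one common probing technique: comparing a masked language model's completions across otherwise identical prompts. It assumes the Hugging Face transformers library is installed; the template and demographic terms are illustrative, not drawn from the study.

```python
# Probe a masked LM for occupation associations that vary by demographic term.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

template = "The {} man worked as a [MASK]."
for term in ("white", "black"):
    top = fill(template.format(term), top_k=5)
    print(term, [t["token_str"] for t in top])
# Systematic differences in completions across otherwise identical prompts
# are one signal of the subtle bias the authors describe.
```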

John J Hanna, Abdi D Wakene, Andrew O Johnson, Christoph U Lehmann, Richard J Medford

J Med Internet Res 2025;27:e57257

Artificial Intelligence in Lymphoma Histopathology: Systematic Review

The risk of bias in the models of interest was assessed using the Prediction Model Risk of Bias Assessment Tool (PROBAST) [10]. The tool evaluates the likelihood that the reported results are distorted due to limitations in study design, conduct, and analysis. PROBAST includes 20 guiding questions categorized into 4 domains: Participants, Predictors, Outcomes, and Analysis.
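
As a sketch only, here is how PROBAST's domain-level judgments are commonly rolled up into an overall risk-of-bias rating (low if all domains are low, high if any domain is high, otherwise unclear). The four domains come from the tool itself; this rollup code is an assumption, not the authors' implementation.

```python
# Aggregate PROBAST domain judgments into an overall risk-of-bias rating.
PROBAST_DOMAINS = ("participants", "predictors", "outcome", "analysis")

def overall_risk(judgments):
    ratings = [judgments[d] for d in PROBAST_DOMAINS]
    if all(r == "low" for r in ratings):
        return "low"
    if any(r == "high" for r in ratings):
        return "high"
    return "unclear"

print(overall_risk({"participants": "low", "predictors": "low",
                    "outcome": "unclear", "analysis": "low"}))  # -> unclear
```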

Yao Fu, Zongyao Huang, Xudong Deng, Linna Xu, Yang Liu, Mingxing Zhang, Jinyi Liu, Bin Huang

J Med Internet Res 2025;27:e62851

Bias Mitigation in Primary Health Care Artificial Intelligence Models: Scoping Review

Included were all methods or strategies deployed to assess and mitigate bias toward diverse groups or protected attributes in AI models; all mitigation methods or strategies deployed to promote and increase equity, diversity, and inclusion in community-based primary health care (CBPHC) algorithms; and methods or strategies deployed to assess and mitigate bias in the AI model itself (eg, biased prediction of treatment effects), rather than bias related to individuals’ characteristics or protected attributes.
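
One concrete example of a pre-processing mitigation strategy of the kind such reviews catalog is Kamiran-Calders reweighing, sketched below; the synthetic data and function are illustrative, not drawn from the review.

```python
# Reweighing: assign sample weights so group membership and outcome become
# statistically independent in the training data (expected / observed frequency).
import numpy as np

def reweigh(group, y):
    w = np.empty(len(y))
    for g in np.unique(group):
        p_g = (group == g).mean()
        for c in np.unique(y):
            p_c = (y == c).mean()
            m = (group == g) & (y == c)
            w[m] = p_g * p_c / m.mean()
    return w

rng = np.random.default_rng(2)
group = rng.integers(0, 2, 1000)
y = (rng.random(1000) < 0.3 + 0.2 * group).astype(int)  # outcome skewed by group

w = reweigh(group, y)
# These weights can be passed to most learners via a sample_weight argument.
print(np.round(np.unique(w), 3))
```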

Maxime Sasseville, Steven Ouellet, Caroline Rhéaume, Malek Sahlia, Vincent Couture, Philippe Després, Jean-Sébastien Paquette, David Darmon, Frédéric Bergeron, Marie-Pierre Gagnon

J Med Internet Res 2025;27:e60269

Commentary on “Protecting User Privacy and Rights in Academic Data-Sharing Partnerships: Principles From a Pilot Program at Crisis Text Line”

I provide facts and invite reconsideration of the paper’s treatment of consent and data safeguards (sharing, use, and commercialization) through the lens of potential bias and exploitation. “Bias is any trend or deviation from the truth in data collection, data analysis, interpretation and publication which can cause false conclusions. Bias can occur either intentionally or unintentionally” [2].

Timothy D Reierson

J Med Internet Res 2024;26:e42144

Survival After Radical Cystectomy for Bladder Cancer: Development of a Fair Machine Learning Model

However, there have been growing concerns about the potential for bias in ML models, as studies have demonstrated discrepancies in model performance among different population subgroups, which can lead to disparities in health care [4-6].
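
A minimal sketch of the subgroup audit these concerns motivate: computing a model's discrimination (AUC) separately per population subgroup to surface performance gaps. The scikit-learn call is real; the data, scores, and group labels are synthetic stand-ins.

```python
# Per-subgroup AUC check: a material gap between groups is the kind of
# discrepancy that motivates fairness-aware model development.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
y = rng.integers(0, 2, 2000)
group = rng.integers(0, 2, 2000)
noise = np.where(group == 1, 1.5, 0.5)   # scores deliberately noisier for group 1
score = y + rng.normal(0, noise)

for g in (0, 1):
    m = group == g
    print(f"group {g}: AUC = {roc_auc_score(y[m], score[m]):.3f}")
```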

Samuel Carbunaru, Yassamin Neshatvar, Hyungrok Do, Katie Murray, Rajesh Ranganath, Madhur Nayan

JMIR Med Inform 2024;12:e63289