Search Results (1 to 10 of 333 Results)
- 112 Journal of Medical Internet Research
- 101 JMIR Medical Education
- 32 JMIR Formative Research
- 20 JMIR Medical Informatics
- 15 JMIR Dermatology
- 12 JMIR AI
- 10 JMIR Mental Health
- 6 JMIR Human Factors
- 3 JMIR Cancer
- 3 JMIR Nursing
- 3 JMIR mHealth and uHealth
- 2 Asian/Pacific Island Nursing Journal
- 2 Interactive Journal of Medical Research
- 2 JMIR Bioinformatics and Biotechnology
- 2 JMIR Cardio
- 2 JMIR Infodemiology
- 2 JMIR Research Protocols
- 1 JMIR Aging
- 1 JMIR Biomedical Engineering
- 1 JMIR Diabetes
- 1 JMIR Rehabilitation and Assistive Technologies

ChatGPT-4 was selected as the AI model for evaluation due to its widespread adoption in recent health-related studies and its extensive documentation in the current literature. Using ChatGPT-4 allowed for comparability with prior research, ensuring consistency and alignment with similar investigations examining the performance of large language models (LLMs) in medical information delivery.
JMIR Med Inform 2025;13:e68980
Download Citation: EndNote BibTeX RIS

In the study by Ma et al [39], the researchers developed a method called ImpressionGPT to summarize the "impression" section of radiology reports using ChatGPT. They used a dynamic prompt generation and iterative optimization approach to improve ChatGPT's performance on this task.
JMIR Med Inform 2025;13:e66476
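The dynamic-prompt, iterative-optimization idea described by Ma et al can be sketched as a simple generate–score–refine loop. Everything below is an illustrative stub (a mock model call and a toy overlap score standing in for an LLM and a ROUGE-style metric), not the authors' actual implementation:

```python
# Sketch of iterative prompt optimization: generate a summary, score it
# against a reference, refine the prompt, and stop when the score no
# longer improves. All functions here are illustrative stand-ins.

def call_llm(prompt: str) -> str:
    """Stand-in for a ChatGPT API call; returns a mock summary."""
    return f"Impression based on: {prompt[:40]}"

def score_summary(summary: str, reference: str) -> float:
    """Toy word-overlap score standing in for ROUGE or similar."""
    s, r = set(summary.lower().split()), set(reference.lower().split())
    return len(s & r) / max(len(r), 1)

def refine_prompt(prompt: str, score: float) -> str:
    """Fold feedback into the next prompt (purely illustrative)."""
    return prompt + f"\n# previous score {score:.2f}; be more concise."

def optimize(findings: str, reference: str, max_iters: int = 5):
    prompt = f"Summarize these radiology findings as an impression:\n{findings}"
    best_summary, best_score = "", -1.0
    for _ in range(max_iters):
        summary = call_llm(prompt)
        score = score_summary(summary, reference)
        if score <= best_score:  # no improvement: stop early
            break
        best_summary, best_score = summary, score
        prompt = refine_prompt(prompt, score)
    return best_summary, best_score
```

The real method replaces the stubs with actual model calls and summary-quality metrics; the control flow (score-driven prompt refinement) is the part this sketch is meant to convey.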

ChatGPT (version 4.0) was used in this study, with all examinations conducted between September and October 2023. For ChatGPT, a prompt designed to emulate the role of a medical professional was provided (Table S2 in Multimedia Appendix 2). ChatGPT was instructed to perform history taking and, when a physical examination was required, to obtain relevant information through targeted questions.
JMIR Med Inform 2025;13:e68409

This study aims to evaluate a GPT assistant's ability to provide readable patient information on pediatric neurocutaneous syndromes in comparison to ChatGPT-4.
A GPT assistant was developed using Python and OpenAI's application programming interface (API; Figure 1). It was not programmed to answer questions at a specific reading level.
JMIR Dermatol 2025;8:e59054
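For readers unfamiliar with the setup, a GPT assistant built on OpenAI's API typically amounts to a chat-completion request in which a system prompt fixes the assistant's role. The sketch below only assembles the request payload; the model name, system prompt, and question are illustrative assumptions, not the study's actual configuration:

```python
# Minimal sketch of configuring a GPT-style assistant via a chat API:
# a system prompt defines the assistant's role, and each user question
# is sent alongside it. Payload construction only; no network call is
# made here, and all values are illustrative.

def build_chat_payload(system_prompt: str, user_question: str,
                       model: str = "gpt-4") -> dict:
    """Assemble the JSON body for a chat-completions request."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_question},
        ],
    }

payload = build_chat_payload(
    "You explain pediatric neurocutaneous syndromes in plain language.",
    "What is neurofibromatosis type 1?",
)
```

In practice this payload is passed to the API client's chat-completion call; the design point is that the assistant's behavior lives entirely in the system prompt, which is why the study's assistant could differ from stock ChatGPT-4 without any model changes.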

Several studies have shown that ChatGPT provides appropriate, accurate, and reliable knowledge across a wide range of cardiac and noncardiac medical conditions, including heart failure [11-16]. In addition to accuracy, ChatGPT has been found to deliver more empathetic responses to real-world patient questions than physicians in online forums [17]. As prior data regarding accuracy have been promising, an emerging focus has been on investigating the readability of the model's output.
JMIR Cardio 2025;9:e68817

As of April 2024, a pilot program in Louisiana incorporated ChatGPT-4.0 into electronic health record (EHR) messaging to generate preliminary responses that clinicians subsequently reviewed for validity [3]. Despite ChatGPT-4.0's advances, the study demonstrated that human oversight in AI-generated communication remains essential [3].
Such initiatives demonstrate AI’s potential to reduce administrative workload, but they also underscore its role in improving patient education.
JMIR Dermatol 2025;8:e72706

In this research project, we aimed to co-design an awareness-raising fact sheet for an oral cancer screening program with people experiencing homelessness as experts by experience and with ChatGPT. ChatGPT was used to generate alternative versions of the text, which also allowed us to test its usability for designing information materials suited to the needs of people experiencing homelessness.
JMIR Form Res 2025;9:e68316

In an exploratory case study, we asked ChatGPT (GPT-4o and o1) to synthesize two datasets: one qualitative and one quantitative, inspired by existing real-world datasets the authors had previously analyzed. The generation of synthetic datasets and subsequent data analysis took place in October and November of 2024.
JMIR Form Res 2025;9:e73248

- Reference 13: ChatGPT outperforms humans in emotional awareness evaluations
- Reference 15: Evaluation of ChatGPT for NLP-based mental health applications (https://arxiv.org/abs/2303.15727)
- Reference 16: Bias in emotion recognition with ChatGPT (https://arxiv.org/abs/2310.11753)
- Reference 19: Human vs. machine: a comparative analysis of qualitative coding by humans and ChatGPT-4
J Med Internet Res 2025;27:e53332

Custom GPTs are derivations of the baseline ChatGPT model (at the time of evaluation: GPT-4o) developed by OpenAI that have been modified by members of the public with customized instructions and behavior for specific applications (eg, a psychotherapy chatbot). In May 2024, we indexed both sites using the search feature to emulate what an end user may experience, with the following search terms: "therapy," "anxiety," "depression," "mental health," "therapist," and "psychologist."
JMIR Form Res 2025;9:e65605