Search Results (1 to 10 of 348 Results)
- 113 Journal of Medical Internet Research
- 102 JMIR Medical Education
- 37 JMIR Formative Research
- 21 JMIR Medical Informatics
- 16 JMIR Dermatology
- 14 JMIR AI
- 11 JMIR Mental Health
- 6 JMIR Human Factors
- 4 JMIR Cancer
- 3 JMIR Nursing
- 3 JMIR Research Protocols
- 3 JMIR mHealth and uHealth
- 2 Asian/Pacific Island Nursing Journal
- 2 Interactive Journal of Medical Research
- 2 JMIR Bioinformatics and Biotechnology
- 2 JMIR Cardio
- 2 JMIR Diabetes
- 2 JMIR Infodemiology
- 1 JMIR Aging
- 1 JMIR Biomedical Engineering
- 1 JMIR Rehabilitation and Assistive Technologies

Since the public release of OpenAI’s ChatGPT in late 2022, use cases in medicine have flourished, from extracting information from large volumes of documents [4] to answering patients’ questions [5]. While some models are becoming larger and more capable (OpenAI’s o1, Google’s Gemini 2.0, and Anthropic’s Claude), others focus on data privacy and portability (Mistral Small, Meta Llama, and Microsoft Phi Mini).
J Med Internet Res 2025;27:e64348

Although ChatGPT was originally developed neither for the health care domain [3] nor explicitly for answering medical questions [4], its content generation potential in health care is particularly noteworthy. Studies have found that ChatGPT plays a positive role in helping users gain health knowledge and in answering medical inquiries [5,6].
Ayik et al [7] and Javaid et al [8] have indicated that ChatGPT can assist users in answering common questions in the health care domain.
JMIR Form Res 2025;9:e76458

Large language models (LLMs) such as ChatGPT (OpenAI), driven by recent advances in artificial intelligence (AI), have emerged as a promising solution for generating accessible and context-appropriate textual summaries [2]. These models show promise in efficiently processing complex medical information and producing coherent summaries that minimize technical jargon while preserving essential clinical content [18,19].
JMIR Form Res 2025;9:e76097

This study evaluates the performance of 3 leading LLMs (DeepSeek-R1 [DeepSeek AI, 2024], ChatGPT-4 [OpenAI, 2023], and ChatGPT-4.5 [OpenAI, 2024]) on a set of 2023 pediatric board examination preparation questions (2023 PREP Self-Assessment, American Academy of Pediatrics), a comprehensive resource containing case-based multiple-choice questions designed to simulate actual board examinations [3].
JMIR AI 2025;4:e76056
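
The excerpt above describes scoring LLMs on case-based multiple-choice questions. As a minimal sketch of how such accuracy scoring might be computed, the snippet below compares model-selected answer letters against a key; the questions, field names, and scores are hypothetical and not taken from the study.

```python
# Hypothetical sketch: score an LLM's multiple-choice answers against a key.
# The items and model choices below are illustrative only.
from dataclasses import dataclass

@dataclass
class Item:
    question_id: str
    correct_choice: str   # e.g. "A"-"E" for PREP-style items
    model_choice: str     # letter extracted from the model's response

def accuracy(items: list[Item]) -> float:
    """Fraction of items where the model's choice matches the answer key."""
    if not items:
        return 0.0
    correct = sum(item.model_choice == item.correct_choice for item in items)
    return correct / len(items)

# Example usage with made-up responses
items = [
    Item("q1", "C", "C"),
    Item("q2", "A", "B"),
    Item("q3", "E", "E"),
]
print(f"Accuracy: {accuracy(items):.2%}")  # -> Accuracy: 66.67%
```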

Earlier studies focused on just ChatGPT rather than LLMs as a whole, identifying the most influential authors and countries for research on ChatGPT and tracing the rapid evolution of ChatGPT scholarship [5]. More recently, a 2025 bibliometric analysis similarly identified the most productive institutions, in addition to countries and authors [6].
JMIR AI 2025;4:e68603

Surprisingly, ChatGPT-4.0 not only proved difficult to distinguish from human therapists but was also rated higher on core therapeutic principles [19]. In one blinded experiment, physicians rated ChatGPT as 10 times more empathetic in its written responses to patients’ queries on an online social media platform [17].
Conversely, generative AI (GenAI) could also amplify nocebo effects by reinforcing negative patient expectations.
JMIR Ment Health 2025;12:e78663

After completing the human analysis, the investigators performed thematic analysis with multiple AI platforms (Google Gemini, Microsoft Copilot, and OpenAI’s ChatGPT) to compare the final themes with the investigator-derived themes. Specifically, OpenAI’s ChatGPT, Microsoft Copilot, and Google Gemini were provided the following prompt: “Please read through the following transcripts and perform a thematic analysis. First, generate codes and then categorize them into emergent themes.”
JMIR Form Res 2025;9:e69892
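
As a rough illustration of how the prompt quoted above could be sent to one of these platforms programmatically, the sketch below uses the OpenAI Python SDK. The excerpt does not specify how the prompts were actually delivered, so the model name, transcript text, and API setup here are assumptions rather than the study's workflow.

```python
# Minimal sketch (not the study's actual workflow): send the quoted thematic
# analysis prompt to a chat model via the OpenAI Python SDK.
# Assumes OPENAI_API_KEY is set; the model name and transcript are placeholders.
from openai import OpenAI

client = OpenAI()

transcripts = "Participant 1: ...\nParticipant 2: ..."  # hypothetical transcript text

prompt = (
    "Please read through the following transcripts and perform a thematic analysis. "
    "First, generate codes and then categorize them into emergent themes.\n\n"
    + transcripts
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```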

We searched PubMed, Web of Science, Embase, CINAHL, PsycINFO, and the first 200 results of Google Scholar using keywords such as “generative AI,” “chatbots,” “ChatGPT,” “large language model,” and “reporting guidelines.” We included existing AI-related reporting guidelines that address the use of GenAI tools in medical research. Studies were eligible if they focused on the application of GenAI tools in a medical context and provided reporting recommendations or considerations.
JMIR Res Protoc 2025;14:e64640
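
For readers who want to run a database query along the lines described above, the sketch below combines the listed keywords into a Boolean search and submits it to PubMed's public E-utilities endpoint. The exact Boolean combination and result limit are assumptions for illustration, not the protocol's registered search strategy.

```python
# Illustrative only: run a keyword search against PubMed via NCBI E-utilities.
# The Boolean combination below is an assumption, not the protocol's exact strategy.
import requests

topic_terms = ['"generative AI"', "chatbots", '"ChatGPT"', '"large language model"']
# Combine topic terms with OR and require the reporting-guidelines concept
term = f'({ " OR ".join(topic_terms) }) AND "reporting guidelines"'

resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={"db": "pubmed", "term": term, "retmode": "json", "retmax": 20},
    timeout=30,
)
resp.raise_for_status()
ids = resp.json()["esearchresult"]["idlist"]
print(f"{len(ids)} PubMed IDs returned:", ids)
```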

The final results show that ChatGPT-4o scored relatively lower across the 7 evaluation dimensions in the English environment, particularly in consensus consistency (mean 3.92, SD 0.27) and completeness (mean 3.85, SD 0.41). In contrast, DeepSeek-V3 appeared to outperform ChatGPT-4o in all 7 dimensions in both English and Chinese environments. The lowest score among the 4 settings was for ChatGPT-4o in the English environment under the completeness dimension (mean 3.85, SD 0.41).
JMIR Med Inform 2025;13:e65365
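
The means and SDs reported above are summary statistics over individual ratings on each dimension; a minimal sketch of how such figures are computed (with made-up Likert-style scores, not the study's data) follows.

```python
# Toy example: compute the mean and sample SD of rater scores for one dimension.
# The scores below are invented for illustration; they are not the study's data.
from statistics import mean, stdev

completeness_scores = [4, 4, 3, 4, 4, 5, 3, 4]  # hypothetical 1-5 ratings

print(f"mean {mean(completeness_scores):.2f}, SD {stdev(completeness_scores):.2f}")
# -> mean 3.88, SD 0.64
```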

Since its release, ChatGPT has gained significant popularity and is rapidly becoming a common tool for seeking various types of information on the web [7]. Many studies are currently assessing the capabilities and potential applications of this chatbot, but the reliability of the information it provides still requires validation.
JMIR Form Res 2025;9:e73642