JMIR AI

A new peer-reviewed journal focused on research and applications for the health artificial intelligence (AI) community.

Editors-in-Chief:

Khaled El Emam, PhD, Canada Research Chair in Medical AI, University of Ottawa; Senior Scientist, Children’s Hospital of Eastern Ontario Research Institute; Professor, School of Epidemiology and Public Health, University of Ottawa, Canada

Bradley Malin, PhD, Accenture Professor of Biomedical Informatics, Biostatistics, and Computer Science; Vice Chair for Research Affairs, Department of Biomedical Informatics; Affiliated Faculty, Center for Biomedical Ethics & Society, Vanderbilt University Medical Center, Nashville, Tennessee, USA


Impact Factor: 2.0 | CiteScore: 2.5

JMIR AI is a new journal that focuses on applications of AI in health settings, covering contemporary developments as well as historical examples, with an emphasis on sound methodological evaluation of AI techniques and authoritative analysis. It is intended to be the main source of reliable information for health informatics professionals learning how AI techniques can be applied and evaluated.

JMIR AI is indexed in DOAJ, PubMed, PubMed Central, Web of Science Core Collection, and Scopus.

JMIR AI received an inaugural Journal Impact Factor of 2.0 in the 2025 release of Clarivate's Journal Citation Reports.

JMIR AI received an inaugural Scopus CiteScore of 2.5 (2024), placing it in the 68th percentile as a Q2 journal.

 

Recent Articles

Reviews in AI

The impact of surgical complications is substantial and multifaceted, affecting patients, families, surgeons, and health care systems. Despite the remarkable progress in artificial intelligence (AI), there remains a notable gap in the prospective implementation of AI models in surgery that use real-time data to support decision-making and enable proactive intervention to reduce the risk of surgical complications.

Applications of AI

Peer review remains central to ensuring research quality, yet it is constrained by reviewer fatigue and human bias. The rapid rise in scientific publishing has worsened these challenges, prompting interest in whether large language models (LLMs) can support or improve the peer review process.

Reviews in AI

Large language models (LLMs) have fundamentally transformed approaches to natural language processing tasks across diverse domains. In health care, accurate and cost-efficient text classification is crucial—whether for clinical note analysis, diagnosis coding, or other related tasks—and LLMs show considerable promise. Text classification has long faced multiple challenges, including the need for manual annotation during training, the handling of imbalanced data, and the development of scalable approaches. In health care, additional challenges arise, particularly the critical need to preserve patient data privacy and the complexity of medical terminology. Numerous studies have leveraged LLMs for automated health care text classification and compared their performance with traditional machine learning–based methods, which typically require embedding, annotation, and training. However, existing systematic reviews of LLMs either do not specialize in text classification or do not focus specifically on the health care domain.
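As a rough illustration of the "embedding, annotation, and training" pipeline these studies compare against, here is a minimal scikit-learn sketch; the note texts, labels, and category names are invented placeholders, not data from any study in the review.

```python
# Minimal sketch of the traditional ML baseline the review contrasts with
# LLMs: embed manually annotated texts, then train a supervised classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented placeholder notes and manual annotations (the costly step
# that LLM-based classification aims to avoid).
notes = [
    "patient reports chest pain radiating to left arm",
    "routine follow-up, no acute complaints",
    "shortness of breath and wheezing on exertion",
    "annual physical, labs within normal limits",
]
labels = ["cardiac", "routine", "respiratory", "routine"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(notes, labels)  # supervised training on annotated examples

print(clf.predict(["new onset chest tightness"]))
```

An LLM-based approach would instead classify such notes directly from a prompt, trading the annotation and training effort for per-query inference cost.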

Research Letter

Large language models (LLMs) are increasingly used by patients and families to interpret complex medical documentation, yet most evaluations focus only on clinician-judged accuracy. In this study, 50 pediatric cardiac intensive care unit notes were summarized using GPT-4o mini and reviewed by both physicians and parents, who rated readability, clinical fidelity, and helpfulness. Ratings revealed notable discrepancies between parents and clinicians on helpfulness, along with distinct insights from clinicians assessing clinical accuracy and from parents assessing readability. This study highlights the need for dual-perspective frameworks that balance clinical precision with patient understanding.

Applications of AI

Patient experience is a critical consideration for any health care institution. Leveraging artificial intelligence (AI) to improve health care delivery has rapidly become an institutional priority across the nation. Ambient AI documentation systems, such as the Dragon Ambient Experience (DAX), may influence patient perception of provider communication and overall experience.

Viewpoints and Perspectives in AI

Artificial intelligence (AI) is increasingly used to support medical interpreting and public health communication, yet current systems introduce serious risks to accuracy, confidentiality, and equity, particularly for speakers of low-resource languages. Automatic translation models often struggle with regional varieties, figurative language, culturally embedded meanings, and emotionally sensitive conversations about reproductive health or chronic disease, which can lead to clinically significant misunderstandings. These limitations threaten patient safety, informed consent, and trust in health systems when clinicians rely on AI as if it were a professional interpreter. At the same time, the large data sets required to train and maintain these systems create new concerns about surveillance, secondary use of linguistic data, and gaps in existing privacy protections. This Viewpoint examines the ethical and structural implications of AI-mediated interpreting in clinical and public health settings, arguing that its routine use as a replacement for qualified interpreters would normalize a lower standard of care for people with limited English proficiency and reinforce existing health disparities. Instead, AI tools should be treated as optional, carefully evaluated supplements that operate under the supervision of trained clinicians and professional interpreters, within clear regulatory guardrails for transparency, accountability, and community oversight. The paper concludes that language access must remain grounded in human expertise, language rights, and structural commitments to equity, rather than in cost-saving promises of automated systems.

Foundation Models and Their Applications in AI

Early-stage clinical findings often appear only as conference posters circulated on social media. Because posters rarely carry structured metadata, their citations are invisible to bibliometric and alternative metric tools, limiting real-time research discovery.

Reviews in AI

Recent advances have highlighted the potential of artificial intelligence (AI) systems in assisting clinicians with administrative and clinical tasks, but concerns regarding biases, lack of regulation, and potential technical issues pose significant challenges. The lack of a clear definition of AI, combined with limited qualitative research exploring clinicians' perspectives, has constrained understanding of how clinicians view AI in primary health care settings.

Research Letter

This study examined how interactions with ChatGPT about flu vaccination and climate change influenced users’ beliefs and attitudes.

Applications of AI

Artificial intelligence (AI) chatbots have become prominent tools in health care to enhance health knowledge and promote healthy behaviors across diverse populations. However, factors influencing the perception of AI chatbots and human-AI interaction are largely unknown.

Reviews in AI

Artificial intelligence (AI) has recently experienced a rebirth with the growth of generative AI systems such as ChatGPT and Bard. These systems are trained with billions of parameters and have made AI widely accessible to, and better understood by, different user groups. Widespread adoption of AI has created a need to understand how machine learning (ML) models operate in order to build trust in them. Understanding how these models generate their results remains a major challenge that explainable AI seeks to solve. Federated learning (FL) grew out of the need for privacy-preserving AI: ML models are trained in a decentralized way but still share model parameters with a global model.
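As a minimal sketch of that parameter-sharing idea, the NumPy example below implements FedAvg-style federated training of a logistic regression model; the three clients, their synthetic data, and all hyperparameters are illustrative assumptions, not the systems surveyed in the review.

```python
# Minimal federated averaging (FedAvg) sketch: clients train locally on
# private data and share only model parameters with a global model.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical private datasets: three clients, each with (features, labels).
clients = [
    (rng.normal(size=(40, 5)), rng.integers(0, 2, size=40))
    for _ in range(3)
]

def local_step(w, X, y, lr=0.1):
    """One epoch of logistic-regression gradient descent on a client's data."""
    p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
    grad = X.T @ (p - y) / len(y)      # gradient of the logistic loss
    return w - lr * grad

w_global = np.zeros(5)
for round_ in range(20):
    local_weights, sizes = [], []
    for X, y in clients:
        w = w_global.copy()
        for _ in range(5):             # a few local epochs per round
            w = local_step(w, X, y)
        local_weights.append(w)
        sizes.append(len(y))
    # Server aggregates: average parameters, weighted by client data size.
    w_global = np.average(local_weights, axis=0, weights=sizes)

print("global weights after training:", w_global)
```

The key property is that only the locally updated weight vectors, never the raw client data, are shared with the server for weighted averaging.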

Drug Discovery and Clinical Trials

Leukemia treatment remains a major challenge in oncology. While thiadiazolidinone analogs show potential to inhibit leukemia cell proliferation, they often lack sufficient potency and selectivity. Traditional drug discovery struggles to efficiently explore the vast chemical landscape, highlighting the need for innovative computational strategies. Machine learning (ML)–enhanced quantitative structure-activity relationship (QSAR) modeling offers a promising route to identify and optimize inhibitors with improved activity and specificity.
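To make the ML-QSAR idea concrete, here is a minimal scikit-learn sketch that fits a random forest regressor mapping molecular descriptors to activity and ranks hypothetical new analogs; the descriptor matrix and activity values are synthetic placeholders, not real thiadiazolidinone data.

```python
# Minimal QSAR sketch: learn a mapping from molecular descriptors to
# bioactivity with a random forest, then rank candidate analogs.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Synthetic stand-ins for computed descriptors and measured activities.
X_train = rng.normal(size=(120, 6))
y_train = X_train @ rng.normal(size=6) + rng.normal(scale=0.3, size=120)

model = RandomForestRegressor(n_estimators=200, random_state=0)

# Cross-validated R^2 gives a rough sense of predictive quality.
print("CV R^2:", cross_val_score(model, X_train, y_train, cv=5).mean())

model.fit(X_train, y_train)

# Score hypothetical new analogs and pick the most promising ones.
X_new = rng.normal(size=(10, 6))
preds = model.predict(X_new)
print("top candidates:", np.argsort(preds)[::-1][:3])
```

In a real workflow, the descriptor columns would come from cheminformatics software (e.g., computed logP, molecular weight, polar surface area), and the predicted activities would guide which analogs to synthesize and test.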
