JMIR AI

A new peer-reviewed journal focused on research and applications for the health artificial intelligence (AI) community.

Editors-in-Chief:

Khaled El Emam, PhD, Canada Research Chair in Medical AI, University of Ottawa; Senior Scientist, Children’s Hospital of Eastern Ontario Research Institute; Professor, School of Epidemiology and Public Health, University of Ottawa, Canada

Bradley Malin, PhD, Accenture Professor of Biomedical Informatics, Biostatistics, and Computer Science; Vice Chair for Research Affairs, Department of Biomedical Informatics; Affiliated Faculty, Center for Biomedical Ethics & Society, Vanderbilt University Medical Center, Nashville, Tennessee, USA



JMIR AI is a new journal that focuses on the applications of AI in health settings. This includes contemporary developments as well as historical examples, with an emphasis on sound methodological evaluations of AI techniques and authoritative analyses. It is intended to be the main source of reliable information for health informatics professionals to learn about how AI techniques can be applied and evaluated. 

JMIR AI is indexed in DOAJ, PubMed and PubMed Central and has been selected for inclusion in the Web of Science Core Collection as well as Scopus. 

 

Recent Articles

Viewpoints and Perspectives in AI

With the explosion of innovation driven by generative and traditional AI comes the need to understand and regulate products that often defy current regulatory classification. Tradition, and a lack of regulatory expediency, encourages force-fitting novel innovations into pre-existing product classifications or into the essentially unregulated domains of wellness and consumer electronics. Further, regulatory requirements, levels of risk tolerance, and capabilities vary greatly across the spectrum of technology innovators. For example, currently unregulated information and consumer electronics suppliers set their own editorial and communication standards without extensive federal regulation. Biopharmaceutical companies, however, are held to a higher standard in the same space because of direct-to-consumer regulations that govern interactions among biopharmaceutical companies, healthcare providers, and patients, such as the Sunshine Act (also known as Open Payments), the federal Anti-Kickback Statute (AKS), and the federal False Claims Act (FCA). Clear and well-defined regulations not only reduce ambiguity but also facilitate scale, underscoring the importance of regulatory clarity in fostering innovation and growth. To prevent highly regulated industries such as healthcare and biopharma from being discouraged from developing AI to improve patient care, a specialized framework is needed to establish regulatory evidence for AI-based medical solutions. In this paper, we review the current regulatory environment in light of both recent innovations and the pre-existing legal and regulatory responsibilities of the biopharma industry, propose a novel, hybridized approach for assessing AI-based patient solutions, and elaborate the proposed concepts through case studies. By reviewing existing regulations and proposing a hybridized approach, we aim to ensure that the potential of AI in biopharmaceutical innovation is not hindered by an uneven regulatory landscape.

Applications of AI

The application of large language models (LLMs) in analyzing expert textual online data is a topic of growing importance in computational linguistics and qualitative research within healthcare settings.

Applications of AI

Most online and social media discussions about birth control methods for women center on side effects, highlighting a demand for shared experiences with these products. Online user reviews and ratings of birth control products offer a largely untapped supplementary resource that could assist women and their partners in making informed contraception choices.

Ethical, Legal, and Social Issues in AI

The digitization of healthcare, facilitated by the adoption of electronic health record (EHR) systems, has revolutionized data-driven medical research and patient care. While this digital transformation offers substantial benefits in healthcare efficiency and accessibility, it also raises significant concerns over privacy and data security. Initially, the journey toward protecting patient data through de-identification saw a transition from rule-based systems to mixed approaches that incorporate machine learning. Subsequently, the emergence of large language models (LLMs) has presented a further opportunity in this domain, offering unparalleled potential for improving the accuracy of context-sensitive de-identification. Despite this potential, however, the deployment of the most advanced models in hospital environments is frequently hindered by data security concerns and the extensive hardware resources required.

Foundations of AI

A major challenge in using electronic health records (EHR) is the inconsistency of patient follow-up, resulting in right-censored outcomes. This becomes particularly problematic in long-horizon event predictions, such as autism and attention-deficit/hyperactivity disorder (ADHD) diagnoses, where a significant number of patients are lost to follow-up before the outcome can be observed. Consequently, fully supervised methods like binary classification (BC), which are trained to predict observed diagnoses, are substantially affected by the probability of sufficient follow-up, leading to biased results.

Applications of AI

Large language models (LLMs) have demonstrated powerful capabilities in natural language tasks and are increasingly being integrated into health care for tasks like disease risk assessment. Traditional machine learning methods rely on structured data and coding, limiting their flexibility in dynamic clinical environments. This study presents a novel approach to disease risk assessment using generative LLMs through conversational artificial intelligence (AI), eliminating the need for programming.

Applications of AI

People with schizophrenia often present with cognitive impairments that may hinder their ability to learn about their condition. Education platforms powered by large language models (LLMs) have the potential to improve the accessibility of mental health information. However, the black-box nature of LLMs raises ethical and safety concerns regarding the controllability of chatbots. In particular, prompt-engineered chatbots may drift from their intended role as the conversation progresses and become more prone to hallucinations.

Applications of AI

The application of machine learning methods to data generated by ubiquitous devices like smartphones presents an opportunity to enhance the quality of health care and diagnostics. Smartphones are ideal for gathering data easily, providing quick feedback on diagnoses, and proposing interventions for health improvement.

Foundations of AI

Spirometry can be performed in the office setting or even remotely with portable spirometers. Although basic spirometry can be diagnostic for obstructive lung disease, clinically pertinent information such as restriction, hyperinflation, and air-trapping requires additional testing, such as body plethysmography, which is not as readily available. We hypothesize that spirometry data contain information that, leveraging machine learning techniques, allows estimation of static lung volumes in certain circumstances.

Foundations of AI

Language barriers contribute significantly to healthcare disparities in the United States, where a sizeable proportion of patients are exclusively Spanish speaking. In orthopaedic surgery, such barriers impact both patient comprehension and patient engagement with available resources. Previous studies have explored the utility of large language models (LLMs) for medical translation but have yet to robustly evaluate AI-driven translation and simplification of orthopaedic materials for Spanish speakers.

AI for Synthetic Data

Recent advances in generative adversarial networks and large language models (LLMs) have significantly improved the synthesis and augmentation of medical data. These and other deep learning–based methods offer promising potential for generating high-quality, realistic datasets crucial for improving machine learning applications in health care, particularly in contexts where data privacy and availability are limiting factors. However, challenges remain in accurately capturing the complex associations inherent in medical datasets.

Foundations of AI

Deep learning techniques have shown promising results in the automatic classification of respiratory sounds. However, accurately distinguishing these sounds in real-world noisy conditions poses challenges for clinical deployment. In addition, predicting signals with only background noise could undermine user trust in the system.


Preprints Open for Peer-Review

There are no preprints available for open peer-review at this time. Please check back later.
