JMIR AI
A new peer-reviewed journal focused on research and applications for the health artificial intelligence (AI) community.
Editors-in-Chief:
Khaled El Emam, PhD, Canada Research Chair in Medical AI, University of Ottawa; Senior Scientist, Children’s Hospital of Eastern Ontario Research Institute; Professor, School of Epidemiology and Public Health, University of Ottawa, Canada
Bradley Malin, PhD, Accenture Professor of Biomedical Informatics, Biostatistics, and Computer Science; Vice Chair for Research Affairs, Department of Biomedical Informatics; Affiliated Faculty, Center for Biomedical Ethics & Society, Vanderbilt University Medical Center, Nashville, Tennessee, USA
Impact Factor: 2.0 | CiteScore: 2.5
Recent Articles

Artificial intelligence (AI) is a rapidly evolving technology with the potential to revolutionize the healthcare industry. In Saudi Arabia, the healthcare sector has adopted AI technologies over the past decade to enhance service efficiency and quality, aligning with the country's technology drive under Vision 2030.

Choosing a transplant program impacts a patient’s likelihood of receiving a kidney transplant. Most patients are unaware of the factors influencing their candidacy. As patients increasingly rely on online resources for health care decisions, this study quantifies the available online patient-level information on kidney transplant recipient (KTR) selection criteria across US kidney transplant centers.

The adaptive nature of artificial intelligence (AI), with its ability to improve performance through continuous learning, offers substantial benefits across various sectors. However, current regulatory frameworks were not designed to accommodate this adaptive nature, and approval timelines can be prolonged, sometimes exceeding one year for some AI-enabled devices. This creates significant challenges for manufacturers, who must contend with lengthy waits and submit multiple approval requests for AI-enabled device software functions as they are updated. In response, regulatory agencies such as the U.S. Food and Drug Administration (FDA) have introduced guidelines to better support the approval process for continuously evolving AI technologies. This article explores the FDA’s concept of predetermined change control plans (PCCPs) and how they can streamline regulatory oversight by reducing the need for repeated approvals while ensuring safety and compliance. This can help reduce the burden on regulatory bodies and decrease waiting times for approval decisions, thereby fostering innovation, increasing market uptake, and realizing the benefits of AI and machine learning (ML) technologies.

The European Union's Artificial Intelligence Act (AI Act), adopted in 2024, establishes a landmark regulatory framework for AI systems, with significant implications for healthcare. The Act classifies medical AI as "high-risk," imposing stringent requirements for transparency, data governance, and human oversight. While these measures aim to safeguard patient safety, they may also hinder innovation, particularly for smaller healthcare providers and startups. Concurrently, geopolitical instability—marked by rising military expenditures, trade tensions, and supply chain disruptions—threatens healthcare innovation and access.

Generative artificial intelligence (GenAI) is increasingly being integrated into health care, offering a wide array of benefits. Currently, GenAI applications are useful in disease risk prediction and preventive care, diagnostics via imaging, artificial intelligence (AI)–assisted devices and point-of-care tools, drug discovery and design, on-site and remote patient and disease monitoring with wearables and device integration, integration of multimodal data and personalized medicine, robotic surgery, and health system efficiency and workflow optimization, among other aspects of disease prevention, control, diagnosis, and treatment. Recent breakthroughs have led to the development of reliable and safer GenAI systems capable of handling the complexity of health care data. The potential of GenAI to optimize resource use and enhance productivity underscores its critical role in patient care. However, the use of AI in health is not without critical gaps and challenges, including (but not limited to) AI-related environmental concerns, transparency and explainability, hallucinations, inclusiveness and inconsistencies, cost and clinical workflow integration, and safety and security of data (summarized by the acronym ETHICS). In addition, the governance and regulatory issues surrounding GenAI applications in health care highlight the importance of addressing these aspects for responsible and appropriate GenAI integration. Building on AI’s promising start necessitates striking a balance between technical advancements and ethical, equity, and environmental concerns. Here, we highlight several ways in which the transformative power of GenAI is revolutionizing public health practice and patient care, acknowledge gaps and challenges, and indicate future directions for AI adoption and deployment.

The widespread adoption of artificial intelligence (AI)-powered search engines has transformed how people access health information. Microsoft Copilot, formerly Bing Chat, offers real-time web-sourced responses to user queries, raising concerns about the reliability of its health content. This is particularly critical in the domain of dietary supplements, where scientific consensus is limited and online misinformation is prevalent. Despite the popularity of supplements in Japan, little is known about the accuracy of AI-generated advice on their effectiveness for common diseases.

Despite public health efforts, tobacco remains the leading cause of preventable death in the U.S., disproportionately impacting underrepresented populations. Public policies are needed to improve health equity in tobacco-related outcomes. One strategy for promoting public support for these policies is through health messaging. Improvements in artificial intelligence (AI) technology present a new opportunity to create tailored policy messages quickly; however, there is limited research on how the public might perceive the use of AI for public health messages.

Large language models (LLMs) are increasingly applied in healthcare for documentation, patient education, and clinical decision support. However, their factual reliability can be compromised by hallucinations and a lack of source traceability. Retrieval-augmented generation (RAG) enhances response accuracy by combining generative models with document retrieval mechanisms. While promising in medical contexts, RAG-based systems remain underexplored in orthopedic and trauma surgery patient education, particularly in non-English settings.
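As context for the RAG mechanism mentioned above, the following is a minimal sketch of a retrieval-augmented pipeline: candidate passages are scored against the query, the most relevant ones are stitched into a grounding prompt, and that prompt would then be passed to a generative model. The toy corpus, the keyword-overlap scoring, and the generate_answer stub are illustrative assumptions, not the system evaluated in the study, which would typically use embedding-based retrieval over a curated clinical document set.

# Minimal retrieval-augmented generation (RAG) sketch.
# Assumptions: a toy in-memory corpus and keyword-overlap scoring stand in for
# a real vector store and embedding model; the LLM call is left as a stub.

def score(query: str, passage: str) -> float:
    """Crude relevance score: fraction of query terms found in the passage."""
    query_terms = set(query.lower().split())
    passage_terms = set(passage.lower().split())
    return len(query_terms & passage_terms) / max(len(query_terms), 1)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k passages most relevant to the query."""
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Ground the model by prepending retrieved passages to the question."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the patient question using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\n"
    )

if __name__ == "__main__":
    corpus = [  # hypothetical patient-education snippets
        "After hip replacement surgery, most patients begin walking with support on the first day.",
        "Wound dressings are usually changed every two to three days unless otherwise instructed.",
        "A balanced diet supports bone healing after fracture surgery.",
    ]
    question = "When can I start walking after hip replacement surgery?"
    prompt = build_prompt(question, retrieve(question, corpus))
    print(prompt)
    # In a real system the prompt would be sent to an LLM, for example:
    # answer = generate_answer(prompt)  # hypothetical LLM call

The key design point illustrated here is that the model is constrained to answer from retrieved source text, which is what gives RAG its traceability advantage over unassisted generation.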


Mpox (monkeypox) outbreaks since 2022 have emphasized the importance of accessible health education materials. However, many Japanese online resources on mpox are difficult to understand, creating barriers to public health communication. Recent advances in artificial intelligence (AI), such as ChatGPT-4o, show promise in generating more comprehensible and actionable health education content.
Preprints Open for Peer Review