JMIR AI
A new peer-reviewed journal focused on research and applications for the health artificial intelligence (AI) community.
Editors-in-Chief:
Khaled El Emam, PhD, Canada Research Chair in Medical AI, University of Ottawa; Senior Scientist, Children’s Hospital of Eastern Ontario Research Institute; Professor, School of Epidemiology and Public Health, University of Ottawa, Canada
Bradley Malin, PhD, Accenture Professor of Biomedical Informatics, Biostatistics, and Computer Science; Vice Chair for Research Affairs, Department of Biomedical Informatics; Affiliated Faculty, Center for Biomedical Ethics & Society, Vanderbilt University Medical Center, Nashville, Tennessee, USA
Impact Factor 2.0 CiteScore 2.5
Recent Articles

Mental disorders are frequently assessed using questionnaires, which have been developed over the past decades for different conditions. Despite the rigorous validation of these tools, high levels of content divergence have been reported among questionnaires measuring the same construct of psychopathology. Previous studies examining this content overlap required manual symptom labeling, which is observer-dependent and time-consuming.

Systematic literature reviews (SLRs) form the foundation of evidence synthesis, but they are exceptionally demanding in terms of time and resources. While recent advances in artificial intelligence (AI), particularly large language models (LLMs), offer the potential to accelerate this process, their use introduces challenges to transparency and reproducibility. Emerging reporting guidelines such as PRISMA-AI focus primarily on AI as a subject of research, not as a tool in the review process itself.

Neglected tropical diseases (NTDs) are among the most prevalent diseases worldwide and comprise 21 different conditions. Half of these conditions have skin manifestations and are known as skin NTDs. The diagnosis of skin NTDs relies on visual examination of patients, and deep learning (DL)–based diagnostic tools can be used to assist the diagnostic process. Advanced DL-based methods that incorporate multimodal data fusion (MMDF) offer a potential approach to enhancing the diagnosis of these diseases. However, little work has applied such tools to skin NTDs: only a few studies to date have implemented MMDF for this purpose.

Artificial intelligence (AI) is revolutionizing digital health, driving innovation in care delivery and operational efficiency. Despite its potential, many AI systems fail to meet real-world expectations because evaluation practices focus narrowly on short-term metrics such as efficiency and technical accuracy. Ignoring factors such as usability, trust, transparency, and adaptability hinders AI adoption, scalability, and long-term impact in health care. This paper emphasizes the importance of embedding scientific evaluation as a core operational layer throughout the AI lifecycle. We outline practical guidelines for digital health companies to improve AI integration and evaluation, informed by over 35 years of experience in science, the digital health industry, and AI development. We describe a multistep approach, including stakeholder analysis, real-time monitoring, and iterative improvement, that digital health companies can adopt to ensure robust AI integration. Key recommendations include assessing stakeholder needs, designing AI systems that can check their own work, testing for usability problems and biases, and ensuring continuous improvement so that systems remain user-centered and adaptable. By following these guidelines, digital health companies can improve AI reliability, scalability, and trustworthiness, driving better health care delivery and stakeholder alignment.

The proliferation of both general-purpose and health care–specific large language models (LLMs) has intensified the challenge of evaluating and comparing them effectively. Data contamination undermines the validity of public benchmarks; self-preference distorts LLM-as-a-judge approaches; and there is a gap between the tasks used to test models and those encountered in clinical practice.

Cisplatin resistance remains a significant obstacle in cancer therapy, frequently driven by translesion DNA synthesis (TLS) mechanisms that rely on specialized polymerases such as human DNA polymerase η (hpol η). Although small-molecule inhibitors such as PNR-7-02 have demonstrated potential to disrupt hpol η activity, current compounds often lack sufficient potency and specificity to effectively combat chemoresistance. The vastness of chemical space further limits traditional drug discovery approaches, underscoring the need for advanced computational strategies such as machine learning (ML)–enhanced quantitative structure-activity relationship (QSAR) modeling.

Recent advances in large language models (LLMs), such as GPT-4o, offer a transformative opportunity to extract nuanced linguistic, emotional, and social features from campaign texts at scale. These models enable a deeper understanding of the factors influencing campaign success—far beyond what structured data alone can reveal. Given these advancements, there is a pressing need for an integrated modeling framework that leverages both LLM-derived features and machine learning algorithms to more accurately predict and explain success in medical crowdfunding.

The introduction of artificial intelligence (AI) in health care holds great promise, offering the potential to alleviate physicians’ workloads and free more time for patient interactions. Since the emergence of large language models (LLMs), interest in AI has surged across the health care sector, including primary care. However, patients have expressed concerns about the ethical implications and use of AI in primary care, and understanding their perspectives is crucial for effective integration. Despite this, few studies have examined patients’ perspectives on the use of AI in primary care.
