Viewpoint
Abstract
Generative artificial intelligence (GenAI) is increasingly being integrated into health care, offering a wide array of benefits. Currently, GenAI applications are useful in disease risk prediction and preventive care; diagnostics via imaging; artificial intelligence (AI)–assisted devices and point-of-care tools; drug discovery and design; on-site and remote patient and disease monitoring, including wearables and device integration; integration of multimodal data and personalized medicine; robotic surgery; and health system efficiency and workflow optimization, among other aspects of disease prevention, control, diagnosis, and treatment. Recent breakthroughs have led to the development of more reliable and safer GenAI systems capable of handling the complexity of health care data. The potential of GenAI to optimize resource use and enhance productivity underscores its critical role in patient care. However, the use of AI in health is not without critical gaps and challenges, including (but not limited to) AI-related environmental concerns, transparency and explainability, hallucinations, inclusiveness and inconsistencies, cost and clinical workflow integration, and safety and security of data (ETHICS). In addition, the governance and regulatory issues surrounding GenAI applications in health care highlight the importance of addressing these aspects for responsible and appropriate GenAI integration. Building on AI’s promising start necessitates striking a balance between technical advancements and ethical, equity, and environmental concerns. Here, we highlight several ways in which the transformative power of GenAI is revolutionizing public health practice and patient care, acknowledge gaps and challenges, and indicate future directions for AI adoption and deployment.
JMIR AI 2025;4:e67626. doi: 10.2196/67626
Keywords
Introduction
Artificial intelligence (AI), also referred to as augmented intelligence, currently plays multiple critical roles in public health and medical practice, the rapid implementation and profound impact of which were unforeseen just a few years ago [-]. The emergence of generative AI (GenAI) through the release of a popular large language model in late 2022 made AI readily accessible to the general population and brought transformational shifts in several sectors, including health care [,]. GenAI has changed how people interact with each other—how they communicate, exercise, work, do business, relate, and lead. As part of this societal seismic shift, GenAI is revolutionizing global health care systems.
Digital technology is fast becoming an integral part of public health and medical practice, providing validated tools for detection, screening, diagnosis, patient care, and monitoring of health-related parameters. GenAI has measurably improved patient care and enabled individuals to self-identify issues, thereby leading to better management of their health and well-being []. According to a 2025 survey of senior health care leaders, 95% of respondents believed that GenAI will transform the industry, with 85% of health care providers and 83% of “payer leaders” stating that it will “reshape clinical decision-making within three to five years” []. In total, 54% of all respondents reported that they were already seeing a meaningful return on investment in their organization after the first year of GenAI adoption.
The introduction of GenAI into public health and medical ecosystems offers enormous opportunities for training, research, patient care, and resource management []. Nevertheless, the potential benefits of AI are accompanied by profound ethical considerations and substantial implementation challenges. In this viewpoint, we contend that the effective adoption of AI within health care contexts is contingent upon systematically addressing these concerns. We further delineate recommendations intended to inform stakeholders seeking to foster the responsible development and deployment of innovative AI systems.
Current Trends in Health Care
GenAI is currently used as a powerful tool to provide diverse services to health care and public health providers. This includes the delivery of personalized services to patients and accurate information to health care leaders, enabling them to improve the quality, efficiency, and effectiveness of care and to combat the increasingly widespread online dissemination of health misinformation and disinformation (see the table below).
| Application area | Role of AI | Current or expected benefits |
| Disease risk prediction and preventive care [-] | Predicts future susceptibility to many diseases using health records, lifestyle factors, and other data sources (eg, Delphi-2M and BlueDot). | Helps public health professionals plan. Has the potential to inform early interventions and long-term personalized risk estimates. |
| Diagnostics via imaging [-] | Supports automated detection of anomalies in medical imaging (x-ray, CTa, and MRIb) for tuberculosis, cancer, and other diagnoses. | Supports faster triage to reduce radiologist workload and ensures earlier detection with fewer missed cases. |
| AI-assisted devices and point-of-care tools [,] | Enables faster diagnosis by providing diagnostic insights in real time for cardiomyopathies and other abnormalities. | Supports faster diagnosis with potential use in nonspecialist settings, freeing specialists’ time and reducing delays. |
| Drug discovery and design [-] | Helps in identifying or designing new drug molecules, predicting toxicity, optimizing clinical trials, and repurposing existing drugs, thereby accelerating the drug development process. | Reduces time and cost of drug development compared to the traditional drug discovery process, potentially compressing decades into months and saving billions of dollars. |
| Remote monitoring [,] | Helps with continuous collection and analysis of physiological and behavioral data for chronic disease management and early detection of signs and symptoms (eg, wearables for PGHDc). | Supports early detection of complications, reduces hospitalizations, promotes better disease management, and provides more proactive care. |
| Integrating multimodal data and personalized medicine [,] | Combines genomics, imaging, and EHRsd to tailor treatments to patients’ specific needs. | Allows more precise treatments with the potential to reduce adverse reactions and achieve better patient outcomes. |
| Patient and disease monitoring [] | Provides tools and devices that help track disease progression to detect early disease symptoms. | Supports care in nonclinical settings to improve patients’ quality of life through early detection and diagnosis. |
| Robotic surgery [,] | Uses GenAIe-based algorithms to improve precision and control during surgical procedures. | Improves the effectiveness and efficiency of surgical procedures by enhancing precision, reducing surgeon fatigue, and improving safety. |
| Health system efficiency and workflow optimization [,,] | Automates administrative tasks, prioritization, and resource allocation. | Reduces delays in administrative tasks by improving allocation of resources, leading to cost savings and fewer preventable complications. |
aCT: computed tomography.
bMRI: magnetic resonance imaging.
cPGHD: personally generated health data.
dEHR: electronic health record.
eGenAI: generative artificial intelligence.
Current Gaps in GenAI in Health Care and Mitigation Strategies
AI is expected to improve health care outcomes by facilitating early diagnosis, reducing the medical administrative burden, aiding drug development, personalizing medical and oncological management, and monitoring health care parameters on an individual basis, thereby allowing clinicians to spend more time with their patients []. Although the integration of AI into health care has the potential to transform the industry, it also raises ethical, regulatory, and safety concerns []. AI can rapidly analyze large and complex datasets; extract tailored recommendations; support decision-making; and improve the efficiency of many tasks that involve processing data, text, or images [].
As the operability of GenAI in public health and medicine advances, significant gaps remain. AI systems risk perpetuating or amplifying existing health disparities when trained on currently available data, which are largely nonrepresentative and noninclusive in nature [,]. AI tools and resources also lack explainability and transparency, which undermines clinician trust and introduces legal and ethical issues in safety-critical care [,]. Data silos, poor data quality, and limited interoperability remain major technical and organizational barriers to AI integration and use across the health care sector []. Regulatory, governance, and evaluation frameworks for safe clinical deployment are often incomplete or inconsistent across jurisdictions. Furthermore, models can degrade in new settings (dataset shift), and routine monitoring and maintenance of deployed models are frequently inadequate [-]. Other challenges include environmental concerns, hallucinations, inconsistent outputs, and cultural insensitivity across models used in health care. The table below summarizes some of the current gaps in AI adoption in health care.
| Gap in AI technologies | Clinical and public health implications | Mitigation strategies |
| Bias and fairness [,] | | |
| Explainability and transparency [,] | | |
| Data access, quality, and interoperability [] | | |
| Generalizability and reproducibility [,] | | |
| Regulation, governance, and evaluation standards [,] | | |
| Continuous monitoring and model predictive maintenance [,] | | |
| Evidence-based clinical evaluation [,] | | |
| Data privacy, security, and governance [,] | | |
| Clinical workflow integration and usability [] | | |
| Workforce skills and trust [,] | | |
| Equity in public health contexts [,] | | |
| Conflicts of interest and transparency [] | | |
| Computational cost and environmental impact, including carbon footprint [-] | | |
| Hallucination with fabricated outputs [-] | | |
| Inconsistent outputs (stochasticity and reproducibility) [-] | | |
| Cultural insensitivity and lack of contextual grounding [] | | |
aML: machine learning.
bXAI: explainable artificial intelligence.
cEHR: electronic health record.
dRCT: randomized controlled trial.
eRAG: retrieval-augmented generation.
fLLM: large language model.
To address these gaps, the design, development, and deployment of new AI models must be deliberate and proactive: ensuring representative data, appropriate validation, transparency, and prospective clinical evaluation before deployment; implementing continuous monitoring for dataset shift, retraining policies, and incident reporting after deployment; and encouraging stakeholders to invest in interoperable data infrastructure, clinician training, clear regulation, and system-level equity assessments. This underscores the imperative to integrate ethics into all stages of AI design, development, training, and deployment.
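As a concrete (and deliberately simplified) illustration of what such postdeployment monitoring for dataset shift might look like, the Python sketch below compares the distribution of a single input feature at training time and in production using the population stability index (PSI). The feature, data, and 0.2 alert threshold are hypothetical assumptions for illustration only, not a validated monitoring protocol.

```python
"""Minimal sketch (illustrative only): flag dataset shift for one model input
by comparing its training-time distribution with production data via the
population stability index (PSI). Feature, data, and threshold are hypothetical."""
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Larger PSI = bigger difference between the two distributions."""
    # Bin edges come from the training-time (expected) distribution.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    # Widen the outer edges so out-of-range production values are still counted.
    edges[0] = min(edges[0], np.min(actual))
    edges[-1] = max(edges[-1], np.max(actual))
    exp_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    act_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid log(0) and division by zero for empty bins.
    exp_frac = np.clip(exp_frac, 1e-6, None)
    act_frac = np.clip(act_frac, 1e-6, None)
    return float(np.sum((act_frac - exp_frac) * np.log(act_frac / exp_frac)))

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    training_age = rng.normal(55, 12, 5000)    # hypothetical ages in the training data
    production_age = rng.normal(63, 12, 2000)  # older population seen after deployment
    psi = population_stability_index(training_age, production_age)
    # Common heuristic: PSI > 0.2 suggests a shift worth investigating or retraining for.
    print(f"PSI = {psi:.3f}; review recommended: {psi > 0.2}")
```

In routine use, a check of this kind would run on a schedule for every monitored feature (and for the model output distribution), with alerts feeding the retraining and incident-reporting policies described above.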
Although AI has tremendous potential in public and clinical health care, there is an urgent need to mitigate these challenges to effectively harness these benefits for more effective and efficient health care delivery systems. Leaders and health care providers must address all or most of the issues identified in the table above. This is achievable, as there are documented steps for designing, developing, and deploying AI models that are largely, or at least relatively, free of these challenges.
Maximizing AI Opportunities in Health Care
To maximize the opportunities that AI provides, health care leaders, public health specialists, and providers must work with biomedical engineers, computer scientists, and AI experts to develop interoperable data solutions, address biases, and ensure equity and fairness. In developing and deploying the next generation of health care AI tools, they must build in transparency and ensure the development of explainable GenAI tools that enhance health care providers’ trust in AI, thereby improving uptake and, in turn, patient-related decision-making. These modifications will promote responsible, appropriate, and ethical practices in health care. For example, GenAI-driven tools such as Woebot (Woebot Health) [], AI-powered mental health chatbots [], and wearable electrocardiogram (ECG) apps such as the Apple Watch (Apple Inc) ECG feature [] demonstrate how AI is already transforming health care by improving accessibility, decision-making, and transparency in patient care and data delivery. AI-powered chatbots and virtual health assistants, such as Babylon Health (eMed), provide patients with 24/7 access to health care advice, symptom checks, and appointment scheduling []. Imagine what could happen if these tools were trained with inclusive datasets that greatly minimize or eliminate bias, improve generalizability, and are open to providers. Such unbiased tools would accelerate adoption and use, saving providers’ time, creating opportunities for better provider-patient interactions, and enhancing the accuracy of diagnosis and treatment as well as the safety of hospital procedures.
Creating platforms that ensure high-quality, interoperable data can significantly enhance GenAI applications in health care by facilitating seamless data integration across different systems. Currently, GenAI-powered wearable health monitoring devices such as the Apple Watch (Apple Inc), Fitbit (Google Inc), and Garmin (Garmin Ltd) include sensors that measure step counts, blood oxygen, skin temperature, and electrodermal activity for stress monitoring. These personally generated health data, when analyzed, can be used to predict potential health risks and encourage preventive measures. Most of these devices are stand-alone products. However, system interoperability can bridge the gap between real-time monitoring and clinical decision-making. For instance, the ECG feature of a smartwatch that leverages GenAI to detect heart health anomalies can enable remote patient monitoring and provide explainable alerts to both lay users and health care professionals. Such alerts will help the wearer and the physician make timely, informed decisions [].
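As a minimal, hypothetical illustration of that bridge, the sketch below packages a single wearable heart-rate reading as an HL7 FHIR-style Observation resource, the kind of standards-based payload a clinical system could ingest alongside the rest of a patient’s record. The patient identifier, device label, and reading are invented; a production integration would follow the receiving system’s full FHIR profile and consent requirements.

```python
"""Minimal sketch (illustrative only): wrap one wearable heart-rate sample as an
HL7 FHIR R4-style Observation so a clinical system can consume it. The patient ID,
device label, and reading below are hypothetical."""
import json
from datetime import datetime, timezone

def heart_rate_observation(patient_id: str, bpm: float, device_label: str) -> dict:
    """Return a FHIR-style Observation dict for a single heart-rate sample."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "category": [{"coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/observation-category",
            "code": "vital-signs"}]}],
        # LOINC 8867-4 is the standard code for heart rate.
        "code": {"coding": [{"system": "http://loinc.org", "code": "8867-4",
                             "display": "Heart rate"}]},
        "subject": {"reference": f"Patient/{patient_id}"},
        "effectiveDateTime": datetime.now(timezone.utc).isoformat(),
        "valueQuantity": {"value": bpm, "unit": "beats/minute",
                          "system": "http://unitsofmeasure.org", "code": "/min"},
        "device": {"display": device_label},
    }

if __name__ == "__main__":
    obs = heart_rate_observation("example-patient-123", 112.0, "consumer smartwatch")
    # In practice, this JSON would be sent to the clinical system's FHIR endpoint.
    print(json.dumps(obs, indent=2))
```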
Training GenAI tools on representative datasets is essential so that model outputs appropriately reflect the entire population. Standardizing data collection protocols can enable consistency to be achieved across sources. Developing tools that detect and measure levels of bias in AI models, and incorporating fairness constraints during the development process, may help reduce biases [].
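As one simplified example of what such bias measurement can look like, the sketch below computes a single fairness metric, the gap in true-positive rate (equal opportunity) between two patient subgroups, for a deliberately biased toy screening model. The data, subgroup labels, and tolerance are hypothetical; a real audit would use validated data, clinically meaningful subgroups, and several complementary metrics.

```python
"""Minimal sketch (illustrative only): measure one fairness gap, the difference
in true-positive rate between two patient subgroups, for a toy binary screening
model. All data and labels are synthetic."""
import numpy as np

def true_positive_rate(y_true, y_pred):
    """Fraction of actual positives that the model correctly flags."""
    positives = y_true == 1
    if positives.sum() == 0:
        return float("nan")
    return float((y_pred[positives] == 1).mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute TPR difference between the two subgroups encoded in `group`."""
    tprs = [true_positive_rate(y_true[group == g], y_pred[group == g])
            for g in np.unique(group)]
    return abs(tprs[0] - tprs[1])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, 1000)   # hypothetical subgroup label (eg, site A vs site B)
    y_true = rng.integers(0, 2, 1000)  # true disease status
    # Toy model that misses more true positives in subgroup 1 than in subgroup 0.
    missed = rng.random(1000) < np.where(group == 1, 0.4, 0.1)
    y_pred = np.where(missed, 0, y_true)
    gap = equal_opportunity_gap(y_true, y_pred, group)
    print(f"Equal-opportunity gap (TPR difference): {gap:.2f}")  # compare against a set tolerance
```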
Focusing on developing explainable GenAI models can help build trust among clinicians and patients because such models enable users to understand how decisions are made, thereby fostering transparency and accountability. Integrating GenAI into clinical decision support systems that assist health care providers at the point of care can improve decision-making and patient outcomes. For example, the AI-driven Woebot mental health chatbot provides users with clear explanations for its therapeutic recommendations []. When suggesting cognitive behavioral therapy exercises, it explains their evidence-based benefits, such as reducing anxiety by addressing unhelpful thought patterns []. Similarly, the Apple Watch’s ECG feature builds user confidence and empowers individuals by providing instantaneous, actionable information, while stored data offer clinicians detailed insights into detected irregularities []. These applications demonstrate the value of transparent AI in improving user engagement and trust.
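To make this concrete, the sketch below applies one generic, model-agnostic explanation technique, permutation importance, to a hypothetical logistic regression risk model; it is not a description of how Woebot or the Apple Watch generate their explanations, and the features and synthetic labels are illustrative assumptions.

```python
"""Minimal sketch (illustrative only): surface which inputs drive a hypothetical
clinical risk model's predictions using permutation importance (bigger accuracy
drop when a feature is shuffled = more important). Data and model are synthetic."""
import numpy as np
from sklearn.linear_model import LogisticRegression

def permutation_importance(model, X, y, n_repeats=20, seed=0):
    """Mean drop in accuracy when each feature column is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = model.score(X, y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])  # break feature j's link to the label
            drops.append(baseline - model.score(X_perm, y))
        importances[j] = float(np.mean(drops))
    return importances

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 2000
    age = rng.normal(60, 10, n)
    bmi = rng.normal(27, 4, n)
    noise = rng.normal(0, 1, n)  # an input that should carry no signal
    X = np.column_stack([age, bmi, noise])
    y = ((0.08 * (age - 60) + 0.15 * (bmi - 27) + rng.normal(0, 1, n)) > 0).astype(int)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    for name, imp in zip(["age", "bmi", "noise"], permutation_importance(model, X, y)):
        print(f"{name:>5}: importance = {imp:.3f}")  # shown to the clinician alongside the prediction
```

Attributions of this kind, presented alongside each recommendation, are one way to give clinicians a verifiable reason for a model’s output rather than a bare score.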
Making AI tools more user-centered and integrated into health system workflows is essential to ensure a good user experience. Such tools will also be able to provide real-time monitoring and early warnings for health challenges, such as cardiac issues, thereby facilitating early professional evaluation and reducing morbidity and mortality [,]. In addition, updating and streamlining ethical guidelines and regulatory frameworks for AI in health care that prioritize data privacy, inclusivity, and transparency will facilitate appropriate, responsible, and equitable use of AI technology [,]. To achieve this, health leaders and biomedical engineers must collaborate with policymakers and other key stakeholders [].
Moreover, early exposure of future health care professionals to GenAI at secondary and tertiary levels of education is critical to producing an AI-astute health workforce for primary, secondary, and tertiary care. Therefore, it is imperative to incorporate hands-on training and practical application sessions into both graduate and undergraduate curricula so that future health care professionals can work seamlessly with GenAI tools and datasets. Simulation exercises, case studies, and project-based learning are pedagogical approaches that can be tailored to enhance practical understanding. To improve engagement and effectiveness, learning pathways should be customizable to meet the needs of individual users, thereby promoting AI literacy among the emerging health care workforce. Some companies currently offer excellent case studies for health care students to learn about GenAI applications. Similarly, several colleges and institutions have created new courses on AI at the graduate level. However, more courses should be created at the undergraduate level, especially in historically minority-serving institutions. Each program should consider the diverse backgrounds, expertise levels, and specialties of participants [,].
Fostering interdisciplinary collaboration through joint programs and projects will enhance stakeholders’ awareness of the potential and limitations of GenAI technologies and thus identify their most appropriate use. Interdisciplinary knowledge sharing between health care professionals, data scientists, biomedical engineers, policymakers, and computer scientists can accelerate the discovery of more innovative, appropriate, user-friendly, inclusive, and applicable solutions [,].
Bridging Equity Gaps in AI Adoption and Use
The future of GenAI in health care is poised to be transformative, fundamentally altering the landscape of public health and clinical practice. Much as a hammer amplifies the skill of the person wielding it, GenAI in the hands of trained and experienced health care providers will augment, rather than replace, skilled operators. It is a force for positive change when appropriately developed, modeled, and used. Making GenAI available to all individuals, irrespective of ethnicity, race, gender, or socioeconomic status, will reduce inequity and improve health outcomes. However, the current adoption and use of GenAI is not equitable across the health care industry, as large systems in high-income countries have unhindered access, while small organizations struggle to afford the tools they need most. Similarly, AI penetration in low- and middle-income countries remains limited due to inadequate infrastructure and insufficient financial resources. First-generation scholars and students from historically marginalized populations are also behind in AI adoption and use.
Health care systems adopting GenAI must prioritize people over profits to prevent inequity and its associated adverse outcomes, as the integration of advanced machine learning algorithms and big data analytics is not merely a trend but a paradigm shift that promises to enhance public health, improve clinical decision-making and patient experiences, and address systemic inefficiencies in resource allocation. To leverage these technological advances equitably and effectively, several key issues must be considered.
First, limited engagement by key stakeholders on how best to embed GenAI into health care provision poses a significant risk []. This challenge is exacerbated when medical, nursing, and allied professions are excluded from conversations that potentially impact health care services and professional practices. Establishing formal GenAI leadership roles will help ensure ethical and equitable use []. Health care leaders must drive the ethical and equitable integration of GenAI into health care services and ensure proper oversight to promote holistic, patient-centered, and professional care []. They can achieve this through a deliberate proactive leadership approach that thinks, plans, provides, processes, and communicates ahead to ensure a seamless and timely transition of health care systems from a pre-GenAI era to one that is fully GenAI integrated [,].
Second, as the success of GenAI in health care depends on acceptance by both patients and providers, transparent communication about the benefits and limitations of GenAI, as well as demonstrations of its value, is essential for building trust. Our recent studies have revealed that very few employees are aware of the process, cost, and implications of GenAI adoption in their organizations [,]. This is worse for minority populations and underserved communities. Thus, proper and timely communication systems must be developed, adopted, and operationalized in accordance with the deliberate proactive leadership approach [].
Third, to translate AI research into clinical practice across all populations, there is an urgent need for system-wide AI education, including a professional development component tailored to local contexts, with emphasis on underserved communities. Limited access to resources, including skilled and equipped AI trainers and the required infrastructure, hinders such on-the-job training. Therefore, there is a need to develop and popularize both accredited instructor-led and self-directed learning courses that provide introductory content on AI [], as its opacity limits widespread adoption. Furthermore, as the complexities of GenAI and its implementation can negatively impact its use in health care practice [], identifying discrepancies in priorities between health care managers and GenAI developers will lead to better collaboration. For instance, the development of GenAI applications with inclusive data that focus on health care leadership and management priorities should be a unified goal for all stakeholders []. These innovations must incorporate both the in-out (from providers to industry) and out-in (from industry to providers) approaches, placing providers and industry at the center of innovation, development, and deployment of new GenAI tools [].
Finally, as much of the early adoption of GenAI has been concentrated in better-resourced provider settings, such as hospitals, academic medical centers, and large health system networks, deliberate steps must be taken to overcome barriers such as data infrastructure, technical capacity, investment, governance, and risk management, which tend to disproportionately impact resource-limited settings [-]. In the United States, for example, this gap is especially apparent in resource-limited settings, such as essential community providers, including federally qualified health centers, tribal or urban Indian clinics, and community or free clinics, which serve underserved populations in medically disadvantaged areas. Essential community providers face well-established challenges, including limited resources and health information technologies, and they exhibit lower rates of deployment of advanced digital tools compared to private systems []. The adoption of GenAI by community clinics and hospital departments should be supported financially and otherwise by governments and relevant foundations. Some countries, such as Vietnam, are ahead of the curve in this regard, illustrating how GenAI could enhance service efficiency, improve outcomes of interventions, and raise the quality of care provided by the health care industry [,]. Efforts to expand the availability of GenAI applications to underserved health care units across regions should be intensified, and the global health care community should also collaborate to ensure that further GenAI developments are tailored to address identified needs.
Conclusions
GenAI technologies have the potential to transform health care by improving public health practices, enhancing diagnostic accuracy, personalizing treatments, automating services, and increasing administrative efficiency. Future developments in GenAI should be guided by the need to address health care’s most pressing AI-related challenges, especially environmental concerns, transparency and explainability, hallucinations, inclusiveness and inconsistencies, cost and clinical workflow integration, and safety and security of data (ETHICS). Similarly, AI regulation, governance, and clinical validation processes should be streamlined and strengthened to ensure the responsible and effective integration of AI in health care settings. Priority should also be given to establishing appropriate leadership and management structures and developing interoperability of data systems. By ensuring fairness, ethical practices, and appropriate educational and infrastructural initiatives, the global health community can strengthen the positive impact of GenAI, driving more efficient health care delivery systems and leading to improved patient outcomes.
Acknowledgments
OOO conceived the paper and developed the initial draft. All authors collated articles for the literature review and contributed significantly to writing the manuscript. OOO, SDT-R, and AWT-R revised the manuscript critically for important intellectual content, approved the final version, and agreed to its submission. All authors agree to be accountable for the content of the work. OOO receives institutional support from California State University, Dominguez Hills. SDT-R is supported by the Wellcome Trust Institutional Strategic Support Fund awarded to Imperial College London.
Conflicts of Interest
None declared.
References
- Ullah W, Ali Q. Role of artificial intelligence in healthcare settings: a systematic review. J Med Artif Intell. Sep 30, 2025;8:24. [FREE Full text] [CrossRef]
- Ahuja AS. The impact of artificial intelligence in medicine on the future role of the physician. PeerJ. 2019;7:e7702. [FREE Full text] [CrossRef] [Medline]
- Singh K, Prabhu A, Kaur N. The impact and role of artificial intelligence (AI) in healthcare: systematic review. Curr Top Med Chem. Mar 03, 2025. [CrossRef] [Medline]
- Vargas-Santiago M, León-Velasco DA, Maldonado-Sifuentes CE, Chanona-Hernandez L. A state-of-the-art review of artificial intelligence (AI) applications in healthcare: advances in diabetes, cancer, epidemiology, and mortality prediction. Computers. Apr 10, 2025;14(4):143. [CrossRef]
- Bharel M, Auerbach J, Nguyen V, DeSalvo KB. Transforming public health practice with generative artificial intelligence. Health Aff (Millwood). Jun 01, 2024;43(6):776-782. [CrossRef] [Medline]
- Fontenot J. Spotlight on leadership: what nurse leaders need to know about artificial intelligence. J Nurs Adm. Feb 01, 2024;54(2):74-76. [CrossRef] [Medline]
- Bohler F, Aggarwal N, Peters G, Taranikanti V. Future implications of artificial intelligence in medical education. Cureus. Jan 2024;16(1):e51859. [FREE Full text] [CrossRef] [Medline]
- Leclercq C, Witt H, Hindricks G, Katra RP, Albert D, Belliger A, et al. Wearables, telemedicine, and artificial intelligence in arrhythmias and heart failure: Proceedings of the European Society of Cardiology Cardiovascular Round Table. Europace. Oct 13, 2022;24(9):1372-1383. [CrossRef] [Medline]
- May EL. Intelligent care: how AI is driving healthcare transformation: healthcare executive. American College of Healthcare Executives. 2025. URL: https://healthcareexecutive.org/archives/september-october-2025/intelligent-care [accessed 2025-09-22]
- Reich C, Meder B. The heart and artificial intelligence-how can we improve medicine without causing harm. Curr Heart Fail Rep. Aug 2023;20(4):271-279. [FREE Full text] [CrossRef] [Medline]
- Cookson C. New AI model predicts susceptibility to over 1,000 diseases. Financial Times. URL: https://www.ft.com/content/598e07ec-954f-49b7-9bc5-ce77f9fff934 [accessed 2025-09-22]
- Gregory A. New AI tool can predict a person’s risk of more than 1,000 diseases, say experts. The Guardian. Sep 17, 2025. URL: https://www.theguardian.com/science/2025/sep/17/new-ai-tool-can-predict-a-persons-risk-of-more-than-1000-diseases-say-experts [accessed 2025-09-22]
- Don't fight tomorrow's outbreaks with yesterday's tools. BlueDot. URL: https://bluedot.global/ [accessed 2025-09-22]
- Health officials pilot AI-aided tech for diagnosing diseases. The Times of India. URL: https://timesofindia.indiatimes.com/city/chennai/health-officials-pilot-ai-aided-tech-for-diagnosing-diseases/articleshow/123907991.cms [accessed 2025-09-22]
- 10 real-world examples of AI in healthcare. Philips. Nov 24, 2022. URL: https://www.philips.com/a-w/about/news/archive/features/2022/20221124-10-real-world-examples-of-ai-in-healthcare.html%2C%20%5B15-10-2023%5D [accessed 2025-09-22]
- Liu PR, Lu L, Zhang JY, Huo TT, Liu SX, Ye ZW. Application of artificial intelligence in medicine: an overview. Curr Med Sci. Dec 2021;41(6):1105-1115. [FREE Full text] [CrossRef] [Medline]
- Gregory A. Doctors develop AI stethoscope that can detect major heart conditions in 15 seconds. The Guardian. Aug 30, 2025. URL: https://www.theguardian.com/technology/2025/aug/30/doctors-ai-stethoscope-heart-disease-london [accessed 2025-09-22]
- Hayward E. Global first as NHS hospital uses AI for instant skin cancer checks. The Times. URL: https://www.thetimes.com/uk/healthcare/article/global-first-as-nhs-hospital-uses-ai-for-instant-skin-cancer-checks-3clspdmk0 [accessed 2025-09-22]
- Sharma R. AI in drug discovery: transforming medicine and research. Markovate. Feb 24, 2025. URL: https://markovate.com/ai-in-drug-discovery [accessed 2025-09-22]
- Kalotra S. Role of AI in drug discovery: how it's impacting the healthcare industry. Signity. Nov 20, 2024. URL: https://www.signitysolutions.com/blog/role-of-ai-in-drug-discovery [accessed 2025-09-22]
- 10 best examples of AI in healthcare. Science News Today. Aug 9, 2025. URL: https://www.sciencenewstoday.org/10-best-examples-of-ai-in-healthcare [accessed 2025-09-22]
- How AI is improving diagnostics and health outcomes. Healthcare Readers. Feb 2, 2025. URL: https://healthcarereaders.com/medical-devices/ai-in-diagnostics [accessed 2025-09-22]
- Shah M. 10 real-world examples of artificial intelligence in healthcare. ECOSMOB. Dec 5, 2024. URL: https://www.ecosmob.com/ai-in-healthcare-examples/ [accessed 2025-09-22]
- Malik A. CareYaya is enabling affordable home care by connecting healthcare students with elders. TechCrunch. Nov 2, 2024. URL: https://techcrunch.com/2024/11/02/careyaya-is-enabling-affordable-home-care-by-connecting-healthcare-students-with-elders/ [accessed 2025-09-22]
- Knudsen JE, Ghaffar U, Ma R, Hung AJ. Clinical applications of artificial intelligence in robotic surgery. J Robot Surg. Mar 01, 2024;18(1):102. [FREE Full text] [CrossRef] [Medline]
- AI-powered bedsore prevention: Bayesian health AI platform for pressure ulcer prevention. TIME. Oct 30, 2024. URL: https://time.com/7095010/bayesian-health-ai-platform-for-pressure-ulcer-prevention/ [accessed 2025-09-22]
- Ahmed MI, Spooner B, Isherwood J, Lane M, Orrock E, Dennison A. A systematic review of the barriers to the implementation of artificial intelligence in healthcare. Cureus. Oct 2023;15(10):e46454. [FREE Full text] [CrossRef] [Medline]
- Chustecki M. Benefits and risks of AI in health care: narrative review. Interact J Med Res. Nov 18, 2024;13:e53616. [FREE Full text] [CrossRef] [Medline]
- Panteli D, Adib K, Buttigieg S, Goiana-da-Silva F, Ladewig K, Azzopardi-Muscat N, et al. Artificial intelligence in public health: promises, challenges, and an agenda for policy makers and public health institutions. Lancet Public Health. May 2025;10(5):e428-e432. [CrossRef]
- Dankwa-Mullan I. Health equity and ethical considerations in using artificial intelligence in public health and medicine. Prev Chronic Dis. Aug 22, 2024;21:E64. [FREE Full text] [CrossRef] [Medline]
- Ueda D, Kakinuma T, Fujita S, Kamagata K, Fushimi Y, Ito R, et al. Fairness of artificial intelligence in healthcare: review and recommendations. Jpn J Radiol. Jan 04, 2024;42(1):3-15. [FREE Full text] [CrossRef] [Medline]
- Sadeghi Z, Alizadehsani R, Cifci MA, Kausar S, Rehman R, Mahanta P, et al. A review of explainable artificial intelligence in healthcare. Comput Electr Eng. Aug 2024;118:109370. [FREE Full text] [CrossRef]
- Marey A, Arjmand P, Alerab AD, Eslami MJ, Saad AM, Sanchez N, et al. Explainability, transparency and black box challenges of AI in radiology: impact on patient care in cardiovascular radiology. Egypt J Radiol Nucl Med. Sep 13, 2024;55:183. [CrossRef]
- Li YH, Li YL, Wei MY, Li GY. Innovation and challenges of artificial intelligence technology in personalized healthcare. Sci Rep. Aug 16, 2024;14(1):18994. [FREE Full text] [CrossRef] [Medline]
- Mennella C, Maniscalco U, De Pietro G, Esposito M. Ethical and regulatory challenges of AI technologies in healthcare: a narrative review. Heliyon. Feb 29, 2024;10(4):e26297. [FREE Full text] [CrossRef] [Medline]
- Subasri V, Krishnan A, Kore A, Dhalla A, Pandya D, Wang B, et al. Detecting and remediating harmful data shifts for the responsible deployment of clinical AI models. JAMA Netw Open. Jun 02, 2025;8(6):e2513685. [FREE Full text] [CrossRef] [Medline]
- Hughes L. Regulation and ‘poor alignment’ are stymying health innovation, says report. Financial Times. URL: https://www.ft.com/content/b4dd8b0a-5328-454b-8657-769b02852dee [accessed 2025-09-22]
- Guan H, Bates D, Zhou L. Keeping medical AI healthy: a review of detection and correction methods for system degradation. ArXiv. Preprint posted online on June 20, 2025. [FREE Full text]
- Mienye ID, Obaido G, Jere N, Mienye E, Aruleba K, Emmanuel ID, et al. A survey of explainable artificial intelligence in healthcare: concepts, applications, and challenges. Informat Med Unlocked. 2024;51:101587. [CrossRef]
- Sohn E. The reproducibility issues that haunt health-care AI. Nature. Jan 09, 2023;613(7943):402-403. [CrossRef] [Medline]
- Olawade DB, David-Olawade AC, Wada OZ, Asaolu AJ, Adereni T, Ling J. Artificial intelligence in healthcare delivery: prospects and pitfalls. J Med Surg Public Health. Aug 2024;3:100108. [CrossRef]
- Hassan M, Kushniruk A, Borycki E. Barriers to and facilitators of artificial intelligence adoption in health care: scoping review. JMIR Hum Factors. Aug 29, 2024;11:e48633. [FREE Full text] [CrossRef] [Medline]
- Harishbhai Tilala M, Kumar Chenchala P, Choppadandi A, Kaur J, Naguri S, Saoji R, et al. Ethical considerations in the use of artificial intelligence and machine learning in health care: a comprehensive review. Cureus. Jun 2024;16(6):e62443. [CrossRef] [Medline]
- Bender EM, Gebru T, McMillan-Major A, Shmitchell S. On the dangers of stochastic parrots: can language models be too big? In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 2021. Presented at: FAccT '21; March 3-10, 2021; Virtual Event. URL: https://dl.acm.org/doi/10.1145/3442188.3445922 [CrossRef]
- Patterson D, Gonzalez J, Le Q, Liang C, Munguia LM, Rothchild D, et al. Carbon emissions and large neural network training. ArXiv. Preprint posted online on April 21, 2021. [FREE Full text] [CrossRef]
- Strubell E, Ganesh A, McCallum A. Energy and policy considerations for modern deep learning research. Proc AAAI Conf Artif Intell. 2020;34(09):13693-13696. [FREE Full text] [CrossRef]
- Ayers JW, Poliak A, Dredze M, Leas EC, Zhu Z, Kelley JB, et al. Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Intern Med. Jun 01, 2023;183(6):589-596. [FREE Full text] [CrossRef] [Medline]
- Kung TH, Cheatham M, Medenilla A, Sillos C, De Leon L, Elepaño C, et al. Performance of ChatGPT on USMLE: potential for AI-assisted medical education using large language models. PLOS Digit Health. Feb 9, 2023;2(2):e0000198. [FREE Full text] [CrossRef] [Medline]
- Thirunavukarasu AJ, Ting DS, Elangovan K, Gutierrez L, Tan TF, Ting DS. Large language models in medicine. Nat Med. Aug 17, 2023;29(8):1930-1940. [CrossRef] [Medline]
- Johnson D, Goodman R, Patrinely J, Stone C, Zimmerman E, Donald R, et al. Assessing the accuracy and reliability of AI-generated medical responses: an evaluation of the chat-GPT model. Res Sq. Feb 28, 2023:rs.3.rs-2566942. [FREE Full text] [CrossRef] [Medline]
- Kasthuri VS, Glueck J, Pham H, Daher M, Balmaceno-Criss M, McDonald CL, et al. Assessing the accuracy and reliability of AI-generated responses to patient questions regarding spine surgery. J Bone Joint Surg Am. Jun 19, 2024;106(12):1136-1142. [CrossRef] [Medline]
- Shen Y, Heacock L, Elias J, Hentel KD, Reig B, Shih G, et al. ChatGPT and other large language models are double-edged swords. Radiology. Apr 2023;307(2):e230163. [CrossRef] [Medline]
- UNESCO’s recommendation on the ethics of artificial intelligence: key facts. UNESCO. Jul 20, 2023. URL: https://www.unesco.org/en/articles/unescos-recommendation-ethics-artificial-intelligence-key-facts [accessed 2025-09-22]
- Gallagher J, Srinivasan S. AI at Woebot health – our core principles. Woebot Health. Aug 2023. URL: https://woebothealth.com/ai-core-principles/ [accessed 2025-09-22]
- Alotaibi A, Sas C. Review of AI-based mental health apps. In: Proceedings of the 36th International BCS Human-Computer Interaction Conference. 2023. Presented at: BCS HCI '23; August 28-29, 2023; York, UK. URL: https://doi.org/10.14236/ewic/BCSHCI2023.27 [CrossRef]
- Isakadze N, Martin SS. How useful is the smartwatch ECG? Trends Cardiovasc Med. Oct 2020;30(7):442-448. [FREE Full text] [CrossRef] [Medline]
- Miner AS, Laranjo L, Kocaballi AB. Chatbots in the fight against the COVID-19 pandemic. NPJ Digit Med. 2020;3:65. [FREE Full text] [CrossRef] [Medline]
- Healthcare Apple (UK). Apple. URL: https://www.apple.com/uk/healthcare/apple-watch/ [accessed 2025-10-23]
- Mehrabi N, Morstatter F, Saxena N, Lerman K, Galstyan A. A survey on bias and fairness in machine learning. ACM Comput Surv. Jul 13, 2021;54(6):1-35. [CrossRef]
- Ayorinde A, Mensah DO, Walsh J, Ghosh I, Ibrahim SA, Hogg J, et al. Health care professionals' experience of using AI: systematic review with narrative synthesis. J Med Internet Res. Oct 30, 2024;26:e55766. [FREE Full text] [CrossRef] [Medline]
- Oleribe OO. Leveraging and harnessing generative artificial intelligence to mitigate the burden of neurodevelopmental disorders (NDDs) in children. Healthcare (Basel). Aug 04, 2025;13(15):1898. [FREE Full text] [CrossRef] [Medline]
- Oleribe OO, Taylor-Robinson SD. Leveraging artificial intelligence tools and resources in leadership decisions. Am J Healthc Strateg. Aug 21, 2025;1(3). [CrossRef]
- Ronquillo CE, Peltonen LM, Pruinelli L, Chu CH, Bakken S, Beduschi A, et al. Artificial intelligence in nursing: priorities and opportunities from an international invitational think-tank of the Nursing and Artificial Intelligence Leadership Collaborative. J Adv Nurs. Sep 18, 2021;77(9):3707-3717. [FREE Full text] [CrossRef] [Medline]
- Ennis-O'Connor M, O'Connor WT. Charting the future of patient care: a strategic leadership guide to harnessing the potential of artificial intelligence. Healthc Manage Forum. Jul 05, 2024;37(4):290-295. [FREE Full text] [CrossRef] [Medline]
- Oleribe OO. Leading the next pandemics. Public Health Pract (Oxf). Jun 2025;9:100605. [FREE Full text] [CrossRef] [Medline]
- Oleribe O, Nwosu F, Taylor Robinson SD. Leadership and Management for Health Workers. Concepts. Theories. Practices. New York, NY. Europa Edizioni; 2024.
- Oleribe OO, Taylor-Robinson AW, Agala VR, Sobande OO, Izurieta R, Taylor-Robinson SD. Global adoption, promotion, impact, and deployment of AI in patient care, health care delivery, management, and health care systems leadership: cross-sectional survey. J Med Internet Res. Oct 22, 2025;27:e70805. [FREE Full text] [CrossRef] [Medline]
- Oleribe OO, Sabado P, Begum KT, Mutchler MG, Piccoli B, Taylor-Robinson AW, et al. Artificial intelligence adoption, adaptation, integration, and use in training healthcare workers at California State University, Dominguez Hills, California, USA. In: Proceedings of the 2025 Conference on American Public Health Association. 2025. Presented at: APHA '25; November 2-5, 2025:1-2; Washington, DC. URL: https://apha.confex.com/apha/2025/meetingapi.cgi/Paper/572299?filename=2025_Abstract572299.pdf&template=Word
- Pianykh OS, Langs G, Dewey M, Enzmann DR, Herold CJ, Schoenberg SO, et al. Continuous learning AI in radiology: implementation principles and early applications. Radiology. Oct 2020;297(1):6-14. [CrossRef] [Medline]
- Li D, Morkos J, Gage D, Yi PH. Artificial intelligence educational and research initiatives and leadership positions in academic radiology departments. Curr Probl Diagn Radiol. 2022;51(4):552-555. [CrossRef] [Medline]
- Chen Y, Moreira P, Liu WW, Monachino M, Nguyen TL, Wang A. Is there a gap between artificial intelligence applications and priorities in health care and nursing management? J Nurs Manag. Nov 24, 2022;30(8):3736-3742. [FREE Full text] [CrossRef] [Medline]
- Poon EG, Lemak CH, Rojas JC, Guptill J, Classen D. Adoption of artificial intelligence in healthcare: survey of health system priorities, successes, and challenges. J Am Med Inform Assoc. Jul 01, 2025;32(7):1093-1100. [CrossRef] [Medline]
- Woodie A. GenAI adoption, by the numbers. BigDATAwire. Sep 12, 2023. URL: https://www.datanami.com/2023/09/12/genai-adoption-by-the-numbers/ [accessed 2025-09-22]
- Five key trends of GenAI adoption. Guidehouse. Apr 9, 2024. URL: https://guidehouse.com/insights/advanced-solutions/2024/5-key-trends-of-genai-adoption [accessed 2025-09-22]
- Scaling patient engagement with multilingual GenAI tools in an underserved community clinic. Thinkitive. URL: https://www.thinkitive.com/case-studies/multilingual-genai-patient-engagement-community-clinic.html [accessed 2025-09-22]
- Quan NK, Taylor-Robinson AW. Vietnam's evolving healthcare system: notable successes and significant challenges. Cureus. Jun 14, 2023;15(6):e40414. [FREE Full text] [CrossRef] [Medline]
- Doan Thu TN, Nguyen QK, Taylor-Robinson AW. Healthcare in Vietnam: harnessing artificial intelligence and robotics to improve patient care outcomes. Cureus. Sep 2023;15(9):e45006. [FREE Full text] [CrossRef] [Medline]
Abbreviations
| AI: artificial intelligence |
| ECG: electrocardiogram |
| ETHICS: environmental concerns, transparency and explainability, hallucinations, inclusiveness and inconsistencies, cost and clinical workflow integration, and safety and security of data |
| GenAI: generative AI |
Edited by K El Emam; submitted 16.Oct.2024; peer-reviewed by S Fitzek, J Lopes; comments to author 15.Nov.2024; revised version received 10.Dec.2024; accepted 19.Oct.2025; published 30.Oct.2025.
Copyright©Obinna O Oleribe, Andrew W Taylor-Robinson, Christian C Chimezie, Simon D Taylor-Robinson. Originally published in JMIR AI (https://ai.jmir.org), 30.Oct.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR AI, is properly cited. The complete bibliographic information, a link to the original publication on https://www.ai.jmir.org/, as well as this copyright and license information must be included.

