Published in Vol 5 (2026)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/77393.
Unlocking the Full Potential of Health Care Teams: How Artificial Intelligence Can Help


1Faculty of Medicine, University of British Columbia, 317-2194 Health Sciences Mall, Vancouver, BC, Canada

2Digital Trials Center, Scripps Research Translational Institute, La Jolla, CA, United States

3Department of Health Policy and Management, Harvard T.H. Chan School of Public Health, Harvard University, Boston, MA, United States

4Faculty of Health Sciences, Simon Fraser University, Burnaby, BC, Canada

5Department of Family Practice, Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada

Corresponding Author:

Manchi (Monica) Hsu, BSc


Developing effective health care teams is critical to meet the rising complexity in patient care. However, optimizing team composition, interpersonal dynamics, and care processes in complex health care systems requires processing vast amounts of data that capture fluid interactions among professionals—a task that has been cumbersome, costly, and avoided by most organizations. Well-designed artificial intelligence (AI) tools can meaningfully advance the frontier of health care teamwork, but the application of AI in this regard has been lagging. To support this development, we outline the potential for AI to help optimize team composition, strengthen norms and relationships among professionals, and standardize team-based clinical care processes. These applications can improve the integration of health care teams. Given the importance of relevant data for realizing such advances, we describe the potential types and sources of data that can support AI development. Furthermore, we highlight enabling strategies, including data-sharing alliances and leadership engagement to address privacy, interoperability, and ethical considerations. We propose a sequenced roadmap for piloting these applications based on technological readiness and clinical feasibility, ensuring that human oversight remains central as AI tools are introduced into complex care environments.

JMIR AI 2026;5:e77393

doi:10.2196/77393

Keywords



Amid growing patient complexity, health professionals must increasingly rely on one another to deliver integrated, high-quality care [1]. Patients with multiple chronic illnesses often require care from a network of specialized providers—and more than 50% of the Canadian population aged 65 years and older have multimorbidity [2]. They may visit as many as 16 different clinicians in a single year, spanning multiple roles and care venues [3]. At the same time, a typical primary care physician will interact annually with 229 other physicians across 117 practices while managing 1200 to 1900 patients [4]. This presents a daunting coordination environment, and because of its complexity, teamwork has been vexingly difficult for health care organizations to efficiently and effectively improve.

The very complexity and distributed interdependence that make teams attractive for health care environments also make them difficult to define and implement in practice. However, organizational leaders often lack the time and cognitive capacity to optimize the design and operation of such teams because the team composition and tasks are often fluid and complex.

Artificial intelligence (AI) techniques and tools can help improve the design and operations of health care teams. Throughout this paper, we use “AI” as an umbrella term referring to a set of technologies including machine learning and deep learning, artificial neural networks, natural language processing, generative AI including large language models, and agentic AI systems. We exclude rule-based expert systems, physical robots, and robotic process automation [5]. For interested readers, the methods we consider in scope are detailed in 2 cited papers [6,7].

Although AI tools have been introduced in many clinical areas [8], their application to arranging and organizing labor in health care remains largely untapped. Previous literature has largely focused on 2 separate domains. The work on teamwork in health care has examined the importance of organizational structures, collaboration, norms, and leadership styles without systematically drawing on AI tools [9-12]. Conversely, AI development has largely focused on tools that directly support administrative, academic, and clinical tasks, without directly influencing team structure, dynamics, and operations [13-15]. At the intersection of these 2 topics lies a nascent field of research exploring the role of AI in improving teamwork in health care.

Given the myriad ways AI may help team development, we drew on empirical and theoretical literature on care integration to focus on 3 key AI applications to health care teams: (1) optimizing team composition, (2) strengthening norms and relationships among professionals, and (3) standardizing team-based clinical care processes [1,16]. To illustrate feasibility, we describe data sources (Table 1) and target tasks for AI techniques to unlock these 3 applications. We also suggest that data-sharing alliances—perhaps strengthened with federated learning or horizontal integration programs—may facilitate access to the necessary data for training AI models. As such applications are realized, ensuring that humans retain the autonomy to interpret and implement AI recommendations is both an ethical and a practical imperative.

Table 1. Overview of data sources for artificial intelligence–driven team optimization.
Steps (definition) and dimensions captured | Data sources and examples
Application 1: identify optimal team members to care for target patient population
Patient needs
  • EMRa: demographics, comorbidities, visit records, geographic data
Provider characteristics
  • Employment registry: provider demographics, training history, caseload records
Teamwork effectiveness
  • EMR: patient outcomes
  • Validated survey of team performanceb [17]
Care outcomes
  • EMR: lab results, dates of patient-provider interactions, treatment records
Application 2: integrate new team members to cultivate effective organizational culture
Provider preferences
  • EMR and voice recordings: provider-patient and provider-provider interactions
  • Validated surveys of personalities and preferencesb [18-20]
Organizational culture
Application 3: standardize clinical decisions to improve care quality and efficiency
Clinical status
  • EMR: date, time, and results from patient-provider conversations, physical exams, lab tests, and imaging
Clinical decisions
  • EMR: treatment orders, care protocols
Patient-generated health data
  • Wearable devices, home monitoring devices, mobile tracking apps

aEMR: electronic medical record.

bThe surveys are listed in Multimedia Appendix 1 [17-21] and have demonstrated good psychometric properties.

cMost surveys target the respondents’ view of the organizations. Wording may need to be adjusted to capture individual respondent’s views of themselves, which can then be analyzed to understand the respondents’ fit with each other.


Because a patient’s care is often spread across units and shifts over time [11], observing and measuring the relationship between team composition and performance can be challenging. Without clearly defining such relationships, promptly optimizing team composition becomes problematic, which limits health care quality [22].

Training AI models to provide timely recommendations of appropriate members for a given patient’s care team requires learning relationships between the clinical environment, patient needs, and indicators of successful team performance. Such research can not only identify latent patterns driving referrals; researchers are also directly piloting proactive, multidisciplinary electronic consultations to determine when they can be helpful. In 2 instances, proactive AI-driven predictions regarding the need for palliative care consultations improved access to palliative care while reducing subsequent hospitalizations [23,24]. Furthermore, a large quantity of relevant data is already available. For example, input variables may include existing and potential new provider profiles and patient clinical history including demographics, morbidities, and past experiences with providers from electronic medical records (EMRs). Outcome variables may include care performance based on EMR-derived metrics and team functions from validated surveys [17,25-27]. Users and experts can further refine the output based on disease-specific guidelines.
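To make the modeling target concrete, the sketch below (in Python) ranks candidate providers for a patient's team with a logistic fit score. The feature names and weights are hypothetical illustrations; in practice, a model would learn them from EMR-derived team performance data rather than hard-code them.

```python
import math

# Hypothetical feature weights; a real model would learn these from
# historical team-performance outcomes rather than hard-code them.
WEIGHTS = {
    "specialty_match": 2.0,        # provider specialty fits patient needs
    "prior_shared_patients": 0.8,  # history of working with the current team
    "caseload_headroom": 0.5,      # capacity to take on the patient
}

def team_fit_score(features):
    """Logistic score in (0, 1) for one candidate provider."""
    z = sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

# Invented candidates with binary EMR-derived features.
candidates = {
    "provider_a": {"specialty_match": 1, "prior_shared_patients": 1},
    "provider_b": {"prior_shared_patients": 1, "caseload_headroom": 1},
}
ranked = sorted(candidates, key=lambda p: team_fit_score(candidates[p]),
                reverse=True)
```

A higher score here simply indicates a stronger predicted fit; as noted above, users and experts could still override the ranking per disease-specific guidelines.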

Within an episode of care, updated patient information in EMRs can dynamically adjust the patient’s optimal care team structure, potentially automating the consult process for relevant specialties. For instance, a patient admitted for pneumonia may initially only require a general internist, but if blood pressure drops precipitously, the AI tool may send out automatic alerts to engage an intensivist with tailored patient summaries that streamline the handoff process. Such AI tools may even discern the best person within a specialty to involve based on previous team-based interactions and care quality outcomes [26,27].
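The pneumonia escalation above can be pictured as an event-driven rule over incoming vitals. The following sketch is a deliberate simplification: the threshold and role names are illustrative, and a deployed tool would derive escalation criteria from historical team-based interactions and outcomes rather than fixed cutoffs.

```python
# Hypothetical escalation rule for the pneumonia scenario described above.
# The 90 mmHg threshold and role names are illustrative only.

def recommend_team_update(current_team, vitals):
    """Return (role, handoff summary) alerts for newly needed specialists."""
    alerts = []
    systolic = vitals.get("systolic_bp")
    if systolic is not None and systolic < 90 and "intensivist" not in current_team:
        summary = (f"Systolic BP dropped to {systolic} mmHg; "
                   "recommend intensivist consult with tailored handoff.")
        alerts.append(("intensivist", summary))
    return alerts

team = {"general_internist"}
alerts = recommend_team_update(team, {"systolic_bp": 78})
```

If the intensivist is already on the team, or vitals are stable, no alert fires, which mirrors the goal of engaging specialists only when the care team structure actually needs to change.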

Beyond supporting individual patient care team assignment, patient panel data in EMRs can project health care needs [28-31], which can inform proactive workforce planning [26,27].


As health care teams expand to address varied and complex patient needs, aligning providers’ values and communication norms is essential for effective coordination and decision-making [1]. However, meaningfully, promptly, and systematically acculturating new members can be challenging given the financial and cognitive barriers facing purely human-led efforts.

AI tools can build on existing approaches to help identify the underlying organizational culture, team climate, and individual behavior within teams [12,32,33]. Such work has historically relied on time-consuming manual qualitative coding and quantification or on numeric scale survey instruments that can be biased and context inappropriate. EMRs and recordings of providers’ interactions with patients and other providers can reveal a team’s underlying attitudes and preferences, in addition to past safety events and interpersonal conflicts [34,35]. Validated survey instruments can also supplement the identification of underlying personalities and preferences. Furthermore, AI tools can efficiently generate summarized versions of these organizational culture and climate factors for teams to consider potential improvements [36,37]. Further still, such discussions can be enhanced using currently available AI tools that introduce new members to effective communication skills [38,39]. A recent empirical study of TeamVision, an AI-enhanced multimodal analytics platform, piloted the translation of AI-captured team dynamics into data-informed debriefs that can improve team cohesion and communication. The platform captures voice presence, automated transcripts, spatial orientation, and team interactions during health care simulations to identify communication patterns in real time. It was deployed across 56 teams (221 students) in a nursing curriculum, and both learners and educators rated the system as useful and trustworthy for strengthening relational dynamics [40].
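As a toy illustration of mining interaction text for climate signals, one could tally marker phrases across transcripts. The lexicon below is invented for illustration; production tools would use validated natural language processing models rather than keyword matching.

```python
from collections import Counter

# Invented marker-phrase lexicon for two team-climate dimensions.
# Real systems would use validated NLP models, not keyword matching.
CLIMATE_MARKERS = {
    "psychological_safety": ["i'm not sure", "what do you think", "good catch"],
    "conflict": ["that's not my job", "you should have"],
}

def climate_signals(transcripts):
    """Count marker-phrase occurrences per climate dimension."""
    counts = Counter()
    for text in transcripts:
        lowered = text.lower()
        for dim, phrases in CLIMATE_MARKERS.items():
            counts[dim] += sum(lowered.count(p) for p in phrases)
    return counts

signals = climate_signals([
    "Good catch, I'm not sure about the dose. What do you think?",
    "You should have paged me earlier.",
])
```

Even this crude tally hints at how recorded interactions could be summarized into climate factors for teams to review, as described above.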

AI-based onboarding is also generating momentum, although not yet widely applied [41]. AI tools can help analyze new members’ personalities and preferences based on surveys and previous patient interactions. Customized simulation exercises can then help new members align with the team [42].


Despite clinical guidelines’ intended benefits, clinicians may voluntarily or inadvertently reject guideline recommendations [43,44]. These situations can lead to confusion and problematic variation across care teams where individual clinicians exhibit highly varying practice patterns. Standardizing clinical processes on demand can set a floor for care quality and efficiency, but its effectiveness is limited by the constant human oversight required to implement such standards well.

EMRs already help to standardize clinical workflow via clinical decision support systems that codify clinical decisions, allowing different care providers to understand and continue subsequent care delivery. Furthermore, many AI tools already use EMR and multimodal data for pattern identification and risk prediction, ranging from disease progression and clinical care complications to treatment outcomes [45-51]. EMR data can capture patient needs in demographics, diagnoses, tests, and images; clinical interventions in drugs, procedures, and psychotherapies; and clinical outcomes in tests and images. Patient-generated data from wearable devices, home monitoring devices, and mobile tracking apps can provide even more up-to-date insights into a patient’s health status [52-61].
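A minimal guideline-style check of this kind can be sketched as a function from recent EMR values to care-gap flags that any clinician on the team can act on. The thresholds below mirror common diabetes guidance but are illustrative only, not a clinical recommendation.

```python
# Hedged sketch of a guideline-style decision support check over EMR values.
# Thresholds are illustrative only, not clinical advice.

def care_gap_flags(record):
    """Return human-readable care-gap flags for one patient record."""
    flags = []
    if record.get("hba1c_percent", 0) > 9.0:
        flags.append("HbA1c above target: review glycemic management")
    if record.get("days_since_foot_exam", 0) > 365:
        flags.append("Annual diabetic foot exam overdue")
    return flags

flags = care_gap_flags({"hba1c_percent": 9.6, "days_since_foot_exam": 400})
```

Because every provider querying the same record sees the same flags, such checks illustrate how shared AI recommendations could support the care standardization discussed above.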

AI tools may recommend preventative actions, disease management courses, and diagnostic workups based on up-to-date patient information and clinical guidelines. If multiple providers can consistently access the AI recommendations, care standardization may improve alongside care coordination, minimizing information loss.

Recent large-scale empirical studies demonstrate that AI-assisted decision support can meaningfully improve adherence to evidence-based care while reducing variability and diagnostic errors. In a prospective study spanning 39,849 primary care visits across 15 clinics in Nairobi, an AI consult tool provided real-time background diagnostic support. Clinicians using the tool experienced substantial error reductions in history taking (32%), ordering investigations (10%), diagnosing (16%), and treating (13%), and improvements persisted throughout the study period [62].


In what follows, we discuss the practical and ethical implications of applying AI to teamwork in health care. We start with the need for relevant data, highlighting the importance of and policy supports for data-sharing alliances. We then describe how the data need to be adjusted to achieve optimal results and end with a discussion of ethical implications.


Large datasets that capture sufficient variability will be key for AI tools to learn reliable and useful patterns. Smaller health care organizations with limited information management capabilities, little patient variability, and few employees may have difficulty generating such datasets independently. Three solutions may encourage the development of data-sharing alliances in the current landscape where data are prized as a valuable resource.

First, federated learning enables health care organizations to collaborate on AI model development by maintaining control over their own data without pooling sensitive patient data into a single location, thereby addressing data privacy concerns. Each organization keeps its data securely on-site while sharing only model updates, so the combined system learns from all participants [63]. For instance, this approach is used by the Mayo Clinic Platform Connect and has proven to be successful in generating a prediction model forecasting patient responses to chemotherapy [64].
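The core aggregation step of this approach, often called federated averaging, can be sketched in a few lines: each site trains locally and shares only model weights, which a coordinator combines in proportion to local sample counts. Site names and numbers below are invented.

```python
# Minimal sketch of federated averaging (FedAvg): sites share only locally
# trained weights, never patient records, and the coordinator combines them
# weighted by local sample counts. All numbers are invented.

def federated_average(site_updates):
    """site_updates: list of (n_samples, weight_vector) tuples."""
    total = sum(n for n, _ in site_updates)
    dim = len(site_updates[0][1])
    avg = [0.0] * dim
    for n, weights in site_updates:
        for i, w in enumerate(weights):
            avg[i] += (n / total) * w
    return avg

# Three hypothetical hospitals report locally trained weights.
updates = [
    (1000, [0.20, -0.50]),  # Hospital A
    (3000, [0.40, -0.30]),  # Hospital B
    (1000, [0.10, -0.10]),  # Hospital C
]
global_weights = federated_average(updates)
```

The larger site contributes more to the combined model, yet no raw patient data ever leaves any hospital, which is the privacy property that motivates this design.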

Second, accountable care organization policies encourage integration across health care organizations. Financial subsidies can incentivize uniform infrastructure standards or regulatory safe harbors for innovation pilots [65]. Such policies can also drive data sharing by promoting interoperability standards like Fast Healthcare Interoperability Resources (FHIR), which defines a set of data formats and elements for digital health data transfer.
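To illustrate what FHIR standardizes, the sketch below assembles a minimal FHIR R4 Observation resource for a blood pressure reading. The LOINC codes are real, whereas the patient reference and values are invented.

```python
import json

# Minimal FHIR R4 Observation for a systolic blood pressure reading.
# LOINC 85354-9 (blood pressure panel) and 8480-6 (systolic BP) are real
# codes; the patient reference and values are invented for illustration.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{"system": "http://loinc.org", "code": "85354-9",
                    "display": "Blood pressure panel"}]
    },
    "subject": {"reference": "Patient/example-123"},
    "component": [
        {"code": {"coding": [{"system": "http://loinc.org", "code": "8480-6",
                              "display": "Systolic blood pressure"}]},
         "valueQuantity": {"value": 82, "unit": "mmHg"}},
    ],
}
payload = json.dumps(observation)
```

Because every conforming system labels the reading with the same resource type and codes, a receiving organization can interpret the payload without custom, site-specific mappings.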

Third, strengthening health care organization leaders’ trust, appreciation, and context-appropriate adaptation of AI tools’ capacity for informing teamwork will be crucial. Executive champions can set organizational priorities, allocate resources, and communicate a clear vision for AI adoption. Structured change management strategies—including clinician training on digital literacy, iterative feedback loops, and transparent communication—help build trust and facilitate culture change across clinical teams. Finally, multidisciplinary governance committees can integrate ethical oversight, technical standards, and clinical priorities into operational and clinical decision-making, ensuring that AI implementation remains accountable, safe, and sustainable over time.


For AI tools to provide up-to-date recommendations on team compositions, the data need to reflect day-to-day changes in patient acuity and clinician availability. AI developers will need to optimize how frequently to update the data to best reflect real-time clinical needs and clinician resources (eg, caseload and emergent expertise) without driving alert fatigue or administrative burnout.

Given the potential for AI tools to consider diverse types of variables, ensuring that multiple outcomes are assessed and incorporated into the models can be helpful. For example, AI developers can parameterize care continuity to minimize care disruptions alongside other variables that may affect care quality.


Integrating new technology into existing workflows can face considerable resistance [66]. The range of factors can include—but are not limited to—concern over loss of power, past experiences of novel technologies, and general resistance to change. Integration of AI tools into teamwork in health care can reveal similar challenges [67] and may especially present concerns about surveillance and privacy in work interactions. These challenges require thoughtful AI implementation strategies that take team members’ concerns seriously and seek to address them. This can include transparent communication from leadership, engagement of frontline voices in tool development and testing, and responsive governance structures. These efforts may be enhanced by (1) focusing on common values, (2) fostering collaborative vision, (3) garnering buy-in around the technology, and (4) supporting psychological safety [68,69]. Targeted training, incentives, and ongoing workflow adjustments would be key to supporting additional uptake [68]. Finally, given the complex ethical and legal risks around AI, providing a well-developed legal and ethical framework would also minimize associated anxiety regarding AI use [68].


The applications outlined previously show that AI can inform team-based care, but ultimately, teamwork is a human process. This has both ethical and practical implications [70].

First, because AI tools used in team-based care directly shape human work experiences, health professionals must retain the autonomy to accept, refine, or reject AI-generated recommendations. However, this autonomy may be undermined by automation bias—the tendency to overrely on algorithmic outputs even when flawed. Users may mechanically adhere to AI recommendations without critical engagement [71]. Helping health care providers develop the relevant skills to safely use AI for clinical work may require time and investment.

Second, AI systems trained on historical data risk propagating structural biases, particularly if marginalized groups are underrepresented in training datasets [72]. Without careful development, evaluation, and oversight, these biases could influence team composition, decision-making hierarchies, and ultimately patient outcomes, reinforcing existing power asymmetries within health care [73]. Addressing this risk requires routinely reviewing and critically assessing model outputs for bias.

Third, questions of responsibility and liability arise when AI recommendations shape team-based decisions, but final accountability remains with clinicians [74]. Clear governance structures and continuous evaluation processes are therefore essential to delineate professional responsibility, ensure transparency, and maintain trust.

Fourth, the impact on professional roles must be considered: while AI can standardize workflows and enhance efficiency, it could also shift authority from clinicians to algorithm developers or organizational leaders, reshaping professional boundaries in ways that demand scrutiny [75].

Finally, because teamwork involves interpretation, negotiation, and shared responsibility, AI-generated recommendations should be seen not as prescriptive directives but as tools to initiate dialogue—surfacing problems, questions, and options for deeper engagement rather than replacing human judgment.


Despite efforts to enhance team-based care, humans are inherently limited by the quantity of data they can meaningfully experience and analyze. AI tools can leverage the considerable data that human interactions in complex health care systems generate to help optimize team composition, improve interpersonal interactions, and standardize care processes. Given the current lack of empirical data and systematic synthesis on this topic, this paper conceptually frames the discussion around leveraging AI tools to improve team-based care. This can motivate further inquiry, and we offer tangible recommendations about how such work can proceed, based on technological readiness.

The varying approaches offer different opportunities for a variety of groups. Policymakers can draw on evidence around privacy, ethics, and related concerns to develop legislation, regulations, guidelines, and incentives. Practitioners play a key role by engaging with policymakers and organizations to codevelop pilot initiatives and provide feedback on feasibility and impact. Meanwhile, AI developers can build interoperable elements into their tools while adhering to emerging policies.

Finally, sequencing these applications according to technological readiness and clinical need may be helpful. In the short term, standardizing clinical processes is immediately actionable, as AI-assisted order sets and decision support tools have already demonstrated significant value [76,77]. In the midterm, optimizing team compositions can be operationalized by leveraging existing datasets such as bed management dashboards and scheduling tools. Targeted pilots may provide early evidence of feasibility and impact. Strengthening norms and relationships may be a longer-term goal, given the ethical, cultural, and privacy challenges of collecting and analyzing interpersonal data within health care teams.

Funding

This study was funded by Michael Smith Health Research BC Research Trainee Award (RT-2023-3307).

Conflicts of Interest

JK consults on topics related to artificial intelligence (AI) in health care. No financial compensation was provided for work on this manuscript. SH-TT serves as a member of the AI Advisory Council for The College of Family Physicians of Canada. SH-TT is also an associate editor of JMIR Medical Education. All other authors declare no conflicts of interest.

Multimedia Appendix 1

Validated surveys for artificial intelligence–driven team optimization.

DOCX File, 21 KB

  1. Singer SJ, Kerrissey M, Friedberg M, Phillips R. A comprehensive theory of integration. Med Care Res Rev. Apr 2020;77(2):196-207. [CrossRef] [Medline]
  2. Health reports. Government of Canada. 2025. URL: https://www150.statcan.gc.ca/n1/pub/82-003-x/82-003-x2025003-eng.htm [Accessed 2026-04-14]
  3. Pham HH, Schrag D, O’Malley AS, Wu B, Bach PB. Care patterns in Medicare and their implications for pay for performance. N Engl J Med. Mar 15, 2007;356(11):1130-1139. [CrossRef] [Medline]
  4. Pham HH, O’Malley AS, Bach PB, Saiontz-Martinez C, Schrag D. Primary care physicians’ links to other physicians through Medicare patients: the scope of care coordination. Ann Intern Med. Feb 17, 2009;150(4):236-242. [CrossRef] [Medline]
  5. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. Jun 2019;6(2):94-98. [CrossRef] [Medline]
  6. Howell MD, Corrado GS, DeSalvo KB. Three epochs of artificial intelligence in health care. JAMA. Jan 16, 2024;331(3):242-244. [CrossRef] [Medline]
  7. Angus DC, Khera R, Lieu T, et al. AI, health, and health care today and tomorrow: the JAMA Summit report on artificial intelligence. JAMA. Nov 11, 2025;334(18):1650-1664. [CrossRef] [Medline]
  8. Alowais SA, Alghamdi SS, Alsuhebany N, et al. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ. Sep 22, 2023;23(1):689. [CrossRef] [Medline]
  9. Singer SJ, Kerrissey MJ. Leading health care teams beyond Covid-19: marking the moment and shifting from recuperation to regeneration. NEJM Catal. Jul 27, 2021. [CrossRef]
  10. Satterstrom P, Kerrissey M, DiBenigno J. The voice cultivation process: how team members can help upward voice live on to implementation. Adm Sci Q. 2020;66(2):380-425. [CrossRef]
  11. Kerrissey MJ, Satterstrom P, Edmondson AC. Into the fray: adaptive approaches to studying novel teamwork forms. Organ Psychol Rev. 2020;10(2):62-86. [CrossRef]
  12. Kerrissey M, Novikov Z. Joint problem-solving orientation, mutual value recognition, and performance in fluid teamwork environments. Front Psychol. 2024;15:1288904. [CrossRef] [Medline]
  13. Sasseville M, Yousefi F, Ouellet S, et al. The impact of AI scribes on streamlining clinical documentation: a systematic review. Healthcare (Basel). Jun 16, 2025;13(12):1447. [CrossRef] [Medline]
  14. Sallam M. ChatGPT utility in healthcare education, research, and practice: systematic review on the promising perspectives and valid concerns. Healthcare (Basel). Mar 19, 2023;11(6):887. [CrossRef] [Medline]
  15. Lu MY, Chen B, Williamson DF, et al. A visual-language foundation model for computational pathology. Nat Med. Mar 2024;30(3):863-874. [CrossRef] [Medline]
  16. Ridgely MS, Buttorff C, Wolf LJ, et al. The importance of understanding and measuring health system structural, functional, and clinical integration. Health Serv Res. Dec 2020;55 Suppl 3(Suppl 3):1049-1061. [CrossRef] [Medline]
  17. Valentine MA, Nembhard IM, Edmondson AC. Measuring teamwork in health care settings: a review of survey instruments. Med Care. Apr 2015;53(4):e16-e30. [CrossRef] [Medline]
  18. Baker DP, Amodeo AM, Krokos KJ, Slonim A, Herrera H. Assessing teamwork attitudes in healthcare: development of the TeamSTEPPS teamwork attitudes questionnaire. Qual Saf Health Care. Dec 2010;19(6):e49. [CrossRef] [Medline]
  19. Marsh HW, Lüdtke O, Muthén B, et al. A new look at the big five factor structure through exploratory structural equation modeling. Psychol Assess. Sep 2010;22(3):471-491. [CrossRef] [Medline]
  20. Heinemann GD, Schmitt MH, Farrell MP, Brallier SA. Development of an attitudes toward health care teams Scale. Eval Health Prof. Mar 1999;22(1):123-142. [CrossRef] [Medline]
  21. Scott T, Mannion R, Davies H, Marshall M. The quantitative measurement of organizational culture in health care: a review of the available instruments. Health Serv Res. Jun 2003;38(3):923-945. [CrossRef] [Medline]
  22. Ayanian JZ, Landrum MB, Guadagnoli E, Gaccione P. Specialty of ambulatory care physicians and mortality among elderly patients after myocardial infarction. N Engl J Med. Nov 21, 2002;347(21):1678-1686. [CrossRef] [Medline]
  23. He JC, Moffat GT, Podolsky S, et al. Machine learning to allocate palliative care consultations during cancer treatment. J Clin Oncol. May 10, 2024;42(14):1625-1634. [CrossRef] [Medline]
  24. Wilson PM, Ramar P, Philpot LM, et al. Effect of an artificial intelligence decision support tool on palliative care referral in hospitalized patients: a randomized clinical trial. J Pain Symptom Manage. Jul 2023;66(1):24-32. [CrossRef] [Medline]
  25. Vleminckx S, Van Bogaert P, De Meulenaere K, Willem L, Haegdorens F. Factors influencing the formation of balanced care teams: the organisation, performance, and perception of nursing care teams and the link with patient outcomes: a systematic scoping review. BMC Health Serv Res. Sep 27, 2024;24(1):1129. [CrossRef] [Medline]
  26. Adler-Milstein J, Adelman JS, Tai-Seale M, Patel VL, Dymek C. EHR audit logs: a new goldmine for health services research? J Biomed Inform. Jan 2020;101:103343. [CrossRef] [Medline]
  27. Rose C, Thombley R, Noshad M, et al. Team is brain: leveraging EHR audit log data for new insights into acute care processes. J Am Med Inform Assoc. Dec 13, 2022;30(1):8-15. [CrossRef] [Medline]
  28. Twick I, Zahavi G, Benvenisti H, et al. Towards interpretable, medically grounded, EMR-based risk prediction models. Sci Rep. Jun 15, 2022;12:9990. [CrossRef] [Medline]
  29. Mahmoudi E, Kamdar N, Kim N, Gonzales G, Singh K, Waljee AK. Use of electronic medical records in development and validation of risk prediction models of hospital readmission: systematic review. BMJ. Apr 8, 2020;369:m958. [CrossRef] [Medline]
  30. Xiao C, Choi E, Sun J. Opportunities and challenges in developing deep learning models using electronic health records data: a systematic review. J Am Med Inform Assoc. Oct 1, 2018;25(10):1419-1428. [CrossRef] [Medline]
  31. Amirahmadi A, Ohlsson M, Etminani K. Deep learning prediction models based on EHR trajectories: a systematic review. J Biomed Inform. Aug 2023;144:104430. [CrossRef] [Medline]
  32. Corritore M, Goldberg A, Srivastava SB. Duality in diversity: how intrapersonal and interpersonal cultural heterogeneity relate to firm performance. Adm Sci Q. 2020;65(2):359-394. URL: https://www.gsb.stanford.edu/faculty-research/publications/duality-diversity-how-intrapersonal-interpersonal-cultural [Accessed 2026-04-14]
  33. Schachner M, Ardag MM, Holtz P, et al. Extracting organizational culture from text: the development and validation of a theory-driven tool for digital data. Eur J Work Organ Psychol. 2024;33(5):571-582. [CrossRef]
  34. Manzo G, Celi LA, Shabazz Y, Mulcahey R, Flores LJ, Demner-Fushman D. Caregivers attitude detection from clinical notes. AMIA Annu Symp Proc. 2024;2023:1125-1134. [Medline]
  35. Himmelstein G, Bates D, Zhou L. Examination of stigmatizing language in the electronic health record. JAMA Netw Open. Jan 4, 2022;5(1):e2144967. [CrossRef] [Medline]
  36. Alvesson M, Sveningsson S. Changing Organizational Culture: Cultural Change Work in Progress. 2nd ed. Routledge; 2015. [CrossRef]
  37. Warrick DD. What leaders need to know about organizational culture. Bus Horiz. 2017;60(3):395-404. [CrossRef]
  38. Liaw SY, Tan JZ, Lim S, et al. Artificial intelligence in virtual reality simulation for interprofessional communication training: mixed method study. Nurse Educ Today. Mar 2023;122:105718. [CrossRef] [Medline]
  39. Argyle LP, Bail CA, Busby EC, et al. Leveraging AI for democratic discourse: chat interventions can improve online political conversations at scale. Proc Natl Acad Sci U S A. Oct 10, 2023;120(41):e2311627120. [CrossRef] [Medline]
  40. Echeverria V, Zhao L, Alfredo R, et al. TeamVision: an AI-powered learning analytics system for supporting reflection in team-based healthcare simulation. Presented at: CHI ’25: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems; Apr 26 to May 1, 2025. [CrossRef]
  41. Votto AM, Valecha R, Najafirad P, Rao HR. Artificial intelligence in tactical human resource management: a systematic literature review. Int J Inf Manage Data Insights. Nov 2021;1(2):100047. [CrossRef]
  42. Ritz E, Donisi F, Elshan E, Rietsche R. Artificial socialization? How artificial intelligence applications can shape a new era of employee onboarding practices. Presented at: Hawaii International Conference on System Sciences; Jan 3-6, 2023. [CrossRef]
  43. Baiardini I, Braido F, Bonini M, Compalati E, Canonica GW. Why do doctors and patients not follow guidelines? Curr Opin Allergy Clin Immunol. Jun 2009;9(3):228-233. [CrossRef] [Medline]
  44. Cabana MD, Rand CS, Powe NR, et al. Why don’t physicians follow clinical practice guidelines? A framework for improvement. JAMA. Oct 20, 1999;282(15):1458-1465. [CrossRef] [Medline]
  45. Ferrara M, Bertozzi G, Di Fazio N, et al. Risk management and patient safety in the artificial intelligence era: a systematic review. Healthcare (Basel). Feb 27, 2024;12(5):549. [CrossRef] [Medline]
  46. Goh KH, Wang L, Yeow AY, et al. Artificial intelligence in sepsis early prediction and diagnosis using unstructured data in healthcare. Nat Commun. Jan 29, 2021;12(1):711. [CrossRef] [Medline]
  47. Soenksen LR, Ma Y, Zeng C, et al. Integrated multimodal artificial intelligence framework for healthcare applications. NPJ Digit Med. Sep 20, 2022;5(1):149. [CrossRef] [Medline]
  48. Tong L, Shi W, Isgut M, et al. Integrating multi-omics data with EHR for precision medicine using advanced artificial intelligence. IEEE Rev Biomed Eng. 2024;17:80-97. [CrossRef] [Medline]
  49. Ghaffar Nia N, Kaplanoglu E, Nasab A. Evaluation of artificial intelligence techniques in disease diagnosis and prediction. Discov Artif Intell. 2023;3(1):5. [CrossRef] [Medline]
  50. Yu KH, Beam AL, Kohane IS. Artificial intelligence in healthcare. Nat Biomed Eng. Oct 2018;2(10):719-731. [CrossRef] [Medline]
  51. Wasserman RL, Edrees HH, Seger DL, et al. Development of a drug allergy alert tiering algorithm for penicillins and cephalosporins. Int J Med Inform. Mar 2025;195:105789. [CrossRef] [Medline]
  52. Witt D, Kellogg R, Snyder M, Dunn J. Windows into human health through wearables data analytics. Curr Opin Biomed Eng. Mar 2019;9:28-46. [CrossRef] [Medline]
  53. Dellinger EP. Patient-directed active surgical incisions surveillance may lead to further surgical site infection reduction. Surg Infect (Larchmt). Oct 2019;20(7):584-587. [CrossRef] [Medline]
  54. Peake JM, Kerr G, Sullivan JP. A critical review of consumer wearables, mobile applications, and equipment for providing biofeedback, monitoring stress, and sleep in physically active populations. Front Physiol. 2018;9:743. [CrossRef] [Medline]
  55. Peyroteo M, Ferreira IA, Elvas LB, Ferreira JC, Lapão LV. Remote monitoring systems for patients with chronic diseases in primary health care: systematic review. JMIR Mhealth Uhealth. Dec 21, 2021;9(12):e28285. [CrossRef] [Medline]
  56. Bashi N, Karunanithi M, Fatehi F, Ding H, Walters D. Remote monitoring of patients with heart failure: an overview of systematic reviews. J Med Internet Res. Jan 20, 2017;19(1):e18. [CrossRef] [Medline]
  57. Philip NY, Rodrigues J, Wang H, Fong SJ, Chen J. Internet of things for in-home health monitoring systems: current advances, challenges and future directions. IEEE J Sel Areas Commun. Feb 2021;39(2):300-310. [CrossRef]
  58. Hackelöer M, Schmidt L, Verlohren S. New advances in prediction and surveillance of preeclampsia: role of machine learning approaches and remote monitoring. Arch Gynecol Obstet. Dec 2023;308(6):1663-1677. [CrossRef] [Medline]
  59. Rohmetra H, Raghunath N, Narang P, Chamola V, Guizani M, Lakkaniga NR. AI-enabled remote monitoring of vital signs for COVID-19: methods, prospects and challenges. Computing. 2021;105(4):783-809. [CrossRef]
  60. Taylor ML, Thomas EE, Snoswell CL, Smith AC, Caffery LJ. Does remote patient monitoring reduce acute care use? A systematic review. BMJ Open. Mar 2, 2021;11(3):e040232. [CrossRef] [Medline]
  61. Del Din S, Kirk C, Yarnall AJ, Rochester L, Hausdorff JM. Body-worn sensors for remote monitoring of Parkinson’s disease motor symptoms: vision, state of the art, and challenges ahead. J Parkinsons Dis. 2021;11(s1):S35-S47. [CrossRef] [Medline]
  62. Korom R, Kiptinness S, Adan N, Said K, Ithuli C, Rotich O, et al. AI-based clinical decision support for primary care: a real-world study. arXiv. Preprint posted online on Jul 22, 2025. [CrossRef]
  63. Yurdem B, Kuzlu M, Gullu MK, Catak FO, Tabassum M. Federated learning: overview, strategies, applications, tools and future directions. Heliyon. Sep 20, 2024;10(19):e38137. [CrossRef] [Medline]
  64. Halamka J. Exploring a federated approach to data management. Mayo Clinic Platform. 2023. URL: https://www.mayoclinicplatform.org/2023/09/14/exploring-a-federated-approach-to-data-management/ [Accessed 2026-04-14]
  65. Trombley MJ, Fout B, Brodsky S, McWilliams JM, Nyweide DJ, Morefield B. Early effects of an accountable care organization model for underserved areas. N Engl J Med. Aug 8, 2019;381(6):543-551. [CrossRef] [Medline]
  66. Laumer S, Eckhardt A. Why do people reject technologies: a review of user resistance theories. In: Dwivedi YK, Wade MR, Schneberger SL, editors. Information Systems Theory: Explaining and Predicting Our Digital Society. Springer; 2012:63-86. [CrossRef]
  67. Muller SH, van Delden JJ, van Thiel GJ, Hypermarker Consortium. Towards responsible surveillance in preventive health data-AI research. PLOS Digit Health. Dec 2025;4(12):e0001146. [CrossRef] [Medline]
  68. Nair M, Svedberg P, Larsson I, Nygren JM. A comprehensive overview of barriers and strategies for AI implementation in healthcare: mixed-method design. PLoS One. 2024;19(8):e0305949. [CrossRef] [Medline]
  69. Kerrissey MJ, Hayirli TC, Bhanja A, Stark N, Hardy J, Peabody CR. How psychological safety and feeling heard relate to burnout and adaptation amid uncertainty. Health Care Manage Rev. 2022;47(4):308-316. [CrossRef] [Medline]
  70. Corfmat M, Martineau JT, Régis C. High-reward, high-risk technologies? An ethical and legal account of AI development in healthcare. BMC Med Ethics. Jan 15, 2025;26(1):4. [CrossRef] [Medline]
  71. Goddard K, Roudsari A, Wyatt JC. Automation bias: a systematic review of frequency, effect mediators, and mitigators. J Am Med Inform Assoc. 2012;19(1):121-127. [CrossRef] [Medline]
  72. Ntoutsi E, Fafalios P, Gadiraju U, et al. Bias in data‐driven artificial intelligence systems—an introductory survey. WIREs Data Min Knowl Discovery. 2020;10(3):e1356. [CrossRef]
  73. Nazer LH, Zatarah R, Waldrip S, et al. Bias in artificial intelligence algorithms and recommendations for mitigation. PLOS Digit Health. Jun 2023;2(6):e0000278. [CrossRef] [Medline]
  74. Tsuei SH. How are Canadians regulating artificial intelligence for healthcare? A brief analysis of the current legal directions, challenges and deficiencies. Healthc Pap. Apr 2025;22(4):44-51. [CrossRef] [Medline]
  75. Cohen IG, Ajunwa I, Parikh RB. Medical AI and clinician surveillance - the risk of becoming quantified workers. N Engl J Med. Jun 19, 2025;392(23):2289-2291. [CrossRef] [Medline]
  76. Lång K, Josefsson V, Larsson AM, et al. Artificial intelligence-supported screen reading versus standard double reading in the Mammography Screening with Artificial Intelligence trial (MASAI): a clinical safety analysis of a randomised, controlled, non-inferiority, single-blinded, screening accuracy study. Lancet Oncol. Aug 2023;24(8):936-944. [CrossRef] [Medline]
  77. Tu T, Schaekermann M, Palepu A, et al. Towards conversational diagnostic artificial intelligence. Nature. Jun 2025;642(8067):442-450. [CrossRef] [Medline]


Abbreviations

AI: artificial intelligence
EMR: electronic medical record
FHIR: Fast Healthcare Interoperability Resources


Edited by Khaled El Emam; submitted 13.May.2025; peer-reviewed by Jesu Marcus Immanuvel Arockiasamy, Sylvia J Hysong; final revised version received 08.Feb.2026; accepted 27.Feb.2026; published 11.May.2026.

Copyright

© Manchi (Monica) Hsu, Benny Bikash Pokharel, Jacqueline Kueper, Michaela Kerrissey, Sian Hsiang-Te Tsuei. Originally published in JMIR AI (https://ai.jmir.org), 11.May.2026.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR AI, is properly cited. The complete bibliographic information, a link to the original publication on https://www.ai.jmir.org/, as well as this copyright and license information must be included.