%0 Journal Article %@ 1438-8871 %I JMIR Publications %V 26 %N %P e53986 %T A Taxonomy and Archetypes of AI-Based Health Care Services: Qualitative Study %A Blaß,Marlene %A Gimpel,Henner %A Karnebogen,Philip %+ FIM Research Center for Information Management, University of Hohenheim, Branch Business & Information Systems Engineering of the Fraunhofer FIT, Schloss Hohenheim 1, Stuttgart, 70599, Germany, 49 0711 459 24051, marlene.blass@fit.fraunhofer.de %K healthcare %K artificial intelligence %K AI %K taxonomy %K services %K cluster analysis %K archetypes %D 2024 %7 27.11.2024 %9 Original Paper %J J Med Internet Res %G English %X Background: To cope with the enormous burdens placed on health care systems around the world, from the strains and stresses caused by longer life expectancy to the large-scale emergency relief actions required by pandemics like COVID-19, many health care companies have been using artificial intelligence (AI) to adapt their services. Nevertheless, conceptual insights into how AI has been transforming the health care sector are still few and far between. This study aims to provide an overarching structure with which to classify the various real-world phenomena. A clear and comprehensive taxonomy will provide consensus on AI-based health care service offerings and sharpen the view of their adoption in the health care sector. Objective: The goal of this study is to identify the design characteristics of AI-based health care services. Methods: We propose a multilayered taxonomy created in accordance with an established method of taxonomy development. In doing so, we applied 268 AI-based health care services, conducted a structured literature review, and then evaluated the resulting taxonomy. Finally, we performed a cluster analysis to identify the archetypes of AI-based health care services. Results: We identified 4 critical perspectives: agents, data, AI, and health impact. Furthermore, a cluster analysis yielded 13 archetypes that demonstrate our taxonomy’s applicability. Conclusions: This contribution to conceptual knowledge of AI-based health care services enables researchers as well as practitioners to analyze such services and improve their theory-led design. %M 39602787 %R 10.2196/53986 %U https://www.jmir.org/2024/1/e53986 %U https://doi.org/10.2196/53986 %U http://www.ncbi.nlm.nih.gov/pubmed/39602787 %0 Journal Article %@ 2817-1705 %I JMIR Publications %V 3 %N %P e55957 %T Toward Clinical Generative AI: Conceptual Framework %A Bragazzi,Nicola Luigi %A Garbarino,Sergio %+ Human Nutrition Unit, Department of Food and Drugs, University of Parma, Via Volturno 39, Parma, 43125, Italy, 39 0521 903121, nicolaluigi.bragazzi@unipr.it %K clinical intelligence %K artificial intelligence %K iterative process %K abduction %K benchmarking %K verification paradigms %D 2024 %7 7.6.2024 %9 Viewpoint %J JMIR AI %G English %X Clinical decision-making is a crucial aspect of health care, involving the balanced integration of scientific evidence, clinical judgment, ethical considerations, and patient involvement. This process is dynamic and multifaceted, relying on clinicians’ knowledge, experience, and intuitive understanding to achieve optimal patient outcomes through informed, evidence-based choices. The advent of generative artificial intelligence (AI) presents a revolutionary opportunity in clinical decision-making. 
AI’s advanced data analysis and pattern recognition capabilities can significantly enhance the diagnosis and treatment of diseases, processing vast medical data to identify patterns, tailor treatments, predict disease progression, and aid in proactive patient management. However, the incorporation of AI into clinical decision-making raises concerns regarding the reliability and accuracy of AI-generated insights. To address these concerns, 11 “verification paradigms” are proposed in this paper, with each paradigm being a unique method to verify the evidence-based nature of AI in clinical decision-making. This paper also frames the concept of “clinically explainable, fair, and responsible, clinician-, expert-, and patient-in-the-loop AI.” This model focuses on ensuring AI’s comprehensibility, collaborative nature, and ethical grounding, advocating for AI to serve as an augmentative tool, with its decision-making processes being transparent and understandable to clinicians and patients. The integration of AI should enhance, not replace, the clinician’s judgment and should involve continuous learning and adaptation based on real-world outcomes and ethical and legal compliance. In conclusion, while generative AI holds immense promise in enhancing clinical decision-making, it is essential to ensure that it produces evidence-based, reliable, and impactful knowledge. Using the outlined paradigms and approaches can help the medical and patient communities harness AI’s potential while maintaining high patient care standards. %M 38875592 %R 10.2196/55957 %U https://ai.jmir.org/2024/1/e55957 %U https://doi.org/10.2196/55957 %U http://www.ncbi.nlm.nih.gov/pubmed/38875592 %0 Journal Article %@ 2817-1705 %I JMIR Publications %V 2 %N %P e48123 %T Effect of Benign Biopsy Findings on an Artificial Intelligence–Based Cancer Detector in Screening Mammography: Retrospective Case-Control Study %A Zouzos,Athanasios %A Milovanovic,Aleksandra %A Dembrower,Karin %A Strand,Fredrik %+ Department of Oncology and Pathology, Karolinska Institute, Solnavagen 1, Stockholm, 171 77, Sweden, 46 729142636, athanasios.zouzos@ki.se %K artificial intelligence %K AI %K mammography %K breast cancer %K benign biopsy %K screening %K cancer screening %K diagnostic %K radiology %K detection system %D 2023 %7 31.8.2023 %9 Original Paper %J JMIR AI %G English %X Background: Artificial intelligence (AI)–based cancer detectors (CAD) for mammography are starting to be used for breast cancer screening in radiology departments. It is important to understand how AI CAD systems react to benign lesions, especially those that have been subjected to biopsy. Objective: Our goal was to corroborate the hypothesis that women with previous benign biopsy and cytology assessments would subsequently present increased AI CAD abnormality scores even though they remained healthy. Methods: This is a retrospective study applying a commercial AI CAD system (Insight MMG, version 1.1.4.3; Lunit Inc) to a cancer-enriched mammography screening data set of 10,889 women (median age 56, range 40-74 years). The AI CAD generated a continuous prediction score for tumor suspicion between 0.00 and 1.00, where 1.00 represented the highest level of suspicion. A binary read (flagged or not flagged) was defined on the basis of a predetermined cutoff threshold (0.40). The flagged median and proportion of AI scores were calculated for women who were healthy, those who had a benign biopsy finding, and those who were diagnosed with breast cancer. 
For women with a benign biopsy finding, the interval between mammography and the biopsy was used for stratification of AI scores. The effect of increasing age was examined using subgroup analysis and regression modeling. Results: Of a total of 10,889 women, 234 had a benign biopsy finding before or after screening. The proportions of flagged women were 3.5%, 11%, and 84% for healthy women without a benign biopsy finding, those with a benign biopsy finding, and women with breast cancer, respectively (P<.001). For the 8307 women with complete information, radiologist 1, radiologist 2, and the AI CAD system flagged 8.5%, 6.8%, and 8.5% of examinations of women who had a prior benign biopsy finding. The AI score correlated only with increasing age of the women in the cancer group (P=.01). Conclusions: Compared to healthy women without a biopsy, the examined AI CAD system flagged a much larger proportion of women who had or would have a benign biopsy finding based on a radiologist’s decision. However, the flagging rate was not higher than that for radiologists. Further research should be focused on training the AI CAD system taking prior biopsy information into account. %M 38875554 %R 10.2196/48123 %U https://ai.jmir.org/2023/1/e48123 %U https://doi.org/10.2196/48123 %U http://www.ncbi.nlm.nih.gov/pubmed/38875554 %0 Journal Article %@ 2817-1705 %I JMIR Publications %V 2 %N %P e40167 %T Application of Artificial Intelligence to the Monitoring of Medication Adherence for Tuberculosis Treatment in Africa: Algorithm Development and Validation %A Sekandi,Juliet Nabbuye %A Shi,Weili %A Zhu,Ronghang %A Kaggwa,Patrick %A Mwebaze,Ernest %A Li,Sheng %+ Global Health Institute, College of Public Health, University of Georgia, 100 Foster Road, Athens, GA, 30602, United States, 1 706 542 5257, jsekandi@uga.edu %K artificial intelligence %K deep learning %K machine learning %K medication adherence %K digital technology %K digital health %K tuberculosis %K video directly observed therapy %K video therapy %D 2023 %7 23.2.2023 %9 Original Paper %J JMIR AI %G English %X Background: Artificial intelligence (AI) applications based on advanced deep learning methods in image recognition tasks can increase efficiency in the monitoring of medication adherence through automation. AI has sparsely been evaluated for the monitoring of medication adherence in clinical settings. However, AI has the potential to transform the way health care is delivered even in limited-resource settings such as Africa. Objective: We aimed to pilot the development of a deep learning model for simple binary classification and confirmation of proper medication adherence to enhance efficiency in the use of video monitoring of patients in tuberculosis treatment. Methods: We used a secondary data set of 861 video images of medication intake that were collected from consenting adult patients with tuberculosis in an institutional review board–approved study evaluating video-observed therapy in Uganda. The video images were processed through a series of steps to prepare them for use in a training model. First, we annotated videos using a specific protocol to eliminate those with poor quality. After the initial annotation step, 497 videos had sufficient quality for training the models. Among them, 405 were positive samples, whereas 92 were negative samples. With some preprocessing techniques, we obtained 160 frames with a size of 224 × 224 in each video.
We used a deep learning framework that leveraged 4 convolutional neural network models to extract visual features from the video frames and automatically perform binary classification of adherence or nonadherence. We evaluated the diagnostic properties of the different models using sensitivity, specificity, F1-score, and precision. The area under the curve (AUC) was used to assess the discriminative performance, and the speed per video review was used as a metric for model efficiency. We conducted a 5-fold internal cross-validation to determine the diagnostic and discriminative performance of the models. We did not conduct external validation due to a lack of publicly available data sets with specific medication intake video frames. Results: Diagnostic properties and discriminative performance from internal cross-validation were moderate to high in the binary classification tasks with 4 selected automated deep learning models. The sensitivity ranged from 92.8% to 95.8%, specificity from 43.5% to 55.4%, F1-score from 0.91 to 0.92, precision from 88% to 90.1%, and AUC from 0.78 to 0.85. The 3D ResNet model had the highest precision, AUC, and speed. Conclusions: All 4 deep learning models showed comparable diagnostic properties and discriminative performance. The findings serve as a reasonable proof of concept to support the potential application of AI in the binary classification of video frames to predict medication adherence. %M 38464947 %R 10.2196/40167 %U https://ai.jmir.org/2023/1/e40167 %U https://doi.org/10.2196/40167 %U http://www.ncbi.nlm.nih.gov/pubmed/38464947 %0 Journal Article %@ 2292-9495 %I JMIR Publications %V 9 %N 2 %P e35421 %T Toward an Ecologically Valid Conceptual Framework for the Use of Artificial Intelligence in Clinical Settings: Need for Systems Thinking, Accountability, Decision-making, Trust, and Patient Safety Considerations in Safeguarding the Technology and Clinicians %A Choudhury,Avishek %+ Industrial and Management Systems Engineering, Benjamin M Statler College of Engineering and Mineral Resources, West Virginia University, 1306 Evansdale Drive, PO Box 6107, Morgantown, WV, 26506-6107, United States, 1 5156080777, avishek.choudhury@mail.wvu.edu %K health care %K artificial intelligence %K ecological validity %K trust in AI %K clinical workload %K patient safety %K AI accountability %K reliability %D 2022 %7 21.6.2022 %9 Viewpoint %J JMIR Hum Factors %G English %X The health care management and the medical practitioner literature lack a descriptive conceptual framework for understanding the dynamic and complex interactions between clinicians and artificial intelligence (AI) systems. As most of the existing literature has been investigating AI’s performance and effectiveness from a statistical (analytical) standpoint, there is a lack of studies ensuring AI’s ecological validity. In this study, we derived a framework that focuses explicitly on the interaction between AI and clinicians. The proposed framework builds upon well-established human factors models such as the technology acceptance model and expectancy theory. The framework can be used to perform quantitative and qualitative analyses (mixed methods) to capture how clinician-AI interactions may vary based on human factors such as expectancy, workload, trust, cognitive variables related to absorptive capacity and bounded rationality, and concerns for patient safety.
If leveraged, the proposed framework can help to identify factors influencing clinicians’ intention to use AI and, consequently, improve AI acceptance and address the lack of AI accountability while safeguarding the patients, clinicians, and AI technology. Overall, this paper discusses the concepts, propositions, and assumptions of the multidisciplinary decision-making literature, constituting a sociocognitive approach that extends the theories of distributed cognition and, thus, will account for the ecological validity of AI. %M 35727615 %R 10.2196/35421 %U https://humanfactors.jmir.org/2022/2/e35421 %U https://doi.org/10.2196/35421 %U http://www.ncbi.nlm.nih.gov/pubmed/35727615
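The screening-mammography record above (Zouzos et al, JMIR AI 2023;2:e48123) describes converting a continuous AI CAD prediction score between 0.00 and 1.00 into a binary read at a predetermined cutoff of 0.40 and then comparing the proportion of flagged examinations across groups. The minimal Python sketch below illustrates only that thresholding arithmetic; the function name flagged_proportion, the group labels, and the score values are hypothetical placeholders, not the study's data or code.

```python
# A minimal sketch (not study code): continuous AI score in [0.00, 1.00],
# binary flag at the predetermined 0.40 cutoff, flagged proportion per group.
# All score values and group labels below are made-up placeholders.
from statistics import median

CUTOFF = 0.40  # cutoff threshold stated in the abstract

def flagged_proportion(scores: list[float], cutoff: float = CUTOFF) -> float:
    """Share of examinations whose AI score meets or exceeds the cutoff."""
    flags = [score >= cutoff for score in scores]
    return sum(flags) / len(flags)

# Hypothetical per-group score lists (healthy, benign biopsy, breast cancer).
groups = {
    "healthy_no_biopsy": [0.02, 0.10, 0.35, 0.05],
    "benign_biopsy": [0.15, 0.45, 0.30, 0.55],
    "breast_cancer": [0.80, 0.95, 0.60, 0.45],
}

for name, scores in groups.items():
    print(f"{name}: median={median(scores):.2f}, "
          f"flagged={flagged_proportion(scores):.1%}")
```

With real per-group score lists substituted for the placeholders, the same arithmetic would yield the group-wise flagged proportions that the abstract reports.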
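The tuberculosis medication-adherence record (Sekandi et al, JMIR AI 2023;2:e40167) evaluates its deep learning classifiers with sensitivity, specificity, precision, F1-score, and AUC under 5-fold internal cross-validation. The sketch below shows only how such per-fold metrics can be computed from true labels and model scores; the helper binary_metrics, the 0.5 decision threshold, and the example fold are illustrative assumptions and do not reproduce the study's 3D convolutional models or results.

```python
# A minimal sketch (not study code) of the per-fold evaluation metrics named in
# the abstract: sensitivity, specificity, precision, F1-score, and AUC computed
# from true labels and model scores. The 0.5 threshold and the example fold
# below are assumptions for illustration only.

def binary_metrics(y_true: list[int], y_score: list[float], thresh: float = 0.5) -> dict:
    y_pred = [1 if s >= thresh else 0 for s in y_score]
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    # AUC via the rank-sum (Mann-Whitney U) formulation: the probability that a
    # randomly chosen positive scores higher than a randomly chosen negative.
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    auc = wins / (len(pos) * len(neg)) if pos and neg else float("nan")
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "f1": f1, "auc": auc}

# Hypothetical single fold: 1 = medication intake confirmed, 0 = not confirmed.
fold_labels = [1, 1, 1, 0, 1, 0, 1, 0]
fold_scores = [0.91, 0.85, 0.62, 0.58, 0.97, 0.30, 0.74, 0.49]
print(binary_metrics(fold_labels, fold_scores))
```

In a 5-fold setup, the same computation would be repeated on each held-out fold and the per-fold values summarized as ranges, as in the abstract.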