Review
Abstract
Background: Endometriosis is a chronic gynecological condition that affects a significant portion of women of reproductive age, leading to debilitating symptoms such as chronic pelvic pain and infertility. Despite advancements in diagnosis and management, patient education remains a critical challenge. With the rapid growth of digital platforms, artificial intelligence (AI) has emerged as a potential tool to enhance patient education and access to information.
Objective: This systematic review aims to explore the role of AI in facilitating education and improving information accessibility for individuals with endometriosis.
Methods: This review followed the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines to ensure rigorous and transparent reporting. We conducted a comprehensive search of PubMed; Embase; the Regional Online Information System for Scientific Journals of Latin America, the Caribbean, Spain and Portugal (LATINDEX); Latin American and Caribbean Literature in Health Sciences (LILACS); Institute of Electrical and Electronics Engineers (IEEE) Xplore; and the Cochrane Central Register of Controlled Trials using the terms “endometriosis” and “artificial intelligence.” Studies were selected based on their focus on AI applications in patient education or information dissemination regarding endometriosis. We included studies that evaluated AI-driven tools for assessing patient knowledge or that addressed frequently asked questions related to endometriosis. Data extraction and quality assessment were conducted independently by 2 authors, with discrepancies resolved through consensus.
Results: Out of 400 initial search results, 11 studies met the inclusion criteria and were fully reviewed. We ultimately included 3 studies, 1 of which was an abstract. The studies examined the use of AI models, such as ChatGPT (OpenAI), machine learning, and natural language processing, in providing educational resources and answering common questions about endometriosis. The findings indicated that AI tools, particularly large language models, offer accurate responses to frequently asked questions with varying degrees of sufficiency across different categories. AI’s integration with social media platforms also highlights its potential to identify patients’ needs and enhance information dissemination.
Conclusions: AI holds promise in advancing patient education and information access for endometriosis, providing accurate and comprehensive answers to common queries, and facilitating a better understanding of the condition. However, challenges remain in ensuring ethical use, equitable access, and maintaining accuracy across diverse patient populations. Future research should focus on developing standardized approaches for evaluating AI’s impact on patient education and exploring its integration into clinical practice to enhance support for individuals with endometriosis.
doi:10.2196/64593
Introduction
Endometriosis, a chronic gynecological condition characterized by the presence of endometrial-like tissue outside the uterus, affects 6% to 10% of reproductive-aged women [ , ]. This disease has a high degree of morbidity due to chronic pelvic pain and infertility [ , ]. It is likely polygenic and multifactorial, but the exact pathogenic mechanisms remain unclear [ , ]. Endometriosis not only causes debilitating symptoms such as chronic pelvic pain, dysmenorrhea, and infertility but also poses substantial challenges in diagnosis, management, and patient education [ , ].

Quality of life in women with endometriosis is a widely debated topic within the medical community, as it is influenced by the unpredictability of symptom progression, varying treatment outcomes, and the psychosocial impact of living with a chronic illness [ - ]. Recent studies have highlighted the association of endometriosis with psychiatric comorbidities, such as anxiety, eating disorders, and mood disorders [ ]. This exacerbates the multifaceted burden faced by women with endometriosis, highlighting the need to measure and understand their uncertainties and questions related to the condition.

In the digital age, where information dissemination and patient empowerment are increasingly facilitated through online platforms, social media has emerged as a prominent avenue for individuals seeking support, information, and community engagement [ ]. The exponential growth of social media use, coupled with advancements in artificial intelligence (AI), presents new opportunities and challenges in how patients access and interpret health information. Patients’ access to information through social platforms needs to be assessed, since guidance based on high-quality evidence is necessary [ ].

AI technologies such as natural language processing and machine learning algorithms have revolutionized data analysis capabilities, enabling the extraction of meaningful insights from the vast amounts of unstructured data generated on social media platforms [ - ]. These tools not only enhance the efficiency of processing large datasets but also offer potential solutions to mitigate the risks of misinformation and improve the dissemination of evidence-based medical knowledge [ ].

Despite these advancements, significant gaps remain in understanding how AI can best serve the needs of patients with endometriosis, particularly in facilitating informed decision-making, enhancing health literacy, and addressing the unique informational needs of diverse patient populations. The ethical implications of AI-driven interventions in patient education and support must also be carefully considered to ensure equitable access and privacy protection [ , ].

Therefore, this systematic review aims to critically evaluate the current literature on the role of AI in patient education and information access for endometriosis. By synthesizing the existing evidence, we seek to elucidate the potential benefits, challenges, and future directions of AI integration in improving the quality of care and support for individuals affected by this complex condition.
Methods
End Points, Eligibility, and Selection Criteria
This systematic review was performed according to the recommendations of the Cochrane Collaboration [ ] and the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement [ ].

Studies were included in this review only if they addressed (1) knowledge about endometriosis evaluated through AI or (2) AI platform answers to common questions regarding endometriosis. We excluded studies (1) with patients who had not received an endometriosis diagnosis, (2) in which the evaluation was not performed using AI, and (3) that did not apply language models for acquiring answers or knowledge about the disease. Length of follow-up, language of publication, and type of study were not restricted, in order to capture as many relevant studies as possible. We collected and analyzed common data from the studies for comparison purposes.
Search Strategy and Data Extraction
We systematically searched PubMed, Embase, the Regional Online Information System for Scientific Journals of Latin America, the Caribbean, Spain and Portugal (LATINDEX), Latin American and Caribbean Literature in Health Sciences (LILACS), Institute of Electrical and Electronics Engineers (IEEE) Xplore, and the Cochrane Central Register of Controlled Trials in May 2024; the search was updated in September 2024. We used the following Medical Subject Heading (MeSH) terms: “endometriosis” and “artificial intelligence.” Because search syntax varies across databases, each search strategy was tailored to the specific requirements of each database to ensure comprehensiveness. Notably, the tailored strategies, detailed in Table S1, either increased the number of included papers or retained those identified by the previous strategy. An illustrative query is shown below.
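For illustration only, a PubMed strategy combining the two MeSH terms might look like the query below. The field tags and the expansion to free-text synonyms are our assumptions for the example, not the verbatim strategy used; the strategies actually applied to each database are reported in Table S1.

```
("endometriosis"[MeSH Terms] OR "endometriosis"[Title/Abstract])
AND ("artificial intelligence"[MeSH Terms]
     OR "artificial intelligence"[Title/Abstract]
     OR "machine learning"[Title/Abstract]
     OR "natural language processing"[Title/Abstract])
```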
Screening was carried out independently by 2 authors (JAO and KE) following the predefined search criteria. Both authors also performed data collection for the included studies independently. Any conflicts were resolved by consensus among the authors.
Quality Assessment
To assess the quality of the included studies, we used tools from the Joanna Briggs Institute (JBI) [ ]. The study that used AI to analyze social media responses [ ] was evaluated using the Critical Appraisal Checklist for Analytical Cross-Sectional Studies, while the studies that investigated AI-generated responses [ , ] were evaluated using the Critical Appraisal Checklist for Textual Evidence [ ]. Although these tools are widely accepted, we acknowledge that they were not specifically designed for AI.
Results
Study Selection and Description of Included Studies
The initial search yielded 400 results. After duplicate records and ineligible studies were removed, 11 studies remained and were fully reviewed against the inclusion criteria. Of these, 3 studies were included, 1 of which was an abstract; 2 evaluated language models and consequently had no patients [ , ], while 1 evaluated patients’ knowledge over the previous 11 years through comments, covering 31,144 online users [ ]. Of the 3 studies, 2 had similar designs involving expert analysis of chatbot responses [ , ], and 1 was a sentiment analysis and topic modeling study with observational data (user-generated content) [ ].

The use of AI was not described in much detail. One study cited use of the “WordNetLemmatizer” and “PorterStemmer” functions from the Natural Language Toolkit (NLTK) package and the SpellChecker package in Python for linguistic correction and reduction of word forms [ ]. For topic definition, the same study applied the LDAMulticore algorithm, a probabilistic generative model. The algorithm identified the main topics of the comments and posts and the 10 words most related to each topic [ ]; a minimal sketch of this kind of pipeline is given after Table 1. Unfortunately, no further research data were available in the manuscript or supplemental material, or by request.

Additional study characteristics are reported in Table 1.
| Author, year | AIa model used | Aim | Main area | Patients, n | Main findings |
|---|---|---|---|---|---|
| Cohen et al [ ], 2024 (abstract) | ChatGPT (GPT-4; OpenAI), Claude (Anthropic), and Bard (Google) | To assess and compare the chatbots’ accuracy in answering questions about endometriosis | Patients’ knowledge through AI | —b | Experts graded the answers, which were mostly accurate yet insufficient for commonly raised inquiries. |
| Goel et al [ ], 2023 | Machine learning with BERTc; sentiment analysis assisted by Python | To identify, with the Reddit application programming interface, discussion topics and themes to guide health care professionals and researchers on women’s needs | Patients’ needs | 31,144 | Social media might help to close the gap between research priorities and the topics discussed on social media regarding endometriosis. Surgery, advice, diagnosis, mental health, and pain should be further discussed. |
| Ozgor and Simavi [ ], 2024 | ChatGPT | To assess the quality of ChatGPT FAQd answers about endometriosis | Patients’ knowledge through AI | — | Of all FAQs, 91.4% (n=71) were properly answered. Accuracy was highest in the symptom and diagnosis category (91.1%) and lowest in the treatment category (81.3%). |

aAI: artificial intelligence.
bNot applicable.
cBERT: Bidirectional Encoder Representations from Transformers.
dFAQ: frequently asked question.
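The following is a minimal sketch of the kind of preprocessing and topic-modeling pipeline described above, using NLTK, the pyspellchecker package, and Gensim’s LdaMulticore. The sample posts, the number of topics, and all parameter values are our illustrative assumptions; the original study’s data and exact configuration were not available.

```python
# Sketch of an NLTK + Gensim topic-modeling pipeline similar to the one described
# above. All data and parameter choices are illustrative, not the study's settings.
# Requires NLTK data packages: punkt and wordnet (via nltk.download).
from nltk.stem import WordNetLemmatizer, PorterStemmer
from nltk.tokenize import word_tokenize
from spellchecker import SpellChecker  # pyspellchecker package
from gensim.corpora import Dictionary
from gensim.models import LdaMulticore

posts = [
    "Had my lap surgery last week, recovery has been rough",
    "Does anyone else get terrible pain before diagnosis?",
]  # stand-in for the scraped Reddit posts and comments

lemmatizer = WordNetLemmatizer()
stemmer = PorterStemmer()
spell = SpellChecker()

def preprocess(text):
    # Tokenize, correct spelling, then reduce word forms by lemmatizing and stemming
    tokens = [t.lower() for t in word_tokenize(text) if t.isalpha()]
    corrected = [spell.correction(t) or t for t in tokens]
    return [stemmer.stem(lemmatizer.lemmatize(t)) for t in corrected]

docs = [preprocess(p) for p in posts]
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

# LdaMulticore is Gensim's parallelized latent Dirichlet allocation;
# num_topics=5 is an assumption, not the study's setting
lda = LdaMulticore(corpus, id2word=dictionary, num_topics=5, passes=10, workers=2)
for topic_id, words in lda.show_topics(num_topics=5, num_words=10, formatted=False):
    # Print the 10 words most related to each topic, as the study describes
    print(topic_id, [w for w, _ in words])
```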
AI and Social Media Platforms
To better understand the needs of women of all ages with endometriosis, a study was conducted on Reddit, a platform where people join communities focused on common topics, covering the previous 136 months (11.3 years). A total of 45,693 posts and 357,498 comments were analyzed; 92.09% of the posts were associated with negative sentiments in a sentiment analysis. Most posts were related to surgery (16.85%), followed by questions or advice (16.12%); diagnoses (12.34%); and feelings, depression, or pain (6.4%). The areas with the most comments involved sex and intimacy. Importantly, an exploratory manual analysis of 3000 randomly selected posts found that 0.5% were made by individuals concerned about loved ones with endometriosis, reflecting the impact of endometriosis beyond patients themselves. Furthermore, after 2011 there was an increase in the number of posts and comments, which might be related to higher awareness of endometriosis [ ]. A limitation was that Reddit users are mainly from the United States, Australia, and India; because health care assistance differs among countries, the questions and doubts of these users might not represent women with endometriosis worldwide [ ]. A minimal sketch of this kind of sentiment classification follows.
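The study reports BERT-assisted sentiment analysis in Python (Table 1). As a minimal sketch of that idea, the snippet below classifies post text with a pretrained transformer via the Hugging Face pipeline API. The default pipeline model and the sample posts are our assumptions; the study’s actual model and data were not reported.

```python
# Sketch: sentiment classification of posts with a pretrained BERT-family model.
# The default pipeline model (a DistilBERT fine-tuned on SST-2) is an assumption;
# the study's actual model was not reported.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

posts = [
    "My pain has been unbearable since the diagnosis.",
    "Finally found a specialist who listens to me!",
]  # stand-in for the scraped Reddit posts

results = classifier(posts)  # list of {"label": ..., "score": ...} dicts
negative_share = sum(r["label"] == "NEGATIVE" for r in results) / len(results)
print(results)
print(f"Share of negative posts: {negative_share:.2%}")
```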
AI and Large Language Models
Frequently asked questions (FAQs) on endometriosis were used by 2 studies to assess the accuracy of language models [ , ]. One study [ ] applied to ChatGPT a wide range of questions: questions identified on social media and online platforms (n=41), as well as scientific questions (n=40) based on the European Society of Human Reproduction and Embryology (ESHRE) endometriosis guidelines. The 81 compiled questions were classified as concerning general information (n=20), symptoms and diagnosis (n=17), treatment (n=16), prevention (n=15), and complications (n=13). An experienced endometriosis gynecologist scored each ChatGPT answer from 1 to 4: 1 for completely true answers, 2 for accurate answers with insufficient data, 3 for answers containing correct and incorrect information, and 4 for completely incorrect answers [ ].

A total of 91.4% (n=71) of the FAQ answers were considered accurate and sufficient, while no answers were considered completely incorrect. ESHRE-based questions mostly received completely true answers (67.5%). ChatGPT had the highest accuracy in the symptom and diagnosis category (94.1%) and the lowest in the treatment category (81.3%). Each question was asked twice, and if the two answers diverged, the question was considered to have negative reproducibility. The reproducibility rate was 100% for questions related to prevention, symptoms and diagnosis, and complications; the lowest rates were for treatment questions (81.3%) and ESHRE-based questions (70%) [ ]. A small worked example of how such rates can be computed follows.
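As a minimal illustration of how per-category accuracy and reproducibility rates of this kind are computed, the sketch below grades a handful of question-and-answer pairs. The records, the categories, and the mapping of grades 1 to 2 to “accurate” are invented for the example and are not the study’s data or exact definitions.

```python
# Sketch: computing per-category accuracy and reproducibility as described above.
# Grades: 1 = completely true, 2 = accurate but insufficient data,
# 3 = mixed correct/incorrect, 4 = completely incorrect.
# All records are hypothetical; treating grades 1-2 as "accurate" and equal grades
# on repeat asking as "reproducible" are our assumptions for the illustration.
from collections import defaultdict

records = [
    # (category, grade of first answer, grade of repeated answer)
    ("treatment", 1, 1),
    ("treatment", 2, 3),   # divergent answers -> negative reproducibility
    ("prevention", 1, 1),
    ("symptoms and diagnosis", 2, 2),
]

acc = defaultdict(lambda: [0, 0])   # category -> [accurate count, total]
rep = defaultdict(lambda: [0, 0])   # category -> [reproducible count, total]

for category, g1, g2 in records:
    acc[category][0] += g1 in (1, 2)
    acc[category][1] += 1
    rep[category][0] += g1 == g2
    rep[category][1] += 1

for category in acc:
    a, n = acc[category]
    r, _ = rep[category]
    print(f"{category}: accuracy {a / n:.1%}, reproducibility {r / n:.1%}")
```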
The other study assessed 3 large language models (LLMs) with 10 FAQs on endometriosis. The answers were compared with guidelines and expert opinions and were rated by 3 gynecologists, whose scores were averaged. A grading approach similar to the previous study was used: 1 if completely incorrect, 2 if mostly incorrect but somewhat correct, 3 if mostly correct but somewhat incorrect, 4 if correct but inadequate, and 5 if correct and comprehensive. Among the 3 LLMs, Bard had better average scores than ChatGPT or Claude. Most answers were considered correct but inadequate, and only 1 ChatGPT answer and 1 Bard answer were graded as 5 by all experts [ ]. Only this study shared the questions applied to the chatbots. A brief illustration of this multi-expert grading scheme follows.
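A minimal sketch of the rating scheme just described: average each answer’s grades across the 3 experts and flag answers unanimously graded 5. All scores below are invented for illustration, not the study’s data.

```python
# Sketch: averaging 1-5 expert grades per chatbot answer, as described above.
# Scores are hypothetical; 5 = correct and comprehensive.
ratings = {
    "ChatGPT": [[4, 4, 5], [5, 5, 5]],   # second answer unanimously graded 5
    "Claude":  [[3, 4, 4], [4, 4, 5]],
    "Bard":    [[5, 5, 5], [4, 5, 4]],
}

for model, answers in ratings.items():
    averages = [sum(grades) / len(grades) for grades in answers]
    unanimous_top = sum(all(g == 5 for g in grades) for grades in answers)
    print(f"{model}: mean score {sum(averages) / len(averages):.2f}, "
          f"answers unanimously graded 5: {unanimous_top}")
```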
Quality Assessment
The quality assessment revealed considerable variation among the studies. The study that used AI to analyze social media responses [ ] received a moderate quality rating (62%) based on the JBI Critical Appraisal Checklist for Analytical Cross-Sectional Studies. This rating was primarily due to the lack of clear identification of patients with and without endometriosis and the inability to explore confounding factors. In contrast, the studies that evaluated AI-generated responses [ , ] were assessed using the JBI Critical Appraisal Checklist for Textual Evidence and were classified as low quality (<50%), mainly due to insufficient information about the sources of opinions and limited presentation of the data. While these tools were not specifically designed for evaluating studies involving AI, they were the most appropriate options available.
Discussion
Principal Findings
LLMs are frequently used as a source of knowledge on health conditions due to their ability to provide rapid and concise responses [ ]. They have been found to provide answers to questions about cardiovascular disorders [ ] and pediatric urology [ ] that are accurate and consistent with subspecialty guidelines, and to interpret radiological imaging with a low error rate [ ]. Yet, doubts remain concerning the use of AI in medical practice and as a source of patient education [ ].

Social media is often used to identify patients’ struggles and for patient education through the exchange of experiences [ , ]. Many patients use social media for education, but previous studies have observed that inaccurate information is shared 30-fold more often than accurate information [ ]. Unfortunately, research- and education-related posts attract less engagement than posts related to emotional support [ , ].

Compared with online chat, chatbots have become a faster and safer way of acquiring information on health conditions [ ]. Because of this, their answers have been compared with human scores and test results. One study applied United States Medical Licensing Examination (USMLE) step 1, 2, and 3 questions to ChatGPT, which achieved over 60% accuracy without prior training [ , ]. ChatGPT has also been tested on ophthalmology resident examination questions and obtained scores similar to those of ophthalmology residents [ ].

Chatbots can be a particularly useful asset for patients with endometriosis, since diagnosis is often delayed by 6 to 14 years from first symptoms [ , ]. This delay is commonly due to patients and physicians normalizing the symptoms, which stems from a limited understanding of endometriosis etiology and restricted access to specialized care [ - ]. Additionally, diagnosing endometriosis can be challenging and requires, regardless of the method, evaluation by specialized physicians [ , ]. The diagnosis can be made either through the standard diagnostic method (laparoscopy combined with abdominal cavity exploration and histological biopsy [ ]) or through secondary methods, such as magnetic resonance imaging and transvaginal ultrasound [ ]. All these factors contribute to treatment and diagnosis delay, with clinical implications such as chronic pain, reduced quality of life, and higher treatment costs [ ]. The exact impact on fertility is still unknown [ - ], but women with a short delay have been noted to be less likely to have infertility [ ]. These factors underscore the need for further evaluation of online support groups’ access to information [ ] and of AI responses, especially in places with low access to specialized care.

AI is already being used in medical practice, and responsible AI use is vital to making the most of the relationship between health care professionals and patients [ , ], not only to better understand patients’ feelings [ ] but also to verify whether the information provided to them is correct [ , ]. One study reported that only 25% of general practitioners felt adequately informed about endometriosis, though those with gynecology qualifications (58.9%) or continuing medical education in the field (19.6%) had better awareness [ ]. In the general population, women’s knowledge about endometriosis is generally low. In a study in a high-income country, only 4.5% of women reported very good knowledge, and about one-third indicated sufficient or good knowledge about endometriosis [ ]. In a lower-middle-income country, women from the general population had a mean endometriosis knowledge score of 4.2 of 10 [ ]. Patients’ use of AI for education and clarification is common and might enhance knowledge about endometriosis, especially when medical explanations are insufficient or not easily available [ ]. AI is part of digital health, and using it for patients’ benefit is needed [ , , ].

This review emphasizes the need to apply AI to data analysis and to increase the amount of evaluated data. Furthermore, analysis of ChatGPT responses is important, since many health professionals use it as a supplementary source of information to help patients obtain a better understanding of the disease. It can also increase the time health professionals can devote to their patients.
This review has limitations. First, not all questions used in the studies were made available. Second, chatbot use may be more common in some countries, as is known to be the case for Reddit [ ], and only 1 of the cited studies tested reproducibility [ ]. Furthermore, the questions, posts, and comments could not be separated by source: people with endometriosis, friends or family of people with endometriosis, health professionals, or people who are simply curious.

Although we rigorously followed the Cochrane and PRISMA guidelines throughout the process, the protocol for this systematic review was not registered, which may affect the transparency and replicability of the findings. Registration on appropriate platforms was considered but could not be completed due to technical difficulties, approval delays, and the temporary suspension of new submissions. Additionally, the exploratory nature of the review, particularly in the field of AI, and the specificities of the study design, which did not fully meet the criteria of these platforms, also contributed to this limitation.
The absence of specific tools for evaluating studies involving AI was an important limitation of this study. Although we used the JBI tools, we acknowledge that they do not fully capture the particularities of AI [ ]. PROBAST (Prediction model study Risk Of Bias Assessment Tool)-AI, a tool currently being developed using the Delphi method, will be essential for improving quality assessment in future AI studies, offering greater precision and relevance [ ]. However, this tool was not available at the time this article was completed [ ].

The limited scope of this review restricts a comprehensive understanding of the topic. Future studies should adopt standardized methodologies, with validated questions and greater access to data from online users with endometriosis, in order to expand the understanding of the impact of AI on patient education for this condition.
Conclusions
Patient education can be better assessed through the evaluation of AI tools, which might provide insights on endometriosis for patients with the disease, as well as for professionals in other specialties. The use of AI for endometriosis should remain under a gynecologist’s supervision and might be beneficial for diagnosis and follow-up insights. LLMs cannot guide clinical decisions; these should be based on current endometriosis guidelines.
Acknowledgments
JAO receives financial support from the Fundação de Amparo à Pesquisa do Estado de Minas Gerais (grant 133). ChatGPT (GPT-4), developed by OpenAI, was used as a support tool in specific sections of the drafting and revision process of this article. Although it contributed significantly to the enhancement of these sections, all final analyses and interpretations are solely the responsibility of the authors.
Authors' Contributions
JAO contributed to conceptualization, methodology, investigation, data curation, writing (original draft), visualization, supervision, and project administration. KE and EK participated in the investigation and data curation. FRdO contributed to conceptualization, writing (review and editing), and supervision. ALdSF participated in supervision and funding acquisition.
Conflicts of Interest
None declared.
Tables S1 and S2 (DOCX file, 16 KB).
Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) checklist (PDF file, 100 KB).
References
- Brosens I, Gordts S, Benagiano G. Endometriosis in adolescents is a hidden, progressive and severe disease that deserves attention, not just compassion. Hum Reprod. Aug 2013;28(8):2026-2031. [FREE Full text] [CrossRef] [Medline]
- Marsh EE, Laufer MR. Endometriosis in premenarcheal girls who do not have an associated obstructive anomaly. Fertil Steril. Mar 2005;83(3):758-760. [FREE Full text] [CrossRef] [Medline]
- Oliveira FR, Casalechi M, Carneiro MM, de Ávila I, Dela Cruz C, Del Puerto HL, et al. Immunolocalization of stem/progenitor cell biomarkers Oct-4, C-kit and Musashi-1 in endometriotic lesions. Mol Biol Rep. Oct 2021;48(10):6863-6870. [CrossRef] [Medline]
- Oliveira F, Dela Cruz C, Del Puerto HL, Vilamil Q, Reis F, Camargos AF. Stem cells: are they the answer to the puzzling etiology of endometriosis? Histol Histopathol. Jan 2012;27(1):23-29. [CrossRef] [Medline]
- Holowka EM. Mediating pain: navigating endometriosis on social media. Front Pain Res (Lausanne). 2022;3:889990. [FREE Full text] [CrossRef] [Medline]
- van den Haspel K, Reddington C, Healey M, Li R, Dior U, Cheng C. The role of social media in management of individuals with endometriosis: A cross-sectional study. Aust N Z J Obstet Gynaecol. Oct 2022;62(5):701-706. [CrossRef] [Medline]
- Metzler JM, Kalaitzopoulos DR, Burla L, Schaer G, Imesch P. Examining the influence on perceptions of endometriosis via analysis of social media posts: cross-sectional study. JMIR Form Res. Mar 18, 2022;6(3):e31135. [FREE Full text] [CrossRef] [Medline]
- Towne J, Suliman Y, Russell KA, Stuparich MA, Nahas S, Behbehani S. Health information in the era of social media: an analysis of the nature and accuracy of posts made by public Facebook pages for patients with endometriosis. J Minim Invasive Gynecol. Sep 2021;28(9):1637-1642. [CrossRef] [Medline]
- Wilson S, Mogan S, Kaur K. Understanding the role of Facebook to support women with endometriosis: a Malaysian perspective. Int J Nurs Pract. Aug 2020;26(4):e12833. [CrossRef] [Medline]
- Sinai D, Avni C, Toren P. Beyond physical pain: A large-scale cohort study on endometriosis trends and mental health correlates. J Psychosom Res. Jul 2024;182:111809. [CrossRef] [Medline]
- Goel R, Modhukur V, Täär K, Salumets A, Sharma R, Peters M. Users' concerns about endometriosis on social media: sentiment analysis and topic modeling study. J Med Internet Res. Aug 15, 2023;25:e45381. [FREE Full text] [CrossRef] [Medline]
- Cohen N, Kho K, Smith K. Battle of the bots: a comparative analysis of generative AI responses from leading chatbots to patient questions about endometriosis. Am J Obstet Gynecol. Apr 2024;230(4):S1170. [CrossRef]
- Ozgor BY, Simavi MA. Accuracy and reproducibility of ChatGPT's free version answers about endometriosis. Int J Gynaecol Obstet. May 2024;165(2):691-695. [CrossRef] [Medline]
- Higgins J, Thomas J, Chandler J, Cumpston M, Li T, Page M. Cochrane Handbook for Systematic Reviews of Interventions. Cochrane Training. URL: https://training.cochrane.org/handbook/current [accessed 2024-10-24]
- Page M, McKenzie J, Bossuyt P, Boutron I, Hoffmann T, Mulrow C, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. Mar 29, 2021;372:n71. [FREE Full text] [CrossRef] [Medline]
- Critical appraisal tools. Joanna Briggs Institute. URL: https://jbi.global/critical-appraisal-tools [accessed 2024-10-24]
- McArthur A, Klugarova J, Yan H, Florescu S. Chapter 4: systematic reviews of text and opinion. In: Aromataris E, Munn Z, editors. JBI Manual for Evidence Synthesis. Adelaide, Australia. Joanna Briggs Institute; 2020.
- Sattelberg W. The demographics of Reddit: who uses the site? Alphr. URL: https://www.alphr.com/demographics-reddit/ [accessed 2024-10-24]
- Zhou Z, Wang X, Li X, Liao L. Is ChatGPT an evidence-based doctor? Eur Urol. Sep 2023;84(3):355-356. [CrossRef] [Medline]
- Van Bulck L, Moons P. Response to the letter to the editor - Dr. ChatGPT in cardiovascular nursing: a deeper dive into trustworthiness, value, and potential risk. Eur J Cardiovasc Nurs. Jan 12, 2024;23(1):e13-e14. [CrossRef] [Medline]
- Caglar U, Yildiz O, Meric A, Ayranci A, Gelmis M, Sarilar O, et al. Evaluating the performance of ChatGPT in answering questions related to pediatric urology. J Pediatr Urol. Feb 2024;20(1):26.e1-26.e5. [CrossRef] [Medline]
- Gilson A, Safranek CW, Huang T, Socrates V, Chi L, Taylor RA, et al. How does ChatGPT perform on the United States Medical Licensing Examination (USMLE)? The implications of large language models for medical education and knowledge assessment. JMIR Med Educ. Feb 08, 2023;9:e45312. [FREE Full text] [CrossRef] [Medline]
- Zhou Z. Evaluation of ChatGPT's capabilities in medical report generation. Cureus. Apr 2023;15(4):e37589. [FREE Full text] [CrossRef] [Medline]
- Alsyouf M, Stokes P, Hur D, Amasyali A, Ruckle H, Hu B. 'Fake news' in urology: evaluating the accuracy of articles shared on social media in genitourinary malignancies. BJU Int. Oct 2019;124(4):701-706. [CrossRef] [Medline]
- Burnette H, Pabani A, von Itzstein MS, Switzer B, Fan R, Ye F, et al. Use of artificial intelligence chatbots in clinical management of immune-related adverse events. J Immunother Cancer. May 30, 2024;12(5):e008599. [FREE Full text] [CrossRef] [Medline]
- Knoedler L, Knoedler S, Hoch CC, Prantl L, Frank K, Soiderer L, et al. In-depth analysis of ChatGPT's performance based on specific signaling words and phrases in the question stem of 2377 USMLE step 1 style questions. Sci Rep. Jun 12, 2024;14(1):13553. [FREE Full text] [CrossRef] [Medline]
- Lubell J. ChatGPT passed the USMLE. What does it mean for med ed? American Medical Association. URL: https://www.ama-assn.org/practice-management/digital/chatgpt-passed-usmle-what-does-it-mean-med-ed [accessed 2024-10-24]
- Antaki F, Touma S, Milad D, El-Khoury J, Duval R. Evaluating the performance of ChatGPT in ophthalmology: an analysis of its successes and shortcomings. Ophthalmol Sci. Dec 2023;3(4):100324. [FREE Full text] [CrossRef] [Medline]
- Surrey E, Soliman AM, Trenz H, Blauer-Peterson C, Sluis A. Impact of endometriosis diagnostic delays on healthcare resource utilization and costs. Adv Ther. Mar 2020;37(3):1087-1099. [FREE Full text] [CrossRef] [Medline]
- Requadt E, Nahlik AJ, Jacobsen A, Ross WT. Patient experiences of endometriosis diagnosis: A mixed methods approach. BJOG. Jun 2024;131(7):941-951. [CrossRef] [Medline]
- Cromeens MG, Carey ET, Robinson WR, Knafl K, Thoyre S. Timing, delays and pathways to diagnosis of endometriosis: a scoping review protocol. BMJ Open. Jun 24, 2021;11(6):e049390. [FREE Full text] [CrossRef] [Medline]
- Fryes J, Mason-Jones A, Woodward A. Understanding diagnostic delay for endometriosis: a scoping review. medRxiv. Preprint posted January 9, 2024. [CrossRef]
- Sonntagbauer M, Haar M, Kluge S. [Artificial intelligence: How will ChatGPT and other AI applications change our everyday medical practice?]. Med Klin Intensivmed Notfmed. Jun 2023;118(5):366-371. [CrossRef] [Medline]
- Tsamantioti E, Mahdy H. Endometriosis. StatPearls. URL: https://www.ncbi.nlm.nih.gov/books/NBK567777 [accessed 2024-10-24]
- Kiesel L, Sourouni M. Diagnosis of endometriosis in the 21st century. Climacteric. Jun 2019;22(3):296-302. [CrossRef] [Medline]
- Walsh G, Stogiannos N, van de Venter R, Rainey C, Tam W, McFadden S, et al. Responsible AI practice and AI education are central to AI implementation: a rapid review for all medical imaging professionals in Europe. BJR Open. 2023;5(1):20230033. [FREE Full text] [CrossRef] [Medline]
- Roullier C, Sanguin S, Parent C, Lombart M, Sergent F, Foulon A. General practitioners and endometriosis: Level of knowledge and the impact of training. J Gynecol Obstet Hum Reprod. Dec 2021;50(10):102227. [CrossRef] [Medline]
- Szymańska J, Dąbrowska-Galas M. An assessment of Polish women's level of knowledge about endometriosis: a pilot study. BMC Womens Health. Dec 07, 2021;21(1):404. [FREE Full text] [CrossRef] [Medline]
- Saad M, Rafiq A, Jamil A, Sarfraz Z, Sarfraz A, Robles-Velasco K, et al. Addressing the endometriosis knowledge gap for improved clinical care-a cross-sectional pre- and post-educational-intervention study among Pakistani women. Healthcare (Basel). Mar 09, 2023;11(6):809. [FREE Full text] [CrossRef] [Medline]
- Miller DD. The medical AI insurgency: what physicians must know about data to practice with intelligent machines. NPJ Digit Med. 2019;2(1):62. [FREE Full text] [CrossRef] [Medline]
- Balogh DB, Hudelist G, Bļizņuks D, Raghothama J, Becker CM, Horace R, et al. FEMaLe: The use of machine learning for early diagnosis of endometriosis based on patient self-reported data-study protocol of a multicenter trial. PLoS One. 2024;19(5):e0300186. [FREE Full text] [CrossRef] [Medline]
- Collins GS, Dhiman P, Andaur Navarro CL, Ma J, Hooft L, Reitsma JB, et al. Protocol for development of a reporting guideline (TRIPOD-AI) and risk of bias tool (PROBAST-AI) for diagnostic and prognostic prediction model studies based on artificial intelligence. BMJ Open. Jul 09, 2021;11(7):e048008. [CrossRef] [Medline]
Abbreviations
AI: artificial intelligence
ESHRE: European Society of Human Reproduction and Embryology
FAQ: frequently asked question
IEEE: Institute of Electrical and Electronics Engineers
JBI: Joanna Briggs Institute
LATINDEX: Regional Online Information System for Scientific Journals of Latin America, the Caribbean, Spain and Portugal
LILACS: Latin American and Caribbean Literature in Health Sciences
LLM: large language model
MeSH: Medical Subject Heading
PRISMA: Preferred Reporting Items for Systematic reviews and Meta-Analyses
PROBAST: Prediction model study Risk Of Bias Assessment Tool
USMLE: United States Medical Licensing Examination
Edited by K El Emam, B Malin; submitted 21.07.24; peer-reviewed by E Cândido, A Jafarizadeh; comments to author 02.09.24; revised version received 02.09.24; accepted 26.09.24; published 30.10.24.
Copyright©Juliana Almeida Oliveira, Karine Eskandar, Emre Kar, Flávia Ribeiro de Oliveira, Agnaldo Lopes da Silva Filho. Originally published in JMIR AI (https://ai.jmir.org), 30.10.2024.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR AI, is properly cited. The complete bibliographic information, a link to the original publication on https://www.ai.jmir.org/, as well as this copyright and license information must be included.