<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.0 20040830//EN" "journalpublishing.dtd"><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" dtd-version="2.0" xml:lang="en" article-type="research-article"><front><journal-meta><journal-id journal-id-type="nlm-ta">JMIR AI</journal-id><journal-id journal-id-type="publisher-id">ai</journal-id><journal-id journal-id-type="index">41</journal-id><journal-title>JMIR AI</journal-title><abbrev-journal-title>JMIR AI</abbrev-journal-title><issn pub-type="epub">2817-1705</issn><publisher><publisher-name>JMIR Publications</publisher-name><publisher-loc>Toronto, Canada</publisher-loc></publisher></journal-meta><article-meta><article-id pub-id-type="publisher-id">v5i1e88651</article-id><article-id pub-id-type="doi">10.2196/88651</article-id><article-categories><subj-group subj-group-type="heading"><subject>Viewpoint</subject></subj-group></article-categories><title-group><article-title>Ethical Risks and Structural Implications of AI-Mediated Medical Interpreting</article-title></title-group><contrib-group><contrib contrib-type="author" corresp="yes"><name name-style="western"><surname>Lopez Vera</surname><given-names>Alexandra</given-names></name><degrees>MPH, CHES, PhD</degrees><xref ref-type="aff" rid="aff1"/></contrib></contrib-group><aff id="aff1"><institution>California University of Science and Medicine</institution><addr-line>1501 Violet St</addr-line><addr-line>Colton</addr-line><addr-line>CA</addr-line><country>United States</country></aff><contrib-group><contrib contrib-type="editor"><name name-style="western"><surname>Malin</surname><given-names>Bradley</given-names></name></contrib></contrib-group><contrib-group><contrib contrib-type="reviewer"><name name-style="western"><surname>Pena</surname><given-names>Carmen</given-names></name></contrib><contrib contrib-type="reviewer"><name 
name-style="western"><surname>Fernandez</surname><given-names>Leonor</given-names></name></contrib></contrib-group><author-notes><corresp>Correspondence to Alexandra Lopez Vera, MPH, CHES, PhD, California University of Science and Medicine, 1501 Violet St, Colton, CA, 92324, United States, 1 9095809661; <email>alexandra.lopezvera@cusm.edu</email></corresp></author-notes><pub-date pub-type="collection"><year>2026</year></pub-date><pub-date pub-type="epub"><day>5</day><month>2</month><year>2026</year></pub-date><volume>5</volume><elocation-id>e88651</elocation-id><history><date date-type="received"><day>28</day><month>11</month><year>2025</year></date><date date-type="rev-recd"><day>05</day><month>01</month><year>2026</year></date><date date-type="accepted"><day>09</day><month>01</month><year>2026</year></date></history><copyright-statement>&#x00A9; Alexandra Lopez Vera. Originally published in JMIR AI (<ext-link ext-link-type="uri" xlink:href="https://ai.jmir.org">https://ai.jmir.org</ext-link>), 5.2.2026. </copyright-statement><copyright-year>2026</copyright-year><license license-type="open-access" xlink:href="https://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">https://creativecommons.org/licenses/by/4.0/</ext-link>), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR AI, is properly cited. 
The complete bibliographic information, a link to the original publication on <ext-link ext-link-type="uri" xlink:href="https://www.ai.jmir.org/">https://www.ai.jmir.org/</ext-link>, as well as this copyright and license information must be included.</p></license><self-uri xlink:type="simple" xlink:href="https://ai.jmir.org/2026/1/e88651"/><abstract><p>Artificial intelligence (AI) is increasingly used to support medical interpreting and public health communication, yet current systems introduce serious risks to accuracy, confidentiality, and equity, particularly for speakers of low-resource languages. Automatic translation models often struggle with regional varieties, figurative language, culturally embedded meanings, and emotionally sensitive conversations about reproductive health or chronic disease, which can lead to clinically significant misunderstandings. These limitations threaten patient safety, informed consent, and trust in health systems when clinicians rely on AI as if it were a professional interpreter. At the same time, the large data sets required to train and maintain these systems create new concerns about surveillance, secondary use of linguistic data, and gaps in existing privacy protections. This viewpoint examines the ethical and structural implications of AI-mediated interpreting in clinical and public health settings, arguing that its routine use as a replacement for qualified interpreters would normalize a lower standard of care for people with Non-English Language Preference and reinforce existing health disparities. Instead, AI tools should be treated as optional, carefully evaluated supplements that operate under the supervision of trained clinicians and professional interpreters, within clear regulatory guardrails for transparency, accountability, and community oversight. 
The paper concludes that language access must remain grounded in human expertise, language rights, and structural commitments to equity, rather than in cost-saving promises of automated systems.</p></abstract><kwd-group><kwd>artificial intelligence</kwd><kwd>AI-mediated interpreting</kwd><kwd>language access</kwd><kwd>health equity</kwd><kwd>clinical communication</kwd></kwd-group></article-meta></front><body><sec id="s1" sec-type="intro"><title>Introduction</title><p>Artificial intelligence (AI) is rapidly being integrated into public health practice [<xref ref-type="bibr" rid="ref1">1</xref>]. Among its most visible and controversial uses are AI-mediated interpreting services, including real-time translation platforms and chatbot-based tools [<xref ref-type="bibr" rid="ref2">2</xref>]. These technologies are promoted as scalable solutions to improve access for individuals with Non-English Language Preference (NELP), a population estimated to include more than 25 million people in the United States [<xref ref-type="bibr" rid="ref3">3</xref>]. However, the use of these systems for medical interpretation raises immediate ethical concerns related to accuracy, autonomy, and equity. Acknowledging these realities, this viewpoint focuses not on whether AI tools can be preferable to no interpretation at all, but on the ethical and structural risks of normalizing AI-mediated interpreting as an acceptable substitute for qualified language services in routine clinical care.</p><p>In light of these concerns, uncritical adoption of AI interpreting poses ethical and structural risks, particularly for patient safety, autonomy, and equity [<xref ref-type="bibr" rid="ref4">4</xref>]. 
Unlike professional interpreters who are trained to manage cultural nuance and medical terminology [<xref ref-type="bibr" rid="ref5">5</xref>], AI systems rely on training data that often underrepresent Indigenous languages, regional dialects, and community-specific expressions [<xref ref-type="bibr" rid="ref6">6</xref>]. Errors in translation can compromise informed consent, distort sensitive conversations about reproductive health or chronic disease, and undermine trust in both clinical encounters and public health communication [<xref ref-type="bibr" rid="ref7">7</xref>].</p><p>These concerns are reflected in current evaluations of AI translation tools. Systematic reviews show that although AI translation tools can perform reasonably well when translating from English, accuracy declines substantially when translating into English, particularly for non-European languages [<xref ref-type="bibr" rid="ref8">8</xref>]. Technical research has documented incremental improvements in grammatical recognition, such as tense translation in Chinese-English systems, but these advances remain limited to controlled corpora (ie, collections of text and speech data used to develop and evaluate machine translation models) and fail to capture the cultural and contextual dimensions essential to health care [<xref ref-type="bibr" rid="ref9">9</xref>]. The integrity of AI translation research has also been questioned due to persistent concerns regarding evaluation practices, transparency, and reproducibility in AI-based language systems [<xref ref-type="bibr" rid="ref10">10</xref>]. Such developments highlight not only technical shortcomings but also broader concerns about hype, oversight, and accountability.</p><p>Taken together, these issues reveal why AI translation cannot be treated as a substitute for professional interpretation in public health practice. 
Instead, its use must be guided by ethics, equity, and structural competency, ensuring that efficiency and cost-effectiveness do not come at the expense of accuracy, patient rights, and trust. This viewpoint analyzes the ethical risks of AI-mediated interpreting, outlines guardrails for responsible implementation, and considers policy implications for equitable integration.</p></sec><sec id="s2"><title>Technical and Linguistic Limitations of AI Interpretation</title><p>The technical performance of AI interpretation tools reveals both progress and persistent shortcomings [<xref ref-type="bibr" rid="ref8">8</xref>]. Most systems are built on large-scale neural machine translation models that optimize statistical accuracy across widely spoken languages [<xref ref-type="bibr" rid="ref11">11</xref>]. However, this optimization produces systematic blind spots: performance is strongest for languages with abundant training data and weakest for low-resource and Indigenous languages [<xref ref-type="bibr" rid="ref12">12</xref>]. In this context, &#x201C;low-resource languages&#x201D; refers to languages for which limited digitized text, speech data, or annotated training materials are available for AI model development. Such disparities are not trivial&#x2014;they map onto global and domestic inequities, leaving the very populations most dependent on language access at greater risk of miscommunication. 
Although AI translation systems may perform comparatively better for high-resource languages such as Spanish, any potential benefit is highly context-dependent and limited to low-risk scenarios where professional interpretation is unavailable; differential performance across languages raises serious equity and safety concerns.</p><p>For example, consider a routine outpatient encounter in which a patient with NELP describes intermittent chest tightness using an idiomatic expression that, when rendered literally by an AI translation system, is conveyed as &#x201C;discomfort&#x201D; rather than &#x201C;pressure.&#x201D; The clinician, relying on the translated output, may interpret the symptom as benign and defer further evaluation. A professional interpreter, by contrast, would be trained to clarify the patient&#x2019;s meaning, recognize the potential clinical significance, and convey the urgency embedded in the original phrasing. In this scenario, the translation error is subtle rather than overt, yet it meaningfully alters clinical interpretation and risk assessment, illustrating how AI-mediated interpreting can introduce safety risks without obvious signals of failure.</p><p>Apart from language availability, AI models struggle with the communicative complexity of health encounters. Clinical communication frequently involves layered terminology, idioms, and pragmatic features such as hedging or expressions of uncertainty [<xref ref-type="bibr" rid="ref13">13</xref>]. Because most AI translation systems are still trained on broad, nonmedical data, they often produce literal word-for-word renderings rather than contextually accurate translations [<xref ref-type="bibr" rid="ref14">14</xref>]. In clinical and public health settings, this can shift the tone and meaning of communication&#x2014;for example, turning cautious or conditional medical advice into statements that sound definitive, or softening urgent guidance into something that appears optional. 
Such distortions not only change the information being conveyed but also risk undermining patients&#x2019; understanding, informed decision-making, and trust in health professionals.</p><p>Context dependence is another unresolved challenge. While technical evaluations often report improvements in grammatical recognition or lexical choice, these gains are typically demonstrated in isolated sentence-level translations [<xref ref-type="bibr" rid="ref15">15</xref>]. Real encounters involve extended dialogue, code-switching, and back-and-forth clarification&#x2014;conditions under which current systems exhibit degradation in coherence and consistency [<xref ref-type="bibr" rid="ref14">14</xref>]. For example, terminology may be translated differently within the same conversation, leading to patient confusion about diagnoses, treatment instructions, or medication use.</p><p>Finally, AI translation models are not designed to detect when they are likely to fail. Unlike human interpreters, who can request clarification or signal uncertainty, AI systems deliver their outputs with apparent confidence regardless of underlying accuracy [<xref ref-type="bibr" rid="ref16">16</xref>]. This &#x201C;confidence illusion&#x201D; increases the danger of undetected errors in high-stakes environments such as emergency care or consent discussions.</p><p>Taken together, these limitations demonstrate that the technical progress of AI interpreting remains insufficient to guarantee accuracy, consistency, and safety in public health and clinical practice.</p></sec><sec id="s3"><title>Data Security and Confidentiality Risks</title><p>Beyond issues of accuracy, AI-mediated interpreting also raises serious concerns regarding data security and patient confidentiality. Most commercially available translation and chatbot systems are hosted on external servers and require transmitting speech or text data across networks outside the clinical environment. 
This creates risks of unauthorized access, data storage without consent, or secondary uses of sensitive information such as marketing or algorithm training [<xref ref-type="bibr" rid="ref17">17</xref>]. In public health practice, these risks are not hypothetical&#x2014;leaked or improperly managed health data can expose entire communities to stigma, discrimination, or even legal jeopardy.</p><p>Such vulnerabilities directly conflict with existing privacy frameworks such as the Health Insurance Portability and Accountability Act (HIPAA) in the United States, which mandates strict safeguards around the handling of protected health information [<xref ref-type="bibr" rid="ref18">18</xref>]. Unlike professional interpreters, who are trained to maintain confidentiality and bound by institutional or legal standards, AI systems have no inherent mechanism for accountability when breaches occur [<xref ref-type="bibr" rid="ref19">19</xref>]. Furthermore, patients may be unaware that their personal health details are being routed through third-party systems, limiting their ability to provide meaningful informed consent. 
<xref ref-type="table" rid="table1">Table 1</xref> summarizes key risks and ethical implications of AI-mediated interpretation in public health.</p><table-wrap id="t1" position="float"><label>Table 1.</label><caption><p>Risks and ethical implications of AI-mediated interpreting in clinical encounters.</p></caption><table id="table1" frame="hsides" rules="groups"><thead><tr><td align="left" valign="bottom">Domain</td><td align="left" valign="bottom">Key risks identified</td><td align="left" valign="bottom">Clinical implications</td></tr></thead><tbody><tr><td align="left" valign="top">Linguistic<break/>accuracy</td><td align="left" valign="top">Literal rendering; inconsistent term mapping; unflagged uncertainty (&#x201C;confidence illusion&#x201D;)</td><td align="left" valign="top">Incorrect clinical interpretation; inappropriate triage/management; documentation errors</td></tr><tr><td align="left" valign="top">Equity in access</td><td align="left" valign="top">Performance gaps by language data availability; limited support for Indigenous/low-resource varieties</td><td align="left" valign="top">Unequal communication quality; differential risk of error; exacerbation of disparities</td></tr><tr><td align="left" valign="top">Patient safety and informed consent</td><td align="left" valign="top">Distorted hedging/urgency; loss of pragmatic meaning in sensitive topics</td><td align="left" valign="top">Compromised informed consent; delayed diagnosis/treatment; avoidable harm</td></tr><tr><td align="left" valign="top">Confidentiality and data security</td><td align="left" valign="top">Third-party processing/storage; unclear retention/secondary use; weak auditability</td><td align="left" valign="top">Unauthorized disclosure risk; reduced willingness to disclose; legal/compliance exposure</td></tr><tr><td align="left" valign="top">Ethical and structural implications</td><td align="left" valign="top">Substitution for qualified interpreters; normalization of a lower standard 
for NELP<sup><xref ref-type="table-fn" rid="table1fn1">a</xref></sup> patients</td><td align="left" valign="top">Erosion of language rights; reduced trust in institutions; reinforcement of structural inequities</td></tr></tbody></table><table-wrap-foot><fn id="table1fn1"><p><sup>a</sup>NELP: Non-English Language Preference.</p></fn></table-wrap-foot></table-wrap><p>These data governance gaps highlight that the risks of AI interpretation are not only linguistic but structural. Without enforceable standards for data handling, encryption, and storage, reliance on AI tools for medical or public health communication could compromise patient trust and institutional integrity, with downstream effects on care-seeking and participation in public health programs.</p><p>This table summarizes key domains of risk associated with AI-mediated interpreting and their clinical implications. No numerical data were generated.</p></sec><sec id="s4"><title>Ethical Considerations</title><p>Ethics approval was not applicable as this viewpoint does not involve human participants, human data, human tissue, or any identifiable personal data.</p></sec><sec id="s5" sec-type="conclusions"><title>Conclusion</title><p>AI-mediated interpreting illustrates the tension between technological innovation and public health responsibility. These tools expand access and promise efficiency for populations with NELP, but their current limitations&#x2014;ranging from linguistic inaccuracies to data security vulnerabilities&#x2014;pose risks that threaten patient safety, confidentiality, and trust. Treating AI as a replacement for professional interpretation risks normalizing inequities and undermining ethical obligations to protect vulnerable communities.</p><p>The path forward is not outright rejection but cautious, principled integration. 
AI tools may serve as supplemental aids when professional interpreters are unavailable, but their deployment must be governed by enforceable standards for accuracy, transparency, and privacy. Some limited applications&#x2014;such as translation of standardized materials or carefully constrained use in high-resource languages&#x2014;may warrant cautious exploration. Even in these contexts, however, variability in dialect, health literacy, and clinical framing limits assumptions of safety and underscores the need for clear boundaries and oversight rather than broad endorsement.</p><p>Responsibility for establishing and enforcing these guardrails is shared. Health systems and public health agencies play a central role through procurement decisions, staff training, and oversight of clinical use, while technology vendors must ensure transparency around model limitations, data handling, and intended use. Regulators and accrediting bodies can reinforce these efforts by setting minimum standards for certification and independent auditing, particularly for tools used in high-stakes clinical and consent-related encounters. Framing AI-mediated interpreting as a patient safety issue, rather than solely a cost-saving tool, is essential to ethical and equitable implementation.</p><p>Recognizing language access as both a structural competency and a patient right is essential. 
Ultimately, aligning technological adoption with ethical safeguards and obligations will determine whether AI in public health functions as a bridge to equity or a source of new disparities.</p></sec></body><back><notes><sec><title>Funding</title><p>This work received no specific funding.</p></sec><sec><title>Data Availability</title><p>No datasets were generated or analyzed for this viewpoint.</p></sec></notes><fn-group><fn fn-type="con"><p>ALV is the sole author and was responsible for conceptualization, analysis, writing, and revision of the manuscript.</p></fn><fn fn-type="conflict"><p>None declared.</p></fn></fn-group><glossary><title>Abbreviations</title><def-list><def-item><term id="abb1">AI</term><def><p>artificial intelligence</p></def></def-item><def-item><term id="abb2">HIPAA</term><def><p>Health Insurance Portability and Accountability Act</p></def></def-item><def-item><term id="abb3">NELP</term><def><p>Non-English Language Preference</p></def></def-item></def-list></glossary><ref-list><title>References</title><ref id="ref1"><label>1</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Olawade</surname><given-names>DB</given-names> </name><name name-style="western"><surname>Wada</surname><given-names>OJ</given-names> </name><name name-style="western"><surname>David-Olawade</surname><given-names>AC</given-names> </name><name name-style="western"><surname>Kunonga</surname><given-names>E</given-names> </name><name name-style="western"><surname>Abaire</surname><given-names>O</given-names> </name><name name-style="western"><surname>Ling</surname><given-names>J</given-names> </name></person-group><article-title>Using artificial intelligence to improve public health: a narrative review</article-title><source>Front Public Health</source><year>2023</year><volume>11</volume><fpage>1196397</fpage><pub-id pub-id-type="doi">10.3389/fpubh.2023.1196397</pub-id><pub-id 
pub-id-type="medline">37954052</pub-id></nlm-citation></ref><ref id="ref2"><label>2</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Olsavszky</surname><given-names>V</given-names> </name><name name-style="western"><surname>Bazari</surname><given-names>M</given-names> </name><name name-style="western"><surname>Dai</surname><given-names>TB</given-names> </name><etal/></person-group><article-title>Digital translation platform (Translatly) to overcome communication barriers in clinical care: pilot study</article-title><source>JMIR Form Res</source><year>2025</year><month>03</month><day>14</day><volume>9</volume><fpage>e63095</fpage><pub-id pub-id-type="doi">10.2196/63095</pub-id><pub-id pub-id-type="medline">39451122</pub-id></nlm-citation></ref><ref id="ref3"><label>3</label><nlm-citation citation-type="web"><article-title>Limited English proficiency (LEP)</article-title><source>US Department of Health and Human Services</source><year>2021</year><access-date>2025-09-21</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://www.hhs.gov/civil-rights/for-individuals/special-topics/limited-english-proficiency/index.html">https://www.hhs.gov/civil-rights/for-individuals/special-topics/limited-english-proficiency/index.html</ext-link></comment></nlm-citation></ref><ref id="ref4"><label>4</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Paterson</surname><given-names>JM</given-names> </name></person-group><article-title>AI mimicking and interpreting humans: legal and ethical reflections</article-title><source>J Bioeth Inq</source><year>2025</year><month>09</month><volume>22</volume><issue>3</issue><fpage>539</fpage><lpage>550</lpage><pub-id pub-id-type="doi">10.1007/s11673-025-10424-9</pub-id><pub-id 
pub-id-type="medline">40504451</pub-id></nlm-citation></ref><ref id="ref5"><label>5</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Heath</surname><given-names>M</given-names> </name><name name-style="western"><surname>Hvass</surname><given-names>AMF</given-names> </name><name name-style="western"><surname>Wejse</surname><given-names>CM</given-names> </name></person-group><article-title>Interpreter services and effect on healthcare - a systematic review of the impact of different types of interpreters on patient outcome</article-title><source>J Migr Health</source><year>2023</year><volume>7</volume><fpage>100162</fpage><pub-id pub-id-type="doi">10.1016/j.jmh.2023.100162</pub-id><pub-id pub-id-type="medline">36816444</pub-id></nlm-citation></ref><ref id="ref6"><label>6</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Perera</surname><given-names>M</given-names> </name><name name-style="western"><surname>Vidanaarachchi</surname><given-names>R</given-names> </name><name name-style="western"><surname>Chandrashekeran</surname><given-names>S</given-names> </name><name name-style="western"><surname>Kennedy</surname><given-names>M</given-names> </name><name name-style="western"><surname>Kennedy</surname><given-names>B</given-names> </name><name name-style="western"><surname>Halgamuge</surname><given-names>S</given-names> </name></person-group><article-title>Indigenous peoples and artificial intelligence: a systematic review and future directions</article-title><source>Big Data Soc</source><year>2025</year><month>06</month><volume>12</volume><issue>2</issue><pub-id pub-id-type="doi">10.1177/20539517251349170</pub-id></nlm-citation></ref><ref id="ref7"><label>7</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Flores</surname><given-names>G</given-names> 
</name><name name-style="western"><surname>Abreu</surname><given-names>M</given-names> </name><name name-style="western"><surname>Barone</surname><given-names>CP</given-names> </name><name name-style="western"><surname>Bachur</surname><given-names>R</given-names> </name><name name-style="western"><surname>Lin</surname><given-names>H</given-names> </name></person-group><article-title>Errors of medical interpretation and their potential clinical consequences: a comparison of professional versus ad hoc versus no interpreters</article-title><source>Ann Emerg Med</source><year>2012</year><month>11</month><volume>60</volume><issue>5</issue><fpage>545</fpage><lpage>553</lpage><pub-id pub-id-type="doi">10.1016/j.annemergmed.2012.01.025</pub-id><pub-id pub-id-type="medline">22424655</pub-id></nlm-citation></ref><ref id="ref8"><label>8</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Genovese</surname><given-names>A</given-names> </name><name name-style="western"><surname>Borna</surname><given-names>S</given-names> </name><name name-style="western"><surname>Gomez-Cabello</surname><given-names>CA</given-names> </name><etal/></person-group><article-title>Artificial intelligence in clinical settings: a systematic review of its role in language translation and interpretation</article-title><source>Ann Transl Med</source><year>2024</year><month>12</month><day>24</day><volume>12</volume><issue>6</issue><fpage>117</fpage><pub-id pub-id-type="doi">10.21037/atm-24-162</pub-id><pub-id pub-id-type="medline">39817236</pub-id></nlm-citation></ref><ref id="ref9"><label>9</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Li</surname><given-names>X</given-names> </name></person-group><article-title>Adoption of wireless network and artificial intelligence algorithm in Chinese-English tense translation</article-title><source>Comput Intell 
Neurosci</source><year>2022</year><month>06</month><day>11</day><volume>2022</volume><fpage>1</fpage><lpage>10</lpage><pub-id pub-id-type="doi">10.1155/2022/1662311</pub-id></nlm-citation></ref><ref id="ref10"><label>10</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Fehr</surname><given-names>J</given-names> </name><name name-style="western"><surname>Citro</surname><given-names>B</given-names> </name><name name-style="western"><surname>Malpani</surname><given-names>R</given-names> </name><name name-style="western"><surname>Lippert</surname><given-names>C</given-names> </name><name name-style="western"><surname>Madai</surname><given-names>VI</given-names> </name></person-group><article-title>A trustworthy AI reality-check: the lack of transparency of artificial intelligence products in healthcare</article-title><source>Front Digit Health</source><year>2024</year><volume>6</volume><fpage>1267290</fpage><pub-id pub-id-type="doi">10.3389/fdgth.2024.1267290</pub-id><pub-id pub-id-type="medline">38455991</pub-id></nlm-citation></ref><ref id="ref11"><label>11</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Wang</surname><given-names>H</given-names> </name><name name-style="western"><surname>Wu</surname><given-names>H</given-names> </name><name name-style="western"><surname>He</surname><given-names>Z</given-names> </name><name name-style="western"><surname>Huang</surname><given-names>L</given-names> </name><name name-style="western"><surname>Church</surname><given-names>KW</given-names> </name></person-group><article-title>Progress in machine translation</article-title><source>Engineering (Beijing)</source><year>2022</year><month>11</month><volume>18</volume><fpage>143</fpage><lpage>153</lpage><pub-id pub-id-type="doi">10.1016/j.eng.2021.03.023</pub-id></nlm-citation></ref><ref id="ref12"><label>12</label><nlm-citation 
citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Tafa</surname><given-names>TO</given-names> </name><name name-style="western"><surname>Hashim</surname><given-names>SZM</given-names> </name><name name-style="western"><surname>Othman</surname><given-names>MS</given-names> </name><etal/></person-group><article-title>Machine translation performance for low-resource languages: a systematic literature review</article-title><source>IEEE Access</source><year>2025</year><volume>13</volume><fpage>72486</fpage><lpage>72505</lpage><pub-id pub-id-type="doi">10.1109/ACCESS.2025.3562918</pub-id></nlm-citation></ref><ref id="ref13"><label>13</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Rossi</surname><given-names>MG</given-names> </name></person-group><article-title>Understanding the impact of figurative language in medical discourse: toward a dialogic approach in healthcare communication</article-title><source>Patient Educ Couns</source><year>2025</year><month>08</month><volume>137</volume><fpage>108811</fpage><pub-id pub-id-type="doi">10.1016/j.pec.2025.108811</pub-id><pub-id pub-id-type="medline">40339512</pub-id></nlm-citation></ref><ref id="ref14"><label>14</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Naveen</surname><given-names>P</given-names> </name><name name-style="western"><surname>Trojovsk&#x00FD;</surname><given-names>P</given-names> </name></person-group><article-title>Overview and challenges of machine translation for contextually appropriate translations</article-title><source>iScience</source><year>2024</year><month>10</month><day>18</day><volume>27</volume><issue>10</issue><fpage>110878</fpage><pub-id pub-id-type="doi">10.1016/j.isci.2024.110878</pub-id><pub-id pub-id-type="medline">39391737</pub-id></nlm-citation></ref><ref 
id="ref15"><label>15</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Ding</surname><given-names>L</given-names> </name><name name-style="western"><surname>Zou</surname><given-names>D</given-names> </name></person-group><article-title>Automated writing evaluation systems: a systematic review of Grammarly, Pigai, and Criterion with a perspective on future directions in the age of generative artificial intelligence</article-title><source>Educ Inf Technol</source><year>2024</year><month>08</month><volume>29</volume><issue>11</issue><fpage>14151</fpage><lpage>14203</lpage><pub-id pub-id-type="doi">10.1007/s10639-023-12402-3</pub-id></nlm-citation></ref><ref id="ref16"><label>16</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Messeri</surname><given-names>L</given-names> </name><name name-style="western"><surname>Crockett</surname><given-names>MJ</given-names> </name></person-group><article-title>Artificial intelligence and illusions of understanding in scientific research</article-title><source>Nature</source><year>2024</year><month>03</month><volume>627</volume><issue>8002</issue><fpage>49</fpage><lpage>58</lpage><pub-id pub-id-type="doi">10.1038/s41586-024-07146-0</pub-id><pub-id pub-id-type="medline">38448693</pub-id></nlm-citation></ref><ref id="ref17"><label>17</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Rezaeikhonakdar</surname><given-names>D</given-names> </name></person-group><article-title>AI chatbots and challenges of HIPAA compliance for AI developers and vendors</article-title><source>J Law Med Ethics</source><year>2023</year><volume>51</volume><issue>4</issue><fpage>988</fpage><lpage>995</lpage><pub-id pub-id-type="doi">10.1017/jme.2024.15</pub-id></nlm-citation></ref><ref id="ref18"><label>18</label><nlm-citation 
citation-type="web"><article-title>Summary of the HIPAA privacy rule</article-title><source>HHS.gov</source><year>2013</year><month>07</month><day>26</day><access-date>2025-09-21</access-date><comment><ext-link ext-link-type="uri" xlink:href="https://www.hhs.gov/hipaa/for-professionals/privacy/laws-regulations/index.html">https://www.hhs.gov/hipaa/for-professionals/privacy/laws-regulations/index.html</ext-link></comment></nlm-citation></ref><ref id="ref19"><label>19</label><nlm-citation citation-type="journal"><person-group person-group-type="author"><name name-style="western"><surname>Jeyaraman</surname><given-names>M</given-names> </name><name name-style="western"><surname>Balaji</surname><given-names>S</given-names> </name><name name-style="western"><surname>Jeyaraman</surname><given-names>N</given-names> </name><name name-style="western"><surname>Yadav</surname><given-names>S</given-names> </name></person-group><article-title>Unraveling the ethical enigma: artificial intelligence in healthcare</article-title><source>Cureus</source><year>2023</year><volume>15</volume><issue>8</issue><fpage>e43262</fpage><pub-id pub-id-type="doi">10.7759/cureus.43262</pub-id></nlm-citation></ref></ref-list></back></article>