Search Articles


Search Results (1 to 10 of 40 Results)



Impact of a Virtual Reality Video ("A Walk-Through Dementia") on YouTube Users: Topic Modeling Analysis


API: application programming interface; BERT: Bidirectional Encoder Representations from Transformers. The analyzed video comments are from a series of 360° videos titled “A Walk-Through Dementia,” the most viewed videos under the search terms “dementia” and “Alzheimer disease.” Developed by Alzheimer’s Research UK, this series aims to raise awareness of the impact of dementia on individuals’ lives by engaging the public, health care professionals, and caregivers.

Xiaoli Li, Xiaoyu Liu, Cheng Yin, Sandra Collins, Eman Alanazi

JMIR Form Res 2025;9:e67755

Automated Radiology Report Labeling in Chest X-Ray Pathologies: Development and Evaluation of a Large Language Model Framework


Methods that rely on BERT-based models, on the other hand, are constrained by BERT’s inherent limitations, such as its noncausal nature and limited context length. Despite their effectiveness in text classification tasks, BERT-based models have two key architectural limitations that constrain their performance in radiology report labeling. First, BERT’s bidirectional attention supports context aggregation but cannot model causal relationships in sequential data.

Abdullah Abdullah, Seong Tae Kim

JMIR Med Inform 2025;13:e68618
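
The context-length constraint mentioned above is easy to demonstrate. The following is a minimal sketch, assuming the Hugging Face transformers and torch packages and an illustrative checkpoint; it shows how tokens beyond BERT's 512-token limit are simply dropped, which is why long radiology reports are problematic for BERT-based labelers.

```python
# Minimal sketch: transformers + torch assumed; checkpoint is illustrative.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

report = "FINDINGS: lungs are clear, no effusion. " * 200  # stand-in long report

# BERT accepts at most 512 wordpiece tokens; the rest is silently cut off.
encoded = tokenizer(report, truncation=True, max_length=512, return_tensors="pt")
print(encoded["input_ids"].shape)  # torch.Size([1, 512])

full_length = len(tokenizer(report)["input_ids"])
print(f"{full_length - 512} tokens never reach the model")
```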

Identifying Patient-Reported Care Experiences in Free-Text Survey Comments: Topic Modeling Study


We found that researchers have taken different approaches to topic modeling of patient-reported experience, including latent Dirichlet allocation (LDA), nonnegative matrix factorization, Top2Vec, and BERT (bidirectional encoder representations from transformers). Many of these studies trained new models or otherwise involved what could be considered a high degree of model tuning.

Brian Steele, Paul Fairie, Kyle Kemp, Adam G D'Souza, Matthias Wilms, Maria Jose Santana

JMIR Med Inform 2025;13:e63466
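
For readers unfamiliar with the classical approaches named in this excerpt, the following is a minimal sketch of LDA and nonnegative matrix factorization with scikit-learn; the toy corpus and topic count are illustrative, not the study's setup.

```python
# Minimal sketch: scikit-learn assumed; corpus and topic count are illustrative.
from sklearn.decomposition import NMF, LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

comments = [
    "the nurses were kind and explained everything clearly",
    "waited four hours in the emergency department",
    "discharge instructions were confusing and rushed",
    "parking near the clinic is nearly impossible",
]

# LDA models raw term counts; NMF is usually paired with TF-IDF weights.
counts = CountVectorizer(stop_words="english").fit_transform(comments)
tfidf = TfidfVectorizer(stop_words="english").fit_transform(comments)

lda_weights = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)
nmf_weights = NMF(n_components=2, random_state=0).fit_transform(tfidf)

print(lda_weights.shape, nmf_weights.shape)  # each row: one document's topic weights
```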

Teenager Substance Use on Reddit: Mixed Methods Computational Analysis of Frames and Emotions


Initially, the BERT (Bidirectional Encoder Representations from Transformers)–based topic modeling algorithm identified topic clusters by capturing semantic relationships and patterns within the text. These clusters were provisionally labeled based on the most representative terms and phrases extracted by the algorithm.

Xinyu Zhang, Jianfeng Zhu, Deric R Kenne, Ruoming Jin

J Med Internet Res 2025;27:e59338
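
The workflow this excerpt describes, clustering on BERT embeddings and labeling clusters by their most representative terms, matches what the BERTopic library automates. A minimal sketch follows; `load_posts` is a hypothetical helper standing in for the study's Reddit corpus, and the parameters are illustrative.

```python
# Minimal sketch: bertopic package assumed; load_posts is a hypothetical helper.
from bertopic import BERTopic

posts = load_posts()  # assume: a list of several hundred Reddit post strings

topic_model = BERTopic(min_topic_size=10)
topics, probs = topic_model.fit_transform(posts)

# Provisional labels come from each cluster's most representative terms.
for topic_id in sorted(set(topics)):
    if topic_id != -1:  # -1 is BERTopic's outlier bucket
        print(topic_id, [word for word, _ in topic_model.get_topic(topic_id)[:5]])
```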

Autonomous International Classification of Diseases Coding Using Pretrained Language Models and Advanced Prompt Learning Techniques: Evaluation of an Automated Analysis System Using Medical Text


Coutinho and Martins [14] proposed a BERT model with a fine-tuning method for automatic ICD-10 coding of death certificates based on free-text descriptions and associated documents. Additionally, Yan et al [15] introduced RadBERT, an ensemble model combining BERT-base, Clinical-BERT, the robustly optimized BERT pretraining approach (RoBERTa), and BioMed-RoBERTa adapted for radiology.

Yan Zhuang, Junyan Zhang, Xiuxing Li, Chao Liu, Yue Yu, Wei Dong, Kunlun He

JMIR Med Inform 2025;13:e63020
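
A minimal sketch of the kind of fine-tuning setup such ICD-coding work builds on follows; the checkpoint, label count, and example text are illustrative assumptions, not the cited systems.

```python
# Minimal sketch: transformers + torch assumed; checkpoint, label count, and
# report text are illustrative, not the cited systems.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

NUM_ICD_CODES = 50  # illustrative; real ICD-10 label spaces are far larger

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=NUM_ICD_CODES,
    problem_type="multi_label_classification",  # one document, several codes
)

text = "Cause of death: acute myocardial infarction; contributing: type 2 diabetes."
inputs = tokenizer(text, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# One sigmoid per label: each ICD code is an independent yes/no decision.
predicted_codes = (torch.sigmoid(logits)[0] > 0.5).nonzero().flatten()
print(predicted_codes)  # untrained head, so the output is meaningless here
```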

Large Language Models for Mental Health Applications: Systematic Review


This criterion encompasses models such as GPT (OpenAI) and Bidirectional Encoder Representations from Transformers (BERT; Google AI). Although the standard BERT model, with only 0.34 billion parameters [29], does not meet the traditional criteria for “large,” its sophisticated bidirectional design and pivotal role in establishing new natural language processing (NLP) benchmarks justify its inclusion among notable LLMs [30].

Zhijun Guo, Alvina Lai, Johan H Thygesen, Joseph Farrington, Thomas Keen, Kezhi Li

JMIR Ment Health 2024;11:e57400
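
The 0.34 billion figure corresponds to the BERT-large checkpoint (BERT-base has roughly 110 million parameters) and can be verified directly; the following is a minimal sketch assuming the transformers package.

```python
# Minimal sketch: transformers + torch assumed.
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-large-uncased")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e9:.2f}B parameters")  # ~0.34B
```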

Fine-Tuned Bidirectional Encoder Representations From Transformers Versus ChatGPT for Text-Based Outpatient Department Recommendation: Comparative Study


To address this issue, we developed Korean Medical BERT (KM-BERT), a medical domain–specific pretrained BERT model, which was trained on a corpus of 6 million sentences from medical textbooks, health information news, and medical research papers [17]. Furthermore, we developed the fine-tuned KM-BERT model capable of recommending medical specialties based on general user queries [18]. Comparing these models can reveal which types of tasks each model is better suited to in the health care domain.

Eunbeen Jo, Hakje Yoo, Jong-Ho Kim, Young-Min Kim, Sanghoun Song, Hyung Joon Joo

JMIR Form Res 2024;8:e47814
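
The domain-adaptive pretraining step that produced KM-BERT, continued masked language modeling on an in-domain corpus, can be sketched as follows; the base checkpoint, placeholder corpus, and hyperparameters are illustrative assumptions, not KM-BERT's actual recipe.

```python
# Minimal sketch: transformers + datasets assumed; the checkpoint, placeholder
# corpus, and hyperparameters are illustrative, not KM-BERT's actual recipe.
from datasets import Dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

checkpoint = "bert-base-multilingual-cased"  # illustrative base model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

corpus = Dataset.from_dict({"text": ["...medical textbook sentences..."]})
tokenized = corpus.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

# Randomly mask 15% of tokens; the model learns to reconstruct them from context.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-bert-sketch", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```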

Exploring Public Emotions on Obesity During the COVID-19 Pandemic Using Sentiment Analysis and Topic Modeling: Cross-Sectional Study


In this study, we followed the original BERTopic framework, using BERT (Bidirectional Encoder Representations from Transformers) for document embeddings, UMAP (Uniform Manifold Approximation and Projection) for dimensionality reduction, and HDBSCAN (Hierarchical Density-Based Spatial Clustering of Applications with Noise) for clustering [16]. The process involves 4 main steps. BERT, a pretrained language model, is used to convert words and documents into meaningful real-valued vectors.

Jorge César Correia, Sarmad Shaharyar Ahmad, Ahmed Waqas, Hafsa Meraj, Zoltan Pataky

J Med Internet Res 2024;26:e52142
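
The pipeline this excerpt outlines can be wired up stage by stage. The following is a minimal sketch assuming the sentence-transformers, umap-learn, and hdbscan packages; `load_tweets` is a hypothetical helper, and the embedding model and hyperparameters are illustrative rather than the study's exact configuration.

```python
# Minimal sketch: sentence-transformers, umap-learn, and hdbscan assumed;
# load_tweets is a hypothetical helper standing in for the study's corpus.
import hdbscan
import umap
from sentence_transformers import SentenceTransformer

docs = load_tweets()  # assume: a list of several thousand short texts

# Step 1: BERT-style sentence embeddings.
embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(docs)

# Step 2: UMAP compresses the embeddings so density clustering stays tractable.
reduced = umap.UMAP(n_neighbors=15, n_components=5, metric="cosine").fit_transform(embeddings)

# Step 3: HDBSCAN groups dense regions into topics; the label -1 marks outliers.
# (BERTopic's final step, class-based TF-IDF, then turns each cluster into
# ranked topic terms.)
topic_labels = hdbscan.HDBSCAN(min_cluster_size=15).fit_predict(reduced)
print(sorted(set(topic_labels)))
```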

Leveraging Temporal Trends for Training Contextual Word Embeddings to Address Bias in Biomedical Applications: Development Study


As the embedding model, we chose BERT [1], a transformer-based model for contextualized word embeddings. We used a small version of BERT, named BERT-tiny [30], with 2 transformer layers and a hidden representation size of 128, pretrained on BookCorpus [31] and the English Wikipedia. Smaller models require fewer computational resources and are therefore more affordable and accessible. Rosin et al [32] have shown that BERT-tiny–based models were comparable to BERT-base in their ability to learn temporal trends.

Shunit Agmon, Uriel Singer, Kira Radinsky

JMIR AI 2024;3:e49546
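
A model of the size described can be instantiated from a configuration; the following is a minimal sketch with randomly initialized weights (in practice the pretrained BERT-tiny checkpoint would be loaded), where the head count and feed-forward width follow the published BERT-tiny configuration.

```python
# Minimal sketch: transformers assumed; weights are randomly initialized here,
# whereas the study loaded the pretrained BERT-tiny checkpoint.
from transformers import BertConfig, BertModel

config = BertConfig(
    num_hidden_layers=2,     # 2 transformer layers, as described above
    hidden_size=128,         # hidden representation size of 128
    num_attention_heads=2,   # must divide hidden_size evenly
    intermediate_size=512,   # feed-forward width, 4x the hidden size
)
model = BertModel(config)
print(sum(p.numel() for p in model.parameters()))  # roughly 4.4 million
```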

The Most Effective Interventions for Classification Model Development to Predict Chat Outcomes Based on the Conversation Content in Online Suicide Prevention Chats: Machine Learning Approach


Gao et al [12] found that pretrained BERT models did not outperform simpler methods for medical document classification. The simpler methods, a convolutional neural network and a hierarchical self-attention network, achieved similar performance with fewer learnable parameters. Ilias and Askounis [13] used local interpretable model-agnostic explanations (LIME) to identify the influential words in BERT classifications of dementia transcripts.

Salim Salmi, Saskia Mérelle, Renske Gilissen, Rob van der Mei, Sandjai Bhulai

JMIR Ment Health 2024;11:e57362
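
The LIME approach cited here perturbs an input text and fits a local surrogate model to see which words drive the classifier's output. The following is a minimal sketch; `predict_proba` is a hypothetical stand-in for a fine-tuned BERT classifier's probability function, and the class names and example utterance are illustrative.

```python
# Minimal sketch: the lime package assumed; predict_proba is a hypothetical
# stand-in for a fine-tuned BERT classifier's probability function.
import numpy as np
from lime.lime_text import LimeTextExplainer

def predict_proba(texts):
    # A real version would tokenize, run the BERT classifier, and softmax logits.
    return np.tile([0.3, 0.7], (len(texts), 1))

explainer = LimeTextExplainer(class_names=["control", "dementia"])
explanation = explainer.explain_instance(
    "well I um I forget where I put um things",  # illustrative transcript snippet
    predict_proba,
    num_features=5,  # report the 5 most influential words
)
print(explanation.as_list())  # (word, weight) pairs
```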