@Article{info:doi/10.2196/55820, author="Yaseliani, Mohammad and Noor-E-Alam, Md and Hasan, Md Mahmudul", title="Mitigating Sociodemographic Bias in Opioid Use Disorder Prediction: Fairness-Aware Machine Learning Framework", journal="JMIR AI", year="2024", month="Aug", day="20", volume="3", pages="e55820", keywords="opioid use disorder; fairness and bias; bias mitigation; machine learning; majority voting", abstract="Background: Opioid use disorder (OUD) is a critical public health crisis in the United States, affecting >5.5 million Americans in 2021. Machine learning has been used to predict patient risk of incident OUD. However, little is known about the fairness and bias of these predictive models. Objective: The aims of this study are two-fold: (1) to develop a machine learning bias mitigation algorithm for sociodemographic features and (2) to develop a fairness-aware weighted majority voting (WMV) classifier for OUD prediction. Methods: We used the 2020 National Survey on Drug Use and Health data to develop a neural network (NN) model using stochastic gradient descent (SGD; NN-SGD) and an NN model using Adam (NN-Adam) optimizers and evaluated sociodemographic bias by comparing the area under the curve values. A bias mitigation algorithm, based on equality of odds, was implemented to minimize disparities in specificity and recall. Finally, a WMV classifier was developed for fairness-aware prediction of OUD. To further analyze bias detection and mitigation, we did a 1-N matching of OUD to non-OUD cases, controlling for socioeconomic variables, and evaluated the performance of the proposed bias mitigation algorithm and WMV classifier. Results: Our bias mitigation algorithm substantially reduced bias with NN-SGD, by 21.66{\%} for sex, 1.48{\%} for race, and 21.04{\%} for income, and with NN-Adam by 16.96{\%} for sex, 8.87{\%} for marital status, 8.45{\%} for working condition, and 41.62{\%} for race. 
The fairness-aware WMV classifier achieved a recall of 85.37{\%} and 92.68{\%} and an accuracy of 58.85{\%} and 90.21{\%} using NN-SGD and NN-Adam, respectively. The results after matching also indicated remarkable bias reduction with NN-SGD and NN-Adam, respectively, as follows: sex (0.14{\%} vs 0.97{\%}), marital status (12.95{\%} vs 10.33{\%}), working condition (14.79{\%} vs 15.33{\%}), race (60.13{\%} vs 41.71{\%}), and income (0.35{\%} vs 2.21{\%}). Moreover, the fairness-aware WMV classifier achieved high performance with a recall of 100{\%} and 85.37{\%} and an accuracy of 73.20{\%} and 89.38{\%} using NN-SGD and NN-Adam, respectively. Conclusions: The application of the proposed bias mitigation algorithm shows promise in reducing sociodemographic bias, with the WMV classifier confirming bias reduction and high performance in OUD prediction.", issn="2817-1705", doi="10.2196/55820", url="https://ai.jmir.org/2024/1/e55820" }