Studying Bias in Diagnostic and Predictive Models in AI-Driven Healthcare Systems: A Focus on Bias Detection, Mitigation, and Ethical Considerations
Presentation Type
Poster
Student
Yes
Track
Health Care Application
Abstract
The integration of artificial intelligence (AI) in healthcare systems offers transformative potential, enhancing diagnostic accuracy, predictive capabilities, and patient care delivery. However, these advancements are hindered by various forms of bias, including training data bias, sampling bias, design bias, label bias, feature bias, temporal bias, human-AI interaction bias, and evaluation bias. This study examines the sources and impacts of these biases, with a focus on their implications for health equity, fairness, and ethical considerations. Through an extensive literature review, the study reveals that biases in AI systems often arise from unrepresentative datasets used in training models, flawed model designs, and evolving societal trends, leading to disparities in healthcare outcomes for marginalized populations. Mitigation strategies such as improving data diversity, applying fairness-aware algorithms, using ensemble methods, and incorporating continuous bias detection frameworks are analyzed. In this study, synthetic data were generated to examine evaluation bias and mitigation methods, and the resulting algorithm was validated on breast cancer data.
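The evaluation-bias analysis described above can be illustrated with a minimal sketch. The abstract does not specify the study's implementation, so the following assumes scikit-learn's built-in breast cancer dataset as a stand-in for the validation data and a synthetic binary group attribute as a stand-in for a demographic variable; the gap in subgroup accuracy is one simple signal of evaluation bias.

```python
# Minimal sketch (assumptions noted above): detecting evaluation bias
# as a performance gap between subgroups on held-out data.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in data: sklearn's breast cancer dataset plus a synthetic
# (randomly assigned) subgroup label -- NOT a real demographic attribute.
X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=len(y))

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Evaluation bias shows up as a metric gap across subgroups.
acc = {g: accuracy_score(y_te[g_te == g], pred[g_te == g]) for g in (0, 1)}
gap = abs(acc[0] - acc[1])
print(f"group 0 accuracy: {acc[0]:.3f}")
print(f"group 1 accuracy: {acc[1]:.3f}")
print(f"subgroup accuracy gap: {gap:.3f}")
```

Because the group labels here are random, any observed gap reflects sampling noise; in a real audit the same comparison over genuine demographic subgroups, repeated across resampled test sets, is what flags evaluation bias.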
Start Date
2-7-2025 1:00 PM
End Date
2-7-2025 2:30 PM
Location
Volstorff A