About
It is now well recognized that Machine Learning (ML) models can contain significant biases that affect the health and wellbeing of marginalized communities. Bias can arise from numerous sources, often resulting in a model predicting outcomes less accurately for some subsets of the population (see the sketch below). This project consists of two parts conducted in tandem:
1) A scoping review to identify ML models used in specific areas of population health and to examine whether and how biases were identified.
2) Guidelines to help model developers identify and prevent bias in ML models used in population health.
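To illustrate what this kind of bias can look like in practice, here is a minimal sketch, not part of the project's methodology, that compares a classifier's accuracy across population subgroups. The data, the "group" and "outcome" column names, and the model are all hypothetical stand-ins; scikit-learn, pandas, and NumPy are assumed. A large accuracy gap between groups is one signal of the performance disparities described above.

```python
# Minimal sketch: measuring per-subgroup accuracy of a classifier.
# All data and column names ("group", "outcome") are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000

# Synthetic data: two demographic groups; the feature is less
# predictive of the outcome in group "B", one common way that
# differential model performance arises.
group = rng.choice(["A", "B"], size=n, p=[0.8, 0.2])
x = rng.normal(size=n)
signal = np.where(group == "A", 2.0, 0.5)  # weaker signal for group B
outcome = (signal * x + rng.normal(size=n) > 0).astype(int)

df = pd.DataFrame({"x": x, "group": group, "outcome": outcome})
train, test = train_test_split(df, test_size=0.3, random_state=0)

model = LogisticRegression().fit(train[["x"]], train["outcome"])
test = test.assign(pred=model.predict(test[["x"]]))

# Overall accuracy can mask poor performance in a subgroup.
print("overall:", accuracy_score(test["outcome"], test["pred"]))
for g, sub in test.groupby("group"):
    print(f"group {g}:", accuracy_score(sub["outcome"], sub["pred"]))
```

Reporting metrics per subgroup, rather than only in aggregate, is one basic check that a bias-aware development workflow would surface.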
Impact
Our scoping review will comprehensively document the extent to which model developers have considered bias and will map mitigation strategies reported in the existing literature. Our guidelines will provide model developers and knowledge users with novel, evidence-informed guidance on identifying and mitigating bias.