Machine Learning and Bias in Population Health Models

Data to Enable a Learning Health System
In Progress
Bias, Machine Learning
October 2021 – March 2022 | Funder: Canadian Institutes of Health Research

About

It is now well recognized that machine learning (ML) models can contain significant biases that harm the health and wellbeing of marginalized communities. These biases arise from numerous sources and can cause a model to predict an outcome less accurately for some subsets of the population. This project consists of two parts conducted in tandem: 1) a scoping review to identify ML models used in specific areas of population health and to examine whether and how biases were identified, and 2) guidelines to help model developers identify and prevent bias in ML models used in population health.
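As a minimal illustration of the kind of disparity described above, a model's accuracy can be disaggregated by subgroup rather than reported only in aggregate. The subgroups, labels, and predictions below are invented for demonstration and do not come from this project:

```python
# Hypothetical example: comparing a model's accuracy across population
# subgroups. All data below are invented for illustration only.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def subgroup_accuracies(records):
    """Compute per-subgroup accuracy from (group, y_true, y_pred) records."""
    groups = {}
    for group, y_true, y_pred in records:
        true_list, pred_list = groups.setdefault(group, ([], []))
        true_list.append(y_true)
        pred_list.append(y_pred)
    return {g: accuracy(t, p) for g, (t, p) in groups.items()}

# Invented records: (subgroup, true outcome, model prediction).
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 1), ("B", 1, 0),
]
print(subgroup_accuracies(records))  # prints {'A': 1.0, 'B': 0.5}
```

An overall accuracy of 0.75 here would mask the fact that the model is wrong twice as often for subgroup B, which is exactly the pattern a disaggregated evaluation is meant to surface.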

Impact

Our scoping review will comprehensively document the extent to which model developers have considered bias and the strategies they have used to mitigate it. Our guidelines will provide model developers and knowledge users with novel, evidence-informed guidance on mitigating bias.

Team Members

Contact Information

Resources