Haoran Zhang


Machine learning models are increasingly being deployed in real-world clinical environments. However, these models often exhibit disparate performance across population groups, potentially leading to inequitable and discriminatory predictions. In this primer, we will discuss what it means for a model to be "fair" in the clinical setting by studying two algorithmic fairness definitions: group fairness and minimax fairness. We will analyze deep learning models for disease diagnosis from chest X-rays through the lens of these two definitions. We will then discuss algorithmic methods for achieving fairness, finding that such interventions can have serious unintended consequences. Next, we will specialize our analysis to the spurious correlation scenario, where models may use demographic attributes as shortcuts. Finally, we will question what the appropriate definition of fairness is in the clinical context, and advocate for investigating bias in the data whenever possible, as opposed to blindly applying algorithmic interventions.
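
The two fairness definitions named above can be made concrete with a small sketch. The helper names and the toy data below are illustrative assumptions, not from the talk: group fairness is represented here by the gap in true positive rate between groups (the "equal opportunity" variant), while minimax fairness is concerned with the error rate of the worst-off group.

```python
import numpy as np

def group_fairness_gap(y_true, y_pred, group):
    """Gap in true positive rate (TPR) between groups -- one common
    group-fairness criterion (equal opportunity)."""
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return max(tprs) - min(tprs)

def worst_group_error(y_true, y_pred, group):
    """Error rate of the worst-off group -- the quantity that
    minimax fairness seeks to minimize."""
    errors = []
    for g in np.unique(group):
        mask = group == g
        errors.append((y_pred[mask] != y_true[mask]).mean())
    return max(errors)

# Synthetic example: the model misses more positives in group 1.
y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0])
y_pred = np.array([1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])

print(group_fairness_gap(y_true, y_pred, group))   # TPR gap between groups
print(worst_group_error(y_true, y_pred, group))    # error of the worst group
```

Note that the two criteria can disagree: an intervention that equalizes TPRs across groups (shrinking the first quantity) may do so by degrading the better-off group, leaving the worst-group error unchanged or worse, which is one form of the unintended consequences discussed in the talk.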
