BIBLIOGRAPHY

Publications, working papers, and other research using data resources from IPUMS.

Full Citation

Title: Fairness Transferability Subject to Bounded Distribution Shift

Citation Type: Miscellaneous

Publication Year: 2022

DOI: 10.48550/arXiv.2206.00129

Abstract: Given an algorithmic predictor that is "fair" on some source distribution, will it still be fair on an unknown target distribution that differs from the source within some bound? In this paper, we study the transferability of statistical group fairness for machine learning predictors (i.e., classifiers or regressors) subject to bounded distribution shifts. Such shifts may be introduced by initial training data uncertainties, user adaptation to a deployed predictor, dynamic environments, or the use of pre-trained models in new settings. Herein, we develop a bound that characterizes such transferability, flagging potentially inappropriate deployments of machine learning for socially consequential tasks. We first develop a framework for bounding violations of statistical fairness subject to distribution shift, formulating a generic upper bound for transferred fairness violations as our primary result. We then develop bounds for specific worked examples, focusing on two commonly used fairness definitions (i.e., demographic parity and equalized odds) and two classes of distribution shift (i.e., covariate shift and label shift). Finally, we compare our theoretical bounds to deterministic models of distribution shift and against real-world data, finding that we are able to estimate fairness violation bounds in practice, even when simplifying assumptions are only approximately satisfied.
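To make the abstract's central quantity concrete, the following minimal sketch (not the paper's method; all data, the score distributions, and the threshold predictor are hypothetical) measures a demographic parity violation on a source distribution and again after a simulated covariate shift in one group:

```python
# Minimal sketch: demographic parity gap before and after a simulated
# covariate shift. Hypothetical data; not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

def dp_violation(scores_a, scores_b, threshold=0.5):
    """Demographic parity gap: |P(Yhat=1 | A=a) - P(Yhat=1 | A=b)|."""
    rate_a = np.mean(scores_a >= threshold)
    rate_b = np.mean(scores_b >= threshold)
    return abs(rate_a - rate_b)

# Source distribution: both groups draw scores from the same distribution,
# so the thresholded predictor is approximately fair under demographic parity.
src_a = rng.normal(0.5, 0.15, 10_000)
src_b = rng.normal(0.5, 0.15, 10_000)

# Target distribution: group B's covariates (here, scores) shift slightly,
# a bounded shift of the kind the paper's transferability bound addresses.
tgt_a = src_a
tgt_b = rng.normal(0.45, 0.15, 10_000)

print(f"source DP gap: {dp_violation(src_a, src_b):.3f}")
print(f"target DP gap: {dp_violation(tgt_a, tgt_b):.3f}")
```

Even this small mean shift opens a visible parity gap on the target distribution, which is the kind of transferred fairness violation the paper upper-bounds.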

Url: https://arxiv.org/abs/2206.00129

User Submitted?: No

Authors: Chen, Yatong; Raab, Reilly; Wang, Jialu; Liu, Yang

Publisher: arXiv

Data Collections: IPUMS CPS

Topics: Methodology and Data Collection, Population Data Science

Countries: