Paper Title
Bayes-Optimal Classifiers under Group Fairness
Paper Authors
Paper Abstract
Machine learning algorithms are becoming integrated into more and more high-stakes decision-making processes, such as in social welfare issues. Due to the need to mitigate the potentially disparate impacts of algorithmic predictions, many approaches have been proposed in the emerging area of fair machine learning. However, the fundamental problem of characterizing Bayes-optimal classifiers under various group fairness constraints has only been investigated in some special cases. Based on the classical Neyman-Pearson argument (Neyman and Pearson, 1933; Shao, 2003) for optimal hypothesis testing, this paper provides a unified framework for deriving Bayes-optimal classifiers under group fairness. This enables us to propose a group-based thresholding method we call FairBayes, which can directly control disparity and achieve an essentially optimal fairness-accuracy tradeoff. These advantages are supported by thorough experiments.
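To make the "group-based thresholding" idea concrete, the following is a minimal Python sketch, assuming a demographic-parity-style constraint: each group receives its own threshold on the predicted probability so that positive prediction rates match across groups. This is an illustrative simplification, not the paper's FairBayes estimator; the function names, the quantile-based choice of thresholds, the target acceptance rate, and the synthetic data are assumptions made here for illustration.

```python
import numpy as np

def group_thresholds(scores, groups, target_rate):
    """Pick a score threshold per group so that the fraction of scores
    above the threshold is roughly target_rate in every group, which
    equalizes positive prediction rates (demographic parity).
    Illustrative sketch only, not the paper's FairBayes procedure."""
    thresholds = {}
    for g in np.unique(groups):
        s = scores[groups == g]
        # The (1 - target_rate) quantile leaves ~target_rate of scores above it.
        thresholds[g] = np.quantile(s, 1.0 - target_rate)
    return thresholds

def predict_fair(scores, groups, thresholds):
    """Apply each sample's group-specific threshold to its score."""
    t = np.array([thresholds[g] for g in groups])
    return (scores >= t).astype(int)

# Toy usage with synthetic scores for two groups (hypothetical data).
rng = np.random.default_rng(0)
scores = np.concatenate([rng.beta(2, 5, 500), rng.beta(5, 2, 500)])
groups = np.concatenate([np.zeros(500, dtype=int), np.ones(500, dtype=int)])
ths = group_thresholds(scores, groups, target_rate=0.3)
yhat = predict_fair(scores, groups, ths)
for g in (0, 1):
    print(g, yhat[groups == g].mean())  # acceptance rate ~0.3 in both groups
```

The sketch only shows why per-group thresholds give direct control over disparity; the paper's contribution is characterizing which thresholds are Bayes-optimal under the fairness constraint, so that accuracy loss is minimized at the chosen disparity level.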