Paper Title

Fair Federated Learning via Bounded Group Loss

Paper Authors

Shengyuan Hu, Zhiwei Steven Wu, Virginia Smith

Paper Abstract

Fair prediction across protected groups is an important constraint for many federated learning applications. However, prior work studying group fair federated learning lacks formal convergence or fairness guarantees. In this work we propose a general framework for provably fair federated learning. In particular, we explore and extend the notion of Bounded Group Loss as a theoretically-grounded approach for group fairness. Using this setup, we propose a scalable federated optimization method that optimizes the empirical risk under a number of group fairness constraints. We provide convergence guarantees for the method as well as fairness guarantees for the resulting solution. Empirically, we evaluate our method across common benchmarks from fair ML and federated learning, showing that it can provide both fairer and more accurate predictions than baseline approaches.
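The Bounded Group Loss notion the abstract builds on requires that the average loss of every protected group stay below a common threshold. A minimal sketch of checking that constraint is below; the function name, the threshold `gamma`, and the slack convention are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def bounded_group_loss_slack(losses, groups, gamma):
    """Bounded Group Loss (BGL) check: the constraint holds at level
    gamma if every group's average loss is at most gamma. Returns a
    dict mapping each group to (avg group loss - gamma); a positive
    value means that group's constraint is violated."""
    losses = np.asarray(losses, dtype=float)
    groups = np.asarray(groups)
    return {g: losses[groups == g].mean() - gamma for g in np.unique(groups)}

# Toy example: per-sample losses and protected-group labels.
losses = [0.2, 0.4, 0.9, 1.1]
groups = ["a", "a", "b", "b"]
slack = bounded_group_loss_slack(losses, groups, gamma=0.5)
# Group "a" averages 0.3 (satisfied); group "b" averages 1.0 (violated).
```

In the constrained-optimization view the paper takes, each of these per-group inequalities becomes one fairness constraint on the empirical risk minimization problem.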
