Paper Title
Adaptive Fairness-Aware Online Meta-Learning for Changing Environments
Paper Authors
Paper Abstract
The fairness-aware online learning framework has arisen as a powerful tool for the continual lifelong learning setting. The learner's goal is to sequentially learn new tasks as they arrive one after another over time, while ensuring statistical parity of each newly arriving task across different protected sub-populations (e.g., race and gender). A major drawback of existing methods is that they rely heavily on the i.i.d. assumption on the data and hence provide only static regret analysis for the framework. However, low static regret does not imply good performance in changing environments where tasks are sampled from heterogeneous distributions. To address the fairness-aware online learning problem in changing environments, in this paper we first construct a novel regret metric, FairSAR, by adding long-term fairness constraints onto a strongly adapted loss regret. Furthermore, to determine a good model parameter at each round, we propose a novel adaptive fairness-aware online meta-learning algorithm, FairSAOML, which is able to adapt to changing environments in both bias control and model precision. The problem is formulated as a bi-level convex-concave optimization over the model's primal and dual parameters, which are associated with the model's accuracy and fairness, respectively. Our theoretical analysis provides sub-linear upper bounds on both the loss regret and the violation of the cumulative fairness constraints. Experimental evaluation on several real-world datasets under changing-environment settings demonstrates that the proposed FairSAOML significantly outperforms alternatives based on the best prior online learning approaches.
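To give intuition for the convex-concave primal-dual formulation described above, the following is a minimal illustrative sketch, not the paper's actual FairSAOML algorithm: per-round gradient descent on a primal model parameter and gradient ascent on a non-negative dual variable attached to a demographic-parity constraint. All function and parameter names (`fairness_violation`, `eta_w`, `eta_lam`, the logistic model itself) are our own assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch of one primal-dual round for fairness-constrained
# online learning; NOT the paper's FairSAOML algorithm.

def logistic_loss(w, X, y):
    """Average logistic loss and its gradient w.r.t. w."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grad = X.T @ (p - y) / len(y)
    return loss, grad

def fairness_violation(w, X, group):
    """Demographic-parity gap: difference in mean predicted scores
    between the two protected groups, plus its gradient w.r.t. w."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    gap = p[group == 1].mean() - p[group == 0].mean()
    s = p * (1 - p)  # derivative of the sigmoid
    grad = (X[group == 1].T @ s[group == 1] / max((group == 1).sum(), 1)
            - X[group == 0].T @ s[group == 0] / max((group == 0).sum(), 1))
    return gap, grad

def primal_dual_round(w, lam, X, y, group, eta_w=0.1, eta_lam=0.1):
    """One round on the Lagrangian L(w, lam) = loss(w) + lam * gap(w):
    gradient descent on the primal w (accuracy), gradient ascent on the
    dual lam (fairness), with lam projected back to be non-negative."""
    _, g_loss = logistic_loss(w, X, y)
    gap, g_fair = fairness_violation(w, X, group)
    w_new = w - eta_w * (g_loss + lam * g_fair)
    lam_new = max(0.0, lam + eta_lam * gap)
    return w_new, lam_new
```

Intuitively, the dual variable `lam` grows while the fairness constraint is violated, which in turn increases the fairness term's weight in the primal update; this is the standard mechanism by which long-term constraint violations can be driven sub-linear.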