Paper Title

Promises and Challenges of Causality for Ethical Machine Learning

Authors

Aida Rahmattalabi, Alice Xiang

Abstract

In recent years, there has been increasing interest in causal reasoning for designing fair decision-making systems due to its compatibility with legal frameworks, interpretability for human stakeholders, and robustness to spurious correlations inherent in observational data, among other factors. The recent attention to causal fairness, however, has been accompanied by great skepticism due to practical and epistemological challenges with applying current causal fairness approaches in the literature. Motivated by the long-standing empirical work on causality in econometrics, social sciences, and biomedical sciences, in this paper we lay out the conditions for appropriate application of causal fairness under the "potential outcomes framework." We highlight key aspects of causal inference that are often ignored in the causal fairness literature. In particular, we discuss the importance of specifying the nature and timing of interventions on social categories such as race or gender. More precisely, instead of postulating an intervention on immutable attributes, we propose a shift in focus to their perceptions and discuss the implications for fairness evaluation. We argue that such conceptualization of the intervention is key in evaluating the validity of causal assumptions and conducting sound causal analysis, including avoiding post-treatment bias. Subsequently, we illustrate how causality can address the limitations of existing fairness metrics, including those that depend upon statistical correlations. Specifically, we introduce causal variants of common statistical notions of fairness, and we make a novel observation that under the causal framework there is no fundamental disagreement between different notions of fairness. Finally, we conduct extensive experiments where we demonstrate our approach for evaluating and mitigating unfairness, especially when post-treatment variables are present.
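The post-treatment bias the abstract warns about can be illustrated with a small simulation. The sketch below is not the paper's method; it is a hypothetical data-generating process (all variable names and coefficients are assumptions) in which the "treatment" is the perceived social category, as the paper proposes. A naive audit that conditions on a score influenced by that perception understates the true causal disparity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: the treatment is the *perceived* social category
# (randomized here), not an immutable attribute.
perceived = rng.integers(0, 2, n)        # A: perceived category (0/1)
skill = rng.normal(0.0, 1.0, n)          # U: pre-treatment covariate

# Post-treatment variable: an evaluation score influenced by perception.
score = skill + 0.5 * perceived + rng.normal(0.0, 1.0, n)

# Outcome (e.g., a hiring decision) depends on perception both directly
# and through the score.
decision = (score + 0.3 * perceived + rng.normal(0.0, 1.0, n) > 0).astype(float)

# Causal disparity: a difference in means identifies it here because
# `perceived` is randomized in this simulation.
ate = decision[perceived == 1].mean() - decision[perceived == 0].mean()

# Naive audit that conditions on the post-treatment score (comparing only
# "high-scoring" candidates): this induces post-treatment bias and
# shrinks the measured disparity.
high = score > 0
biased = (decision[(perceived == 1) & high].mean()
          - decision[(perceived == 0) & high].mean())

print(f"unadjusted causal disparity:              {ate:.3f}")
print(f"after conditioning on post-treatment var: {biased:.3f}")
```

Because the score is itself affected by the treatment, restricting to `score > 0` selects lower-skill individuals in the advantaged group and higher-skill individuals in the other, masking part of the disparity the unadjusted comparison reveals.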
