Paper Title

Loss Minimization through the Lens of Outcome Indistinguishability

Paper Authors

Parikshit Gopalan, Lunjia Hu, Michael P. Kim, Omer Reingold, Udi Wieder

Paper Abstract

We present a new perspective on loss minimization and the recent notion of Omniprediction through the lens of Outcome Indistinguishability. For a collection of losses and hypothesis class, omniprediction requires that a predictor provide a loss-minimization guarantee simultaneously for every loss in the collection compared to the best (loss-specific) hypothesis in the class. We present a generic template to learn predictors satisfying a guarantee we call Loss Outcome Indistinguishability. For a set of statistical tests--based on a collection of losses and hypothesis class--a predictor is Loss OI if it is indistinguishable (according to the tests) from Nature's true probabilities over outcomes. By design, Loss OI implies omniprediction in a direct and intuitive manner. We simplify Loss OI further, decomposing it into a calibration condition plus multiaccuracy for a class of functions derived from the loss and hypothesis classes. By careful analysis of this class, we give efficient constructions of omnipredictors for interesting classes of loss functions, including non-convex losses. This decomposition highlights the utility of a new multi-group fairness notion that we call calibrated multiaccuracy, which lies in between multiaccuracy and multicalibration. We show that calibrated multiaccuracy implies Loss OI for the important set of convex losses arising from Generalized Linear Models, without requiring full multicalibration. For such losses, we show an equivalence between our computational notion of Loss OI and a geometric notion of indistinguishability, formulated as Pythagorean theorems in the associated Bregman divergence. We give an efficient algorithm for calibrated multiaccuracy with computational complexity comparable to that of multiaccuracy. In all, calibrated multiaccuracy offers an interesting tradeoff point between efficiency and generality in the omniprediction landscape.
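For reference, the guarantees named in the abstract can be written out explicitly. The notation below is a reconstruction from the abstract and standard usage in this literature, not text from this page: y ∈ {0,1} is the outcome, p is the learned predictor, p* is Nature's true conditional probability, k_ℓ is the optimal post-processing of a predicted probability under loss ℓ, and ∂ℓ(t) = ℓ(1,t) − ℓ(0,t) is the discrete derivative of ℓ.

\[
\text{Omniprediction:}\quad \forall\, \ell \in \mathcal{L}:\ \ \mathbb{E}\big[\ell\big(y,\, k_\ell(p(x))\big)\big] \;\le\; \min_{c \in \mathcal{C}} \mathbb{E}\big[\ell\big(y,\, c(x)\big)\big] + \varepsilon
\]

\[
\text{Loss OI:}\quad \forall\, \ell \in \mathcal{L},\ c \in \mathcal{C}:\ \ \mathbb{E}_{\,y \sim p^*(x)}\Big[\ell\big(y,\, k_\ell(p(x))\big) - \ell\big(y,\, c(x)\big)\Big] \;\approx\; \mathbb{E}_{\,\tilde{y} \sim p(x)}\Big[\ell\big(\tilde{y},\, k_\ell(p(x))\big) - \ell\big(\tilde{y},\, c(x)\big)\Big]
\]

Under this reading, the decomposition described in the abstract says that Loss OI holds whenever p is calibrated and multiaccurate for the derived class \(\{\partial\ell \circ c : \ell \in \mathcal{L},\ c \in \mathcal{C}\}\), i.e. \(\big|\,\mathbb{E}\big[(y - p(x))\,\partial\ell(c(x))\big]\big| \le \varepsilon\) for every such pair.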
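The efficient algorithm for calibrated multiaccuracy mentioned at the end of the abstract can be read as alternating boosting-style multiaccuracy updates with a recalibration step; both operations weakly decrease squared error, which bounds the number of iterations. Below is a minimal, hypothetical Python sketch under that reading; the names (calibrated_multiaccuracy, audit_fns, recalibrate) and the binning-based recalibration are illustrative choices, not the paper's code.

import numpy as np

def recalibrate(p, y, n_bins=20):
    # Replace each prediction with the empirical outcome mean of its bin.
    # Recalibration never increases squared error, so it composes safely
    # with the multiaccuracy updates below.
    bins = np.clip((p * n_bins).astype(int), 0, n_bins - 1)
    q = p.copy()
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            q[mask] = y[mask].mean()
    return q

def calibrated_multiaccuracy(x, y, audit_fns, eps=0.01, eta=0.1, max_iter=1000):
    # audit_fns: callables c with c(x) in [-1, 1]^n, standing in for the
    # class of functions derived from the losses and hypothesis class.
    p = np.full(len(y), y.mean())  # start from the base rate
    for _ in range(max_iter):
        p = recalibrate(p, y)
        # Audit: find the function most correlated with the residual y - p.
        c_star = max(audit_fns, key=lambda c: abs(np.mean((y - p) * c(x))))
        corr = np.mean((y - p) * c_star(x))
        if abs(corr) <= eps:
            return p  # calibrated, and multiaccurate w.r.t. audit_fns
        # Correlation above eps: a small step along c_star reduces squared
        # error by roughly corr**2, so the loop terminates.
        p = np.clip(p + eta * np.sign(corr) * c_star(x), 0.0, 1.0)
    return p

As a usage illustration, audit_fns could be a finite list of feature projections, e.g. [lambda x, j=j: x[:, j] for j in range(x.shape[1])] for features scaled to [-1, 1]; the paper's guarantees concern the richer derived class sketched above.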
