Paper Title

Towards Integrating Fairness Transparently in Industrial Applications

Paper Authors

Emily Dodwell, Cheryl Flynn, Balachander Krishnamurthy, Subhabrata Majumdar, Ritwik Mitra

Paper Abstract

Numerous Machine Learning (ML) bias-related failures in recent years have led to scrutiny of how companies incorporate aspects of transparency and accountability in their ML lifecycles. Companies have a responsibility to monitor ML processes for bias and mitigate any bias detected, ensure business product integrity, preserve customer loyalty, and protect brand image. Challenges specific to industry ML projects can be broadly categorized into principled documentation, human oversight, and the need for mechanisms that enable information reuse and improve cost efficiency. We highlight specific roadblocks and propose conceptual solutions on a per-category basis for ML practitioners and organizational subject matter experts. Our systematic approach tackles these challenges by integrating mechanized and human-in-the-loop components in bias detection, mitigation, and documentation of projects at various stages of the ML lifecycle. To motivate the implementation of our system -- SIFT (System to Integrate Fairness Transparently) -- we present its structural primitives with an example real-world use case on how it can be used to identify potential biases and determine appropriate mitigation strategies in a participatory manner.
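The abstract's pairing of mechanized bias detection with human oversight can be made concrete with a small illustrative check. The sketch below is a hypothetical example, not SIFT's actual implementation: the function, data, and tolerance threshold are all assumptions. It computes a demographic parity gap across groups and flags the result for human review when the gap exceeds a chosen tolerance, mirroring the mechanized-check-plus-human-review pattern the abstract describes.

```python
# Illustrative sketch only -- not code from the paper. Shows one simple
# mechanized bias check (demographic parity gap) that a pipeline could
# run before routing flagged results to a human reviewer.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Max difference in positive-prediction rates across groups."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Toy data and the 0.2 tolerance are illustrative assumptions.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)
if gap > 0.2:
    print(f"Flag for human review: demographic parity gap = {gap:.2f}")
```

In this toy run, group "a" receives positive predictions at a 0.75 rate versus 0.25 for group "b", so the 0.50 gap exceeds the tolerance and the check escalates to a reviewer rather than deciding a mitigation on its own.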
