Paper Title
Fairness Score and Process Standardization: Framework for Fairness Certification in Artificial Intelligence Systems
Paper Authors
Abstract
Decisions made by various Artificial Intelligence (AI) systems greatly influence our day-to-day lives. With the increasing use of AI systems, it becomes crucial to know that they are fair, to identify the underlying biases in their decision-making, and to create a standardized framework to ascertain their fairness. In this paper, we propose a novel Fairness Score to measure the fairness of a data-driven AI system and a Standard Operating Procedure (SOP) for issuing Fairness Certification for such systems. Standardizing the Fairness Score and the audit process will ensure quality, reduce ambiguity, enable comparison, and improve the trustworthiness of AI systems. It will also provide a framework to operationalize the concept of fairness and facilitate the commercial deployment of such systems. Furthermore, a Fairness Certificate issued by a designated third-party auditing agency following the standardized process would strengthen the confidence of organizations in the AI systems they intend to deploy. The Bias Index proposed in this paper also reveals comparative bias amongst the various protected attributes within the dataset. To substantiate the proposed framework, we iteratively train a model on biased and unbiased data using multiple datasets and verify that the Fairness Score and the proposed process correctly identify the biases and judge the fairness.
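The abstract does not give the paper's actual Fairness Score or Bias Index formulas. As a generic illustration of the kind of group-fairness measurement such an audit involves, the sketch below computes a demographic-parity ratio for a protected attribute: the positive-outcome rate of the unprivileged group divided by that of the privileged group, where 1.0 indicates parity. The data, group labels, and the 0.80 flagging threshold (the informal "80% rule") are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch: demographic-parity ratio for one protected attribute.
# This is NOT the paper's Fairness Score; it is a common group-fairness
# metric used here only to illustrate what a fairness audit might compute.

def parity_ratio(outcomes, groups, unprivileged, privileged):
    """Positive-outcome rate of `unprivileged` divided by that of `privileged`."""
    def rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)
    return rate(unprivileged) / rate(privileged)

# Toy hiring decisions (1 = hired) with gender as the protected attribute.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
gender    = ["F", "F", "M", "M", "F", "M", "F", "M"]

ratio = parity_ratio(decisions, gender, unprivileged="F", privileged="M")
print(f"gender parity ratio: {ratio:.2f}")  # a ratio below 0.80 would often flag bias
```

In a full audit along the lines the paper proposes, a score like this would be computed for each protected attribute, enabling the kind of comparative view across attributes that the Bias Index is described as providing.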