Paper Title

Understanding bias in facial recognition technologies

Paper Author

Leslie, David

Paper Abstract

Over the past couple of years, the growing debate around automated facial recognition has reached a boiling point. As developers have continued to swiftly expand the scope of these kinds of technologies into an almost unbounded range of applications, an increasingly strident chorus of critical voices has sounded concerns about the injurious effects of the proliferation of such systems. Opponents argue that the irresponsible design and use of facial detection and recognition technologies (FDRTs) threatens to violate civil liberties, infringe on basic human rights and further entrench structural racism and systemic marginalisation. They also caution that the gradual creep of face surveillance infrastructures into every domain of lived experience may eventually eradicate the modern democratic forms of life that have long provided cherished means to individual flourishing, social solidarity and human self-creation. Defenders, by contrast, emphasise the gains in public safety, security and efficiency that digitally streamlined capacities for facial identification, identity verification and trait characterisation may bring. In this explainer, I focus on one central aspect of this debate: the role that dynamics of bias and discrimination play in the development and deployment of FDRTs. I examine how historical patterns of discrimination have made inroads into the design and implementation of FDRTs from their very earliest moments. And, I explain the ways in which the use of biased FDRTs can lead to distributional and recognitional injustices. The explainer concludes with an exploration of broader ethical questions around the potential proliferation of pervasive face-based surveillance infrastructures and makes some recommendations for cultivating more responsible approaches to the development and governance of these technologies.
