Paper Title
Measuring Geographic Performance Disparities of Offensive Language Classifiers
Paper Authors
Paper Abstract
Text classifiers are applied at scale as one-size-fits-all solutions. However, many studies show that these classifiers are biased with respect to different languages and dialects. Measuring and characterizing these biases raises two gaps that should be addressed: first, ``Do language, dialect, and topical content vary across geographic regions?'' and second, ``If there are differences across regions, do they impact model performance?''. To address these questions, we introduce a novel dataset called GeoOLID with more than 14,000 examples across 15 geographically and demographically diverse cities. We perform a comprehensive analysis of geography-related content and its impact on the performance disparities of offensive language detection models. Overall, we find that current models do not generalize across locations. Moreover, we show that while offensive language models produce false positives on African American English, model performance is not correlated with each city's minority population proportion. Warning: This paper contains offensive language.
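To make the kind of analysis the abstract describes concrete, below is a minimal Python sketch: compute a per-city performance score for a classifier, then correlate those scores with each city's minority population proportion. This is not the authors' actual pipeline; the data, city names, demographic values, and choice of macro-F1 and Pearson correlation are illustrative assumptions.

```python
# Sketch: per-city classifier performance and its correlation with demographics.
# All values below are hypothetical placeholders, not GeoOLID data.
import pandas as pd
from scipy.stats import pearsonr
from sklearn.metrics import f1_score

# Hypothetical labeled examples with precomputed classifier predictions.
examples = pd.DataFrame({
    "city":  ["Houston", "Houston", "Detroit", "Detroit", "Miami", "Miami"],
    "label": [1, 0, 1, 0, 1, 0],   # 1 = offensive, 0 = not offensive
    "pred":  [1, 1, 1, 0, 0, 0],   # model outputs
})
# Hypothetical minority population proportion per city.
minority_share = {"Houston": 0.55, "Detroit": 0.80, "Miami": 0.70}

# Step 1: macro-F1 per city -- does the model generalize across locations?
per_city_f1 = {
    city: f1_score(group["label"], group["pred"], average="macro")
    for city, group in examples.groupby("city")
}
print(per_city_f1)

# Step 2: correlate per-city F1 with minority population proportion.
cities = sorted(per_city_f1)
r, p = pearsonr([minority_share[c] for c in cities],
                [per_city_f1[c] for c in cities])
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```

With the paper's 15 cities, a non-significant correlation in step 2 would match the abstract's finding that performance disparities are not explained by minority population share.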