Paper Title

The Extraordinary Failure of Complement Coercion Crowdsourcing

Paper Authors

Yanai Elazar, Victoria Basmov, Shauli Ravfogel, Yoav Goldberg, Reut Tsarfaty

Paper Abstract

Crowdsourcing has eased and scaled up the collection of linguistic annotation in recent years. In this work, we follow known methodologies of collecting labeled data for the complement coercion phenomenon. These are constructions with an implied action -- e.g., "I started a new book I bought last week", where the implied action is reading. We aim to collect annotated data for this phenomenon by reducing it to either of two known tasks: Explicit Completion and Natural Language Inference. However, in both cases, crowdsourcing resulted in low agreement scores, even though we followed the same methodologies as in previous work. Why does the same process fail to yield high agreement scores? We specify our modeling schemes, highlight the differences with previous work and provide some insights about the task and possible explanations for the failure. We conclude that specific phenomena require tailored solutions, not only in specialized algorithms, but also in data collection methods.
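To make the two reductions concrete, here is a minimal sketch of how a complement coercion sentence might be cast as an Explicit Completion prompt and as an NLI premise/hypothesis pair. This is an illustration under assumed interfaces, not the authors' actual annotation pipeline; the function names and prompt formats are hypothetical.

```python
# Illustrative sketch of the two task reductions described in the abstract.
# Function names and prompt formats are hypothetical, not the authors'
# actual annotation interface.

def to_explicit_completion(sentence: str, verb: str) -> str:
    """Cast a coercion sentence as a fill-in-the-blank prompt: annotators
    supply the implied action (e.g. "reading") in the blank slot."""
    head, _, tail = sentence.partition(verb)
    return f"{head}{verb} ____{tail}"

def to_nli_pair(sentence: str, verb: str, action: str) -> tuple[str, str]:
    """Cast a coercion sentence as an NLI pair: annotators judge whether
    the hypothesis, with the action made explicit, follows from the premise."""
    head, _, tail = sentence.partition(verb)
    return sentence, f"{head}{verb} {action}{tail}"

premise = "I started a new book I bought last week"
print(to_explicit_completion(premise, "started"))
# I started ____ a new book I bought last week
print(to_nli_pair(premise, "started", "reading"))
# ('I started a new book I bought last week',
#  'I started reading a new book I bought last week')
```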
