Paper Title

A Systematic Evaluation of Response Selection for Open Domain Dialogue

Authors

Behnam Hedayatnia, Di Jin, Yang Liu, Dilek Hakkani-Tur

Abstract

Recent progress on neural approaches for language processing has triggered a resurgence of interest on building intelligent open-domain chatbots. However, even the state-of-the-art neural chatbots cannot produce satisfying responses for every turn in a dialog. A practical solution is to generate multiple response candidates for the same context, and then perform response ranking/selection to determine which candidate is the best. Previous work in response selection typically trains response rankers using synthetic data that is formed from existing dialogs by using a ground truth response as the single appropriate response and constructing inappropriate responses via random selection or using adversarial methods. In this work, we curated a dataset where responses from multiple response generators produced for the same dialog context are manually annotated as appropriate (positive) and inappropriate (negative). We argue that such training data better matches the actual use case examples, enabling the models to learn to rank responses effectively. With this new dataset, we conduct a systematic evaluation of state-of-the-art methods for response selection, and demonstrate that both strategies of using multiple positive candidates and using manually verified hard negative candidates can bring in significant performance improvement in comparison to using the adversarial training data, e.g., increase of 3% and 13% in Recall@1 score, respectively.
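The Recall@1 metric cited above measures how often a ranker's top-scored candidate is one of the manually annotated appropriate responses. A minimal sketch of this computation follows; the candidate IDs, scores, and the `recall_at_k` helper are illustrative stand-ins, not the paper's actual model or data.

```python
def recall_at_k(scored_candidates, positive_ids, k=1):
    """Compute Recall@k for one dialog context.

    scored_candidates: list of (candidate_id, ranker_score) pairs.
    positive_ids: set of candidate IDs annotated as appropriate.
    Returns 1.0 if any of the top-k ranked candidates is a positive, else 0.0.
    """
    ranked = sorted(scored_candidates, key=lambda cs: cs[1], reverse=True)
    top_k_ids = {cid for cid, _ in ranked[:k]}
    return 1.0 if top_k_ids & positive_ids else 0.0

# Toy example: four generator responses for one context, two annotated positive.
scores = [("r1", 0.21), ("r2", 0.87), ("r3", 0.55), ("r4", 0.10)]
positives = {"r2", "r3"}
print(recall_at_k(scores, positives, k=1))  # top-1 is r2, a positive -> 1.0
```

Averaging this per-context score over a test set gives the corpus-level Recall@1 on which the reported 3% and 13% gains are measured.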
