Paper Title


Reinforced Multi-task Approach for Multi-hop Question Generation

Paper Authors

Deepak Gupta, Hardik Chauhan, Akella Ravi Tej, Asif Ekbal, Pushpak Bhattacharyya

Paper Abstract


Question generation (QG) addresses the inverse of the question answering (QA) problem: generating a natural language question given a document and an answer. While sequence-to-sequence neural models surpass rule-based systems for QG, they are limited in their capacity to focus on more than one supporting fact. For QG, we often require multiple supporting facts to generate high-quality questions. Inspired by recent work on multi-hop reasoning in QA, we take up multi-hop question generation, which aims at generating relevant questions based on supporting facts in the context. We employ multi-task learning with the auxiliary task of answer-aware supporting fact prediction to guide the question generator. In addition, we propose a question-aware reward function in a Reinforcement Learning (RL) framework to maximize the utilization of the supporting facts. We demonstrate the effectiveness of our approach through experiments on the multi-hop question answering dataset HotPotQA. Empirical evaluation shows that our model outperforms single-hop neural question generation models both on automatic evaluation metrics such as BLEU, METEOR, and ROUGE, and on human evaluation metrics for the quality and coverage of the generated questions.
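The abstract describes two training-objective ideas: a multi-task loss that couples question generation with auxiliary supporting-fact prediction, and an RL stage that mixes the maximum-likelihood loss with a reward-driven policy-gradient loss. The sketch below illustrates the general shape of such objectives; the function names, the self-critical baseline form, and the weights `gamma` and `lam` are illustrative assumptions, not details taken from the paper.

```python
# Minimal, hypothetical sketch of the two objectives hinted at in the
# abstract; weights and function names are illustrative, not the paper's.

def multitask_loss(qg_loss, sf_loss, gamma=0.5):
    """Joint loss: question-generation loss plus a weighted auxiliary
    supporting-fact prediction loss (gamma is an assumed weight)."""
    return qg_loss + gamma * sf_loss

def rl_loss(reward_sampled, reward_baseline, log_prob_sampled):
    """Self-critical policy-gradient loss: negative advantage-weighted
    log-probability of the sampled question."""
    advantage = reward_sampled - reward_baseline
    return -advantage * log_prob_sampled

def mixed_objective(ml_loss, pg_loss, lam=0.3):
    """Interpolate the maximum-likelihood and RL losses, as is common
    in RL-based sequence generation (lam is an assumed mixing weight)."""
    return (1 - lam) * ml_loss + lam * pg_loss

# Toy numbers, just to show how the pieces compose.
ml = multitask_loss(qg_loss=2.0, sf_loss=1.0)        # 2.0 + 0.5 * 1.0 = 2.5
pg = rl_loss(reward_sampled=0.8, reward_baseline=0.6,
             log_prob_sampled=-3.0)                  # -(0.2) * (-3.0) = 0.6
total = mixed_objective(ml_loss=ml, pg_loss=pg)      # 0.7 * 2.5 + 0.3 * 0.6
print(round(total, 2))
```

In practice the reward would be a question-aware score of the generated question against the supporting facts, and the losses would be per-token cross-entropies from the encoder-decoder model; the scalars here stand in for those quantities.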
