Paper Title


Toward Subgraph-Guided Knowledge Graph Question Generation with Graph Neural Networks

Paper Authors

Yu Chen, Lingfei Wu, Mohammed J. Zaki

Abstract


Knowledge graph (KG) question generation (QG) aims to generate natural language questions from KGs and target answers. Previous works mostly focus on a simple setting, namely generating questions from a single KG triple. In this work, we focus on a more realistic setting where we aim to generate questions from a KG subgraph and target answers. In addition, most previous works are built on either RNN-based or Transformer-based models to encode a linearized KG subgraph, which completely discards the explicit structural information of the KG subgraph. To address this issue, we propose to apply a bidirectional Graph2Seq model to encode the KG subgraph. Furthermore, we enhance our RNN decoder with a node-level copying mechanism that allows directly copying node attributes from the KG subgraph to the output question. Both automatic and human evaluation results demonstrate that our model achieves new state-of-the-art scores, outperforming existing methods by a significant margin on two QG benchmarks. Experimental results also show that our QG model can consistently benefit the Question Answering (QA) task as a means of data augmentation.
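To make the "bidirectional" encoding idea concrete, the sketch below shows one message-passing step over a directed KG subgraph in which each node aggregates information separately along forward edges (from in-neighbors) and backward edges (from out-neighbors) before fusing the two summaries. This is a minimal dependency-free illustration under our own simplifying assumptions (mean aggregation, unweighted averaging as fusion), not the authors' Graph2Seq implementation, which uses learned transformations.

```python
# Minimal sketch of one bidirectional message-passing step over a KG subgraph.
# Assumptions (not from the paper): mean aggregation per direction and a
# simple unweighted average as the fusion function.

def bidirectional_step(embeddings, edges):
    """embeddings: {node: [float]}, edges: list of directed (src, dst) pairs."""
    dim = len(next(iter(embeddings.values())))
    fwd = {n: [0.0] * dim for n in embeddings}  # messages along edge direction
    bwd = {n: [0.0] * dim for n in embeddings}  # messages against edge direction
    fwd_deg = {n: 0 for n in embeddings}
    bwd_deg = {n: 0 for n in embeddings}
    for src, dst in edges:
        for i in range(dim):
            fwd[dst][i] += embeddings[src][i]   # dst hears from its in-neighbor
            bwd[src][i] += embeddings[dst][i]   # src hears from its out-neighbor
        fwd_deg[dst] += 1
        bwd_deg[src] += 1
    out = {}
    for n, h in embeddings.items():
        f = [v / max(fwd_deg[n], 1) for v in fwd[n]]  # mean over in-neighbors
        b = [v / max(bwd_deg[n], 1) for v in bwd[n]]  # mean over out-neighbors
        out[n] = [(h[i] + f[i] + b[i]) / 3.0 for i in range(dim)]  # fuse
    return out

# Toy subgraph: (Einstein) -born_in-> (Ulm) -located_in-> (Germany)
emb = {"Einstein": [1.0, 0.0], "Ulm": [0.0, 1.0], "Germany": [1.0, 1.0]}
edges = [("Einstein", "Ulm"), ("Ulm", "Germany")]
h1 = bidirectional_step(emb, edges)
```

Note that a leaf node such as "Einstein" (no in-edges) still receives information from "Ulm" through the backward direction, which is exactly the structural signal a linearized, unidirectional encoding would lose.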
