Paper Title
Text Modular Networks: Learning to Decompose Tasks in the Language of Existing Models
Paper Authors
Paper Abstract
We propose a general framework called Text Modular Networks (TMNs) for building interpretable systems that learn to solve complex tasks by decomposing them into simpler ones solvable by existing models. To ensure solvability of simpler tasks, TMNs learn the textual input-output behavior (i.e., language) of existing models through their datasets. This differs from prior decomposition-based approaches which, besides being designed specifically for each complex task, produce decompositions independent of existing sub-models. Specifically, we focus on Question Answering (QA) and show how to train a next-question generator to sequentially produce sub-questions targeting appropriate sub-models, without additional human annotation. These sub-questions and answers provide a faithful natural language explanation of the model's reasoning. We use this framework to build ModularQA, a system that can answer multi-hop reasoning questions by decomposing them into sub-questions answerable by a neural factoid single-span QA model and a symbolic calculator. Our experiments show that ModularQA is more versatile than existing explainable systems for DROP and HotpotQA datasets, is more robust than state-of-the-art blackbox (uninterpretable) systems, and generates more understandable and trustworthy explanations compared to prior work.
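The decomposition loop the abstract describes can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the next-question generator is replaced by hard-coded rules, the neural factoid QA model by a toy lookup table, and all function names and question strings are hypothetical stand-ins, so that the control flow (generator proposes a sub-question routed to a sub-model, the answer is fed back, the chain of sub-questions and answers forms the explanation) is runnable end to end.

```python
# Hypothetical sketch of a TMN-style decomposition loop (ModularQA-like).
# Everything below is a stand-in for the real trained components.

def calculator(question):
    # Stand-in for the symbolic calculator sub-model: evaluates a tiny
    # expression language like "diff(1889, 1886)".
    op, args = question.split("(", 1)
    a, b = (int(x) for x in args.rstrip(")").split(","))
    return str({"diff": a - b, "sum": a + b}[op])

def factoid_qa(question, context):
    # Stand-in for the neural single-span QA sub-model: a toy lookup so
    # the loop runs without a trained model.
    facts = {
        "When was the Eiffel Tower built?": "1889",
        "When was the Statue of Liberty built?": "1886",
    }
    return facts[question]

def next_question(complex_q, history):
    # Stand-in for the trained next-question generator: given the complex
    # question and the (sub-question, answer) history so far, emit the next
    # sub-question and which sub-model should answer it; None means the
    # generator has produced its end-of-questions signal.
    if len(history) == 0:
        return ("qa", "When was the Eiffel Tower built?")
    if len(history) == 1:
        return ("qa", "When was the Statue of Liberty built?")
    if len(history) == 2:
        a1, a2 = history[0][1], history[1][1]
        return ("calc", f"diff({a1}, {a2})")
    return None

def modular_answer(complex_q, context=""):
    # The chain of (sub-question, answer) pairs doubles as the natural
    # language explanation of the reasoning.
    history = []
    while True:
        step = next_question(complex_q, history)
        if step is None:
            return history[-1][1], history
        model, sub_q = step
        answer = calculator(sub_q) if model == "calc" else factoid_qa(sub_q, context)
        history.append((sub_q, answer))

answer, explanation = modular_answer(
    "How many years after the Statue of Liberty was the Eiffel Tower built?")
```

In this toy run the final answer is the calculator's output on the two retrieved dates, and `explanation` is the full sub-question/answer chain; in the actual system the generator is trained from the sub-models' own datasets rather than hand-written rules.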