Paper Title

AbductionRules: Training Transformers to Explain Unexpected Inputs

Authors

Nathan Young, Qiming Bao, Joshua Bensemann, Michael Witbrock

Abstract

Transformers have recently been shown to be capable of reliably performing logical reasoning over facts and rules expressed in natural language, but abductive reasoning - inference to the best explanation of an unexpected observation - has been underexplored despite significant applications to scientific discovery, common-sense reasoning, and model interpretability. We present AbductionRules, a group of natural language datasets designed to train and test generalisable abduction over natural-language knowledge bases. We use these datasets to finetune pretrained Transformers and discuss their performance, finding that our models learned generalisable abductive techniques but also learned to exploit the structure of our data. Finally, we discuss the viability of this approach to abductive reasoning and ways in which it may be improved in future work.
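The abduction task the abstract describes — inferring the premise that would best explain an unexpected observation, given a rule base — can be illustrated with a minimal sketch. The rule format and names below are hypothetical illustrations, not the AbductionRules dataset format or the paper's model:

```python
# Toy illustration of abductive inference: rules are (premise, conclusion)
# pairs; given an observation not entailed by known facts, we propose
# every premise whose rule would conclude it. A real system (as in the
# paper) instead learns this mapping over natural-language text.

def abduce(rules, observation):
    """Return candidate explanations for the observation."""
    return [premise for premise, conclusion in rules if conclusion == observation]

rules = [
    ("Tom is a cat", "Tom purrs"),
    ("Tom is happy", "Tom purrs"),
    ("Tom is a dog", "Tom barks"),
]

# Observing "Tom purrs", two premises would each explain it;
# choosing the *best* among them is what makes abduction hard.
print(abduce(rules, "Tom purrs"))
```

Enumerating candidate explanations like this is straightforward symbolically; the paper's contribution is training Transformers to perform the step end-to-end over natural language, where rules and observations are free-form sentences rather than structured pairs.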
