Paper Title
Neural Proof Nets
Paper Authors
Abstract
Linear logic and the linear λ-calculus have a long-standing tradition in the study of natural language form and meaning. Among the proof calculi of linear logic, proof nets are of particular interest, offering an attractive geometric representation of derivations that is unburdened by the bureaucratic complications of conventional proof-theoretic formats. Building on recent advances in set-theoretic learning, we propose a neural variant of proof nets based on Sinkhorn networks, which allows us to recast parsing as the problem of extracting syntactic primitives and permuting them into alignment. Our methodology induces a batch-efficient, end-to-end differentiable architecture that actualizes a formally grounded yet highly efficient neuro-symbolic parser. We test our approach on ÆThel, a dataset of type-logical derivations for written Dutch, where it manages to correctly transcribe raw text sentences into proofs and terms of the linear λ-calculus with an accuracy as high as 70%.
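The Sinkhorn operation underlying the permutation-alignment step can be illustrated with a minimal NumPy sketch (this is an illustrative reimplementation, not the paper's code; function names are our own): alternating row and column normalization of a score matrix, carried out in log space for numerical stability, converges toward a doubly stochastic matrix, i.e. a differentiable relaxation of a permutation.

```python
import numpy as np

def _logsumexp(x, axis):
    """Numerically stable log-sum-exp, keeping the reduced axis."""
    m = x.max(axis=axis, keepdims=True)
    return m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))

def log_sinkhorn(log_alpha, n_iters=50):
    """Sinkhorn normalization: alternately normalize rows and columns
    of exp(log_alpha) so the result approaches a doubly stochastic
    matrix (a soft permutation). Differentiable end to end."""
    for _ in range(n_iters):
        log_alpha = log_alpha - _logsumexp(log_alpha, axis=1)  # rows sum to 1
        log_alpha = log_alpha - _logsumexp(log_alpha, axis=0)  # cols sum to 1
    return np.exp(log_alpha)

# Toy score matrix favoring the permutation 0->0, 1->2, 2->1.
scores = np.array([[5.0, 0.0, 0.0],
                   [0.0, 0.0, 5.0],
                   [0.0, 5.0, 0.0]])
P = log_sinkhorn(scores)
```

Rounding `P` (e.g. taking the row-wise argmax) recovers a hard permutation, which is how a soft alignment of syntactic primitives can be discretized at inference time.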