Paper Title

Shift-Reduce Task-Oriented Semantic Parsing with Stack-Transformers

Authors

Fernández-González, Daniel

Abstract

Intelligent voice assistants, such as Apple Siri and Amazon Alexa, are widely used nowadays. These task-oriented dialogue systems require a semantic parsing module in order to process user utterances and understand the action to be performed. This semantic parsing component was initially implemented by rule-based or statistical slot-filling approaches for processing simple queries; however, the appearance of more complex utterances demanded the application of shift-reduce parsers or sequence-to-sequence models. Although shift-reduce approaches were initially considered the most promising option, the emergence of sequence-to-sequence neural systems has propelled them to the forefront as the highest-performing method for this particular task. In this article, we advance the research on shift-reduce semantic parsing for task-oriented dialogue. We implement novel shift-reduce parsers that rely on Stack-Transformers. This framework allows us to adequately model transition systems on the Transformer neural architecture, notably boosting shift-reduce parsing performance. Furthermore, our approach goes beyond the conventional top-down algorithm: we incorporate alternative bottom-up and in-order transition systems derived from constituency parsing into the realm of task-oriented parsing. We extensively test our approach on multiple domains from the Facebook TOP benchmark, improving over existing shift-reduce parsers and state-of-the-art sequence-to-sequence models in both high-resource and low-resource settings. We also empirically prove that the in-order algorithm substantially outperforms the commonly used top-down strategy. Through the creation of innovative transition systems and harnessing the capabilities of a robust neural architecture, our study showcases the superiority of shift-reduce parsers over leading sequence-to-sequence methods on the main benchmark.
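To make the shift-reduce framing concrete, the sketch below (not the paper's implementation; the action names `open`/`shift`/`reduce` and the TOP-style example annotation are illustrative assumptions) replays a top-down transition sequence that builds a bracketed intent/slot tree over an utterance, which is the kind of transition system the abstract describes:

```python
# Hypothetical sketch of a top-down shift-reduce transition system for
# TOP-style semantic parsing. Actions:
#   ("open", label) - push an open nonterminal (intent/slot) marker
#   ("shift",)      - move the next token from the buffer onto the stack
#   ("reduce",)     - close the most recent open nonterminal into a subtree
def replay(tokens, actions):
    stack, buf = [], list(tokens)
    for act in actions:
        if act[0] == "open":
            stack.append(("NT", act[1]))          # open-nonterminal marker
        elif act[0] == "shift":
            stack.append(buf.pop(0))              # consume one input token
        else:  # "reduce"
            children = []
            while not isinstance(stack[-1], tuple):
                children.append(stack.pop())      # gather completed children
            _, label = stack.pop()                # remove the open marker
            stack.append("[" + label + " " + " ".join(reversed(children)) + " ]")
    assert not buf and len(stack) == 1            # terminal parser state
    return stack[0]

tokens = "Driving directions to the Eagles game".split()
actions = [
    ("open", "IN:GET_DIRECTIONS"),
    ("shift",), ("shift",), ("shift",),
    ("open", "SL:DESTINATION"),
    ("shift",), ("shift",), ("shift",),
    ("reduce",),
    ("reduce",),
]
print(replay(tokens, actions))
# [IN:GET_DIRECTIONS Driving directions to [SL:DESTINATION the Eagles game ] ]
```

A Stack-Transformer parser would predict this action sequence step by step, using dedicated attention over the stack and buffer; the bottom-up and in-order variants mentioned in the abstract differ in when the nonterminal label is emitted relative to its children.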
