Paper Title
STIL -- Simultaneous Slot Filling, Translation, Intent Classification, and Language Identification: Initial Results using mBART on MultiATIS++
Authors
Abstract
Slot filling, Translation, Intent classification, and Language identification, or STIL, is a newly proposed task for multilingual Natural Language Understanding (NLU). By performing simultaneous slot filling and translation into a single output language (English in this case), some portion of downstream system components can be monolingual, reducing development and maintenance cost. Results are given using the multilingual BART model (Liu et al., 2020) fine-tuned on 7 languages using the MultiATIS++ dataset. When no translation is performed, mBART's performance is comparable to the current state-of-the-art system (Cross-Lingual BERT by Xu et al. (2020)) for the languages tested, with better average intent classification accuracy (96.07% versus 95.50%) but worse average slot F1 (89.87% versus 90.81%). When simultaneous translation is performed, average intent classification accuracy degrades by only 1.7% relative and average slot F1 degrades by only 1.2% relative.
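The abstract frames STIL as producing slot tags, an English translation, the intent, and the language ID from a single seq2seq model. The exact target serialization used in the paper is not given here, so the following is a minimal hypothetical sketch of what one such training target could look like (the tag/token format, separator tokens, and `make_stil_target` helper are all illustrative assumptions, not the paper's actual scheme):

```python
# Hypothetical sketch of a STIL training pair. The serialization format
# (token|tag pairs, <intent> and <lang> separators) is an assumption for
# illustration only; the paper's actual target format may differ.

def make_stil_target(translation, slot_tags, intent, language):
    """Serialize an English translation with BIO slot tags, the intent
    label, and the source-language ID into one seq2seq target string."""
    tagged = " ".join(
        f"{tok}|{tag}" for tok, tag in zip(translation.split(), slot_tags)
    )
    return f"{tagged} <intent> {intent} <lang> {language}"

# Example: a German ATIS-style utterance mapped to an English target.
source = "fluege von boston nach denver"
target = make_stil_target(
    "flights from boston to denver",
    ["O", "O", "B-fromloc.city_name", "O", "B-toloc.city_name"],
    "atis_flight",
    "de",
)
print(target)
```

A decoder-side parser would then recover the slots, intent, and language ID from this single string, which is what lets the downstream components stay monolingual.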