Paper Title
Improving Multimodal Accuracy Through Modality Pre-training and Attention
Paper Authors
Paper Abstract
Training a multimodal network is challenging, and it typically requires complex architectures to achieve reasonable performance. We show that one reason for this phenomenon is the difference in convergence rates across modalities. We address this by pre-training the modality-specific sub-networks of a multimodal architecture independently before end-to-end training of the entire network. Furthermore, we show that adding an attention mechanism between the sub-networks after pre-training helps identify the most important modality in ambiguous scenarios, boosting performance. We demonstrate that by performing these two tricks, a simple network can achieve performance similar to that of a complicated architecture that is significantly more expensive to train, on multiple tasks including sentiment analysis, emotion recognition, and speaker trait recognition.
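To make the two-stage recipe in the abstract concrete, below is a minimal PyTorch sketch: each modality-specific sub-network is first pre-trained independently on the task, then the frozen-free network is fine-tuned end to end with an attention module weighting the modality embeddings. The class names (`ModalitySubNet`, `AttentionFusion`, `MultimodalNet`), dimensions, learning rates, and training loops are illustrative assumptions, not the authors' actual implementation.

```python
# Sketch of modality pre-training + attention fusion (assumed details).
import torch
import torch.nn as nn

class ModalitySubNet(nn.Module):
    """Modality-specific encoder with a classification head that is
    only used during the independent pre-training stage."""
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        z = self.encoder(x)
        return z, self.head(z)

class AttentionFusion(nn.Module):
    """Scores each modality embedding and returns their weighted sum,
    letting the network emphasize the most informative modality."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, embeddings):                 # (B, M, H)
        weights = torch.softmax(self.score(embeddings), dim=1)  # (B, M, 1)
        return (weights * embeddings).sum(dim=1)                # (B, H)

class MultimodalNet(nn.Module):
    """Full network: pre-trained sub-networks + attention fusion."""
    def __init__(self, subnets, hidden_dim, num_classes):
        super().__init__()
        self.subnets = nn.ModuleList(subnets)
        self.fusion = AttentionFusion(hidden_dim)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, inputs):                     # one tensor per modality
        embs = torch.stack(
            [net(x)[0] for net, x in zip(self.subnets, inputs)], dim=1)
        return self.classifier(self.fusion(embs))

def pretrain(subnet, loader, epochs=5):
    """Stage 1: train one sub-network on the task by itself, so slow-
    and fast-converging modalities do not interfere with one another."""
    opt = torch.optim.Adam(subnet.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            _, logits = subnet(x)
            loss_fn(logits, y).backward()
            opt.step()

def finetune(model, loader, epochs=5):
    """Stage 2: end-to-end training of the fused multimodal network."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for inputs, y in loader:
            opt.zero_grad()
            loss_fn(model(inputs), y).backward()
            opt.step()
```

In this sketch, `pretrain` would be called once per modality with that modality's data loader before `finetune` runs over batches containing all modalities; the attention weights produced by `AttentionFusion` are what allows the model to down-weight an ambiguous modality at inference time.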