Paper Title
Improving Low Compute Language Modeling with In-Domain Embedding Initialisation
Paper Authors
Paper Abstract
Many NLP applications, such as biomedical data and technical support, have 10-100 million tokens of in-domain data and limited computational resources for learning from it. How should we train a language model in this scenario? Most language modeling research considers either a small dataset with a closed vocabulary (like the standard 1 million token Penn Treebank), or the whole web with byte-pair encoding. We show that for our target setting in English, initialising and freezing input embeddings using in-domain data can improve language model performance by providing a useful representation of rare words, and this pattern holds across several different domains. In the process, we show that the standard convention of tying input and output embeddings does not improve perplexity when initialising with embeddings trained on in-domain data.
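To make the setup concrete, here is a minimal sketch of the idea described in the abstract, assuming PyTorch and gensim: word vectors are trained on in-domain text, copied into a frozen input embedding, and the output embedding is kept as a separate, untied parameter. The toy corpus, dimensions, and the simple LSTM language model are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's exact code): initialise and freeze input
# embeddings from in-domain data, with an untied output layer.
import torch
import torch.nn as nn
from gensim.models import Word2Vec

# 1. Train word vectors on the in-domain corpus (a toy stand-in here).
corpus = [["the", "patient", "presented", "with", "acute", "dyspnea"],
          ["reboot", "the", "router", "and", "check", "the", "firmware"]]
w2v = Word2Vec(sentences=corpus, vector_size=64, min_count=1, epochs=20)

vocab = list(w2v.wv.key_to_index)          # in-domain vocabulary
emb_matrix = torch.tensor(w2v.wv[vocab])   # |V| x 64 pretrained embeddings


class LanguageModel(nn.Module):
    def __init__(self, pretrained, hidden=128):
        super().__init__()
        vocab_size, emb_dim = pretrained.shape
        # Input embeddings: initialised from in-domain vectors and frozen.
        self.embed = nn.Embedding.from_pretrained(pretrained, freeze=True)
        self.rnn = nn.LSTM(emb_dim, hidden, batch_first=True)
        # Output embeddings are a separate parameter: NOT tied to the input.
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, token_ids):
        h, _ = self.rnn(self.embed(token_ids))
        return self.out(h)                  # logits over the vocabulary


model = LanguageModel(emb_matrix)
# Only the trainable (non-frozen) parameters are optimised.
optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3)

# Example forward pass on two in-domain tokens.
ids = torch.tensor([[w2v.wv.key_to_index["the"],
                     w2v.wv.key_to_index["router"]]])
logits = model(ids)                         # shape: (1, 2, |V|)
```

Keeping the output layer as its own `nn.Linear` (rather than reusing `self.embed.weight`) reflects the abstract's finding that weight tying does not help when the input embeddings are initialised from in-domain data.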