Paper Title
GREEK-BERT: The Greeks visiting Sesame Street
Paper Authors
Paper Abstract
Transformer-based language models, such as BERT and its variants, have achieved state-of-the-art performance in several downstream natural language processing (NLP) tasks on generic benchmark datasets (e.g., GLUE, SQuAD, RACE). However, these models have mostly been applied to the resource-rich English language. In this paper, we present GREEK-BERT, a monolingual BERT-based language model for modern Greek. We evaluate its performance in three NLP tasks, i.e., part-of-speech tagging, named entity recognition, and natural language inference, obtaining state-of-the-art performance. Interestingly, in two of the benchmarks, GREEK-BERT outperforms two multilingual Transformer-based models (M-BERT, XLM-R), as well as shallower neural baselines operating on pre-trained word embeddings, by a large margin (5%-10%). Most importantly, we make both GREEK-BERT and our training code publicly available, along with code illustrating how GREEK-BERT can be fine-tuned for downstream NLP tasks. We expect these resources to boost NLP research and applications for modern Greek.
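
As an illustration of how a model like GREEK-BERT can be fine-tuned for a downstream task such as natural language inference, the sketch below loads it through the Hugging Face Transformers library. The model identifier nlpaueb/bert-base-greek-uncased-v1, the three-label NLI setup, and the Greek example sentences are assumptions made for illustration, not details taken from the abstract.

# Minimal sketch: loading GREEK-BERT for sentence-pair classification (NLI),
# assuming the model is published on the Hugging Face Hub under the id below.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "nlpaueb/bert-base-greek-uncased-v1"  # assumed Hugging Face model id

# Load the pre-trained tokenizer and model; the classification head on top of
# BERT is randomly initialized and is what gets trained during fine-tuning.
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)

# Tokenize a hypothetical premise/hypothesis pair, as in natural language inference.
inputs = tokenizer(
    "Ο σκύλος τρέχει στο πάρκο.",   # premise (example sentence, not from the paper)
    "Ένα ζώο κινείται έξω.",        # hypothesis (example sentence, not from the paper)
    return_tensors="pt",
    truncation=True,
    padding=True,
)

# A forward pass produces logits over the three NLI labels
# (entailment / neutral / contradiction in the usual setup).
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 3])

In practice this model object would then be passed to a standard fine-tuning loop (e.g., the Transformers Trainer) over a labeled NLI dataset; only the hyperparameters and data pipeline differ from ordinary BERT fine-tuning.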