Paper Title


Contrastive Learning of Sociopragmatic Meaning in Social Media

Authors

Chiyu Zhang, Muhammad Abdul-Mageed, Ganesh Jawahar

Abstract


Recent progress in representation and contrastive learning in NLP has not widely considered the class of \textit{sociopragmatic meaning} (i.e., meaning in interaction within different language communities). To bridge this gap, we propose a novel framework for learning task-agnostic representations transferable to a wide range of sociopragmatic tasks (e.g., emotion, hate speech, humor, sarcasm). Our framework outperforms other contrastive learning frameworks for both in-domain and out-of-domain data, across both the general and few-shot settings. For example, compared to two popular pre-trained language models, our method obtains an improvement of $11.66$ average $F_1$ on $16$ datasets when fine-tuned on only $20$ training samples per dataset. Our code is available at: https://github.com/UBC-NLP/infodcl
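For readers new to the family of objectives the abstract refers to, the sketch below shows a standard NT-Xent (InfoNCE-style) contrastive loss in PyTorch, where two "views" of the same batch of sentence embeddings form positive pairs and all other pairs serve as negatives. This is a generic illustration of contrastive learning, not the paper's InfoDCL objective; the function name, batch shapes, and temperature value are illustrative assumptions.

```python
# Minimal, generic sketch of an NT-Xent (InfoNCE-style) contrastive loss.
# NOT the paper's InfoDCL objective; names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F


def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Contrast two views of a batch: (z1[i], z2[i]) are positive pairs,
    all other rows in the 2N x 2N similarity matrix act as negatives."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    z = torch.cat([z1, z2], dim=0)                 # (2N, d) stacked views
    sim = z @ z.t() / temperature                  # scaled cosine similarities
    n = z1.size(0)
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(self_mask, float("-inf"))     # exclude self-similarity
    # The positive for row i is row i+n (and vice versa for the second half).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    # Toy usage: two augmented views of 8 sentence embeddings of size 128.
    view_a, view_b = torch.randn(8, 128), torch.randn(8, 128)
    print(nt_xent_loss(view_a, view_b).item())
```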
