Paper Title


MACSA: A Multimodal Aspect-Category Sentiment Analysis Dataset with Multimodal Fine-grained Aligned Annotations

Authors

Hao Yang, Yanyan Zhao, Jianwei Liu, Yang Wu, Bing Qin

Abstract


Multimodal fine-grained sentiment analysis has recently attracted increasing attention due to its broad applications. However, existing multimodal fine-grained sentiment datasets mostly focus on annotating fine-grained elements in text while ignoring those in images, so the fine-grained elements in visual content do not receive the attention they deserve. In this paper, we propose a new dataset, the Multimodal Aspect-Category Sentiment Analysis (MACSA) dataset, which contains more than 21K text-image pairs. The dataset provides fine-grained annotations for both textual and visual content and is the first to use the aspect category as a pivot to align fine-grained elements across the two modalities. Based on our dataset, we propose the Multimodal ACSA task and a multimodal graph-based aligned model (MGAM), which adopts a fine-grained cross-modal fusion method. Experimental results show that our method can serve as a baseline for future research on this corpus. We will make the dataset and code publicly available.
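To make the aspect-category-as-pivot idea concrete, the sketch below shows what a single MACSA-style sample could look like. This is a minimal illustration under assumed field names (`text`, `image_regions`, `aspect_categories`); the dataset's actual schema is not specified in the abstract.

```python
# Hypothetical MACSA-style sample: the aspect category acts as the pivot
# linking fine-grained elements in the text and in the image.
# All field names here are illustrative assumptions, not the real schema.
sample = {
    "text": "The steak was perfect but the room felt cramped.",
    "image_regions": ["food", "interior"],  # visual elements tagged by category
    "aspect_categories": {
        "food": "positive",
        "ambience": "negative",
    },
}

def sentiment_for(sample, category):
    """Look up the sentiment label annotated for a given aspect category."""
    return sample["aspect_categories"].get(category, "not mentioned")

print(sentiment_for(sample, "food"))     # -> positive
print(sentiment_for(sample, "service"))  # -> not mentioned
```

Because both modalities are annotated against the same category inventory, a model can fuse text and image evidence per category rather than per token or per region.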
