Joint multi-modal aspect-sentiment analysis with auxiliary cross-modal relation detection
Proceedings of the 2021 Conference on Empirical Methods in Natural …, 2021 (aclanthology.org)
Abstract
Aspect term extraction (ATE) and aspect sentiment classification (ASC) are two fundamental, fine-grained sub-tasks in aspect-level sentiment analysis (ALSA). In textual analysis, jointly extracting aspect terms and sentiment polarities has drawn much attention because the joint formulation supports better applications than either sub-task alone. In the multi-modal scenario, however, existing studies handle each sub-task independently, which fails to model the innate connection between the two objectives and forgoes those applications. Therefore, in this paper we are the first to jointly perform multi-modal ATE (MATE) and multi-modal ASC (MASC), and we propose a multi-modal joint learning approach with auxiliary cross-modal relation detection for multi-modal aspect-level sentiment analysis (MALSA). Specifically, we first build an auxiliary text-image relation detection module to control the proper exploitation of visual information. Second, we adopt a hierarchical framework to bridge the multi-modal connection between MATE and MASC, with separate visual guidance for each sub-module. Finally, we obtain all aspect-level sentiment polarities conditioned on the jointly extracted aspects. Extensive experiments show the effectiveness of our approach against joint textual approaches as well as pipeline and collapsed multi-modal approaches.
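The abstract describes three components: a text-image relation detector that gates how much visual information is used, a MATE module that tags aspect spans, and a MASC module that assigns a polarity to each extracted aspect. The following is a minimal, hypothetical numpy sketch of that hierarchical flow — the dimensions, gating form, and randomly initialized weights are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class JointMALSASketch:
    """Hypothetical sketch of the joint MALSA pipeline: a cross-modal
    relation detector gates visual features, MATE tags aspect spans
    (BIO scheme), and MASC classifies sentiment for each aspect."""

    def __init__(self, d=8, n_tags=3, n_pol=3):
        # Randomly initialized weights stand in for trained parameters.
        self.W_rel = rng.normal(size=(2 * d, 1))       # relation detector
        self.W_mate = rng.normal(size=(2 * d, n_tags))  # BIO tagger
        self.W_masc = rng.normal(size=(2 * d, n_pol))   # polarity classifier

    def forward(self, text_feats, img_feat):
        # text_feats: (T, d) token features; img_feat: (d,) image feature
        T, _ = text_feats.shape
        img = np.tile(img_feat, (T, 1))                 # broadcast image to tokens
        pair = np.concatenate([text_feats, img], axis=1)  # (T, 2d)

        # 1) Auxiliary relation detection: a sigmoid gate controls
        #    how much visual information each token absorbs.
        gate = 1.0 / (1.0 + np.exp(-(pair @ self.W_rel)))  # (T, 1)
        fused = np.concatenate([text_feats, gate * img], axis=1)

        # 2) MATE: BIO tagging over tokens (0 = O, 1 = B, 2 = I).
        tags = softmax(fused @ self.W_mate).argmax(axis=1)

        # 3) MASC: one polarity per extracted aspect (span start = B tag),
        #    so sentiment is conditioned on the jointly extracted aspects.
        pols = [int(softmax(fused[t] @ self.W_masc).argmax())
                for t in range(T) if tags[t] == 1]
        return tags, pols
```

The gate is the key design point implied by the abstract: when the image is unrelated to the text, the relation detector can push the gate toward zero so the visual features contribute little to either sub-task.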