Sfnn: Semantic features fusion neural network for multimodal sentiment analysis

Published in CACRE, 2020

This paper proposes SFNN, a neural network based on semantic feature fusion, for detecting sentiment in online reviews. Experimental results show that our model achieves better performance than existing methods on the benchmark dataset.

Fig 1. The framework of our proposed SFNN.

User reviews usually contain text and visual content, both of which provide important and complementary information. Therefore, for sentiment recognition of online reviews, a multi-modal detection method generalizes better than a single-modal one. Considering that traditional models cannot extract the semantic information of the image content well, we present the SFNN model, as shown in Fig 1.
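As a rough illustration of the multi-modal fusion idea described above, the sketch below concatenates a text feature vector and an image semantic feature vector and feeds the result to a linear sentiment classifier. The feature dimensions, the concatenation-based fusion, and the classifier weights here are all hypothetical placeholders, not the paper's actual architecture or parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over sentiment classes
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_and_classify(text_feat, image_feat, W, b):
    """Fuse modality features by concatenation, then classify sentiment."""
    fused = np.concatenate([text_feat, image_feat])  # simple late fusion
    return softmax(W @ fused + b)

# Hypothetical sizes (not from the paper): 128-d text features,
# 128-d image features, 5 sentiment classes (e.g., Yelp star ratings).
text_feat = rng.standard_normal(128)
image_feat = rng.standard_normal(128)
W = rng.standard_normal((5, 256)) * 0.01
b = np.zeros(5)

probs = fuse_and_classify(text_feat, image_feat, W, b)
print(probs)
```

In a trained model, `W` and `b` would be learned jointly with the text and image encoders; this snippet only shows the shape of the fusion step.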


Fig 2. Performance and architecture ablation.

From Table I, we can see that, compared with the other models, our SFNN model obtains the best accuracy of 62.80% on Yelp, a 2.1% improvement over the VistaNet model.

For more details:

Download paper here

Recommended citation:

Wu, W., Wang, Y., Xu, S., & Yan, K. (2020, September). Sfnn: Semantic features fusion neural network for multimodal sentiment analysis. In 2020 5th International Conference on Automation, Control and Robotics Engineering (CACRE) (pp. 661-665). IEEE.