Attention-based Multi-modal Sentiment Analysis and Emotion Detection in Conversation using RNN

Authors: Mahesh G. Huddar, Sanjeev S. Sannakki, Vijay S. Rajpurohit
Published: 06/2021, International Journal of Interactive Multimedia and Artificial Intelligence (IJIMAI), Volume 6, pp. 112-121
ISSN: 1989-1660
Keywords: Attention Model; Interlocutor State; Context Awareness; Emotion Recognition; Multimodal; Sentiment Analysis
Full text: https://www.ijimai.org/journal/sites/default/files/2021-05/ijimai_6_6_12.pdf

Abstract: With the availability of an enormous quantity of multimodal data and its widespread applications, automatic sentiment analysis and emotion classification in conversation have become an interesting research topic in the research community. The interlocutor state, the contextual state between neighboring utterances, and multimodal fusion all play an important role in multimodal sentiment analysis and emotion detection in conversation. In this article, a recurrent neural network (RNN) based method is developed to capture the interlocutor state and the contextual state between utterances. A pair-wise attention mechanism is used to model the relationships between the modalities and their relative importance before fusion. First, modalities are fused two at a time; finally, all modalities are combined to form the trimodal feature representation. Experiments are conducted on three standard datasets: IEMOCAP, CMU-MOSEI, and CMU-MOSI. The proposed model is evaluated using two metrics, accuracy and F1-score, and the results demonstrate that it outperforms the standard baselines.
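To make the fusion pipeline described in the abstract concrete, below is a minimal PyTorch sketch of the idea: modality pairs are weighted by a learned attention score and fused two at a time, the bimodal vectors are concatenated into a trimodal representation, and a GRU models the contextual state across utterances. All feature dimensions, layer names, and the exact attention formulation are illustrative assumptions, not the authors' implementation; the per-speaker interlocutor-state RNN from the paper is omitted for brevity.

```python
# Illustrative sketch (not the paper's code): pair-wise attention fusion of
# text/audio/video utterance features, followed by a GRU over the
# conversation to capture contextual state between neighboring utterances.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PairwiseAttentionFusion(nn.Module):
    def __init__(self, dim=128, num_classes=6):
        super().__init__()
        # Project each modality into a shared space before comparing them.
        self.proj = nn.ModuleDict(
            {m: nn.Linear(dim, dim) for m in ("text", "audio", "video")}
        )
        # Scores the two members of a modality pair against each other
        # (a single shared scorer for all pairs, for brevity).
        self.pair_score = nn.Linear(2 * dim, 2)
        # Bidirectional GRU models context across the conversation.
        self.context_rnn = nn.GRU(3 * dim, dim, batch_first=True,
                                  bidirectional=True)
        self.classifier = nn.Linear(2 * dim, num_classes)

    def _pair_attend(self, a, b):
        # Soft attention over the pair: weight each member by a learned
        # score conditioned on both, then take the weighted sum.
        w = F.softmax(self.pair_score(torch.cat([a, b], dim=-1)), dim=-1)
        return w[..., :1] * a + w[..., 1:] * b

    def forward(self, text, audio, video):
        # Each input: (batch, num_utterances, dim) utterance-level features.
        t = torch.tanh(self.proj["text"](text))
        a = torch.tanh(self.proj["audio"](audio))
        v = torch.tanh(self.proj["video"](video))
        # Fuse two modalities at a time ...
        ta = self._pair_attend(t, a)
        tv = self._pair_attend(t, v)
        av = self._pair_attend(a, v)
        # ... then combine the bimodal vectors into a trimodal representation.
        trimodal = torch.cat([ta, tv, av], dim=-1)
        context, _ = self.context_rnn(trimodal)
        return self.classifier(context)  # per-utterance class logits


if __name__ == "__main__":
    model = PairwiseAttentionFusion()
    # Two dialogues, ten utterances each, 128-dim features per modality.
    feats = [torch.randn(2, 10, 128) for _ in range(3)]
    print(model(*feats).shape)  # torch.Size([2, 10, 6])
```

The sketch assumes utterance-level features have already been extracted per modality; in practice the paper's attention formulation, feature extractors, and class counts (e.g., six IEMOCAP emotion labels assumed here) would follow the original publication.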