LDR     01505nas a2200229 4500
008     2014 d
260     $c 09/2014
653 10  $a Intelligent Systems
653 10  $a Emotion recognition
653 10  $a Machine Learning
653 10  $a Bimodal Fusion
653 10  $a Energy Map
100 1   $a Krishna Asawa
700 1   $a Priyanka Manchanda
245 00  $a Recognition of Emotions using Energy Based Bimodal Information Fusion and Correlation
856     $u http://www.ijimai.org/JOURNAL/sites/default/files/files/2014/09/ijimai20142_7_2_pdf_26941.pdf
300     $a 17-21
490 0   $v 2
520 3   $a Multi-sensor information fusion is a rapidly developing research area that forms the backbone of numerous essential technologies, such as intelligent robotic control, sensor networks, and video and image processing. In this paper, we develop a novel technique to analyze and correlate human emotions expressed in voice tone and facial expression. Audio and video streams are captured to populate audio and video bimodal data sets, from which the emotions expressed in voice tone and facial expression, respectively, are sensed. An energy-based mapping is applied to overcome the inherent heterogeneity of the recorded bimodal signals. The fusion process uses the sampled and mapped energy signals of both modalities' data streams and then recognizes the overall emotional state using a Support Vector Machine (SVM) classifier, achieving an accuracy of 93.06%.
022     $a 1989-1660
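The abstract (field 520) outlines an energy-based bimodal fusion pipeline followed by SVM classification. The sketch below is a minimal, hypothetical illustration of that idea in Python (NumPy and scikit-learn), not the authors' implementation: the frame sizes, the resampling step, the synthetic clips, and the function names short_time_energy, frame_difference_energy, and fuse_energies are all assumptions made for the example.

import numpy as np
from sklearn.svm import SVC

def short_time_energy(signal, frame_len=400, hop=160):
    """Per-frame energy of a 1-D audio waveform (assumed frame/hop sizes)."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.array([np.sum(f ** 2) for f in frames])

def frame_difference_energy(video):
    """Energy of intensity change between consecutive video frames."""
    diffs = np.diff(video.astype(float), axis=0)
    return np.array([np.sum(d ** 2) for d in diffs])

def fuse_energies(audio_energy, video_energy, n_samples=32):
    """Resample both energy curves to a common length and concatenate them,
    so the heterogeneous modalities share one feature space."""
    idx_a = np.linspace(0, len(audio_energy) - 1, n_samples).astype(int)
    idx_v = np.linspace(0, len(video_energy) - 1, n_samples).astype(int)
    return np.concatenate([audio_energy[idx_a], video_energy[idx_v]])

# Synthetic stand-ins for recorded clips: 40 "clips" of audio (1 s at 16 kHz)
# and video (30 frames of 32x32 grey-scale), each with an emotion label 0-3.
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(40):
    audio = rng.standard_normal(16000)
    video = rng.integers(0, 256, size=(30, 32, 32))
    X.append(fuse_energies(short_time_energy(audio),
                           frame_difference_energy(video)))
    y.append(rng.integers(0, 4))
X, y = np.stack(X), np.array(y)

clf = SVC(kernel="rbf")          # SVM classifier over the fused energy features
clf.fit(X, y)
print(clf.predict(X[:3]))        # predicted emotion labels for three clips

With real data, the synthetic clips would be replaced by actual recorded audio waveforms and video frame arrays, and the labels by annotated emotions; the reported 93.06% accuracy refers to the authors' own data sets and feature mapping, not to this sketch.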