Title: Audio-Visual Automatic Speech Recognition Using PZM, MFCC and Statistical Analysis
Authors: Saswati Debnath; Pinki Roy
Journal: International Journal of Interactive Multimedia and Artificial Intelligence (IJIMAI), vol. 7, pp. 121-133, December 2021
ISSN: 1989-1660
URL: https://www.ijimai.org/journal/sites/default/files/2021-11/ijimai7_2_11_0.pdf
Keywords: Audio-visual Speech Recognition; Lip Tracking; Pseudo Zernike Moment; Mel Frequency Cepstral Coefficients (MFCC); Incremental Feature Selection (IFS); Statistical Analysis

Abstract: Audio-Visual Automatic Speech Recognition (AV-ASR) has become a promising research area for conditions in which the audio signal is corrupted by noise. The main objective of this paper is to select important, discriminative audio and visual speech features for recognizing audio-visual speech. The paper proposes the Pseudo Zernike Moment (PZM) together with a statistical feature selection method for audio-visual speech recognition. Visual information is captured from the lip contour, from which the moments are computed for lip reading. Nineteen Mel Frequency Cepstral Coefficients (MFCC) are extracted from the audio as speech features. Since the 19 speech features are not all equally important, feature selection algorithms are used to select the most effective ones. Statistical tests, namely Analysis of Variance (ANOVA), the Kruskal-Wallis test, and the Friedman test, are employed to assess the significance of each speech feature, and an Incremental Feature Selection (IFS) technique then selects the speech feature subset. Furthermore, multiclass Support Vector Machine (SVM), Artificial Neural Network (ANN), and Naive Bayes (NB) classifiers are used to recognize speech in both the audio and visual modalities. Based on the recognition rates, a combined decision is taken from the two individual recognition systems. The results achieved by the proposed model are compared with those of existing models for both audio and visual speech recognition. The Zernike Moment (ZM) is compared with the PZM, showing that the proposed PZM-based model extracts more discriminative features for visual speech recognition. The study also shows that audio feature selection based on statistical analysis outperforms recognition without any feature selection technique.
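A minimal sketch of the selection-and-fusion pipeline the abstract describes: features are ranked by a one-way ANOVA F-test, IFS grows the subset in ranked order and keeps the size with the best cross-validated recognition rate, and the audio and visual recognizers are combined by weighting each with its recognition rate. All names here (rank_features_anova, incremental_feature_selection, fuse_posteriors), the SVM scorer, the cross-validation setup, and the fusion weighting rule are illustrative assumptions, not the authors' implementation.

import numpy as np
from scipy.stats import f_oneway
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def rank_features_anova(X, y):
    # Score each of the 19 MFCC features with a one-way ANOVA F-test:
    # a larger F statistic means the class means differ more on that feature.
    classes = np.unique(y)
    f_scores = [f_oneway(*(X[y == c, j] for c in classes)).statistic
                for j in range(X.shape[1])]
    return np.argsort(f_scores)[::-1]          # most significant first

def incremental_feature_selection(X, y, ranked):
    # IFS: add features one at a time in ranked order and keep the prefix
    # whose cross-validated recognition rate is highest.
    best_k, best_acc = 1, 0.0
    for k in range(1, len(ranked) + 1):
        acc = cross_val_score(SVC(), X[:, ranked[:k]], y, cv=5).mean()
        if acc > best_acc:
            best_k, best_acc = k, acc
    return ranked[:best_k], best_acc

def fuse_posteriors(p_audio, p_visual, acc_audio, acc_visual):
    # Late fusion: weight each modality's class scores by its recognition
    # rate (an assumed rule) and pick the class with the largest fused score.
    w = acc_audio / (acc_audio + acc_visual)
    return np.argmax(w * p_audio + (1.0 - w) * p_visual, axis=1)

# Usage with synthetic stand-in data: 300 utterances, 19 MFCC features,
# 3 word classes, with feature 0 made informative on purpose.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 19))
y = rng.integers(0, 3, size=300)
X[:, 0] += y
subset, acc = incremental_feature_selection(X, y, rank_features_anova(X, y))
print("selected features:", subset, "recognition rate: %.2f" % acc)

The same IFS procedure would apply unchanged to the PZM lip features; only the input feature matrix differs.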