LDR     02252nas a2200289 4500
008     2023 d
260     $c 12/2023
653 10  $a Attention Model
653 10  $a Convolutional Neural Network (CNN)
653 10  $a Target Recognition
653 10  $a Synthetic Aperture Radar
100 1   $a Chiagoziem C. Ukwuoma
700 1   $a Qin Zhiguang
700 1   $a Bole W. Tienin
700 1   $a Sophyani B. Yussif
700 1   $a Chukwuebuka J. Ejiyi
700 1   $a Gilbert C. Urama
700 1   $a Chibueze D. Ukwuoma
700 1   $a Ijeoma A. Chikwendu
245 00  $a Synthetic Aperture Radar Automatic Target Recognition Based on a Simple Attention Mechanism
856     $u https://www.ijimai.org/journal/sites/default/files/2023-11/ijimai8_4_6.pdf
300     $a 67-77
490 0   $v 8
520 3   $a A simple but effective channel attention module is proposed for Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR). Channel attention has recently proved successful in improving deep Convolutional Neural Networks (CNNs). Because the resolution of SAR images does not match that of optical images, information flow becomes relatively poor when the network depth is increased blindly, leading to serious gradient explosion or vanishing. To resolve the trade-off between recognition efficiency and ambiguity in SAR images, we introduce a simple channel attention module into a ResNet backbone, which uses few parameters yet yields a performance gain. Our attention module, which follows the implementation of Efficient Channel Attention, shows that avoiding dimensionality reduction is essential for learning and that appropriate cross-channel interaction can preserve performance while decreasing model complexity. We also explored the One Policy Learning Rate on the ResNet-50 architecture and compared it with the proposed attention-based ResNet-50 architecture. A thorough analysis of the MSTAR dataset demonstrates the efficacy of the proposed strategy over the most recent findings. With the attention-based model and the One Policy Learning Rate-based architecture, we obtained recognition rates of 100% and 99.8%, respectively.
022     $a 1989-1660
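The channel attention the abstract describes follows Efficient Channel Attention (ECA): each channel is summarized by global average pooling, a small 1D convolution across the channel axis models local cross-channel interaction without any dimensionality reduction, and a sigmoid produces per-channel gates. A minimal NumPy sketch of that mechanism, not the authors' actual code, is below; the kernel weights are learned in a real module, and the fixed averaging kernel here is a stand-in for illustration only:

```python
import numpy as np

def eca_attention(x, k=3):
    """ECA-style channel attention on a feature map x of shape (C, H, W).

    Global average pooling yields one descriptor per channel; a 1D
    convolution of size k over the channel axis captures local
    cross-channel interaction (no dimensionality reduction); a sigmoid
    turns the result into per-channel weights that rescale x.
    """
    c = x.shape[0]
    y = x.mean(axis=(1, 2))                 # (C,) channel descriptors via GAP
    pad = k // 2
    yp = np.pad(y, pad, mode="edge")        # pad along the channel axis
    w = np.ones(k) / k                      # illustrative fixed kernel (learned in practice)
    z = np.array([np.dot(yp[i:i + k], w) for i in range(c)])
    a = 1.0 / (1.0 + np.exp(-z))            # sigmoid gate per channel
    return x * a[:, None, None]             # rescale the feature map
```

Because the convolution acts directly on the C channel descriptors, the module adds only k parameters, which is the "few parameters yet a performance gain" property the abstract claims.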