008  2023
260  $c 06/2023
653  $a Volume Crime Classification
653  $a Crime Detection
653  $a Malicious Activity Detection
653  $a Deep Learning
100 1  $a Atif Jan
700 1  $a Gul Muhammad Khan
245 00 $a Real World Anomalous Scene Detection and Classification using Multilayer Deep Neural Networks
856  $u https://www.ijimai.org/journal/sites/default/files/2023-05/ijimai8_2_15_0.pdf
300  $a 158-167
490 0  $v 8
520  $a Surveillance videos capture malicious events in a locality, and various machine learning algorithms are applied to detect them. Deep-learning algorithms, the most prominent AI methods, are both data-hungry and computationally expensive: they perform best when trained on a large and diverse set of examples. Such methods therefore benefit from human insight that frames the problem so as to reduce the overall computational cost. In this research work, a novel training methodology termed Bag of Focus (BoF) is proposed. BoF is based on the concept of selecting motion-intensive blocks from a long video for training different deep neural networks (DNNs). The methodology reduced the computational overhead by 90% (a tenfold reduction) compared with training on full-length videos. Networks trained using BoF were observed to be as effective as the same networks trained on the full-length dataset. In this research work, firstly, a fine-grained annotated dataset including instance and activity information was developed for real-world volume crimes. Secondly, the BoF-based methodology was introduced for effective training of state-of-the-art 3D and 2D Convolutional Neural Networks (CNNs). Lastly, a comparison of state-of-the-art networks is presented for malicious-event recognition in videos. The 2D CNN, despite having fewer parameters, achieved a promising classification accuracy of 98.7% and an area under the curve (AUC) of 99.7%.
022  $a 1989-1660
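
The 520 abstract describes the BoF selection step only at a high level (choosing motion-intensive blocks of a long video as training samples); the record contains no code, so the following is a minimal illustrative sketch in Python/OpenCV, not the authors' implementation. It assumes frame differencing as the motion measure, and the names motion_score, bag_of_focus, clip_len, and top_k are hypothetical:

    import cv2
    import numpy as np

    def motion_score(clip):
        # Mean absolute inter-frame difference over a clip of BGR frames;
        # higher values indicate more motion. (Assumed motion measure,
        # not taken from the paper.)
        grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in clip]
        diffs = [cv2.absdiff(a, b) for a, b in zip(grays, grays[1:])]
        return float(np.mean(diffs)) if diffs else 0.0

    def bag_of_focus(video_path, clip_len=16, top_k=10):
        # Split the video into fixed-length clips and keep only the
        # top_k most motion-intensive ones as training samples.
        cap = cv2.VideoCapture(video_path)
        clips, buf = [], []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            buf.append(frame)
            if len(buf) == clip_len:
                clips.append(buf)
                buf = []
        cap.release()
        return sorted(clips, key=motion_score, reverse=True)[:top_k]

Under these assumptions, retaining roughly one clip in ten would account for the abstract's reported tenfold (90%) reduction in training overhead; the retained clips can then be fed to a 2D or 3D CNN training pipeline as usual.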