A Low Cost and Computationally Efficient Approach for Occlusion Handling in Video Surveillance Systems

Authors

  • Rakesh Chandra Joshi, Govind Ballabh Pant University of Agriculture and Technology
  • Adithya Gaurav Singh, Govind Ballabh Pant University of Agriculture and Technology
  • Mayank Joshi, Govind Ballabh Pant University of Agriculture and Technology
  • Sanjay Mathur, Govind Ballabh Pant University of Agriculture and Technology

DOI:

https://doi.org/10.9781/ijimai.2019.01.001

Keywords:

Kalman Filter, Machine Learning, Video Surveillance, Virtual Assistant, Cascade Object Detector, Occlusion Handling, Video Signal Processing
Supporting Agencies
This work was supported by the scholarship provided to the first author for postgraduate studies by the Ministry of Human Resource Development (MHRD), Government of India.

Abstract

In the development of intelligent video surveillance systems for vehicle tracking, occlusion is one of the major challenges: it is difficult to retain features during occlusion, especially complete occlusion. In this paper, a target vehicle tracking algorithm for Smart Video Surveillance (SVS) is proposed that tracks an unidentified target vehicle even under occlusion, using a computationally efficient approach named the Kalman Filter Assisted Occlusion Handling (KFAOH) technique. The algorithm works through two periods, a tracking period when no occlusion is present and a detection period when occlusion occurs, which gives it its hybrid nature. A Kanade-Lucas-Tomasi (KLT) feature tracker governs the algorithm during the tracking period, whereas a Cascade Object Detector (COD) of weak classifiers, trained on a large database of cars, governs it during the detection period with the assistance of a Kalman Filter (KF). The algorithm's tracking efficiency has been tested in real time on six tracking scenarios of increasing complexity. Performance evaluation under different noise variances and illumination levels shows that the tracking algorithm is robust against high noise and low illumination. All tests were conducted on the MATLAB platform. The validity and practicality of the algorithm are also verified by success plots and precision plots for the test cases.
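The Kalman-filter assistance that the abstract describes can be illustrated with a minimal constant-velocity filter: while the target is visible, measurements from the tracker or detector correct the state; during occlusion, only the prediction step runs, extrapolating the target's position until the detector re-acquires it. The paper's experiments were done in MATLAB; the Python/NumPy sketch below is purely illustrative, and the state model, class name, and noise values are assumptions, not the authors' implementation.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal constant-velocity Kalman filter for a 2-D target position.

    Illustrative sketch only: KFAOH uses a Kalman filter to predict the
    target's location while a cascade detector re-acquires it during
    occlusion; the model and noise values here are assumed, not from the paper.
    """

    def __init__(self, x0, y0, dt=1.0, q=1e-2, r=1.0):
        # State vector: [x, y, vx, vy]; measurement vector: [x, y].
        self.x = np.array([x0, y0, 0.0, 0.0], dtype=float)
        self.F = np.array([[1, 0, dt, 0],      # state-transition model
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],       # measurement model
                           [0, 1, 0, 0]], dtype=float)
        self.P = np.eye(4)                     # state covariance
        self.Q = q * np.eye(4)                 # process noise
        self.R = r * np.eye(2)                 # measurement noise

    def predict(self):
        # Time update: project the state ahead; used alone during occlusion.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def correct(self, zx, zy):
        # Measurement update: fuse a detection (e.g. from the KLT tracker
        # or the cascade detector) back into the state estimate.
        z = np.array([zx, zy], dtype=float)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

In a hybrid loop of this kind, each frame either calls `predict()` followed by `correct()` (target visible) or `predict()` alone (target occluded), so the estimated trajectory continues smoothly through the occlusion.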


References

A. Bulling and H. Gellersen, “Toward Mobile Eye-Based Human-Computer Interaction,” in IEEE Pervasive Computing, vol. 9, no. 4, pp. 8-12, October-December 2010.

T. Ko, “A survey on behavior analysis in video surveillance for homeland security applications,” 2008 37th IEEE Applied Imagery Pattern Recognition Workshop, Washington DC, 2008, pp. 1-8.

A. Ratre and V. Pankajakshan, “Tucker tensor decomposition-based tracking and Gaussian mixture model for anomaly localisation and detection in surveillance videos,” in IET Computer Vision, vol. 12, no. 6, pp. 933-940, Sept. 2018.

E. Soyak, S. A. Tsaftaris and A. K. Katsaggelos, “Low-Complexity Tracking-Aware H.264 Video Compression for Transportation Surveillance,” in IEEE Transactions on Circuits and Systems for Video Technology, vol. 21, no. 10, pp. 1378-1389, Oct. 2011.

H. Kim, J. L. Gabbard, A. M. Anon and T. Misu, “Driver Behavior and Performance with Augmented Reality Pedestrian Collision Warning: An Outdoor User Study,” in IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 4, pp. 1515-1524, April 2018.

D. Beymer, P. McLauchlan, B. Coifman and J. Malik, “A real-time computer vision system for measuring traffic parameters,” Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, 1997, pp. 495-501.

Q. Zou, H. Ling, Y. Pang, Y. Huang and M. Tian, “Joint Headlight Pairing and Vehicle Tracking by Weighted Set Packing in Nighttime Traffic Videos,” in IEEE Transactions on Intelligent Transportation Systems, vol. 19, no. 6, pp. 1950-1961, June 2018.

A. Ghoneim, G. Muhammad, S. U. Amin and B. Gupta, “Medical Image Forgery Detection for Smart Healthcare,” in IEEE Communications Magazine, vol. 56, no. 4, pp. 33-37, April 2018.

S. Harput et al., “Two-Stage Motion Correction for Super-Resolution Ultrasound Imaging in Human Lower Limb,” in IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 65, no. 5, pp. 803-814, May 2018.

W. Paier, M. Kettern, A. Hilsmann and P. Eisert, “A Hybrid Approach for Facial Performance Analysis and Editing,” in IEEE Transactions on Circuits and Systems for Video Technology, vol. 27, no. 4, pp. 784-797, April 2017.

F. Wang, C.-W. Ngo and T.-C. Pong, “Gesture tracking and recognition for lecture video editing,” Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), 2004, pp. 934-937, vol. 3.

A. Y. Yang, S. Iyengar, S. Sastry, R. Bajcsy, P. Kuryloski and R. Jafari, “Distributed segmentation and classification of human actions using a wearable motion sensor network,” 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Anchorage, AK, 2008, pp. 1-8.

A. Jalal and S. Kamal, “Real-time life logging via a depth silhouette-based human activity recognition system for smart home services,” 2014 11th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Seoul, 2014, pp. 74-80.

Y. Song, J. Tang, F. Liu and S. Yan, “Body Surface Context: A New Robust Feature for Action Recognition From Depth Videos,” in IEEE Transactions on Circuits and Systems for Video Technology, vol. 24, no. 6, pp. 952-964, June 2014.

A. Jalal, S. Kamal and D. Kim, “Shape and Motion Features Approach for Activity Tracking and Recognition from Kinect Video Camera,” 2015 IEEE 29th International Conference on Advanced Information Networking and Applications Workshops, Gwangiu, 2015, pp. 445-450.

T. Senst, V. Eiselein and T. Sikora, “Robust Local Optical Flow for Feature Tracking,” in IEEE Transactions on Circuits and Systems for Video Technology, vol. 22, no. 9, pp. 1377-1387, Sept. 2012.

J. B. Kim, C. W. Lee, K. M. Lee, T. S. Yun and H. J. Kim, “Wavelet-based vehicle tracking for automatic traffic surveillance,” Proceedings of IEEE Region 10 International Conference on Electrical and Electronic Technology, TENCON 2001, 2001, pp. 313-316, vol. 1.

W. Kim and C. Jung, “Illumination-Invariant Background Subtraction: Comparative Review, Models, and Prospects,” in IEEE Access, vol. 5, pp. 8369-8384, 2017.

Y. Wen, Y. Lu, J. Yan, Z. Zhou, K. M. von Deneen and P. Shi, “An Algorithm for License Plate Recognition Applied to Intelligent Transportation System,” in IEEE Transactions on Intelligent Transportation Systems, vol. 12, no. 3, pp. 830-845, Sept. 2011.

V. Borisova and G. D. Yakovlev, “Target tracking based on object’s shape,” 2016 11th International Forum on Strategic Technology (IFOST), Novosibirsk, 2016, pp. 397-400.

M. E. Maresca and A. Petrosino, “The Matrioska Tracking Algorithm on LTDT2014 Dataset,” 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, Columbus, OH, 2014, pp. 720-725.

H. Kaur and J. S. Sahambi, “Vehicle Tracking in Video Using Fractional Feedback Kalman Filter,” in IEEE Transactions on Computational Imaging, vol. 2, no. 4, pp. 550-561, Dec. 2016.

M. M. Naushad Ali, M. Abdullah-Al-Wadud and S.-L. Lee, “Multiple object tracking with partial occlusion handling using salient feature points,” Information Sciences, vol. 278, pp. 448-465, 2014. https://doi.org/10.1016/j.ins.2014.03.064

B. Baheti, U. Baid and S. Talbar, “An approach to automatic object tracking system by combination of SIFT and RANSAC with mean shift and KLT,” 2016 Conference on Advances in Signal Processing (CASP), Pune, 2016, pp. 254-259.

Y. Xia, G. Li, R. Chen and P. Yu, “An object tracking method in two cameras with common view field,” 2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Shanghai, 2017, pp. 1-5.

H. Fan and H. Ling, “SANet: Structure-Aware Network for Visual Tracking,” 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, 2017, pp. 2217-2224.

C. Ma, J. Huang, X. Yang and M. Yang, “Hierarchical Convolutional Features for Visual Tracking,” 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, 2015, pp. 3074-3082.

H. Nam and B. Han, “Learning Multi-domain Convolutional Neural Networks for Visual Tracking,” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, 2016, pp. 4293-4302.

M. Danelljan, G. Häger, F. S. Khan and M. Felsberg, “Learning Spatially Regularized Correlation Filters for Visual Tracking,” 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, 2015, pp. 4310-4318.

J. Zhang, S. Ma and S. Sclaroff, “MEEM: Robust Tracking via Multiple Experts Using Entropy Minimization,” in Computer Vision – ECCV 2014, Lecture Notes in Computer Science, vol. 8694, Springer, Cham, 2014.

R. Tao, E. Gavves and A. W. M. Smeulders, “Siamese Instance Search for Tracking,” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, 2016, pp. 1420-1429.

J. F. Henriques, R. Caseiro, P. Martins, and J. Batista, “High-speed tracking with kernelized correlation filters,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 3, pp. 583–596, 2015.

H. Fan and H. Ling, “Parallel Tracking and Verifying: A Framework for Real-Time and High Accuracy Visual Tracking,” 2017 IEEE International Conference on Computer Vision (ICCV), Venice, 2017, pp. 5487-5495.

Q. Yu, T. B. Dinh, and G. Medioni, “Online Tracking and Reacquisition Using Co-Trained Generative and Discriminative Trackers,” Proceedings of the 10th European Conference on Computer Vision, 2008.

Z. Kalal, K. Mikolajczyk and J. Matas, “Tracking-Learning-Detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 7, pp. 1409-1422, July 2012.

Q. Hu, Y. Guo, Z. Lin, W. An and H. Cheng, “Object Tracking Using Multiple Features and Adaptive Model Updating,” IEEE Transactions on Instrumentation and Measurement, vol. 66, no. 11, pp. 2882-2897, November 2017.

J. Li, X. Zhou, S. Chan and S. Chen, “Robust Object Tracking via Large Margin and Scale-Adaptive Correlation Filter,” IEEE Access, vol. 6, pp. 12642-12655, 2018.

J. S. Kim, D. H. Yeom and Y. H. Joo, “Fast and robust algorithm of tracking multiple moving objects for intelligent video surveillance systems,” IEEE Transactions on Consumer Electronics, vol. 57, no. 3, pp. 1165-1170, August 2011.

S. Zhao, S. Zhang and L. Zhang, “Towards Occlusion Handling: Object Tracking With Background Estimation,” IEEE Transactions on Cybernetics, vol. 48, no. 7, pp. 2086-2100, July 2018.

P. K. R. Katuru, V. Josyula, P. Samudrala, M. Lokanath and P. Ashish, “Color Based Vehicle Detection and Tracking using Kalman Filter with a Fractional Feedback Loop,” Advances in Computational Sciences and Technology, vol. 10, pp. 2565-2585, 2017.

A. D. Worrall, R. F. Marslin, G. D. Sullivan and K. D. Baker, “Model-based Tracking,” in BMVC91, P. Mowforth (ed.), Springer, London, 1991. https://doi.org/10.1007/978-1-4471-1921-0_39

D. Koller, K. Daniilidis and H. H. Nagel, “Model-based object tracking in monocular image sequences of road traffic scenes,” International Journal of Computer Vision, vol. 10, pp. 257-272, 1993. https://doi.org/10.1007/BF01539538

P. KaewTraKulPong and R. Bowden, “An Improved Adaptive Background Mixture Model for Real-time Tracking with Shadow Detection,” in Video-Based Surveillance Systems, P. Remagnino et al. (eds.), Springer, Boston, MA, 2002. https://doi.org/10.1007/978-1-4615-0913-4_11

M. Yokoyama and T. Poggio, “A contour-based moving object detection and tracking,” IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance, 2005, pp. 271-276.

M. Kass, A. Witkin and D. Terzopoulos, “Snakes: Active contour models,” International Journal of Computer Vision, vol. 1, pp. 321-331, 1988. https://doi.org/10.1007/BF00133570

D. Comaniciu, V. Ramesh and P. Meer, “Kernel-based object tracking,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 5, pp. 564-577, May 2003.

E. Ali, E. U. Khan, E. Z. Mahmudi and R. Ullah, “A Comparison of FAST, SURF, Eigen, Harris, and MSER Features,” International Journal of Computer Engineering and Information Technology, vol. 8, no. 6, pp. 100-105, 2016.

N. N. Saunier and T. Sayed, “A feature-based tracking algorithm for vehicles in intersections,” 3rd Canadian Conference on Computer and Robot Vision (CRV’06), 2006, pp. 59-59.

V. Buddubariki, S. G. Tulluri and S. Mukherjee, “Multiple object tracking by improved KLT tracker over SURF features,” Fifth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG), 2015, pp. 1-4.

H. Schweitzer, R. Deng and R. F. Anderson, “A Dual-Bound Algorithm for Very Fast and Exact Template Matching,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 3, pp. 459-470, March 2011.

S. Oron, A. Bar-Hillel, D. Levi and S. Avidan, “Locally Orderless Tracking,” IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, 2012, pp. 1940-1947.

L. Sevilla-Lara and E. Learned-Miller, “Distribution fields for tracking,” IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, 2012, pp. 1910-1917.

T. B. Dinh, N. Vo and G. Medioni, “Context tracker: Exploring supporters and distracters in unconstrained environments,” IEEE Conference on Computer Vision and Pattern Recognition, 2011, pp. 1177-1184.

Y. Wu, J. Lim and M. Yang, “Object Tracking Benchmark,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1834-1848, September 2015.

S. D. Wei and S. H. Lei, “Fast template matching based on normalized cross correlation with adaptive multilevel winner update,” IEEE Transactions on Image Processing, vol. 17, no. 11, pp. 2227-2235, 2008.

D. G. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.

M. Calonder, V. Lepetit, M. Özuysal, T. Trzcinski, C. Strecha and P. Fua, “BRIEF: Computing a Local Binary Descriptor Very Fast,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 7, pp. 1281-1298, 2012.

H. Bay, T. Tuytelaars and L. Van Gool, “SURF: Speeded Up Robust Features,” Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346-359, 2008.

J. Shi and C. Tomasi, “Good features to track,” Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, 1994, pp. 593-600.

C. Tomasi and T. Kanade, “Detection and Tracking of Point Features,” Carnegie Mellon University Technical Report CMU-CS-91-132, April 1991.

R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, 2004. https://doi.org/10.1017/CBO9780511811685

P. Viola and M. Jones, “Rapid object detection using a boosted cascade of simple features,” IEEE Conference on Computer Vision and Pattern Recognition, 2001, pp. I-511–I-518.

A. Opelt, A. Pinz, M. Fussenegger and P. Auer, “Generic object recognition with boosting,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 3, pp. 416-431, March 2006.

G. Welch and G. Bishop, An Introduction to the Kalman Filter, Technical Report, University of North Carolina at Chapel Hill, 1995.

Y. Hua, K. Alahari and C. Schmid, “Online Object Tracking with Proposal Selection,” IEEE International Conference on Computer Vision (ICCV), Santiago, 2015, pp. 3092-3100.


Published

2019-12-01

How to Cite

Joshi, R. C., Singh, A. G., Joshi, M., and Mathur, S. (2019). A Low Cost and Computationally Efficient Approach for Occlusion Handling in Video Surveillance Systems. International Journal of Interactive Multimedia and Artificial Intelligence, 5(7), 28–38. https://doi.org/10.9781/ijimai.2019.01.001