A Method of the Coverage Ratio of Street Trees Based on Deep Learning.

Authors

W. Han, L. Cao, S. Xu

DOI:

https://doi.org/10.9781/ijimai.2022.07.003

Keywords:

Ecological Environment Index, Object Detection, Remote Sensing Image, Street Trees Coverage Ratio, Unet Network, YOLOv4

Supporting Agencies
This work was supported in part by the National Natural Science Foundation of China (Grant No. 62102184), in part by the Natural Science Foundation of Jiangsu Province (Grant No. BK20200784), in part by the Natural Science Foundation of the Higher Education Institutions of Jiangsu Province (Grant No. 19KJB520010), in part by the China Postdoctoral Science Foundation (Grant No. 2019M661852), and in part by the National Key Research and Development Program of China (Grant No. 2019YFD1100404).

Abstract

The street trees coverage ratio provides reliable data support for urban ecological environment assessment and plays an important part in calculating the ecological environment index. Aiming at the statistical estimation of the urban street trees coverage ratio, an integrated model based on the YOLOv4 and Unet networks is proposed to detect and extract street trees from remote sensing images and to estimate the street trees coverage ratio in those images accurately. Experiments are carried out on a self-made dataset, and the results show that the accuracy of street trees detection is 94.91%, and the street trees coverage ratio is 16.30% and 13.81% in the two experimental urban scenes. The MIoU of contour extraction is 98.25%, and the estimated coverage accuracy is improved by 6.89% and 5.79%, respectively. These results indicate that the proposed model automates the contour extraction of street trees and yields a more accurate estimate of the street trees coverage ratio.
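
To make the two reported metrics concrete, the following is a minimal sketch (not the authors' code) of how the street trees coverage ratio and the MIoU of contour extraction could be computed from a binary segmentation mask, assuming 1 marks street-tree pixels and 0 marks background; in the proposed pipeline such a mask would be produced by the YOLOv4 detection followed by Unet contour extraction.

import numpy as np

def coverage_ratio(tree_mask: np.ndarray) -> float:
    # Fraction of image pixels labelled as street trees (1 = tree, 0 = background).
    return float(tree_mask.sum()) / tree_mask.size

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int = 2) -> float:
    # Mean intersection-over-union over the background and tree classes.
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 4x4 example: the predicted mask marks 3 of 16 pixels as tree,
# so its coverage ratio is 18.75%; MIoU compares it with a reference mask.
pred = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[0, 0, 1, 1],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(f"coverage ratio: {coverage_ratio(pred):.2%}")  # 18.75%
print(f"mean IoU: {mean_iou(pred, target):.2%}")      # ~79.76%

The same two quantities, computed over full remote sensing scenes rather than a toy mask, correspond to the per-scene coverage ratios and the MIoU reported in the abstract.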

Published

2022-09-01

How to Cite

Han, W., Cao, L., and Xu, S. (2022). A Method of the Coverage Ratio of Street Trees Based on Deep Learning. International Journal of Interactive Multimedia and Artificial Intelligence, 7(5), 23–29. https://doi.org/10.9781/ijimai.2022.07.003