ED-Dehaze Net: Encoder and Decoder Dehaze Network.
DOI: https://doi.org/10.9781/ijimai.2022.08.008

Keywords: Image Dehazing, Encoder and Decoder Network, Generative Adversarial Network, Multi-Scale Convolution Block, Loss Function

Abstract
The presence of haze significantly degrades image quality, reducing contrast and blurring details. This paper proposes a novel end-to-end dehazing method, called the Encoder and Decoder Dehaze Network (ED-Dehaze Net), which consists of a Generator and a Discriminator. In particular, the Generator uses an Encoder-Decoder structure to effectively extract the texture and semantic features of hazy images. Between the Encoder and the Decoder, a Multi-Scale Convolution Block (MSCB) is used to enhance feature extraction. The proposed ED-Dehaze Net is trained with a combination of Adversarial Loss, Perceptual Loss, and Smooth L1 Loss. Quantitative and qualitative experimental results show that our method achieves state-of-the-art dehazing performance.
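The training objective described above is a weighted sum of three terms. A minimal sketch of how such a combined loss can be assembled is shown below; the Smooth L1 formulation is the standard Huber-style definition, and the weights (`w_smooth`, `w_perc`, `w_adv`) and the threshold `beta` are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    # Smooth L1 (Huber) loss: quadratic for small errors, linear for large ones.
    # beta is an assumed threshold, not taken from the paper.
    diff = np.abs(pred - target)
    return np.mean(np.where(diff < beta, 0.5 * diff**2 / beta, diff - 0.5 * beta))

def combined_loss(smooth_l1_term, perceptual_term, adversarial_term,
                  w_smooth=1.0, w_perc=0.5, w_adv=0.01):
    # Weighted sum of the three loss terms; the weights here are placeholders
    # chosen for illustration only.
    return (w_smooth * smooth_l1_term
            + w_perc * perceptual_term
            + w_adv * adversarial_term)
```

In practice the perceptual term would compare feature maps of the dehazed and ground-truth images from a pretrained network, and the adversarial term would come from the Discriminator's output; both are passed in here as precomputed scalars to keep the sketch self-contained.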