Demosaicking Algorithm Using Deep Residual Convolutional Network

Authors

Jin Wang, Siyou Guo, Qilei Li, David Camacho, Gwanggil Jeon
DOI:

https://doi.org/10.9781/ijimai.2026.6568

Keywords:

Color Channel, Convolutional Neural Network, Demosaicking, Residual Learning
Supporting Agencies

This work was supported by the Research Project Support Program for Excellence Institute (2022, ESL) at Incheon National University.

Abstract

Single-sensor imaging systems are widely deployed in portable devices such as digital cameras, smartphones, and personal digital assistants (PDAs) for real-time image acquisition. While convolutional neural networks (CNNs) have demonstrated exceptional capabilities in various image processing tasks, their potential for demosaicking remains underexplored. This paper presents a demosaicking framework based on a Deep Residual Convolutional Neural Network (DRCNN). First, we initialize the mosaicked images using conventional demosaicking algorithms, and then train the DRCNN on the three color channels. The proposed DRCNN architecture integrates three core components: Binary Convolution Units (BCUs) for computational efficiency, Efficient Layer Aggregation Networks (ELAN) for multi-scale feature fusion, and Dense Residual Blocks (DRBs) for enhanced gradient flow. Comprehensive evaluations demonstrate that the proposed algorithm outperforms existing approaches in PSNR, computational complexity, and visual quality.
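The two-stage design described in the abstract (a conventional demosaicking initialization, which the residual network then refines) can be illustrated with a minimal, dependency-free sketch of the first stage. The RGGB pattern choice, the 3×3 neighborhood averaging, and all function names below are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch (not the paper's code): simulate Bayer RGGB sampling
# and a simple bilinear-style initialization of the missing color channels.

def channel_of(y, x):
    """Channel recorded at pixel (y, x) under an assumed RGGB Bayer pattern."""
    if y % 2 == 0 and x % 2 == 0:
        return 0  # R
    if y % 2 == 1 and x % 2 == 1:
        return 2  # B
    return 1      # G

def bayer_sample(rgb):
    """Keep exactly one channel per pixel, as a single-sensor CFA would."""
    h, w = len(rgb), len(rgb[0])
    return [[rgb[y][x][channel_of(y, x)] for x in range(w)] for y in range(h)]

def bilinear_init(mosaic):
    """Fill each missing channel with the mean of same-channel 3x3 neighbors."""
    h, w = len(mosaic), len(mosaic[0])
    out = [[[0.0, 0.0, 0.0] for _ in range(w)] for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for c in range(3):
                if channel_of(y, x) == c:
                    out[y][x][c] = mosaic[y][x]  # measured value, keep as-is
                else:
                    # average every same-channel sample in the 3x3 window
                    vals = [mosaic[ny][nx]
                            for ny in range(max(0, y - 1), min(h, y + 2))
                            for nx in range(max(0, x - 1), min(w, x + 2))
                            if channel_of(ny, nx) == c]
                    out[y][x][c] = sum(vals) / len(vals)
    return out
```

In the full pipeline, the DRCNN would take an initialization of this kind and predict a residual correction per color channel, so the network only has to learn the (typically small, high-frequency) difference between the crude estimate and the ground-truth image.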

Author Biographies

Jin Wang, Incheon National University

Jin Wang received a B.S. in mathematics and applied mathematics from Zhejiang University in 2007, and M.S. and Ph.D. degrees in electronic communications engineering from Hanyang University. She was previously a professor at Xidian University and is currently a professor at Incheon National University.

Siyou Guo, Shandong University of Technology

Siyou Guo is pursuing an M.S. degree at the School of Electrical and Electronic Engineering, Shandong University of Technology, Zibo, China. His research interests include misinformation detection, deepfake detection, deep learning, and computer vision.

Qilei Li, Queen Mary University of London

Qilei Li received his Ph.D. in Computer Science from Queen Mary University of London. He previously earned an M.S. degree from Sichuan University in 2020. From June 2022 to April 2024, he worked as a machine learning scientist at Veritone Inc., where he focused on developing a scalable person search framework for retrieving individuals at different locations and times, as captured by various cameras. His current research interests lie in privacy-aware machine learning, with a particular emphasis on learning domain-invariant knowledge representations from multimodal data captured in diverse environments. His research output has been recognized as an ESI Highly Cited Paper (Top 1%). Additionally, he serves as an evaluator for the ELLIS PhD Program.

David Camacho, Universidad Politécnica de Madrid

David Camacho is a full professor in the Computer Systems Engineering Department of Universidad Politécnica de Madrid (UPM) and the head of the Applied Intelligence and Data Analysis research group (AIDA: https://aida.etsisi.uam.es) at UPM. He received his Ph.D. in Computer Science from Universidad Carlos III de Madrid in 2001 with honors (best thesis award in Computer Science). He has published more than 300 journal, book, and conference papers. His research interests include Machine Learning (Clustering/Deep Learning), Computational Intelligence (Evolutionary Computation, Swarm Intelligence), Social Network Analysis, and Fake News and Disinformation Analysis. He has participated in or led more than 50 Spanish and European research projects (H2020, DG Justice, ISFP, and Erasmus+) related to the design and application of artificial intelligence methods for data mining and optimization, addressing problems in industrial scenarios, aeronautics, aerospace engineering, cybercrime/cyber intelligence, social network applications, and video games, among others. He has served as Editor-in-Chief of Wiley's Expert Systems since 2023 and sits on the editorial boards of several journals, including Information Fusion, IEEE Transactions on Emerging Topics in Computational Intelligence (IEEE TETCI), Human-centric Computing and Information Sciences (HCIS), and Cognitive Computation.

Gwanggil Jeon, Incheon National University

Gwanggil Jeon received the B.S., M.S., and Ph.D. (summa cum laude) degrees from the Department of Electronics and Computer Engineering, Hanyang University, Seoul, Korea, in 2003, 2005, and 2008, respectively. From September 2009 to August 2011, he was with the School of Information Technology and Engineering, University of Ottawa,

Published

2026-02-24

How to Cite

Wang, J., Guo, S., Li, Q., Camacho, D., and Jeon, G. (2026). Demosaicking Algorithm Using Deep Residual Convolutional Network. International Journal of Interactive Multimedia and Artificial Intelligence, 1–10. https://doi.org/10.9781/ijimai.2026.6568

Section

Regular Articles