Improved Residual Dense Network for Large Scale Super-Resolution via Generative Adversarial Network

Inad A. Aljarrah
Eman M. Alshare

Abstract

Recent single image super-resolution (SISR) studies of remote sensing images have focused extensively on small upscaling factors such as x2 and x4, while far less work has addressed large factors such as x8 and x16. Owing to the high performance of generative adversarial networks (GANs), this paper implements two GAN frameworks to study SISR of remote sensing images at the large x8 scale factor, for which results are still unsatisfactory. The first framework, named RDGAN, embeds a modified version of the residual dense network (RDN) within a GAN. The second framework, named DSGAN, is built on the densely sampled super-resolution network (DSSR). The training loss combines an adversarial term, a mean squared error (MSE) term, and a perceptual loss derived from the VGG19 model. Training is optimized with Adam for a number of epochs and then switched to the SGD optimizer. The frameworks are validated on the dataset proposed in this work and on three other remote sensing datasets: UC Merced, WHU-RS19, and RSSCN7. Evaluation uses the following image quality assessment metrics: PSNR and SSIM on the RGB and Y channels, and MSE. On the proposed dataset, RDGAN achieved 26.02, 0.704, and 257.70 for PSNR, SSIM, and MSE, respectively, while DSGAN yielded 26.13, 0.708, and 251.89 for the same metrics.
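
As a rough illustration of the loss described above, the following PyTorch sketch (an assumption of ours, not the authors' released code) combines an adversarial term, a pixel-wise MSE term, and a VGG19 feature-space perceptual term; the chosen VGG layer, the loss weights, and all function names are illustrative.

# Hypothetical sketch of the composite generator loss (adversarial + MSE +
# VGG19 perceptual). Layer index and weights are assumptions, not the paper's values.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg19

class PerceptualLoss(nn.Module):
    """MSE between VGG19 feature maps of the super-resolved and ground-truth images."""
    def __init__(self, layer_index=35):            # assumed cut-off layer in vgg19.features
        super().__init__()
        backbone = vgg19(weights="DEFAULT")        # torchvision >= 0.13 weight API assumed
        self.features = backbone.features[:layer_index].eval()
        for p in self.features.parameters():
            p.requires_grad = False                # VGG19 acts as a fixed feature extractor

    def forward(self, sr, hr):
        return F.mse_loss(self.features(sr), self.features(hr))

def generator_loss(disc_fake_logits, sr, hr, perceptual,
                   w_adv=1e-3, w_pix=1.0, w_per=6e-3):      # weights are illustrative
    adv = F.binary_cross_entropy_with_logits(               # push D to label SR images as real
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    pix = F.mse_loss(sr, hr)                                # pixel-wise MSE term
    per = perceptual(sr, hr)                                # VGG19 feature-space term
    return w_adv * adv + w_pix * pix + w_per * per

The Adam-then-SGD schedule mentioned in the abstract would, in this setting, amount to constructing a fresh torch.optim.SGD over the same generator parameters at the switch-over epoch.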
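
For the reported quality metrics, a minimal NumPy sketch of MSE, PSNR, and the BT.601 Y-channel conversion commonly used in super-resolution evaluation is given below (8-bit images and float64 arithmetic are assumed); SSIM is usually computed with a windowed implementation such as scikit-image's structural_similarity and is not re-derived here.

import numpy as np

def mse(a, b):
    # mean squared error over all pixels and channels
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, max_val=255.0):
    # peak signal-to-noise ratio in dB; infinite for identical images
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(max_val ** 2 / m)

def rgb_to_y(img):
    # ITU-R BT.601 luma, the Y channel typically used for SR evaluation
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b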

Article Details

How to Cite
Aljarrah, I. A., & Alshare, E. M. (2022). Improved Residual Dense Network for Large Scale Super-Resolution via Generative Adversarial Network. International Journal of Communication Networks and Information Security (IJCNIS), 14(1). https://doi.org/10.17762/ijcnis.v14i1.5221 (Original work published April 12, 2022)
Section
Research Articles
Author Biography

Inad A. Aljarrah, Jordan University of Science and Technology

Associate Professor, Department of Computer Engineering

References

C. H. Chuang, L. W. Tsai, M. S. Deng, J. W. Hsieh and K. C. Fan, “Vehicle license plate recognition using super-resolution technique,” 2014 11th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS); 2014: IEEE.

G. Gao, D. Zhu, M. Yang, H. Lu, W. Yang and H. Gao, “Face image super-resolution with pose via nuclear norm regularized structural orthogonal procrustes regression,” Neural Computing and Applications. 2020;32(9): 4361-4371.

C. H. Pham, A. Ducournau, R. Fablet and F. Rousseau, “Brain MRI super-resolution using deep 3D convolutional networks,” 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017); 2017: IEEE.

V. H. Patil and D. S. Bormane, “Interpolation for super resolution imaging,” Innovations and Advanced Techniques in Computer and Information Sciences and Engineering: Springer; 2007. p. 483-489.

W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, et al., “Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network,” Proceedings of the IEEE conference on computer vision and pattern recognition; 2016. https://cv-foundation.org/openaccess/content_cvpr_2016/papers/Shi_Real-Time_Single_Image_CVPR_2016_paper.pdf

M. R. Arefin, V. Michalski, P. L. St-Charles, A. Kalaitzis, S. Kim, S. E. Kahou, et al., “Multi-image super-resolution for remote sensing using deep recurrent networks,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops; 2020. https://openaccess.thecvf.com/content_CVPRW_2020/papers/w11/Arefin_Multi-Image_Super-Resolution_for_Remote_Sensing_Using_Deep_Recurrent_Networks_CVPRW_2020_paper.pdf

C. Dong, C. C. Loy and X. Tang, “Accelerating the super-resolution convolutional neural network,” European conference on computer vision; 2016: Springer.

C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, et al., “Photo-realistic single image super-resolution using a generative adversarial network,” Proceedings of the IEEE conference on computer vision and pattern recognition; 2017. https://openaccess.thecvf.com/content_cvpr_2017/papers/Ledig_Photo-Realistic_Single_Image_CVPR_2017_paper.pdf

K. He, X. Zhang, S. Ren and J. Sun, “Deep residual learning for image recognition,” Proceedings of the IEEE conference on computer vision and pattern recognition; 2016. https://openaccess.thecvf.com/content_cvpr_2016/papers/He_Deep_Residual_Learning_CVPR_2016_paper.pdf

W. Ma, Z. Pan, F. Yuan and B. Lei, “Super-resolution of remote sensing images via a dense residual generative adversarial network,” Remote Sensing. 2019;11(21):2578.

J. Kim, J. K. Lee and K. M. Lee, “Deeply-recursive convolutional network for image super-resolution,” Proceedings of the IEEE conference on computer vision and pattern recognition; 2016. https://openaccess.thecvf.com/content_cvpr_2016/papers/Kim_Deeply-Recursive_Convolutional_Network_CVPR_2016_paper.pdf

C. Dong, C. C. Loy, K. He and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE transactions on pattern analysis and machine intelligence. 2015;38(2):295-307. https://arxiv.org/pdf/1501.00092.pdf

Y. Tai, J. Yang and X. Liu, “Image super-resolution via deep recursive residual network,” Proceedings of the IEEE conference on computer vision and pattern recognition; 2017. https://openaccess.thecvf.com/content_cvpr_2017/papers/Tai_Image_Super-Resolution_via_CVPR_2017_paper.pdf

M. Zhao, X. Liu, H. Liu and K. K. L. Wong, “Super-resolution of cardiac magnetic resonance images using Laplacian pyramid based on generative adversarial networks,” Computerized Medical Imaging and Graphics. 2020; 80:101698.

Y. Yu, X. Li and F. Liu, “E-DBPN: Enhanced deep back-projection networks for remote sensing scene image super resolution,” IEEE Transactions on Geoscience and Remote Sensing. 2020;58(8):5503-5515.

X. Mao, C. Shen and Y. B. Yang, “Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections,” arXiv preprint arXiv:1603.09056. 2016.

J. Kim, J. K. Lee and K. M. Lee, “Accurate image super-resolution using very deep convolutional networks,” Proceedings of the IEEE conference on computer vision and pattern recognition; 2016. https://openaccess.thecvf.com/content_cvpr_2016/papers/Kim_Accurate_Image_Super-Resolution_CVPR_2016_paper.pdf

Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong and Y. Fu, “Image super-resolution using very deep residual channel attention networks,” Proceedings of the European conference on computer vision (ECCV); 2018. https://openaccess.thecvf.com/content_ECCV_2018/papers/Yulun_Zhang_Image_Super-Resolution_Using_ECCV_2018_paper.pdf

J. M. Haut, R. Fernandez-Beltran, M. E. Paoletti, J. Plaza and A. Plaza, “Remote sensing image superresolution using deep residual channel attention,” IEEE Transactions on Geoscience and Remote Sensing. 2019;57(11):9277-9289.

B. Lim, S. Son, H. Kim, S. Nah and K. Mu Lee, “Enhanced deep residual networks for single image super-resolution,” Proceedings of the IEEE conference on computer vision and pattern recognition workshops; 2017. https://openaccess.thecvf.com/content_cvpr_2017_workshops/w12/papers/Lim_Enhanced_Deep_Residual_CVPR_2017_paper.pdf

Y. Tai, J. Yang, X. Liu and C. Xu, “MemNet: A persistent memory network for image restoration,” Proceedings of the IEEE international conference on computer vision; 2017.

D. W. Chen and C. H. Kuo, “Modified Dual Path Network With Transform Domain Data for Image Super-Resolution,” IEEE Access. 2020;8: 97975-97985.

K. Jiang, Z. Wang, P. Yi and J. Jiang, “Hierarchical dense recursive network for image super-resolution,” Pattern Recognition. 2020; 107:107475.

K. Nazeri, H. Thasarathan and M. Ebrahimi, “Edge-informed single image super-resolution,” Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops; 2019. https://openaccess.thecvf.com/content_ICCVW_2019/papers/AIM/Nazeri_Edge-Informed_Single_Image_Super-Resolution_ICCVW_2019_paper.pdf

J. Ma, X. Wang and J. Jiang, “Image super resolution via dense discriminative network,” IEEE Transactions on Industrial Electronics. 2019;67(7): 5687-5695.

Y. Wang, L. Wang, H. Wang and P. Li, “End-to-end image super-resolution via deep and shallow convolutional networks,” IEEE Access. 2019;7: 31959-31970.

D. Chen, Z. He, Y. Cao, J. Yang, Y. Cao, M. Y. Yang, et al., “Deep Neural Network for Fast and Accurate Single Image Super-Resolution via Channel-Attention-based Fusion of Orientation-aware Features,” arXiv preprint arXiv:1912.04016. 2019. https://arxiv.org/pdf/1912.04016.pdf

T. Shang, Q. Dai, S. Zhu, T. Yang and Y. Guo, “Perceptual extreme super-resolution network with receptive field block,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops; 2020. https://openaccess.thecvf.com/content_CVPRW_2020/papers/w31/Shang_Perceptual_Extreme_Super-Resolution_Network_With_Receptive_Field_Block_CVPRW_2020_paper.pdf

J. Johnson, A. Alahi and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” European conference on computer vision; 2016: Springer.

K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556. 2014. https://arxiv.org/pdf/1409.1556.pdf

K. He, X. Zhang, S. Ren and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on imagenet classification,” Proceedings of the IEEE international conference on computer vision; 2015. https://openaccess.thecvf.com/content_iccv_2015/papers/He_Delving_Deep_into_ICCV_2015_paper.pdf

X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, et al., “ESRGAN: Enhanced super-resolution generative adversarial networks,” Proceedings of the European Conference on Computer Vision (ECCV) Workshops; 2018. https://openaccess.thecvf.com/content_ECCVW_2018/papers/11133/Wang_ESRGAN_Enhanced_Super-Resolution_Generative_Adversarial_Networks_ECCVW_2018_paper.pdf

B. Xu, N. Wang, T. Chen and M. Li, “Empirical evaluation of rectified activations in convolutional network,” arXiv preprint arXiv:1505.00853. 2015. https://arxiv.org/pdf/1505.00853.pdf

A. Jolicoeur-Martineau, “The relativistic discriminator: a key element missing from standard GAN,” arXiv preprint arXiv:1807.00734. 2018. https://arxiv.org/pdf/1807.00734.pdf

D. Lee, S. Lee, H. Lee, K. Lee and H. J. Lee, “Resolution-preserving generative adversarial networks for image enhancement,” IEEE Access. 2019;7: 110344-110357.

I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin and A. Courville, “Improved training of Wasserstein GANs,” arXiv preprint arXiv:1704.00028. 2017. https://arxiv.org/pdf/1704.00028.pdf

K. Jiang, Z. Wang, P. Yi, G. Wang, T. Lu and J. Jiang, “Edge-enhanced GAN for remote sensing image superresolution,” IEEE Transactions on Geoscience and Remote Sensing. 2019; 57(8):5799-5812.

C. Ma, Y. Rao, Y. Cheng, C. Chen, J. Lu and J. Zhou, “Structure-preserving super resolution with gradient guidance,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2020. https://openaccess.thecvf.com/content_CVPR_2020/papers/Ma_Structure-Preserving_Super_Resolution_With_Gradient_Guidance_CVPR_2020_paper.pdf

Y. Zhang, Y. Tian, Y. Kong, B. Zhong and Y. Fu, “Residual dense network for image super-resolution,” Proceedings of the IEEE conference on computer vision and pattern recognition; 2018. https://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_Residual_Dense_Network_CVPR_2018_paper.pdf

X. Dong, X. Sun, X. Jia, Z. Xi, L. Gao and B. Zhang, “Remote sensing image super-resolution using novel dense-sampling networks,” IEEE Transactions on Geoscience and Remote Sensing. 2020;59(2):1618-1633.

Y. Yang and S. Newsam, “Bag-of-visual-words and spatial extensions for land-use classification,” Proceedings of the 18th SIGSPATIAL international conference on advances in geographic information systems; 2010. https://faculty.ucmerced.edu/snewsam/papers/Yang_ACMGIS10_BagOfVisualWords.pdf

D. Dai and W. Yang, “Satellite image classification via two-layer sparse coding with biased image representation,” IEEE Geoscience and Remote Sensing Letters. 2010;8(1):173-176. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.703.6870&rep=rep1&type=pdf

Q. Zou, L. Ni, T. Zhang and Q. Wang, “Deep learning based feature selection for remote sensing scene classification,” IEEE Geoscience and Remote Sensing Letters. 2015;12(11):2321-2325. http://mvr.whu.edu.cn/pubs/2015-IEEE_GRSL.pdf

D. Zhang, J. Shao, X. Li and H. T. Shen, “Remote sensing image super-resolution via mixed high-order attention network,” IEEE Transactions on Geoscience and Remote Sensing. 2020;59(6): 5183-5196.