Based on Improved U2-Net for Visible to Infrared Image Translation


DOI: 10.23977/jipta.2025.080105

Author(s)

Yinxiang Fan 1

Affiliation(s)

1 Yunnan Normal University, Kunming, China

Corresponding Author

Yinxiang Fan

ABSTRACT

With the rapid development of artificial intelligence technology, significant progress has been made in image generation and translation. However, traditional Generative Adversarial Networks (GANs) [1] suffer from training instability, mode collapse, and artifacts in image translation tasks. This paper proposes an improved method based on U2-Net [2] for visible-to-infrared image translation. A Criss-Cross Attention (CCA) module [3] is introduced into the deep layers of U2-Net to enhance the model's ability to capture global information, and a frequency-domain loss function is used to improve the quality of the generated images. Experimental results show that the improved model outperforms the original U2-Net on multiple evaluation metrics.
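The exact form of the paper's frequency-domain loss is not given on this page. Below is a minimal PyTorch-style sketch of one common formulation, an L1 penalty on the 2-D FFT spectra of the generated and ground-truth infrared images, assuming paired training data. The class name FrequencyDomainLoss and the weighting factor lambda_freq (0.1) are illustrative placeholders, not values taken from the paper.

import torch
import torch.nn as nn

class FrequencyDomainLoss(nn.Module):
    # Hypothetical frequency-domain constraint: L1 distance between the
    # 2-D FFT spectra of the generated and target infrared images.
    def forward(self, generated, target):
        gen_fft = torch.fft.fft2(generated, norm="ortho")  # complex spectrum of generator output
        tgt_fft = torch.fft.fft2(target, norm="ortho")     # complex spectrum of ground-truth image
        # Penalizing the complex-valued difference constrains both the
        # low- and high-frequency content of the translated image.
        return torch.mean(torch.abs(gen_fft - tgt_fft))

# Example usage: combine with a pixel-wise L1 term.
pixel_loss = nn.L1Loss()
freq_loss = FrequencyDomainLoss()
fake_ir = torch.rand(4, 1, 256, 256)   # batch of generated infrared images
real_ir = torch.rand(4, 1, 256, 256)   # batch of ground-truth infrared images
lambda_freq = 0.1                      # illustrative weight, not reported in the paper
total_loss = pixel_loss(fake_ir, real_ir) + lambda_freq * freq_loss(fake_ir, real_ir)

In this sketch the frequency term supplements, rather than replaces, the spatial reconstruction loss, which matches the abstract's description of the loss as an additional optimization constraint on image quality.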

KEYWORDS

Image Translation, U2-Net, Criss-Cross Attention Module, Frequency Domain Constraint, Infrared Image Generation

CITE THIS PAPER

Yinxiang Fan, Based on Improved U2-Net for Visible to Infrared Image Translation. Journal of Image Processing Theory and Applications (2025) Vol. 8: 39-44. DOI: http://dx.doi.org/10.23977/jipta.2025.080105.

REFERENCES

[1] Labaca-Castro R. Generative Adversarial Nets[M]. Springer Vieweg, Wiesbaden, 2023.
[2] Qin X, Zhang Z, Huang C, et al. U2-Net: Going deeper with nested U-structure for salient object detection[J]. Pattern Recognition, 2020, 106: 107404.
[3] Huang Z, Wang X, Wei Y, et al. CCNet: Criss-Cross Attention for Semantic Segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, PP(99): 1-1.
[4] Henry J, Natalie T, Madsen D. Pix2pix GAN for image-to-image translation[J]. ResearchGate Publication, 2021: 1-5.
[5] Zhu J Y, Park T, Isola P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]. Proceedings of the IEEE International Conference on Computer Vision, 2017: 2223-2232.
[6] Mirza M, Osindero S. Conditional Generative Adversarial Nets[J]. Computer Science, 2014: 2672-2680.
[7] Wang Kunfeng, Gou Chao, Duan Yanjie, et al. Research progress and prospects of Generative Adversarial Networks (GANs)[J]. Journal of Automation, 2017, 43(3): 321-332.
[8] Chen Foji, Zhu Feng, Wu Qingxiao, et al. A Review of Generative Adversarial Networks and Their Applications in Image Generation[J]. Journal of Computer Science, 2021, 44(2): 347-369.
[9] Karras T, Laine S, Aittala M, et al. Analyzing and improving the image quality of StyleGAN[C]. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 8110-8119.
[10] Kingma D P, Dhariwal P. Glow: Generative flow with invertible 1x1 convolutions[J]. Advances in Neural Information Processing Systems, 2018, 31.
[11] Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation[C]. International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer International Publishing, 2015: 234-241.
[12] Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015: 3431-3440.
[13] Badrinarayanan V, Kendall A, Cipolla R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(12): 2481-2495.
[14] Jia X, Zhu C, Li M, et al. LLVIP: A visible-infrared paired dataset for low-light vision[C]. Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021: 3496-3504.

All published work is licensed under a Creative Commons Attribution 4.0 International License.
