
ELF-CandyGAN: A Candy Color Coloring Method for Image Local Feature Enhancement


DOI: 10.23977/acss.2025.090302

Author(s)

Mei Xu 1, Huan Xu 1

Affiliation(s)

1 School of Information Engineering, Technology & Media University of Henan Kaifeng, Kaifeng, Henan, 475000, China

Corresponding Author

Mei Xu

ABSTRACT

Candy color is a recent trend in photography; its high brightness, low saturation, and low contrast offer a distinctive color experience. The CandyCycleGAN network performs well in transforming ordinary colors into candy colors, but it handles some image details poorly. To address this problem, this paper proposes ELF-CandyGAN, a candy color coloring method based on local feature enhancement of the image. First, building on a U-Net generator, a color learning module is designed in the downsampling path to learn the distribution, relationships, and features of candy colors, helping the network better learn and understand color information while preserving the naturalness and authenticity of the colors; a distinctive skip connection is also designed that adds the upper-layer convolution results into the lower-layer convolution results. Second, a global context module is introduced into the color learning module to keep the chromaticity values learned throughout training within the candy color range; this module also reduces computational complexity and accelerates network training. Next, a feature enhancement module is designed that applies additional enhancement operations to mine and process network features more deeply, improving the performance and effectiveness of the network in the coloring task; it also helps to restore detail information lost during downsampling. Furthermore, a dual-discriminator network based on PatchGAN is constructed: the first discriminator, D1, adopts a multi-scale discriminative structure to guide the generator toward richer image details, while the second discriminator, D2, computes the structural similarity between the generated image and the original input image, encouraging the generator to produce structurally consistent outputs. Finally, a feature structure loss function is proposed to constrain the structural similarity between the generated and input images, so that generated images retain more of the original detail features and exhibit higher realism.
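
The abstract states that the generator adds upper-layer convolution results into the lower-layer convolution results. Below is a minimal PyTorch sketch of such an additive skip connection in a U-Net-style decoder; the module name AdditiveSkipUp, the layer choices, and the channel sizes are illustrative assumptions, not the paper's exact design.

import torch
import torch.nn as nn

class AdditiveSkipUp(nn.Module):
    """One decoder step: upsample the deeper feature, add the encoder skip."""
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_channels, out_channels,
                                     kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(out_channels, out_channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, deep: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        # Element-wise addition folds the encoder (upper-layer) result into
        # the decoder (lower-layer) result instead of concatenating channels.
        return self.conv(self.up(deep) + skip)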
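The global context module builds on GCNet (reference [10]). The sketch below shows a GCNet-style global context block: a softmax-weighted pooling summarizes all spatial positions into a single context vector, which a bottleneck transform projects and broadcast-adds back onto every position. The reduction ratio and names are assumptions; the paper's exact variant may differ.

import torch
import torch.nn as nn

class GlobalContextBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # 1x1 conv produces one attention logit per spatial position.
        self.context_mask = nn.Conv2d(channels, 1, kernel_size=1)
        self.softmax = nn.Softmax(dim=-1)
        # Bottleneck transform keeps the extra computation small.
        self.transform = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.LayerNorm([channels // reduction, 1, 1]),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Context modeling: softmax-weighted pooling over all positions.
        mask = self.softmax(self.context_mask(x).view(b, 1, h * w))     # (b, 1, hw)
        context = torch.bmm(x.view(b, c, h * w), mask.transpose(1, 2))  # (b, c, 1)
        context = context.view(b, c, 1, 1)
        # Broadcast-add the transformed context back onto every position.
        return x + self.transform(context)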
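For the multi-scale discriminator D1, a common construction (assumed here, since this page does not give the exact architecture) applies the same PatchGAN-style discriminator to progressively downsampled copies of the image, so both fine texture and coarser structure are judged. Layer widths and the number of scales below are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

def patch_discriminator(in_channels: int = 3) -> nn.Sequential:
    """PatchGAN body mapping an image to a map of per-patch real/fake logits."""
    def block(cin, cout, stride):
        return [nn.Conv2d(cin, cout, 4, stride, 1),
                nn.InstanceNorm2d(cout),
                nn.LeakyReLU(0.2, inplace=True)]
    return nn.Sequential(
        nn.Conv2d(in_channels, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
        *block(64, 128, 2), *block(128, 256, 2), *block(256, 512, 1),
        nn.Conv2d(512, 1, 4, 1, 1),  # one logit per local patch
    )

class MultiScaleDiscriminator(nn.Module):
    def __init__(self, num_scales: int = 3):
        super().__init__()
        self.scales = nn.ModuleList(patch_discriminator()
                                    for _ in range(num_scales))

    def forward(self, img: torch.Tensor) -> list:
        outputs = []
        for d in self.scales:
            outputs.append(d(img))
            # Halve the resolution before the next, coarser-scale discriminator.
            img = F.avg_pool2d(img, kernel_size=3, stride=2, padding=1)
        return outputs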
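The feature structure loss constrains the structural similarity between the generated image G(x) and the input x, for which SSIM (reference [13]) is the natural measure. A hedged sketch follows: the uniform 11x11 window and the constants c1, c2 follow common SSIM practice, and the loss is taken as 1 - SSIM; the paper's exact weighting is not shown on this page.

import torch
import torch.nn.functional as F

def ssim(x: torch.Tensor, y: torch.Tensor, window: int = 11,
         c1: float = 0.01 ** 2, c2: float = 0.03 ** 2) -> torch.Tensor:
    """Mean SSIM over uniform local windows; inputs assumed scaled to [0, 1]."""
    pad = window // 2
    channels = x.size(1)
    # Depthwise box filter computes local means per channel.
    kernel = torch.ones(channels, 1, window, window,
                        device=x.device) / window ** 2
    mu_x = F.conv2d(x, kernel, padding=pad, groups=channels)
    mu_y = F.conv2d(y, kernel, padding=pad, groups=channels)
    sigma_x = F.conv2d(x * x, kernel, padding=pad, groups=channels) - mu_x ** 2
    sigma_y = F.conv2d(y * y, kernel, padding=pad, groups=channels) - mu_y ** 2
    sigma_xy = F.conv2d(x * y, kernel, padding=pad, groups=channels) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return (num / den).mean()

def feature_structure_loss(generated: torch.Tensor,
                           original: torch.Tensor) -> torch.Tensor:
    # Higher SSIM means more of the input's structure was kept,
    # so the quantity to minimize is 1 - SSIM.
    return 1.0 - ssim(generated, original)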

KEYWORDS

Local Feature Enhancement; Candy Color; Generative Adversarial Networks; Bi-discriminator Networks; Feature Structure Loss Function

CITE THIS PAPER

Mei Xu, Huan Xu. ELF-CandyGAN: A Candy Color Coloring Method for Image Local Feature Enhancement. Advances in Computer, Signals and Systems (2025) Vol. 9: 7-20. DOI: http://dx.doi.org/10.23977/acss.2025.090302.

REFERENCES

[1] Zongnan Chen, Yaoguang Ye, Jiahui Pan. Grayscale Image Colorization Method Based on CycleGAN [J]. Computer Systems & Applications, 2023, 32(08): 126-132. DOI: 10.15888/j.cnki.csa.009195.
[2] Shisong Zhu, Mei Xu, Bibo Lu, et al. CandyCycleGAN: Candy Color Coloring Algorithm Based on Chromaticity Verification [J]. Academic Journal of Computing & Information Science, 2023, 6(13).
[3] Wenhua Ding, Junwei Du, Lei Hou, et al. Fashion Content and Style Transfer Based on Generative Adversarial Network [J/OL]. Computer Engineering and Applications: 1-11 [2023-07-05].
[4] Wenhui Qin, Yilai Zhang. Design and Implementation of Multi-school Style Transfer Algorithm [J]. Fujian Computer, 2023, 39(04): 42-48. DOI: 10.16707/j.cnki.fjpc.2023.04.008.
[5] Jiawei Zhou, Zikang Huang, Junhao Peng, et al. Image Recoloring Based on Moving Least Squares [J]. Journal of Zhejiang University (Science Edition), 2025, 52(01): 10-21.
[6] Sihong Meng, Hao Liu, Haotian Fang, et al. Image Colorization Based on Semantic Similarity Propagation [J]. Journal of Graphics, 2025, 46(01): 126-138.
[7] Hongan Li, Qiaoxue Zheng, Jing Zhang, et al. Grayscale Image Colorization Method Combined with Pix2Pix Generative Adversarial Network [J]. Journal of Computer-Aided Design & Computer Graphics, 2021, 33(06): 929-938.
[8] Liang Z X, Li Z C, Zhou S C, et al. Control Color: Multimodal Diffusion-Based Interactive Image Colorization [EB/OL]. 2024-05-07. https://arxiv.org/abs/2402.10855.
[9] Shi W, Caballero J, Huszár F, et al. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network [C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 1874-1883.
[10] Cao Y, Xu J, Lin S, et al. GCNet: Non-Local Networks Meet Squeeze-Excitation Networks and Beyond [C]. Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, 2019.
[11] Yiting Ruan. Research on Interactive Line Art Coloring Method Based on Generative Adversarial Network [D]. Zhejiang University, 2023. DOI: 10.27461/d.cnki.gzjdx.2022.000937.
[12] Wenhao He, Haibo Ge. Camouflaged Object Detection Method Based on Local-Global Feature Mutual Compensation [J/OL]. Journal of Computer Science and Technology: 1-15 [2024-03-11]. http://kns.cnki.net/kcms/detail/11.5602.TP.20240226.1044.006.html.
[13] Zhou Wang, A. C. Bovik, H. R. Sheikh, et al. Image Quality Assessment: From Error Visibility to Structural Similarity [J]. IEEE Transactions on Image Processing, 2004, 13(4): 600-612.

All published work is licensed under a Creative Commons Attribution 4.0 International License.
