
Research on Face Attribute Editing Method Based on Deep Neural Network


DOI: 10.23977/jsoce.2021.030503

Author(s)

Caoshuai Kang 1

Affiliation(s)

1 School of Computer Science and Technology, China University of Mining and Technology, Xuzhou 221116, China

Corresponding Author

Caoshuai Kang

ABSTRACT

With the development of computer vision technology, face attribute editing has found increasing application in real-world scenarios. Face attribute editing aims to change one or more attributes of a face image, such as hair color, skin color, or age, while keeping the other attributes unchanged. The key challenge of attribute editing is to maintain both the visual quality of the edited image and the accuracy of the target attributes. Most existing methods focus on editing the presence or absence of an attribute and cannot add or remove attributes that follow a specific template, such as bangs with a particular shape. On the other hand, when multiple attributes are edited at the same time, existing methods struggle to decouple them in the feature space, which leads to defects such as artifacts, attribute-irrelevant facial deformation, and background changes. To address these problems, this paper studies face attribute editing methods based on deep neural networks.
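To make the idea of editing target attributes while preserving the rest of the face concrete, the following is a minimal sketch of an attribute-conditioned encoder-decoder generator in PyTorch. It is not the paper's model: the class name `AttributeEditor`, the attribute dimension, and the network sizes are illustrative assumptions; real methods additionally train such a generator adversarially with attribute classification and reconstruction losses.

```python
# Hypothetical sketch, not the paper's architecture: an encoder maps the face image
# to a feature map, the target attribute vector (e.g. [blond hair, bangs, older]) is
# broadcast over spatial positions and concatenated to that feature map, and a
# decoder produces the edited image.
import torch
import torch.nn as nn


class AttributeEditor(nn.Module):
    def __init__(self, attr_dim: int = 3):
        super().__init__()
        self.attr_dim = attr_dim
        # Encoder: 128x128x3 image -> 32x32x128 feature map
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder: feature map + broadcast attribute channels -> 128x128x3 image
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128 + attr_dim, 64, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, image: torch.Tensor, target_attrs: torch.Tensor) -> torch.Tensor:
        feat = self.encoder(image)  # (B, 128, H/4, W/4)
        b, _, h, w = feat.shape
        # Broadcast the attribute vector to every spatial location and inject it,
        # so the decoder can condition the edit on the requested attributes.
        attrs = target_attrs.view(b, self.attr_dim, 1, 1).expand(b, self.attr_dim, h, w)
        return self.decoder(torch.cat([feat, attrs], dim=1))


if __name__ == "__main__":
    # Toy usage: edit two 128x128 images toward different target attribute vectors.
    model = AttributeEditor(attr_dim=3)
    images = torch.randn(2, 3, 128, 128)
    target = torch.tensor([[1.0, 0.0, 0.0],   # e.g. turn "blond hair" on
                           [0.0, 1.0, 1.0]])  # e.g. turn "bangs" and "older" on
    edited = model(images, target)
    print(edited.shape)  # torch.Size([2, 3, 128, 128])
```

In GAN-based variants such as those cited in the references, a discriminator with an auxiliary attribute classifier would push the edited image to carry the target attributes while a cycle or reconstruction loss keeps identity and background intact.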

KEYWORDS

Face attribute editing, Deep neural network, Deep learning, Transfer learning

CITE THIS PAPER

Caoshuai Kang. Research on Face Attribute Editing Method Based on Deep Neural Network. Journal of Sociology and Ethnology (2021) 3: 17-20. DOI: http://dx.doi.org/10.23977/jsoce.2021.030503

