Open Access

Research on Spatial Transformation in Image Based on Deep Learning

DOI: 10.23977/meet.2019.93773

Author(s)

Peng Gao, Qingxuan Jia

Corresponding Author

Peng Gao

ABSTRACT

In computer graphics, synthesizing a novel view of a 3D object from a single-perspective image is an important problem. Because projecting a 3D object into image space causes partial occlusion or self-occlusion, part of the object is unobservable, so the synthesis must infer the object's spatial structure and pose; the uncertainty introduced by occlusion is the central difficulty. In this paper, we address the problem with a convolutional neural network (CNN) trained on a dataset of images containing multiple chairs. First, building on related networks, we propose a novel multi-parallel, multi-level encoder-decoder network that transforms a single-perspective image, together with angle semantic information, into a synthesized image from a new viewpoint in an end-to-end manner. Second, we construct a dataset and train the network on it. Finally, we show that the network achieves smoother edges and higher precision in image synthesis than state-of-the-art networks.
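The data flow the abstract describes — encode the input view to a latent code, append the target viewing angle as a semantic vector, then decode back to image space — can be sketched as follows. This is a minimal illustration under assumed names and dimensions, not the authors' network: the real model uses multi-parallel, multi-level convolutional encoder-decoder branches, while here plain matrix-vector products stand in so the example is self-contained.

```python
import random

random.seed(0)

IMG, LATENT, ANGLES = 16, 8, 4   # 4x4 image, 8-dim latent code, 4 viewing-angle bins


def linear(w, x):
    """Dense layer: w is a list of weight rows, x an input vector."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]


def relu(v):
    return [max(0.0, u) for u in v]


def rand_w(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]


W_enc = rand_w(LATENT, IMG)           # encoder weights (hypothetical, untrained)
W_dec = rand_w(IMG, LATENT + ANGLES)  # decoder weights, conditioned on the angle


def synthesize(image, angle_bin):
    """Map a source view plus a target-angle one-hot to a new-view image."""
    code = relu(linear(W_enc, image))                              # encode
    onehot = [1.0 if i == angle_bin else 0.0 for i in range(ANGLES)]
    return linear(W_dec, code + onehot)                            # decode


source_view = [random.random() for _ in range(IMG)]
new_view = synthesize(source_view, angle_bin=2)
print(len(new_view))  # output has the same dimensionality as the input image
```

Because the angle enters only as a concatenated semantic vector, the same encoder and decoder serve every target viewpoint, which is what makes end-to-end training on paired views possible.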

KEYWORDS

Image Space, Spatial Transformation, Semantic Information, Image Synthesis

All published work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright © 2016 - 2031 Clausius Scientific Press Inc. All Rights Reserved.