Deep generative models have made great progress in synthesizing images with human poses and transferring the pose of one person to others. Most existing approaches explicitly leverage the pose information extracted from the source images as a conditional input for the generative networks. We propose a pose transfer network with Disentangled Feature Consistency (DFC-Net) to facilitate human pose transfer. Given a pair of images containing the source and target person, DFC-Net synthesizes an image of the target person with the desired pose from the source. The network leverages disentangled feature consistency losses in the adversarial training to strengthen the transfer coherence and integrates a keypoint amplifier to enhance the pose feature extraction. Additionally, an unpaired support dataset, Mixamo-Sup, which provides extra pose information, is further utilized during training to improve the generality and robustness of DFC-Net.
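The abstract names two components: disentangled feature consistency losses and a keypoint amplifier. As a rough, hypothetical sketch only (the function names, loss form, and parameters below are assumptions for illustration, not the paper's actual formulation), these ideas might look like:

```python
import numpy as np

def feature_consistency_loss(feat_a, feat_b):
    """Hypothetical L1 consistency loss between two disentangled feature
    maps, e.g. the pose features of a synthesized image and of the
    source image."""
    return float(np.mean(np.abs(feat_a - feat_b)))

def keypoint_amplifier(heatmaps, gain=2.0, threshold=0.1):
    """Hypothetical keypoint amplifier: zero out low-confidence responses
    and amplify the remaining peaks in keypoint heatmaps, then
    renormalize each channel to [0, 1] before pose feature extraction."""
    amplified = np.where(heatmaps > threshold, heatmaps * gain, 0.0)
    peak = amplified.max(axis=(-2, -1), keepdims=True)
    return amplified / np.maximum(peak, 1e-8)

# Toy usage: identical features incur zero consistency loss,
# and a single heatmap peak is amplified to full confidence.
feat = np.random.rand(8, 4, 4)
print(feature_consistency_loss(feat, feat))  # 0.0
hm = np.zeros((1, 5, 5))
hm[0, 2, 2] = 0.5
print(keypoint_amplifier(hm)[0, 2, 2])  # 1.0
```

The consistency term encourages the generator to preserve pose (or appearance) features across the transfer, while the amplifier sharpens keypoint responses so weak detections contribute less to the pose representation.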

Author(s) : Kun Wu, Chengxiang Yin, Zhengping Che, Bo Jiang, Jian Tang, Zheng Guan, Gangyi Ding

Links : PDF - Abstract

Code :

Keywords : pose - transfer - feature - human - consistency
