Avatar-based Sign Language Production (SLP) builds up animation from sequences of hand motions, hand shapes and facial expressions. We propose a novel Mixture of Motion Primitives (MoMP) architecture for sign language animation. A set of distinct motion primitives is learnt during training and can be temporally combined to animate continuous sign language sequences. We evaluate on the challenging RWTH-PHOENIX-Weather-2014T (PHOENIX14T) dataset, presenting extensive ablation studies and showing that MoMP outperforms baselines in user evaluations. We achieve state-of-the-art back translation performance with an 11% improvement over competing results. Importantly, and for the first time, we showcase stronger performance for a full translation pipeline going from spoken language to sign than from gloss to sign.
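To make the idea of temporally combining learnt primitives concrete, here is a minimal sketch (not the authors' released code) assuming a softmax-gated blend over a small bank of primitive networks, where each frame's pose is a weighted combination of the primitives' outputs. All names (`MoMPDecoder`, `num_primitives`, `pose_dim`), dimensions and the gating choice are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch of a mixture-of-motion-primitives decoding step.
import torch
import torch.nn as nn


class MoMPDecoder(nn.Module):
    def __init__(self, d_model=512, num_primitives=8, pose_dim=150):
        super().__init__()
        # Each primitive is a small network mapping the decoder state to a pose frame.
        self.primitives = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, pose_dim))
            for _ in range(num_primitives)
        )
        # Gating network: soft selection weights over primitives at each time step.
        self.gate = nn.Linear(d_model, num_primitives)

    def forward(self, decoder_states):
        # decoder_states: (batch, time, d_model) from a translation encoder-decoder.
        weights = torch.softmax(self.gate(decoder_states), dim=-1)                 # (B, T, P)
        poses = torch.stack([p(decoder_states) for p in self.primitives], dim=2)   # (B, T, P, pose_dim)
        # Temporally combine the primitives: per-frame weighted blend of their outputs.
        return (weights.unsqueeze(-1) * poses).sum(dim=2)                          # (B, T, pose_dim)


if __name__ == "__main__":
    dec = MoMPDecoder()
    out = dec(torch.randn(2, 30, 512))   # 30 frames of decoder features
    print(out.shape)                     # torch.Size([2, 30, 150])
```

The gating weights are produced per frame, so different primitives can dominate at different points in the sequence, which is one plausible way "temporal combination" of primitives could be realised.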

Author(s): Ben Saunders, Necati Cihan Camgoz, Richard Bowden

Links: PDF - Abstract

Code:

Keywords: sign language, performance, results, primitives
