Physics-based character animation has seen significant advances in recent years with the adoption of Deep Reinforcement Learning (DRL). However, DRL-based learning methods are usually computationally expensive, and tuning hyperparameters for these methods often requires repetitive training of control policies. In this work, we propose a novel Curriculum-based Multi-Fidelity Bayesian Optimization framework. Using curriculum-based task difficulty as the fidelity criterion, our method improves search efficiency by gradually pruning the search space through evaluation on easier motor skill tasks. In particular, we show that hyperparameters optimized through our algorithm result in at least a 5x efficiency gain compared to the author-released settings in DeepMimic.
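The core idea of pruning hyperparameter candidates on low-fidelity (easy) curriculum tasks before spending compute on the full task can be sketched as follows. This is a simplified, hypothetical illustration, not the paper's actual algorithm: it uses successive-halving-style pruning with a synthetic scoring function (`evaluate`) standing in for DRL policy training, and omits the Bayesian surrogate model entirely.

```python
import random

def evaluate(hparams, difficulty):
    # Hypothetical stand-in for training a control policy at a given
    # curriculum difficulty and measuring its return. The real method
    # would run DRL training here; this toy objective just rewards
    # hyperparameters near a fixed optimum.
    lr, gamma = hparams
    return -((lr - 3e-4) ** 2 * 1e6 + (gamma - 0.99) ** 2 * 1e3) * difficulty

def curriculum_search(candidates, difficulties, keep_frac=0.5):
    """Score all candidates on the easiest task first, keep only the
    best fraction, then re-score the survivors at higher difficulties.
    Cheap low-fidelity evaluations prune the pool before any candidate
    is evaluated on the expensive full task."""
    pool = list(candidates)
    for d in difficulties:
        scored = sorted(((evaluate(h, d), h) for h in pool), reverse=True)
        keep = max(1, int(len(scored) * keep_frac))
        pool = [h for _, h in scored[:keep]]
    return pool[0]

random.seed(0)
candidates = [(random.uniform(1e-5, 1e-3), random.uniform(0.9, 0.999))
              for _ in range(16)]
best = curriculum_search(candidates, difficulties=[0.25, 0.5, 1.0])
```

In this sketch, each difficulty level acts as a fidelity: only candidates that survive the cheap evaluations ever reach `difficulty=1.0`, which is where the efficiency gain over training every candidate at full fidelity comes from.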

Author(s) : Zeshi Yang, Zhiqi Yin

Links : PDF - Abstract


Keywords : based - curriculum - methods - character - animation
