We often use perturbations to regularize neural models. For neural encoder-decoders, previous studies applied scheduled sampling (Bengio et al., 2015) and adversarial perturbations (Sato et al., 2019), but these methods require considerable computational time. Thus, this study addresses the question of whether these approaches are efficient enough for training time. The code is publicly available at https://github.com/takase/rethink_perturbations
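The scheduled sampling perturbation cited above mixes gold target tokens with the model's own predictions during training. Below is a minimal, illustrative sketch of that idea in PyTorch; the toy decoder, its module names, and the fixed teacher_forcing_ratio are assumptions made for illustration only and are not taken from the authors' implementation.

```python
# Minimal sketch of scheduled sampling (Bengio et al., 2015).
# All names here (ToyDecoder, embed, rnn, out, ...) are hypothetical.
import torch
import torch.nn as nn


class ToyDecoder(nn.Module):
    """A tiny GRU decoder used only to demonstrate the sampling schedule."""

    def __init__(self, vocab_size: int, hidden_size: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.rnn = nn.GRUCell(hidden_size, hidden_size)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, gold: torch.Tensor, hidden: torch.Tensor,
                teacher_forcing_ratio: float = 0.75) -> torch.Tensor:
        """Decode a sequence, mixing gold tokens and model predictions.

        gold: (batch, seq_len) gold token ids, first column is BOS.
        hidden: (batch, hidden_size) initial state, e.g. from an encoder.
        teacher_forcing_ratio: probability of feeding the gold token;
            scheduled sampling decays this value over training.
        """
        batch, seq_len = gold.shape
        logits = []
        inp = gold[:, 0]  # start from BOS
        for t in range(1, seq_len):
            hidden = self.rnn(self.embed(inp), hidden)
            step_logits = self.out(hidden)
            logits.append(step_logits)
            # With probability (1 - ratio), feed the model's own prediction
            # instead of the gold token: this is the perturbation.
            if torch.rand(1).item() < teacher_forcing_ratio:
                inp = gold[:, t]
            else:
                inp = step_logits.argmax(dim=-1).detach()
        return torch.stack(logits, dim=1)  # (batch, seq_len - 1, vocab)


if __name__ == "__main__":
    vocab, hidden_size = 100, 64
    decoder = ToyDecoder(vocab, hidden_size)
    gold = torch.randint(0, vocab, (8, 12))
    h0 = torch.zeros(8, hidden_size)
    logits = decoder(gold, h0, teacher_forcing_ratio=0.75)
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, vocab), gold[:, 1:].reshape(-1))
    loss.backward()
    print(loss.item())
```

Because each decoding step depends on the previous prediction, the loop cannot be parallelized like plain teacher forcing, which is one reason such perturbations add to training time.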

Author(s) : Sho Takase, Shun Kiyono

Links : PDF - Abstract

Code : https://github.com/takase/rethink_perturbations

Keywords : perturbations - training - time - decoders - computational
