Plug and Play Autoencoders for Conditional Text Generation

Text autoencoders are commonly used for conditional generation tasks such as style transfer. We propose methods which are plug and play, where any pretrained autoencoder can be used. This reduces the need for labeled training data for the task and makes the training procedure more efficient. Evaluations on style transfer tasks both with and without sequence-to-sequence supervision show that our method performs better than or comparable to strong baselines while being up to four times faster.

Links: PDF - Abstract

Code:

https://github.com/florianmai/emb2emb
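The abstract describes keeping a pretrained autoencoder frozen and training only a mapping in its embedding space. The following is a minimal toy sketch of that idea, not the repository's actual API: the "autoencoder" is a fixed invertible linear map, the paired data stands in for style-transfer pairs, and all names (`encode`, `decode`, `W`, `A`, `M`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen "autoencoder": a fixed invertible linear encoder and
# its inverse as decoder. This stands in for any pretrained text autoencoder;
# dimensions are kept equal so the toy mapping below is exactly learnable.
W = rng.normal(size=(4, 4))
W_inv = np.linalg.inv(W)

def encode(x):
    return x @ W        # input -> embedding (frozen)

def decode(z):
    return z @ W_inv    # embedding -> output (frozen)

# Paired source/target data, standing in for e.g. style-transfer pairs.
X_src = rng.normal(size=(100, 4))
A = rng.normal(size=(4, 4))
X_tgt = X_src @ A

# Plug-and-play step: train ONLY a mapping M in embedding space by least
# squares; the autoencoder itself is never updated.
Z_src, Z_tgt = encode(X_src), encode(X_tgt)
M, *_ = np.linalg.lstsq(Z_src, Z_tgt, rcond=None)

# Inference: encode -> map -> decode.
out = decode(encode(X_src) @ M)
err = np.linalg.norm(out - X_tgt) / np.linalg.norm(X_tgt)
```

Because only the small mapping is trained while the encoder and decoder stay fixed, this mirrors the efficiency argument in the abstract: the expensive component is reused as-is across tasks.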

Keywords: faster - plug - training - text - method
