Cycle Consistent Adversarial Autoencoders for Unsupervised Text Style Transfer

Cycle-consistent Adversarial autoEncoders (CAE) are trained from non-parallel data for unsupervised text style transfer. CAE consists of three essential components: (1) LSTM autoencoders that encode a text in one style into its latent representation and decode an encoded representation into its original text, or a transferred representation into a style-transferred text; (2) adversarial style transfer networks that transform a latent representation in one style into a representation in another style; and (3) a cycle-consistent constraint that enforces the transferred representation to be mapped back to the original latent representation. The entire CAE with these three components can be trained end-to-end. According to the authors, extensive experiments and in-depth analyses on two widely-used public datasets consistently validate the effectiveness of the proposed CAE in both style transfer and content preservation against several strong baselines, in terms of four automatic evaluation metrics and human evaluation. The authors conclude that CAE is a novel neural approach to unsupervised text style transfer.
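The cycle-consistent constraint described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the LSTM latents are replaced by fixed random vectors, and the adversarially trained style-transfer generators (here hypothetically named `G_xy` and `G_yx`) are stood in for by a linear map and its exact inverse, so the cycle loss comes out near zero by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent dimensionality; in the paper the latents would come from
# LSTM autoencoders, which we replace with a fixed random vector here.
d = 8
z_x = rng.normal(size=d)  # latent of a sentence in style X (assumption)

# Hypothetical style-transfer generators, sketched as linear maps.
# In CAE these are trained adversarially, not fixed like this.
W_xy = rng.normal(size=(d, d))
W_yx = np.linalg.inv(W_xy)  # an exact inverse makes the cycle perfect

def G_xy(z):
    """Transfer a latent from style X to style Y (toy linear map)."""
    return W_xy @ z

def G_yx(z):
    """Transfer a latent from style Y back to style X (toy inverse)."""
    return W_yx @ z

# Cycle-consistency loss: transferring X -> Y -> X should recover z_x.
cycle_loss = float(np.sum((G_yx(G_xy(z_x)) - z_x) ** 2))
print(cycle_loss)  # near zero, since G_yx exactly inverts G_xy here
```

In training, this squared reconstruction error in latent space would be minimized jointly with the autoencoder and adversarial objectives, rather than being zero by design as in this toy setup.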

Keywords: CAE, style, text, representation, transfer
