Disentangled Recurrent Wasserstein Autoencoder

In this paper, we propose the recurrent Wasserstein Autoencoder (R-WAE), a new framework for generative modeling of sequential data. Learning disentangled representations leads to interpretable models and facilitates data generation with style transfer. In this respect, R-WAE is superior to (recurrent) VAEs, which do not explicitly enforce mutual information maximization between the input data and the disentangled latent representations. When the number of actions in the sequential data is available as weak supervision, R-WAE is extended to learn a categorical latent representation of actions to further improve disentanglement. Experiments on a variety of datasets show that our models outperform other baselines under the same settings in terms of disentanglement and unconditional video generation, both quantitatively and qualitatively.
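Concretely, a Wasserstein autoencoder replaces the VAE's per-sample KL term with a divergence (commonly a kernel MMD) between the aggregate encoded latent distribution and the prior; in a recurrent variant this penalty would be applied to the latents at each time step. As a hedged illustration (not the paper's exact objective), here is a minimal NumPy sketch of the RBF-kernel MMD penalty; the function names and batch shapes are assumptions for the example:

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise RBF kernel values between rows of x and rows of y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(z, z_prior, sigma=1.0):
    # Biased estimate of the squared MMD between encoded latents `z`
    # and samples `z_prior` from the prior -- the WAE-style penalty
    # used in place of the VAE's KL term. Zero iff the two sample
    # sets come from the same kernel mean embedding.
    k_zz = rbf_kernel(z, z, sigma)
    k_pp = rbf_kernel(z_prior, z_prior, sigma)
    k_zp = rbf_kernel(z, z_prior, sigma)
    return k_zz.mean() + k_pp.mean() - 2.0 * k_zp.mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(128, 8))        # hypothetical latents at one time step
z_prior = rng.normal(size=(128, 8))  # matching samples from a N(0, I) prior
penalty = mmd2(z, z_prior)
```

In a full R-WAE-style model, a term like `penalty` (summed over time steps, weighted by a coefficient) would be added to the sequence reconstruction loss.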

