Learning to Disentangle Textual Representations and Attributes via Mutual Information

Learning disentangled representations of textual data is essential for many natural language tasks such as fair classification (e.g. building classifiers whose decisions cannot disproportionately hurt or benefit specific groups identified by sensitive attributes), style transfer, and sentence generation. This paper investigates learning to disentangle representations by minimizing a novel variational (upper) bound on the mutual information between an identified attribute and the latent code of a deep neural network encoder. We demonstrate that this surrogate leads to better-disentangled representations on both fair classification and sentence generation tasks, while not suffering from the degeneracy of adversarial losses in multi-class scenarios. We also shed some light on the well-known debate about whether disentanglement is helpful for polarity transfer and sentence generation, and conclude that there is a trade-off between the level of disentanglement and the quality of the generated sentences.
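
The abstract does not spell out which variational bound is used, so the following is only a minimal sketch of the general idea: penalizing the mutual information I(z; a) between a latent code z and a discrete attribute a via a CLUB-style variational upper bound (Cheng et al., 2020), one common choice for such a surrogate. This is not the authors' implementation; the class name `AttributeCLUB`, the dimensions, and the toy data are all hypothetical.

```python
# Illustrative sketch, NOT the paper's implementation. An auxiliary
# classifier q(a|z) approximates the true conditional p(a|z); the CLUB
# upper bound on I(z; a) is E_{p(z,a)}[log q(a|z)] - E_{p(z)p(a)}[log q(a|z)],
# with marginal samples approximated by shuffling attributes within a batch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttributeCLUB(nn.Module):  # hypothetical name
    def __init__(self, latent_dim: int, num_attrs: int):
        super().__init__()
        self.clf = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, num_attrs),
        )

    def log_q(self, z, a):
        # log q(a|z): negative per-sample cross-entropy of the classifier.
        return -F.cross_entropy(self.clf(z), a, reduction="none")

    def mi_upper_bound(self, z, a):
        # Positive term: log q(a_i | z_i) on true (z, a) pairs.
        positive = self.log_q(z, a)
        # Negative term: log q(a_j | z_i) with attributes shuffled across
        # the batch, approximating samples from the marginal p(a).
        perm = torch.randperm(a.size(0), device=a.device)
        negative = self.log_q(z, a[perm])
        return (positive - negative).mean()

# Toy usage with random data (purely illustrative):
z = torch.randn(32, 256)                  # latent codes from some encoder
a = torch.randint(0, 2, (32,))            # binary sensitive attribute
mi_est = AttributeCLUB(latent_dim=256, num_attrs=2)
approx_loss = -mi_est.log_q(z.detach(), a).mean()   # step 1: fit q(a|z)
mi_penalty = mi_est.mi_upper_bound(z, a)            # step 2: add to task loss
```

In CLUB-style training the two steps alternate: the approximator q(a|z) is fitted by maximizing its log-likelihood on detached codes, and the encoder is then updated to minimize the task loss plus a weighted `mi_penalty`, pushing the latent code to carry less information about the attribute.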

Links: PDF - Abstract

Code:

None

Keywords: representations - generation - sentence - learning
