Pre-training Text-to-Text Transformers to Write and Reason with Concepts

Pre-trained language models have achieved impressive results on a range of natural language understanding (NLU) and generation (NLG) tasks that require syntactic and semantic understanding of text. We propose generative and contrastive objectives as intermediate self-supervised pre-training tasks, together with a joint training framework that unifies these objectives so that each becomes more effective. We apply our method to a pre-trained T5 model in an intermediate-task transfer learning fashion to train a concept-aware language model (CALM). The proposed objectives pack more commonsense knowledge into the model's parameters without relying on external knowledge bases. We experiment with five commonsense benchmarks (four NLU tasks and one NLG task), and the results show that CALM outperforms baseline methods by a consistent margin on both NLU and NLG tasks.
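For intuition, the sketch below shows what an intermediate generative pre-training step on top of a pre-trained T5 checkpoint could look like, using the Hugging Face Transformers library. It illustrates a concept-to-sentence style objective (recovering a sentence from a set of concepts extracted from it); the prompt format, example data, and hyperparameters are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch (assumptions, not the authors' implementation) of an intermediate
# concept-to-sentence generative objective on a pre-trained T5 model.
import torch
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Each training pair maps a set of concepts taken from a sentence back to the
# original sentence, so the model learns to "write with concepts".
source = "generate a sentence with: dog, frisbee, catch, park"   # hypothetical prompt format
target = "A dog leaps to catch a frisbee in the park."

inputs = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids

model.train()
outputs = model(**inputs, labels=labels)   # teacher-forced seq2seq cross-entropy loss
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```

In the paper's setup this kind of generative objective is combined with contrastive objectives under a joint training framework before fine-tuning on downstream commonsense benchmarks; the snippet above only sketches the single generative step.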

Links: PDF - Abstract

Code: None

Keywords: objectives - tasks - NLG - language - model
