KGPT: Knowledge-Grounded Pre-Training for Data-to-Text Generation

Data-to-text generation has recently attracted substantial interest due to its wide applications. Existing methods have shown impressive performance on an array of tasks. However, they rely on a significant amount of labeled data for each task, which is costly to acquire and thus limits their application to new tasks and domains. We propose a knowledge-grounded pre-training (KGPT) framework, which consists of two parts: 1) a general knowledge-grounded generation model to generate knowledge-enriched text, and 2) a pre-training paradigm on a massive knowledge-grounded text corpus crawled from the web. The pre-trained model can be fine-tuned on various data-to-text generation tasks to generate task-specific text. Under the fully-supervised setting, our model achieves remarkable gains over the known baselines. Under the zero-shot setting, our model, without seeing any examples, achieves over 30 ROUGE-L on WebNLG while all other baselines fail. These experiments consistently prove the strong generalization ability of our proposed framework.
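
The core recipe the abstract describes, linearizing structured knowledge into a token sequence and decoding text with a pre-trained seq2seq model, can be sketched with off-the-shelf tools. The sketch below is illustrative only: it uses Hugging Face's BART ("facebook/bart-base") as a stand-in rather than the authors' KGPT checkpoints, and the `linearize_triples` helper, the `[ENT]`/`[PRED]`/`[OBJ]` markers, and the example triples are my own assumptions, not the repository's actual input format.

```python
# Illustrative sketch of knowledge-grounded data-to-text generation,
# NOT the authors' KGPT implementation. Assumes Hugging Face Transformers.
from transformers import BartForConditionalGeneration, BartTokenizer

def linearize_triples(triples):
    """Flatten (subject, predicate, object) triples into one input string.
    The [ENT]/[PRED]/[OBJ] markers are hypothetical, chosen for readability."""
    return " ".join(f"[ENT] {s} [PRED] {p} [OBJ] {o}" for s, p, o in triples)

# A generic pre-trained seq2seq model stands in for a KGPT checkpoint.
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

triples = [("Alan Bean", "occupation", "astronaut"),
           ("Alan Bean", "mission", "Apollo 12")]
inputs = tokenizer(linearize_triples(triples), return_tensors="pt")

# Decode text from the linearized knowledge. A base model will only roughly
# paraphrase the input; knowledge-grounded pre-training plus task-specific
# fine-tuning is what makes this generation faithful and fluent.
output_ids = model.generate(**inputs, max_length=40, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The same interface covers the paper's settings: fully-supervised and few-shot correspond to fine-tuning on more or fewer labeled examples before generation, while zero-shot generates directly from the pre-trained model.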

Links: PDF - Abstract

Code:

https://github.com/wenhuchen/KGPT

Keywords: text - model - knowledge - pre-training - generation
