Commonsense Knowledge Base Completion with Structural and Semantic Context

Commonsense knowledge graphs use free-form text to represent nodes, resulting in orders of magnitude more nodes compared to conventional KBs. This implies significantly sparser graph structures – a major challenge for existing KB completion methods that assume densely connected graphs over a relatively smaller set of nodes. We investigate two key ideas: (1) learning from local graph structure, using graph convolutional networks and automatic graph densification, and (2) transfer learning from pre-trained language models to knowledge graphs for enhanced contextual representation of knowledge. We describe our method for incorporating information from both these sources in a joint model and provide the first empirical results for KB completion on ATOMIC and evaluation with ranking metrics on ConceptNet. Our results demonstrate the effectiveness of language model representations in boosting link prediction performance and the advantages of learning on subgraphs for computational efficiency. Further analysis of model predictions sheds light on the types of commonsense knowledge that language models capture well.
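To make the joint-model idea concrete, here is a minimal PyTorch sketch of combining structural and semantic node representations for link prediction. It is not the authors' implementation: the class names (`JointLinkPredictor`, `GCNLayer`), the random tensor standing in for pre-trained language-model encodings of node text, and the DistMult-style scoring function are illustrative assumptions.

```python
# Minimal sketch (not the paper's exact architecture): fuse structural
# embeddings from a single GCN layer with fixed "semantic" node features
# standing in for pre-trained LM encodings, then score (head, relation,
# tail) triples with a DistMult-style decoder.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # adj: row-normalized adjacency matrix (num_nodes x num_nodes)
        return torch.relu(self.linear(adj @ x))

class JointLinkPredictor(nn.Module):
    def __init__(self, num_nodes, num_rels, lm_dim, gcn_dim):
        super().__init__()
        # Hypothetical stand-in: in practice these would be frozen
        # embeddings of each node's free-form text from a model like BERT.
        self.lm_feats = nn.Parameter(torch.randn(num_nodes, lm_dim),
                                     requires_grad=False)
        self.gcn = GCNLayer(lm_dim, gcn_dim)
        self.rel_emb = nn.Embedding(num_rels, lm_dim + gcn_dim)

    def node_repr(self, adj):
        structural = self.gcn(self.lm_feats, adj)               # graph context
        return torch.cat([self.lm_feats, structural], dim=-1)   # semantic + structural

    def score(self, heads, rels, tails, adj):
        h = self.node_repr(adj)
        # DistMult-style triple score: sum(e_h * w_r * e_t)
        return (h[heads] * self.rel_emb(rels) * h[tails]).sum(dim=-1)

# Toy usage: 5 nodes, 2 relations, one undirected edge plus self-loops.
adj = torch.eye(5)
adj[0, 1] = adj[1, 0] = 1.0
adj = adj / adj.sum(dim=1, keepdim=True)  # simple row normalization
model = JointLinkPredictor(num_nodes=5, num_rels=2, lm_dim=16, gcn_dim=8)
scores = model.score(torch.tensor([0]), torch.tensor([1]), torch.tensor([2]), adj)
print(scores)
```

The full model described in the abstract would use actual language-model encodings of node phrases and may use a more expressive decoder; the sketch only shows how structural and semantic representations can be concatenated and scored jointly.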


Code: None

Keywords: knowledge, language, model, models, graph
