Deep learning (DL) has shown enormous potential to impact individuals and society, both positively and negatively. The black-box nature of DL models poses challenges for the interpretability and explainability of such systems. DL models have not yet proven able to utilize the relevant domain knowledge and experience that are critical to human understanding. This aspect is missing from early data-focused approaches and has necessitated knowledge-infused learning and other strategies for incorporating computational knowledge. This article demonstrates how knowledge, provided as a knowledge graph, is incorporated into DL methods. We then discuss how this makes a fundamental difference to the interpretability of current approaches, and illustrate it with examples from natural language processing for healthcare and education applications.
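As a rough illustration of the idea of incorporating a knowledge graph into a DL pipeline, one common pattern is to concatenate KG entity embeddings with learned text embeddings before the task head. The sketch below is hypothetical and not from the article: the entity table, encoder, and dimensions are all stand-ins.

```python
# Minimal sketch of knowledge infusion via embedding concatenation.
# All names and dimensions are illustrative assumptions, not the
# article's actual method.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical lookup table of KG entity embeddings
# (in practice, e.g., learned with a KG embedding model).
kg_embeddings = {
    "aspirin": rng.normal(size=8),
    "headache": rng.normal(size=8),
}

def text_embedding(tokens):
    """Stand-in for a neural text encoder: mean of random token vectors."""
    return np.mean([rng.normal(size=16) for _ in tokens], axis=0)

def knowledge_infused_embedding(tokens):
    """Concatenate the text embedding with the mean embedding of
    KG entities mentioned in the text (zeros if none are linked)."""
    ent_vecs = [kg_embeddings[t] for t in tokens if t in kg_embeddings]
    ent_vec = np.mean(ent_vecs, axis=0) if ent_vecs else np.zeros(8)
    return np.concatenate([text_embedding(tokens), ent_vec])

vec = knowledge_infused_embedding(["aspirin", "relieves", "headache"])
print(vec.shape)  # (24,): 16 text dims + 8 KG dims
```

The explicit entity slice of the resulting vector is one handle for interpretability: a prediction can be traced back to the linked KG entities, rather than only to opaque text features.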

Author(s): Manas Gaur, Keyur Faldu, Amit Sheth


Keywords: knowledge, learning, DL, interpretability, black-box
