Explainable AI (XAI) has attracted much research attention in recent years, with feature attribution algorithms that compute "feature importance" in predictions. However, there has been little analysis of the validity of these algorithms, as existing datasets contain no "ground truth" against which their correctness can be validated. In this work, we develop a method to quantitatively evaluate the correctness of XAI algorithms by creating datasets with known explanation ground truth. We show that: (1) classification accuracy is positively correlated with explanation accuracy; (2) SHAP provides more accurate explanations than LIME; and (3) explanation accuracy is negatively correlated with dataset complexity.
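
The page carries no code, but the evaluation idea in the abstract can be sketched briefly. In the sketch below, the synthetic dataset, the random-forest model, and the top-k scoring rule are our own illustrative assumptions, not the authors' exact protocol: generate data whose label depends only on a known subset of features, train a classifier, compute SHAP attributions, and score how often the top-attributed features match that known subset.

```python
# Minimal sketch of evaluating feature attributions against a
# known explanation ground truth (illustrative, not the paper's code).
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_samples, n_features, n_informative = 2000, 10, 3
X = rng.normal(size=(n_samples, n_features))
# Ground truth: only the first `n_informative` features drive the label.
y = (X[:, :n_informative].sum(axis=1) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP attributions for the positive class.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)
# Older shap versions return a list of per-class arrays;
# newer versions return one (n_samples, n_features, n_classes) array.
sv = sv[1] if isinstance(sv, list) else sv[..., 1]

# Explanation accuracy: fraction of samples whose top-k attributed
# features are exactly the informative ones.
top_k = np.argsort(-np.abs(sv), axis=1)[:, :n_informative]
hits = [set(row) == set(range(n_informative)) for row in top_k]
print("explanation accuracy:", np.mean(hits))
```

Running LIME's `lime.lime_tabular.LimeTabularExplainer` over the same data and scoring it the same way would support the SHAP-versus-LIME comparison the abstract reports.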

Authors: Orcun Yalcin, Xiuyi Fan, Siyuan Liu

Links: PDF - Abstract


Keywords: explanation, algorithms, accuracy, correctness, correlated
