Distantly supervised (DS) relation extraction (RE) has attracted much attention in the past few years as it can utilize large-scale auto-labeled data. Previous works either relied on costly and inconsistent methods to examine a small sample of predictions, or directly tested models on auto-labeled data. To evaluate DS-RE models in a more credible way, we build manually-annotated test sets for two DS-RE datasets, NYT10 and Wiki20, and thoroughly evaluate several competitive models, especially the latest pre-trained ones. The experimental results show that manual evaluation can lead to conclusions very different from the automatic ones, including some unexpected observations, e.g., pre-trained models can achieve dominating performance while being more susceptible to false positives compared to previous methods. We hope that both our manual test sets and novel observations can help advance future DS-RE research.
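To illustrate the evaluation gap the abstract describes, here is a minimal sketch (not the authors' code) of scoring the same relation-extraction predictions against distantly supervised (auto) labels and against manual labels. The data, label names, and scoring choices below are hypothetical and only meant to show how the two protocols can yield different conclusions.

```python
# Hypothetical example: the same predictions scored against auto vs. manual labels.
from sklearn.metrics import precision_recall_fscore_support

# Predicted relation for each entity-pair instance (hypothetical).
predictions = ["founded_by", "NA", "place_of_birth", "founded_by", "NA"]

# Labels produced by distant supervision; may contain false positives/negatives.
auto_labels = ["founded_by", "founded_by", "place_of_birth", "NA", "NA"]

# Labels assigned by human annotators for the same instances.
manual_labels = ["founded_by", "NA", "place_of_birth", "founded_by", "NA"]

def score(preds, golds):
    # Micro precision/recall/F1 over non-NA relations, a common RE convention.
    relations = [r for r in set(golds) | set(preds) if r != "NA"]
    p, r, f1, _ = precision_recall_fscore_support(
        golds, preds, average="micro", labels=relations, zero_division=0
    )
    return p, r, f1

print("vs. auto labels  :", score(predictions, auto_labels))
print("vs. manual labels:", score(predictions, manual_labels))
```

Under the noisy auto labels the model appears to make errors it did not actually make (and vice versa), which is why the paper argues for manually-annotated test sets.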

Author(s) : Tianyu Gao, Xu Han, Keyue Qiu, Yuzhuo Bai, Zhiyu Xie, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou

Links : PDF - Abstract

Code :

Keywords : test - models - manual - ds - methods
