Distantly supervised (DS) relation extraction (RE) has attracted much attention in the past few years as it can utilize large-scale auto-labeled data. Previous works either took costly and inconsistent methods to examine a small sample of predictions, or directly tested models on auto labels. To evaluate DS-RE models in a more credible way, we build manually-annotated test sets for two DS-RE datasets, NYT10 and Wiki20, and thoroughly evaluate several competitive models, especially the latest pre-trained ones. The experimental results show that the manual evaluation can indicate very different conclusions from automatic ones, including some unexpected observations, e.g., pre-trained models can achieve dominating performance while being more susceptible to false positives compared to previous methods. We hope that both our manual test sets and novel observations can help advance future DS-RE research.
Author(s) : Tianyu Gao, Xu Han, Keyue Qiu, Yuzhuo Bai, Zhiyu Xie, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou
Links : PDF - Abstract
Code :
Keywords : test - models - manual - ds - methods
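Since the abstract contrasts automatic (auto-labeled) and manual test protocols, the minimal Python sketch below illustrates the kind of discrepancy involved: scoring the same predictions against noisy distant-supervision labels versus human-corrected labels can yield different precision. This is not code from the paper; the entity pairs, relation names, and the `precision` helper are made up purely for illustration.

```python
# Illustrative sketch (not from the paper): comparing precision of a relation
# extraction model when scored against distant-supervision (auto) labels
# versus manually-annotated labels for the same entity pairs.
# All entity pairs and relations below are fabricated examples.

def precision(predictions, gold):
    """Fraction of predicted relations that match the gold label."""
    hits = sum(1 for pair, rel in predictions.items() if gold.get(pair) == rel)
    return hits / len(predictions) if predictions else 0.0

# Hypothetical model output: entity pair -> predicted relation
predictions = {
    ("Obama", "Hawaii"): "born_in",
    ("Paris", "France"): "capital_of",
    ("Einstein", "violin"): "plays",
}

# Auto (DS) labels can be noisy; manual annotation corrects some of them.
auto_labels = {
    ("Obama", "Hawaii"): "born_in",
    ("Paris", "France"): "capital_of",
    ("Einstein", "violin"): "plays",        # DS heuristic wrongly confirms this
}
manual_labels = {
    ("Obama", "Hawaii"): "born_in",
    ("Paris", "France"): "capital_of",
    ("Einstein", "violin"): "no_relation",  # human annotator marks it a false positive
}

print("precision vs. auto labels  :", precision(predictions, auto_labels))    # 1.00
print("precision vs. manual labels:", precision(predictions, manual_labels))  # 0.67
```

Under these assumed labels, the auto-labeled test set overstates precision because it shares the false positive with the model, which is the sort of gap a manually-annotated test set is meant to expose.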