BERT-based text ranking models have dramatically advanced the state of the art in ad-hoc retrieval. Co-BERT has been proposed to exploit several BERT architectures to calibrate the query-document representations using pseudo relevance feedback before modeling the relevance of a group of documents jointly. Extensive experiments on two standard test collections confirm the effectiveness of the proposed model in improving text re-ranking performance over strong fine-tuned BERT baselines. We plan to make our implementation open source to enable further comparisons.
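As a rough illustration of the two ideas the abstract mentions, the sketch below (not the authors' released code, which is not yet available) calibrates query-document [CLS] representations with pseudo relevance feedback and then scores a group of candidate documents jointly. The model name, layer sizes, the additive calibration rule, and the use of the top-ranked candidates as feedback documents are all assumptions made for the example.

```python
# Minimal sketch of PRF-calibrated, groupwise BERT re-ranking.
# NOT the Co-BERT implementation; hyperparameters and the calibration
# rule are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel


class GroupwisePRFReranker(nn.Module):
    def __init__(self, model_name: str = "bert-base-uncased", num_feedback: int = 3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.num_feedback = num_feedback
        # Lightweight transformer layer letting candidates in one group
        # attend to each other before the final relevance score.
        self.group_layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=8, batch_first=True
        )
        self.score = nn.Linear(hidden, 1)

    def forward(self, input_ids, attention_mask, token_type_ids):
        # Inputs have shape (group_size, seq_len): one group of query-document pairs.
        out = self.encoder(
            input_ids=input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
        )
        cls = out.last_hidden_state[:, 0, :]  # (group_size, hidden)
        # Pseudo relevance feedback: treat the first `num_feedback` candidates
        # (top-ranked by the first-stage ranker) as feedback documents and
        # shift every representation toward their centroid (assumed rule).
        prf = cls[: self.num_feedback].mean(dim=0, keepdim=True)
        calibrated = cls + prf
        # Joint (groupwise) modeling over the calibrated representations.
        joint = self.group_layer(calibrated.unsqueeze(0)).squeeze(0)
        return self.score(joint).squeeze(-1)  # one relevance score per document


if __name__ == "__main__":
    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    query = "what causes rainbows"
    docs = [
        "Rainbows form when sunlight is refracted by water droplets.",
        "A rainbow is an optical phenomenon seen after rain.",
        "Stock prices fell sharply in afternoon trading.",
    ]
    enc = tok([query] * len(docs), docs, padding=True,
              truncation=True, max_length=256, return_tensors="pt")
    model = GroupwisePRFReranker()
    with torch.no_grad():
        print(model(enc["input_ids"], enc["attention_mask"], enc["token_type_ids"]))
```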

Author(s) : Xiaoyang Chen, Kai Hui, Ben He, Xianpei Han, Le Sun, Zheng Ye

Links : PDF - Abstract

Code :

Keywords : bert - model - proposed - performance - text
