According to the Probability Ranking Principle (PRP), ranking documents in decreasing order of their probability of relevance leads to an optimal document ranking for ad-hoc retrieval. The PRP holds when two conditions are met: [C1] the models are well calibrated, and [C2] the probabilities of relevance are reported with certainty. We know that deep neural networks (DNNs) are often not well calibrated and have several sources of uncertainty. We use two techniques to model the uncertainty of neural rankers, leading to the proposed stochastic rankers, which output a predictive distribution of relevance as opposed to point estimates. Our experimental results show that BERT-based rankers are not robustly calibrated, that stochastic BERT-based rankers yield better calibration, and that uncertainty estimation is beneficial for risk-aware neural ranking.
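The abstract does not say here which two techniques are used, but a common way to turn a ranker's point-estimate scores into a predictive distribution is MC Dropout: keep dropout active at inference time and aggregate several stochastic forward passes. The sketch below is only an illustration of that idea under assumptions of my own (a toy `ToyRanker` scorer and a `mc_dropout_scores` helper standing in for a BERT-based model); it is not the authors' implementation.

```python
# Minimal sketch (not the paper's code): a predictive distribution of relevance
# via MC Dropout over a toy scoring model. Model size, dropout rate and the
# number of samples are illustrative assumptions only.
import torch
import torch.nn as nn

class ToyRanker(nn.Module):
    """Toy query-document scorer standing in for a BERT-based ranker."""
    def __init__(self, dim: int = 32, p_drop: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 64),
            nn.ReLU(),
            nn.Dropout(p_drop),   # kept active at inference for MC Dropout
            nn.Linear(64, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # one relevance score per (q, d) pair

def mc_dropout_scores(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    """Run n stochastic forward passes; return mean and std of the scores."""
    model.train()  # keep dropout stochastic at inference time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])  # (n, batch)
    return samples.mean(dim=0), samples.std(dim=0)

if __name__ == "__main__":
    model = ToyRanker()
    pairs = torch.randn(5, 32)  # 5 fake query-document representations
    mean, std = mc_dropout_scores(model, pairs)
    # Rank by mean score; the std is the uncertainty a risk-aware ranker could
    # exploit, e.g. by penalizing high-variance documents (mean - alpha * std).
    order = torch.argsort(mean, descending=True)
    for rank, i in enumerate(order.tolist(), start=1):
        print(f"rank {rank}: doc {i}  score={mean[i]:.3f} ± {std[i]:.3f}")
```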

Author(s) : Gustavo Penha, Claudia Hauff

Links : PDF - Abstract

Code :


Keywords : rankers - neural - uncertainty - calibrated - ranking
