Exploring the Uncertainty Properties of Neural Networks' Implicit Priors in the Infinite-Width Limit

Modern deep learning models have achieved great success in predictive accuracy for many data modalities. However, their application to many real-world tasks is restricted by poor uncertainty estimates, such as overconfidence on out-of-distribution (OOD) data and ungraceful failing under distributional shift. We use the NNGP with a softmax link function to build a probabilistic model for multi-class classification and marginalize over the latent Gaussian outputs to sample from the posterior. This gives us a better understanding of the implicit prior NNs place on function space and allows a direct comparison of the calibration of the model and its finite-width analogue. Finally, we consider an infinite-width final layer in conjunction with a pre-trained embedding. This replicates the important practical use case of transfer learning and allows scaling to significantly larger datasets. As well as achieving competitive predictive accuracy, this approach is better calibrated than its finite-width analogue, and we find these methods are well calibrated under distributional shift. We also examine the calibration of previous approaches to classification with the NNGP, which treat classification problems as regression to the one-hot labels. In this case
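As a rough illustration of the NNGP pipeline the abstract describes, the sketch below uses the Neural Tangents library (`neural_tangents.stax`) to compute an NNGP kernel for a small fully connected architecture, forms the closed-form posterior for the "regression to the one-hot labels" treatment mentioned above, and then pushes latent posterior samples through a softmax. The data, architecture, regularization value, and sample count are all hypothetical placeholders, and applying a softmax to regression-posterior samples is only an illustrative stand-in: the paper's softmax-link model has a non-Gaussian latent posterior that must be sampled (e.g. with MCMC) rather than computed in closed form.

```python
import jax
import jax.numpy as jnp
from jax import random
from jax.scipy.linalg import cho_solve
from neural_tangents import stax

# --- Toy placeholder data (hypothetical; stands in for a real dataset) ---
key = random.PRNGKey(0)
key, kx, ky, kt = random.split(key, 4)
n_classes = 10
x_train = random.normal(kx, (100, 32))
y_train = random.randint(ky, (100,), 0, n_classes)
x_test = random.normal(kt, (20, 32))

# --- Infinite-width fully connected network; kernel_fn gives the NNGP kernel ---
_, _, kernel_fn = stax.serial(
    stax.Dense(512), stax.Relu(),
    stax.Dense(512), stax.Relu(),
    stax.Dense(n_classes),
)
k_tt = kernel_fn(x_train, x_train, 'nngp')   # (n, n)
k_st = kernel_fn(x_test, x_train, 'nngp')    # (m, n)
k_ss = kernel_fn(x_test, x_test, 'nngp')     # (m, m)

# --- NNGP "regression to one-hot labels" posterior (the baseline treatment) ---
y_onehot = jax.nn.one_hot(y_train, n_classes)
reg = 1e-4                                   # jitter / observation-noise term (arbitrary choice)
chol = jnp.linalg.cholesky(k_tt + reg * jnp.eye(k_tt.shape[0]))
alpha = cho_solve((chol, True), y_onehot)
mean = k_st @ alpha                          # (m, n_classes) posterior mean
v = cho_solve((chol, True), k_st.T)
cov = k_ss - k_st @ v                        # posterior covariance over test inputs, shared across classes

# --- Monte Carlo: push latent posterior samples through a softmax ---
# NOTE: sampling the *regression* posterior and applying a softmax is only an
# illustrative simplification; the paper's softmax-link model marginalizes a
# non-Gaussian latent posterior, which requires MCMC rather than this closed form.
n_samples = 512
chol_cov = jnp.linalg.cholesky(cov + 1e-6 * jnp.eye(cov.shape[0]))
eps = random.normal(key, (n_samples, cov.shape[0], n_classes))
latents = mean[None] + jnp.einsum('ij,sjc->sic', chol_cov, eps)
probs = jax.nn.softmax(latents, axis=-1).mean(axis=0)   # (m, n_classes) predictive probabilities
print(probs[:3])
```

The closed-form posterior mean and covariance above correspond to the one-hot-regression baseline whose calibration the paper examines; swapping in a pre-trained embedding for `x_train`/`x_test` would mimic the transfer-learning setting with an infinite-width final layer, under the same assumptions.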

Links: PDF - Abstract

Code:

None

Keywords: width - classification - analogue - calibrated - finite
