Probabilistic Meta Learning for Bayesian Optimization

Recent probabilistic meta-learning approaches have shown good performance at test time, but they either scale poorly with the number of data points or under-perform when little data is available on the test task. In this paper, we propose a novel approach that uses a generative model for the underlying data distribution and simultaneously learns a latent feature distribution to represent unknown task properties. To enable fast and accurate inference at test time, we introduce a novel meta-loss that structures the latent space to match the prior used for inference. Together, these contributions ensure that our model exhibits high sample efficiency and provides well-calibrated uncertainty estimates. We evaluate the proposed approach and compare its performance to existing methods from the literature on a set of Bayesian optimization transfer-learning tasks.
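
The abstract describes a latent-variable generative model trained with a meta-loss that aligns the learned task posterior with the prior used at inference time. As a rough illustration only, the sketch below shows a neural-process-style version of that idea in PyTorch; the architecture, mean-pooling aggregation, `beta` weight, and all names are assumptions for illustration, not the paper's actual method.

```python
# Minimal sketch (assumptions: a neural-process-style latent-variable model;
# layer sizes, pooling, and loss weighting are illustrative, not from the paper).
import torch
import torch.nn as nn
import torch.distributions as dist


class LatentTaskModel(nn.Module):
    """Generative model p(y | x, z) with a task-level latent z inferred from context data."""

    def __init__(self, x_dim=1, y_dim=1, z_dim=8, hidden=64):
        super().__init__()
        # Encoder: maps (x, y) context pairs to a Gaussian over the task latent z.
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + y_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2 * z_dim)
        )
        # Decoder: generative model of y given x and the task latent z.
        self.decoder = nn.Sequential(
            nn.Linear(x_dim + z_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2 * y_dim)
        )
        self.z_dim = z_dim

    def infer_latent(self, x_ctx, y_ctx):
        # Mean-pool per-point encodings so the task representation is permutation invariant.
        h = self.encoder(torch.cat([x_ctx, y_ctx], dim=-1)).mean(dim=0)
        mu, log_sigma = h[: self.z_dim], h[self.z_dim :]
        return dist.Normal(mu, log_sigma.exp())

    def predict(self, x, z):
        # Predictive distribution over y for query inputs x given a sampled task latent z.
        h = self.decoder(torch.cat([x, z.expand(x.shape[0], -1)], dim=-1))
        mu, log_sigma = h.chunk(2, dim=-1)
        return dist.Normal(mu, log_sigma.exp())


def meta_loss(model, x_ctx, y_ctx, x_tgt, y_tgt, beta=1.0):
    """Negative log-likelihood on target points plus a KL term that pulls the
    task posterior q(z | context) toward the standard-normal prior used at test time."""
    q_z = model.infer_latent(x_ctx, y_ctx)
    z = q_z.rsample()  # reparameterized sample so gradients flow through the encoder
    nll = -model.predict(x_tgt, z).log_prob(y_tgt).sum()
    prior = dist.Normal(torch.zeros_like(q_z.loc), torch.ones_like(q_z.scale))
    kl = dist.kl_divergence(q_z, prior).sum()
    return nll + beta * kl
```

In this sketch, the KL term is what "structures the latent space to match the prior used for inference": at test time one can start from the standard-normal prior and condition on only a few observations, which is the sample-efficiency property the abstract emphasizes.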


Code: None

