Zero-Shot Transfer Learning for Gray-Box Hyper-Parameter Optimization

Gray-box Zero-Shot Initialization (GROSI) is a conditional parametric surrogate that learns a universal response model and can outperform state-of-the-art sequential HPO algorithms. Knowledge is transferred without engineered meta-features; instead, a shared model is trained simultaneously across all datasets. We design and optimize a novel loss function that regresses from the dataset/hyper-parameter pair to the response. Experiments on 120 datasets demonstrate the strong performance of GROSI compared to conventional HPO strategies. We also show that by fine-tuning GROSI to the target dataset, we can outperform state-of-the-art sequential HPO algorithms.
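A minimal sketch (not the authors' code) of what such a conditional parametric surrogate could look like: a shared network maps a learned per-dataset embedding plus a hyper-parameter vector to a predicted response, with the embeddings trained jointly across all datasets in place of engineered meta-features. The network sizes, the plain MSE objective, and all variable names are illustrative assumptions; the paper's actual loss function differs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_DATASETS, HP_DIM, EMB_DIM = 120, 4, 16   # assumed dimensions

class SharedSurrogate(nn.Module):
    def __init__(self):
        super().__init__()
        # One embedding per training dataset, learned jointly
        # with the shared response network (no hand-crafted meta-features).
        self.dataset_emb = nn.Embedding(N_DATASETS, EMB_DIM)
        self.net = nn.Sequential(
            nn.Linear(EMB_DIM + HP_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, dataset_id, hp):
        z = self.dataset_emb(dataset_id)             # (B, EMB_DIM)
        return self.net(torch.cat([z, hp], dim=-1))  # (B, 1) predicted response

model = SharedSurrogate()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy training loop over (dataset id, hyper-parameters, observed response)
# triples pooled across datasets; real data would come from gray-box runs.
ids = torch.randint(0, N_DATASETS, (256,))
hps = torch.rand(256, HP_DIM)
resp = torch.rand(256, 1)
for _ in range(100):
    opt.zero_grad()
    loss = F.mse_loss(model(ids, hps), resp)  # plain MSE stands in for the paper's loss
    loss.backward()
    opt.step()
```

For a new target dataset, one plausible fine-tuning strategy under this sketch is to freeze the shared network and fit only a fresh dataset embedding on the few responses observed so far.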

Links: PDF - Abstract

Code: None

Keywords: GROSI - HPO - datasets
