Once-For-All (OFA) proposed an approach to jointly train several models at once with a constant training cost. This cost remains as high as 40-50 GPU days, and the approach also suffers from a combinatorial explosion of sub-optimal model configurations. We seek to reduce this search space, and hence the training budget, by limiting the search to models close to the accuracy-latency Pareto frontier. We incorporate insights of compound relationships between model dimensions to build CompOFA, a design space smaller by several orders of magnitude. We also show that this smaller design space is dense enough to support equally accurate models for a similar diversity of hardware and latency targets, while also reducing the complexity of the training and subsequent extraction algorithms. We demonstrate that even with simple heuristics we can achieve a 2x reduction in training time and a 216x speedup in model search/extraction time.
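The "orders of magnitude" claim can be sanity-checked with simple combinatorics. The sketch below is only an illustration under assumed design dimensions commonly used in the OFA family (5 units, depths {2,3,4}, expansion ratios {3,4,6}, kernel sizes {3,5,7}); the exact dimensions and the coupling scheme are not spelled out in the abstract above.

    # Rough back-of-the-envelope comparison of design-space sizes.
    # Dimension choices are assumptions, not taken from the abstract above.

    NUM_UNITS = 5
    DEPTHS = [2, 3, 4]          # layers per unit
    EXPAND_RATIOS = [3, 4, 6]   # width multiplier per layer
    KERNEL_SIZES = [3, 5, 7]    # kernel size per layer

    # OFA-style space: each layer independently picks an expansion ratio and
    # kernel size, and each unit independently picks a depth.
    per_layer = len(EXPAND_RATIOS) * len(KERNEL_SIZES)
    per_unit_ofa = sum(per_layer ** d for d in DEPTHS)
    ofa_space = per_unit_ofa ** NUM_UNITS

    # Compound space (CompOFA-style, as assumed here): depth and width move
    # together, so each unit makes one joint (depth, width) choice and the
    # kernel size is held fixed.
    per_unit_compound = len(DEPTHS)
    compound_space = per_unit_compound ** NUM_UNITS

    print(f"OFA-style space:  ~{ofa_space:.2e} sub-networks")
    print(f"Compound space:   {compound_space} sub-networks")
    print(f"Reduction factor: ~{ofa_space / compound_space:.1e}x")

Under these assumptions the independent space is on the order of 10^19 sub-networks, while the compound space collapses to a few hundred, which is the kind of reduction the abstract refers to.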

Author(s) : Manas Sahni, Shreya Varshini, Alind Khare, Alexey Tumanov

Links : PDF - Abstract

Code :

Keywords : training - models - space - design - smaller
