Tiny-Transfer-Learning (TinyTL) reduces training memory cost by an order of magnitude (up to 13.3x) without sacrificing accuracy compared to fine-tuning the full network. Instead of updating the weights of a large backbone, TinyTL freezes them and learns small residual feature maps in the middle of the network, so only the lightweight residual branch requires activations to be stored for backpropagation. To adapt the feature extractor to each target dataset, TinyTL pre-trains a large super-net containing many weight-shared sub-nets that can each operate independently; rather than reusing one fixed feature extractor for every dataset, it selects a sub-net architecture that fits the target dataset while keeping the weights fixed. This discrete sub-net selection is backpropagation-free and incurs no memory overhead. Extensive experiments confirm the up-to-13.3x reduction in training memory cost.
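The memory-saving mechanism can be sketched numerically. The layer widths, names, and the 16-wide bottleneck below are illustrative assumptions for this sketch, not values taken from the paper:

```python
import numpy as np

# Hypothetical sketch of TinyTL's core idea: freeze the large backbone's
# weights and train only a small residual branch (plus biases).
rng = np.random.default_rng(0)

# Frozen backbone layer: large weight matrix, never updated, so its
# intermediate activations need not be stored for backpropagation.
W_backbone = rng.standard_normal((256, 256))   # frozen
b_backbone = np.zeros(256)                     # bias: cheap to train

# Lite residual branch: operates at a reduced width, so the activations
# it must keep for backprop (which dominate training memory) are small.
W_down = rng.standard_normal((256, 16)) * 0.01  # trainable, 16-wide bottleneck
W_up = rng.standard_normal((16, 256)) * 0.01    # trainable

def forward(x):
    main = x @ W_backbone + b_backbone            # frozen path
    residual = np.maximum(x @ W_down, 0.0) @ W_up # small trainable path
    return main + residual

# Rough intuition for the saving: activation storage per layer drops from
# the backbone width to the residual branch's bottleneck width.
full_act, lite_act = 256, 16
print(f"activation memory ratio: {full_act / lite_act:.1f}x")
```

This illustrates only the residual-learning half of the method; the super-net sub-net selection described above is a separate, backpropagation-free search over architectures.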




Keywords: training, TinyTL, memory, learning, network
