Transfer learning has emerged as a powerful methodology for adapting pre-trained deep neural networks to new domains. This approach is particularly useful when only limited or weakly labelled data are available for the new task. We show that adversarially-trained models transfer better across new domains than naturally-trained ones. This behavior results from a bias, introduced by adversarial training, that pushes the learned inner layers toward more natural image representations, which in turn enables better transfer.
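
To make the transfer-learning setup concrete, here is a minimal PyTorch sketch of adapting a pre-trained backbone to a new domain by freezing its learned layers and fine-tuning a fresh classification head. The ResNet-18 backbone, `NUM_TARGET_CLASSES`, and the use of torchvision's naturally-trained ImageNet weights are illustrative assumptions; the paper's setting would instead start from an adversarially-trained checkpoint.

```python
# Minimal transfer-learning sketch, assuming a ResNet-18 backbone.
# torchvision only ships naturally-trained weights; an adversarially-trained
# checkpoint (as studied in the paper) would be loaded separately.
import torch
import torch.nn as nn
from torchvision import models

NUM_TARGET_CLASSES = 10  # assumed label-set size of the new domain

# Load a pre-trained backbone (stand-in for an adversarially-trained one).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pre-trained inner layers so their representations are reused.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for the target domain.
model.fc = nn.Linear(model.fc.in_features, NUM_TARGET_CLASSES)

# Fine-tune only the new head on the (possibly small) target dataset.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on dummy target-domain data.
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, NUM_TARGET_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```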

Links: PDF - Abstract

Code: None

Keywords: transfer - trained - pushes - layers - behavior
