Transfer learning has emerged as a powerful technique for improving the performance of machine learning models on new domains where labeled training data are scarce. In this approach, a model trained on a source task is used as a starting point for training a model on a related target task. Despite the recent empirical success of transfer learning approaches, its benefits and fundamental limits remain poorly understood. We derive a lower bound on the target generalization error achievable by any algorithm, as a function of the number of labeled source and target samples and appropriate notions of similarity between the source and target tasks. This lower bound provides new insights into the benefits and limitations of transfer learning, and we corroborate our theoretical findings with various experiments.
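In symbols, the main result can be sketched as a minimax statement (the notation below is illustrative, not the paper's exact formulation): with $n_S$ labeled source samples, $n_T$ labeled target samples, and a measure $\Delta$ of dissimilarity between the two tasks,

$$\inf_{\hat{\theta}} \; \sup_{(\mathcal{T}_S,\, \mathcal{T}_T) \in \mathcal{P}(\Delta)} \; \mathbb{E}\big[\mathcal{R}_T(\hat{\theta})\big] \;\geq\; \phi(n_S, n_T, \Delta),$$

where the infimum is over all transfer learning algorithms, the supremum is over pairs of source and target tasks with dissimilarity at most $\Delta$, $\mathcal{R}_T$ denotes the target generalization error, and $\phi$ decreases as the sample sizes grow but increases with task dissimilarity. Because the bound holds for any algorithm, it characterizes a fundamental limit of transfer rather than a weakness of any particular method.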

Code: https://github.com/z-fabian/transfer_lowerbounds_arXiv
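The repository above contains the authors' code. As a purely illustrative sketch of the transfer learning recipe the abstract describes (assuming a PyTorch-style workflow; none of the names below come from the paper or its repository):

```python
import torch
import torch.nn as nn

# Hypothetical models for illustration only: a network trained on the
# source task initializes an identical network for the target task.
def make_model() -> nn.Module:
    return nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))

source_model = make_model()
# ... assume source_model has already been trained on plentiful source data ...

target_model = make_model()
target_model.load_state_dict(source_model.state_dict())  # transfer the weights

# Fine-tune on a small labeled target set (toy random batch here).
optimizer = torch.optim.SGD(target_model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
x, y = torch.randn(8, 64), torch.randint(0, 10, (8,))
for _ in range(10):
    optimizer.zero_grad()
    loss_fn(target_model(x), y).backward()
    optimizer.step()
```

The lower bound concerns exactly this setting: however the fine-tuning step is designed, the resulting target error cannot fall below a floor determined by the sample sizes and the similarity between the tasks.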

Keywords: learning - bound - transfer - target - task
