Transfer learning, or domain adaptation, is concerned with machine learning problems in which training and testing data come from possibly different distributions. The Kullback-Leibler (KL) divergence $D(\mu\|\mu')$ plays an important role in characterizing the generalization error in the setting of domain adaptation. The results are based on an empirical risk minimization (ERM) algorithm where data from both distributions are available in the training phase; in particular, for specific classification problems our bound is tighter than the bound derived using Rademacher complexity. We further apply the method to iterative, noisy gradient descent algorithms and obtain upper bounds that can be easily computed using only parameters of the learning algorithms. A few illustrative examples are provided to demonstrate the usefulness of the results.
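As a rough illustration of the kind of quantity such bounds depend on, the sketch below computes the closed-form KL divergence between two multivariate Gaussians, the sort of distribution pair that arises when analysing noisy gradient descent iterates. This is only a minimal sketch: the Gaussian assumption, the function name, and the example values are illustrative and not taken from the paper.

```python
import numpy as np

def kl_gaussian(mu1, cov1, mu2, cov2):
    """Closed-form KL divergence D(N(mu1, cov1) || N(mu2, cov2))."""
    k = mu1.shape[0]
    cov2_inv = np.linalg.inv(cov2)
    diff = mu2 - mu1
    # Standard closed-form expression for two multivariate Gaussians.
    return 0.5 * (
        np.trace(cov2_inv @ cov1)
        + diff @ cov2_inv @ diff
        - k
        + np.log(np.linalg.det(cov2) / np.linalg.det(cov1))
    )

# Hypothetical source vs. target distributions (illustrative values only).
mu_source, cov_source = np.zeros(2), np.eye(2)
mu_target, cov_target = np.array([0.5, -0.3]), 1.5 * np.eye(2)
print(kl_gaussian(mu_source, cov_source, mu_target, cov_target))
```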

Links: PDF - Abstract

Code: None

Keywords: learning - illustrative - data - bound - algorithms
