Deep neural networks (DNNs) exhibit knowledge transferability, which is critical to improving learning efficiency and to learning in domains that lack high-quality training data. In this paper, we aim to turn the existence and pervasiveness of adversarial examples into an advantage. We show that composition with an affine function is sufficient to reduce the difference between two models when the adversarial transferability between them is high. We provide empirical evaluation for different transfer learning scenarios on diverse datasets, including CIFAR-10, STL-10, and CelebA, showing a strong positive correlation between adversarial transferability and knowledge transferability, thus illustrating that our theoretical insights are predictive of practice. We also outline easily checkable sufficient conditions under which adversarial transferability indicates knowledge transferability.
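As a rough, self-contained sketch of the two quantities the abstract relates, the PyTorch snippet below measures adversarial transferability via FGSM examples crafted on a source model, and probes knowledge transferability by fitting an affine map on top of the source model's outputs to mimic a target model. The toy models, random data, and the FGSM/affine-fit choices are illustrative assumptions, not the paper's exact protocol.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical placeholder models; substitute any pair of classifiers
# that share the same input space (e.g. two CIFAR-10 networks).
source_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
target_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

def fgsm(model, x, y, eps=8 / 255):
    """Craft FGSM adversarial examples against `model`."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def adversarial_transferability(x, y):
    """Fraction of inputs whose source-model adversarial example
    fools both the source and the target model."""
    x_adv = fgsm(source_model, x, y)
    fooled_src = source_model(x_adv).argmax(1) != y
    fooled_tgt = target_model(x_adv).argmax(1) != y
    return (fooled_src & fooled_tgt).float().mean().item()

def affine_fit_loss(x, steps=200, lr=1e-2):
    """Train an affine map g so that g(source(x)) ~ target(x);
    a low final loss suggests high knowledge transferability."""
    with torch.no_grad():
        src_out, tgt_out = source_model(x), target_model(x)
    affine = nn.Linear(src_out.shape[1], tgt_out.shape[1])
    opt = torch.optim.Adam(affine.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(affine(src_out), tgt_out)
        loss.backward()
        opt.step()
    return loss.item()

x = torch.rand(64, 3, 32, 32)          # toy batch; real use: CIFAR-10 loader
y = torch.randint(0, 10, (64,))
print("adv. transferability:", adversarial_transferability(x, y))
print("affine-fit loss:", affine_fit_loss(x))
```

Under the paper's claim, pairs of models with a higher adversarial transferability score should tend to admit a lower affine-fit loss, i.e., one model composed with an affine function approximates the other well.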


Code:

https://github.com/AI-secure/Does-Adversairal-Transferability-Indicate-Knowledge-Transferability

Keywords: transferability - models - knowledge - adversarial - reduce
