Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions, which are then used to augment the training of the model for improved robustness. We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples. Further, we present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data. We validate our method, Robust Contrastive Learning (RoCL), on multiple benchmark datasets, on which it obtains robust accuracy comparable to state-of-the-art supervised adversarial learning methods, and significantly improved robustness against black-box and unseen types of attacks. RoCL also demonstrates impressive results in robust transfer learning.
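The core idea described above can be sketched in code: instead of perturbing an input to flip its class label, the attack perturbs one augmented view of an image so as to maximize a contrastive (instance-discrimination) loss against its other view, thereby confusing its instance-level identity. The sketch below is an illustrative assumption in the spirit of RoCL, not the authors' reference implementation; the encoder, NT-Xent loss, and PGD hyperparameters (`eps`, `alpha`, `steps`) are hypothetical choices.

```python
# Hypothetical sketch of an instance-wise adversarial attack for
# contrastive learning. All names and hyperparameters below are
# illustrative assumptions, not the paper's reference code.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over positive pairs (z1[i], z2[i])."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                 # (2N, d) embeddings
    sim = z @ z.t() / temperature                  # (2N, 2N) similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))     # exclude self-similarity
    # the positive for index i is i + n (and vice versa)
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def instance_wise_attack(encoder, x_view1, x_view2,
                         eps=8 / 255, alpha=2 / 255, steps=5):
    """PGD-style perturbation of one view that *maximizes* the
    contrastive loss, i.e. confuses instance-level identity."""
    x_adv = x_view1.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nt_xent(encoder(x_adv), encoder(x_view2))
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()             # gradient ascent
            x_adv = x_view1 + (x_adv - x_view1).clamp(-eps, eps)  # L-inf ball
            x_adv = x_adv.clamp(0, 1).detach()              # valid pixel range
    return x_adv
```

In a full training loop, the perturbed view `x_adv` would replace (or augment) the clean view in the contrastive objective, so the encoder learns representations that are stable under such instance-confusing attacks.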

Links: PDF - Abstract

Code :

None

Keywords : learning - robust - adversarial - rocl - supervised
