Deep Neural Networks are known to be vulnerable to small, adversarially crafted perturbations. The current most effective defense methods against these adversarial attacks are variants of adversarial training. In this paper, we introduce a radically different defense trained only on clean images. We evaluate our defense on the CIFAR-10 dataset under a wide range of attack types (including Linf, L2, and L1 bounded attacks), demonstrating its promise as a general-purpose approach.
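As background for the norm-bounded threat models the abstract mentions, here is a minimal illustrative sketch (not from the paper) of how Linf and L2 attack budgets constrain a perturbation. The function names and the budget `eps` are hypothetical; the projections themselves are the standard ones used in attacks such as PGD.

```python
import numpy as np

# Hypothetical sketch: projecting a perturbation delta onto a norm ball
# of radius eps, as assumed by Linf- and L2-bounded attacks.
def project_linf(delta, eps):
    """Project onto the Linf ball: clip every entry to [-eps, eps]."""
    return np.clip(delta, -eps, eps)

def project_l2(delta, eps):
    """Project onto the L2 ball: rescale if the norm exceeds eps."""
    norm = np.linalg.norm(delta)
    if norm > eps:
        delta = delta * (eps / norm)
    return delta

delta = np.array([0.5, -0.2, 0.9])
print(project_linf(delta, 0.3))              # each entry lies in [-0.3, 0.3]
print(np.linalg.norm(project_l2(delta, 0.5)))  # norm is at most 0.5
```

An L1 projection is analogous but sparser in effect; evaluating a defense under all three budgets, as the paper does, probes whether it merely fits one perturbation geometry.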

Authors: Can Bakiskan, Metehan Cekic, Ahmet Dundar Sezer, Upamanyu Madhow


Keywords: defense, adversarial, neural networks
