Random transformation of image brightness can eliminate overfitting in the generation of adversarial examples. The method can be integrated with Fast Gradient Sign Method (FGSM)-related methods to build a more robust gradient-based attack and to generate adversarial examples with better transferability. Extensive experiments on the ImageNet dataset demonstrate the method's effectiveness: whether on normally or adversarially trained networks, it achieves higher success rates for black-box attacks than other attack methods based on data augmentation. We hope that this method can help to evaluate and improve the robustness of models.
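The core idea, combining an FGSM-style step with gradients averaged over randomly brightness-scaled copies of the input, can be sketched as follows. This is only an illustrative toy: the paper attacks ImageNet classifiers, whereas here the "model" is a hand-written logistic regression so the gradient can be computed in plain NumPy, and the function names, brightness range, and hyperparameters are all assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def brightness_transform(x, low=0.5, high=1.5):
    # Randomly scale pixel intensities (assumed to lie in [0, 1])
    # to simulate a change in image brightness.
    factor = rng.uniform(low, high)
    return np.clip(x * factor, 0.0, 1.0)

def fgsm_with_brightness(x, y, w, b, eps=0.03, n_copies=5):
    """One FGSM-style step whose gradient is averaged over several
    randomly brightness-transformed copies of the input.

    Toy stand-in for the paper's method: the 'model' is logistic
    regression p = sigmoid(w.x + b) with label y in {0, 1}, so the
    cross-entropy gradient w.r.t. the input is (p - y) * w."""
    grad = np.zeros_like(x)
    for _ in range(n_copies):
        xt = brightness_transform(x)
        z = float(w @ xt + b)
        p = 1.0 / (1.0 + np.exp(-z))
        grad += (p - y) * w          # d(loss)/dx for this transformed copy
    grad /= n_copies
    # FGSM update: step in the sign of the averaged gradient,
    # then clip back to the valid pixel range.
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)
```

Averaging the gradient over several brightness-transformed copies is what discourages the attack from overfitting to the white-box model's exact response to one fixed input, which is the mechanism behind the improved black-box transferability claimed above.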

Author(s) : Bo Yang, Kaiyong Xu, Hengjun Wang, Hengwei Zhang

Links : PDF - Abstract

Code :

Keywords : based - attack - method - methods - attacks
