The phenomenon of adversarial examples has attracted a growing amount of attention from the machine learning community. A deeper understanding of the problem could lead to a better comprehension of how information is processed and encoded in neural networks and could help to solve the issue of interpretability in machine learning. Our hypothesis is that the presence in a network of a sufficiently high number of OR-like neurons could increase the network's susceptibility to adversarial attacks. The proposed countermeasures are: L1 norm weight normalisation; application of an input filter; and comparison between the distribution of neuron outputs obtained when the network is fed with the actual data set and the distribution obtained when it is fed with a randomised version of the former, called the "scrambled data set". Tests performed on the MNIST data set hint that the proposed measures could represent an interesting direction to explore.
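
To make the three measures concrete, here is a minimal sketch in PyTorch. All names (l1_normalise_weights, input_filter, scramble) and the specific choices of filter and scrambling are illustrative assumptions, since the abstract does not specify how the input filter or the scrambled data set are constructed.

```python
# Hypothetical sketch of the three proposed measures; not the paper's actual code.
import torch
import torch.nn as nn

def l1_normalise_weights(layer: nn.Linear) -> None:
    """Rescale each neuron's incoming weights to unit L1 norm."""
    with torch.no_grad():
        norms = layer.weight.abs().sum(dim=1, keepdim=True).clamp_min(1e-12)
        layer.weight.div_(norms)

def input_filter(x: torch.Tensor, threshold: float = 0.1) -> torch.Tensor:
    """One possible input filter: zero out low-intensity pixels."""
    return torch.where(x.abs() < threshold, torch.zeros_like(x), x)

def scramble(x: torch.Tensor) -> torch.Tensor:
    """Build a 'scrambled' batch by permuting the pixels of each image independently."""
    flat = x.flatten(start_dim=1)
    idx = torch.argsort(torch.rand_like(flat), dim=1)
    return flat.gather(1, idx).view_as(x)

# Compare neuron output distributions on real vs. scrambled inputs.
layer = nn.Linear(784, 128)
l1_normalise_weights(layer)
x = torch.rand(256, 1, 28, 28)  # stand-in for a batch of MNIST images
real_out = layer(input_filter(x).flatten(start_dim=1))
scram_out = layer(input_filter(scramble(x)).flatten(start_dim=1))
print(real_out.mean(0)[:5], scram_out.mean(0)[:5])  # per-neuron mean activations
```

Intuitively, a neuron whose output distribution barely changes between the real and the scrambled data set is responding to coarse, OR-like input patterns rather than to the structure of the digits.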

Author(s): Alessandro Fontana

Links: PDF - Abstract

Code:

https://github.com/oktantod/RoboND-DeepLearning-Project


Keywords: data set, measures, adversarial, proposed
