A falsification algorithm for neural networks directs the search for a counterexample, guided by a safety property specification. We evaluate our algorithm on 45 trained neural network benchmarks of the ACAS Xu system against 10 safety properties. We show that our procedure detects all the unsafe instances that other verification tools also report as unsafe. In terms of performance, by using a derivative-free sampling-based optimization method, our falsification procedure identifies most unsafe instances faster than state-of-the-art verification tools for feed-forward neural networks such as NNENUM and Neurify, in many instances by orders of magnitude.
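To illustrate the idea of property-guided falsification with a derivative-free optimizer, here is a minimal sketch. The toy network, input bounds, safety threshold, and choice of Nelder-Mead are illustrative assumptions, not the ACAS Xu benchmarks or the exact method from the paper.

```python
# Sketch: falsify "network(x) <= threshold for all x in [lo, hi]"
# by minimizing the violation margin with a derivative-free optimizer.
# All weights, bounds, and the property below are illustrative placeholders.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy feed-forward ReLU network standing in for a trained benchmark.
W1, b1 = rng.normal(size=(8, 3)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def network(x):
    h = np.maximum(W1 @ x + b1, 0.0)   # ReLU hidden layer
    return (W2 @ h + b2)[0]

# Assumed safety property: output stays below a threshold on the input box.
lo, hi, threshold = -1.0, 1.0, 4.0

def violation_cost(x):
    x = np.clip(x, lo, hi)             # keep the search inside the input box
    # Margin to violation: goes negative once network(x) exceeds the
    # threshold, so minimizing it steers the search toward a counterexample.
    return threshold - network(x)

# Multi-start derivative-free search: random restarts plus Nelder-Mead.
counterexample = None
for _ in range(50):
    x0 = rng.uniform(lo, hi, size=3)
    res = minimize(violation_cost, x0, method="Nelder-Mead")
    if res.fun < 0:                    # property violated: counterexample found
        counterexample = np.clip(res.x, lo, hi)
        break

if counterexample is not None:
    print("unsafe: x =", counterexample, "output =", network(counterexample))
else:
    print("no counterexample found within the sampling budget")
```

Because the search only needs forward evaluations of the network, any derivative-free sampler (random restarts, cross-entropy, Nelder-Mead) can drive it; a negative cost is a certificate of unsafety, while exhausting the budget proves nothing.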

Author(s) : Moumita Das, Rajarshi Ray, Swarup Kumar Mohalik, Ansuman Banerjee

Links : PDF - Abstract

Code :

https://github.com/oktantod/RoboND-DeepLearning-Project



Keywords : unsafe - algorithm - falsification - tools - instances
