Convolutional Neural Networks (CNNs) deployed in real-life applications such as autonomous vehicles have been shown to be vulnerable to manipulation attacks such as poisoning attacks and fine-tuning. In this paper, we propose a self-contained tamper-proofing method, called DeepiSign, to ensure the integrity and authenticity of CNN models against such manipulation attacks. DeepiSign embeds a secret, together with its hash value, into the model: we use a wavelet-based technique to transform the weights into the frequency domain and embed the secret into the less significant coefficients. To verify a model, we retrieve the secret from it, compute the hash value of the secret, and compare it with the embedded hash value. Our theoretical analysis shows that DeepiSign can hide up to 1KB of secret in each layer with minimal loss of the model's accuracy. Experiments on four pre-trained models (ResNet18, VGG16, AlexNet, and Imagenet) demonstrate that DeepiSign is verifiable without degrading the classification accuracy, and robust against representative CNN manipulation attacks.
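The embed-then-verify scheme described in the abstract (transform weights into the frequency domain, hide a secret and its hash in less significant coefficients, then recompute and compare the hash) can be sketched as below. This is a minimal illustration, not the authors' implementation: it assumes a single-level Haar transform over a flattened 1-D weight vector, SHA-256 as the hash, and a simple parity (quantisation-index-modulation-style) embedding with a hypothetical quantisation step `step`.

```python
import hashlib
import numpy as np

def haar_dwt(w):
    """Single-level Haar transform of a 1-D weight vector (even length)."""
    a = (w[0::2] + w[1::2]) / np.sqrt(2)  # approximation (significant) coefficients
    d = (w[0::2] - w[1::2]) / np.sqrt(2)  # detail (less significant) coefficients
    return a, d

def haar_idwt(a, d):
    """Inverse of haar_dwt."""
    w = np.empty(a.size * 2)
    w[0::2] = (a + d) / np.sqrt(2)
    w[1::2] = (a - d) / np.sqrt(2)
    return w

def to_bits(data):
    """Bytes -> list of bits, least-significant bit of each byte first."""
    return [(byte >> k) & 1 for byte in data for k in range(8)]

def embed(weights, secret, step=1e-4):
    """Hide `secret` plus its SHA-256 digest in the detail coefficients by
    quantising each used coefficient to an even/odd multiple of `step`.
    (`step` is an assumed tuning knob trading distortion vs. capacity.)"""
    bits = to_bits(secret + hashlib.sha256(secret).digest())
    a, d = haar_dwt(weights)
    assert len(bits) <= d.size, "payload too large for this layer"
    for i, bit in enumerate(bits):
        q = int(np.round(d[i] / step))
        if q % 2 != bit:
            q += 1  # flip parity so the coefficient encodes the bit
        d[i] = q * step
    return haar_idwt(a, d)

def verify(weights, secret_len, step=1e-4):
    """Retrieve the secret, recompute its hash, compare with the embedded hash."""
    _, d = haar_dwt(weights)
    n_bytes = secret_len + 32  # secret followed by a 32-byte SHA-256 digest
    bits = [int(np.round(d[i] / step)) % 2 for i in range(n_bytes * 8)]
    data = bytes(sum(b << k for k, b in enumerate(bits[i * 8:(i + 1) * 8]))
                 for i in range(n_bytes))
    secret, digest = data[:secret_len], data[secret_len:]
    return secret, hashlib.sha256(secret).digest() == digest
```

Because the payload sits only in the detail coefficients, the distortion per weight is bounded by roughly `step`; any tampering that perturbs those coefficients (e.g. fine-tuning or poisoning the weights) is very likely to corrupt the recovered secret, so the recomputed hash no longer matches the embedded one and verification fails.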

Authors: Alsharif Abuadbba, Hyoungshick Kim, Surya Nepal

Keywords: secret, attacks, manipulation, accuracy, CNN
