Findings on Adversarial Robustness through Autoencoder-Based Denoising for Image Security

EasyChair Preprint 11764 • 5 pages • Date: January 14, 2024

Abstract

In the realm of image security, the robustness of Convolutional Neural Networks (CNNs) against adversarial attacks is of paramount importance. In this study, we present a comprehensive approach to strengthening the adversarial resilience of a CNN by integrating an autoencoder-based denoising mechanism.

We began by training a CNN on a dataset of 2482 images, split evenly into 1241 training and 1241 validation images. After the initial 50 epochs, the CNN demonstrated strong performance, with a training accuracy of 97%, a validation accuracy of 92.46%, and a testing accuracy of 93.23%. Encouraged by these results, we saved the model for further analysis.

To harden the CNN against adversarial attacks, we introduced an autoencoder tailored for denoising images, trained on a curated set of combined images generated from the original dataset. The autoencoder's primary objective is to remove noise from images, thereby improving the model's ability to discern the subtle patterns and features crucial for robust classification.

However, a noteworthy observation emerged during our experiments: despite its efficacy in denoising, the trained autoencoder could not reliably distinguish benign from adversarial inputs, raising intriguing questions about the complexity of adversarial perturbations.

This study sheds light on the intricate interplay between denoising autoencoders and adversarial attacks within the context of image security. Our findings underscore the need for further exploration of the nuances of adversarial robustness and of the role denoising mechanisms play in fortifying CNNs against increasingly sophisticated threats.

Keyphrases: Adversarial Attacks, Autoencoder, Convolutional Neural Networks (CNNs), Denoising, Image Security, Robustness
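A minimal sketch of the pipeline described in the abstract, assuming a TensorFlow/Keras workflow: a convolutional denoising autoencoder is trained on (noisy, clean) pairs derived from the original images, then placed in front of the trained CNN and probed with FGSM adversarial examples. The input shape, noise level, attack budget, and all function names here are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of the described pipeline: a denoising autoencoder is
# trained on (noisy, clean) image pairs, then placed in front of a trained
# CNN classifier and probed with FGSM adversarial examples. Shapes and
# hyperparameters are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SHAPE = (64, 64, 3)  # assumed input size; the paper does not state one

def build_denoising_autoencoder():
    inp = layers.Input(shape=IMG_SHAPE)
    # Encoder: two downsampling convolutional blocks.
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D(2)(x)
    # Decoder: mirror the encoder back to the input resolution.
    x = layers.Conv2DTranspose(64, 3, strides=2, activation="relu", padding="same")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, activation="relu", padding="same")(x)
    out = layers.Conv2D(3, 3, activation="sigmoid", padding="same")(x)
    ae = models.Model(inp, out)
    ae.compile(optimizer="adam", loss="mse")
    return ae

def add_noise(images, stddev=0.1):
    # One plausible reading of the "combined images" training set:
    # clean images corrupted with Gaussian noise.
    noisy = images + tf.random.normal(tf.shape(images), stddev=stddev)
    return tf.clip_by_value(noisy, 0.0, 1.0)

def fgsm(model, images, labels, eps=0.03):
    # Fast Gradient Sign Method: perturb inputs along the sign of the
    # loss gradient to craft adversarial examples.
    images = tf.convert_to_tensor(images)
    with tf.GradientTape() as tape:
        tape.watch(images)
        loss = tf.keras.losses.sparse_categorical_crossentropy(
            labels, model(images))
    grad = tape.gradient(loss, images)
    return tf.clip_by_value(images + eps * tf.sign(grad), 0.0, 1.0)

# Usage sketch (x_train/x_test scaled to [0, 1], cnn a trained classifier):
# ae = build_denoising_autoencoder()
# ae.fit(add_noise(x_train), x_train, epochs=20, batch_size=32)
# x_adv = fgsm(cnn, x_test, y_test)
# cnn.evaluate(ae.predict(x_adv), y_test)  # accuracy after denoising
```

The final line reflects the abstract's negative finding: comparing the CNN's accuracy on denoised adversarial inputs against its accuracy on clean inputs is one way to observe that the autoencoder removes noise without reliably neutralizing adversarial perturbations.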