CMMR: A Composite Multidimensional Models Robustness Evaluation Framework for Deep Learning

EasyChair Preprint 11020
19 pages
Date: October 4, 2023

Abstract

Accurately evaluating defense models against adversarial examples has proven to be a challenging task. We identify the limitations of mainstream evaluation standards, which fail to account for discrepancies in evaluation results arising from different adversarial attack methods, experimental setups, and metric sets. To address these disparities, we propose the Composite Multidimensional Model Robustness (CMMR) evaluation framework, which integrates three evaluation dimensions: attack methods, experimental settings, and metric sets. By comprehensively evaluating a model's robustness across these dimensions, we aim to effectively mitigate the aforementioned variations. Furthermore, the CMMR framework allows evaluators to flexibly define their own options for each evaluation dimension to meet their specific requirements. We provide practical examples demonstrating how the CMMR framework can be used to assess models whose robustness has been enhanced through various approaches. The reliability of our methodology is assessed through both practical experiments and theoretical validation. The experimental results demonstrate the strong reliability of the CMMR framework and its significant reduction of the variations encountered when evaluating model robustness in practical scenarios.

Keyphrases: adversarial machine learning, adversarial attacks, robustness evaluation
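The abstract does not specify how scores from the three dimensions are combined, but the core idea of evaluating a model over the cross-product of attack methods, experimental settings, and metric sets can be sketched as below. This is a minimal illustration, not the paper's actual API: the function name cmmr_evaluate, the callable signatures, and the equal-weight averaging are all assumptions made for the sake of the example.

```python
from itertools import product
from statistics import mean

def cmmr_evaluate(model, attacks, settings, metric_sets):
    """Illustrative sketch of a composite multidimensional evaluation.

    attacks:     dict mapping name -> attack_fn(model, setting), returning
                 adversarial examples crafted under that setting
    settings:    dict mapping name -> experimental configuration
                 (e.g., perturbation budget, number of iterations)
    metric_sets: dict mapping name -> list of metric_fn(model, adv) -> float
    """
    scores = {}
    # Score every combination of (attack method, setting, metric set).
    for (a_name, attack), (s_name, setting), (m_name, metrics) in product(
        attacks.items(), settings.items(), metric_sets.items()
    ):
        adv = attack(model, setting)                 # craft adversarial examples
        cell = mean(m(model, adv) for m in metrics)  # average the metrics in the set
        scores[(a_name, s_name, m_name)] = cell
    # Aggregate per-combination scores into one composite value;
    # equal weighting is an assumption, evaluators could weight dimensions.
    composite = mean(scores.values())
    return composite, scores
```

In this sketch, a single composite score smooths out the result variations that any one attack method, setup, or metric set would produce in isolation, while the per-combination scores dictionary preserves the fine-grained results for inspection.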