Defending against adversarial examples using defense kernel network

Yuying Hao (TBSI, Tsinghua), Tuanhui Li (Tsinghua University), Yong Jiang (Tsinghua University), Xuanye Cheng (SenseTime Research), Li Li (Graduate School at Shenzhen, Tsinghua University)

Abstract
Deep neural networks have been widely used in recent years, so their security is crucial for practical applications. Most previous defense methods are not robust to diverse adversarial perturbations and rely on specific structures or properties of the attacked model. In this work, we propose a novel defense kernel network that converts adversarial examples into images with evident classification features. Our method is robust to a variety of adversarial perturbations and can be applied independently to different attacked models. Experiments on two benchmarks demonstrate that our method has competitive defense ability compared with existing state-of-the-art defense methods.
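The abstract describes an input-transformation defense: a network applied in front of the (unmodified) attacked classifier that maps adversarial examples back toward clean-looking inputs. The paper's defense kernel network is learned; the sketch below only illustrates the general idea, substituting a fixed, hypothetical 3x3 smoothing kernel for the learned one, so it is a conceptual toy rather than the authors' method.

```python
import numpy as np

def apply_defense_kernel(image, kernel):
    """Convolve a 2-D image with a small kernel ('same' output, edge padding).

    Stands in for the defense network's transformation stage; the real
    method learns its kernels rather than fixing them by hand.
    """
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")
    out = np.empty_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical fixed smoothing kernel (a box filter), NOT the learned one.
kernel = np.ones((3, 3)) / 9.0

rng = np.random.default_rng(0)
clean = np.zeros((8, 8))
clean[2:6, 2:6] = 1.0  # a toy "classification feature": a bright square

# Toy stand-in for an adversarial example: clean image plus a perturbation.
adversarial = clean + 0.3 * rng.standard_normal(clean.shape)

defended = apply_defense_kernel(adversarial, kernel)
smoothed_clean = apply_defense_kernel(clean, kernel)

# Averaging attenuates the i.i.d. perturbation, so the defended image lies
# closer to the (smoothed) clean image than the adversarial input does.
residual_before = np.mean((adversarial - clean) ** 2)
residual_after = np.mean((defended - smoothed_clean) ** 2)
```

Because the classifier itself is untouched, such a front-end can be attached to different attacked models, which is the model-independence property the abstract claims.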

DOI
10.5244/C.33.136
https://dx.doi.org/10.5244/C.33.136

BibTeX
@inproceedings{BMVC2019,
title={Defending against adversarial examples using defense kernel network},
author={Yuying Hao and Tuanhui Li and Yong Jiang and Xuanye Cheng and Li Li},
year={2019},
month={September},
pages={136.1--136.11},
articleno={136},
numpages={11},
booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
publisher={BMVA Press},
editor={Kirill Sidorov and Yulia Hicks},
doi={10.5244/C.33.136},
url={https://dx.doi.org/10.5244/C.33.136}
}