Deep neural networks have been widely deployed in recent years, so their security is crucial for practical applications. Most previous defense methods are not robust to diverse adversarial perturbations and rely on specific structures or properties of the attacked model. In this work, we propose a novel defense kernel network that converts adversarial examples into images with evident classification features. Our method is robust to a variety of adversarial perturbations and can be applied independently of the attacked model. Experiments on two benchmarks demonstrate that our method achieves defense ability competitive with existing state-of-the-art defense methods.
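The key property claimed above is that the defense is a preprocessing step decoupled from the attacked model. A minimal sketch of that pipeline is below; the `smooth` filter is a hypothetical stand-in for the paper's learned defense kernel network (which is not specified here), and the linear classifier is likewise only illustrative. What matters is the structure: the purifier is applied to the input first, and the downstream classifier is treated as a fixed black box whose weights are never touched.

```python
import numpy as np

def smooth(x, k=3):
    """Hypothetical purifier: a mean filter that suppresses small
    perturbations. The paper's defense kernel network is learned;
    this fixed filter only illustrates the input-purification role."""
    pad = k // 2
    p = np.pad(x, pad, mode="edge")
    out = np.empty_like(x)
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def classify(x, weights):
    """Any fixed, pre-trained classifier (here a toy linear model).
    The defense makes no assumption about its internals."""
    logits = weights @ x.ravel()
    return int(np.argmax(logits))

def defended_classify(x, weights):
    """Model-agnostic defense pipeline: purify the input, then hand
    it to the unmodified attacked model."""
    return classify(smooth(x), weights)
```

Because the purifier only transforms inputs, the same defense module can be placed in front of different attacked models without retraining them, which is the independence property the abstract highlights.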