A Universal Adversarial Attack Method Based on Spherical Projection

    DOI: https://doi.org/10.1142/S0218126622500384

    Adversarial attacks on neural networks have become an important problem restricting their use in security-sensitive applications. Among attacks aimed at an entire sample set, designing a universal perturbation that causes most samples to be misclassified is a key research question. Taking image-classification neural networks as the research object, this paper surveys existing universal-perturbation generation algorithms and proposes a new algorithm that combines batched stochastic gradient ascent with spherical projection search: the attack loss is optimized by iterating stochastic gradient ascent over batches of samples, and the search for the universal perturbation is confined to a high-dimensional sphere of radius ε, which reduces the size of the search space. Regularization is also introduced to improve the quality of the generated perturbations. Experimental results show that, compared with the baseline algorithm, the attack success rate increases by more than 10%, the universal perturbation is computed an order of magnitude faster, and the quality of the perturbation is more controllable.
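    The optimization loop described in the abstract can be illustrated with a short sketch. The following PyTorch code is a minimal, hypothetical illustration rather than the authors' implementation: the classifier `model`, the data `loader`, the step size `lr`, the radius `eps`, and the use of cross-entropy as the attack objective are all assumptions, and the regularization term mentioned in the abstract is omitted. It performs batched stochastic gradient ascent on the loss of the perturbed inputs and, after each step, projects the perturbation back onto the sphere of radius ε.

```python
import torch
import torch.nn.functional as F

def spherical_projection(delta, eps):
    """Rescale the perturbation so its L2 norm equals eps (projection onto the eps-sphere)."""
    norm = delta.view(-1).norm(p=2).clamp_min(1e-12)
    return delta * (eps / norm)

def universal_perturbation(model, loader, eps=10.0, lr=0.1, epochs=5, device="cpu"):
    """Search for a single universal perturbation by batched stochastic gradient ascent
    on the classification loss, re-projecting onto the eps-sphere after every step.

    Assumes `loader` yields (images, labels) batches and `model` is a trained classifier.
    """
    model.eval()
    # Infer the input shape from one batch and start from a random point on the sphere.
    x0, _ = next(iter(loader))
    delta = spherical_projection(torch.randn_like(x0[0]).to(device), eps)
    delta.requires_grad_(True)

    optimizer = torch.optim.SGD([delta], lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            logits = model(x + delta)
            # Gradient ascent on the attack loss == gradient descent on its negative.
            loss = -F.cross_entropy(logits, y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            with torch.no_grad():
                # Keep the search confined to the high-dimensional sphere of radius eps.
                delta.copy_(spherical_projection(delta.detach(), eps))
    return delta.detach()
```

    Projecting onto the sphere (rather than the enclosing ball) keeps the perturbation norm fixed at ε, which is one way to realize the spherical-projection constraint described in the abstract; the paper's exact loss and projection details may differ.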

    This paper was recommended by Regional Editor Takuro Sato.
