Informatics

Experimental assessment of adversarial attacks to the deep neural networks in medical image recognition

Abstract

This paper addresses how the success rate of adversarial attacks on deep neural networks depends on the type of biomedical image and on the control parameters used to generate adversarial examples. With this work we aim to contribute experimental results on adversarial attacks to the community working with biomedical images. White-box Projected Gradient Descent (PGD) attacks were examined on 8 classification tasks and 13 image datasets containing more than 900,000 chest X-ray and histology images of malignant tumors. Increasing the amplitude and the number of iterations of adversarial perturbations when generating malicious adversarial images increases the fraction of successful attacks for the majority of image types examined in this study. Histology images tend to be less sensitive to growth in the amplitude of adversarial perturbations. It was also found that the success rate of attacks drops dramatically when the original confidence of the predicted image class exceeds 0.95.
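
The abstract names the perturbation amplitude and the number of iterations as control parameters of the white-box Projected Gradient Descent attack. The sketch below is an illustration only, not the authors' implementation: it shows how such parameters (here called eps, alpha and n_iter, all assumed names) typically enter an L-infinity PGD attack on a PyTorch image classifier; model, images and labels are placeholder inputs with pixel values assumed to lie in [0, 1].

    # Minimal sketch of a white-box L-infinity PGD attack (assumed PyTorch setup).
    import torch
    import torch.nn.functional as F

    def pgd_attack(model, images, labels, eps=0.03, alpha=0.007, n_iter=10):
        """Return adversarial images within an L-infinity ball of radius eps."""
        adv = images.clone().detach()
        for _ in range(n_iter):
            adv.requires_grad_(True)
            loss = F.cross_entropy(model(adv), labels)
            grad = torch.autograd.grad(loss, adv)[0]
            # Take a signed-gradient ascent step on the classification loss.
            adv = adv.detach() + alpha * grad.sign()
            # Project back into the eps-ball around the originals and
            # clip to the valid pixel range.
            adv = torch.min(torch.max(adv, images - eps), images + eps)
            adv = adv.clamp(0.0, 1.0)
        return adv.detach()

In this sketch, eps plays the role of the perturbation amplitude and n_iter the number of iterations; the paper's experiments vary these two parameters to measure how the fraction of successful attacks changes.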

About the Authors

D. M. Voynov
Belarusian State University
Belarus
Master's Student


V. A. Kovalev
The United Institute of Informatics Problems of the National Academy of Sciences of Belarus
Belarus
Cand. Sci. (Eng.), Head of the Laboratory of Biomedical Images Analysis



For citation:

Voynov D. M., Kovalev V. A. Experimental assessment of adversarial attacks to the deep neural networks in medical image recognition. Informatics. 2019;16(3):14-22. (In Russ.)



This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 1816-0301 (Print)
ISSN 2617-6963 (Online)