1. Litjens G., Kooi T., Bejnordi B., Setio A., Ciompi F., Ghafoorian M. A survey on deep learning in medical image analysis. Medical Image Analysis, 2017, vol. 42, pp. 60-88.
2. Ker J., Wang L., Rao J., Lim T. Deep learning applications in medical image analysis. IEEE Access, 2018, vol. 6, pp. 9375-9389.
3. Recht B., Roelofs R., Schmidt L., Shankar V. Do CIFAR-10 Classifiers Generalize to CIFAR-10? ArXiv.org, 2018. Available at: https://arxiv.org/abs/1806.00451 (accessed 15.05.2019).
4. Szegedy C., Zaremba W., Sutskever I., Bruna J., Erhan D., Goodfellow I., Fergus R. Intriguing properties of neural networks. International Conference on Learning Representations (ICLR'2014), Banff, Canada, 14-16 April 2014. Banff, 2014, pp. 1-10.
5. Akhtar N., Mian A. S. Threat of adversarial attacks on deep learning in computer vision. IEEE Access, 2018, vol. 6, pp. 14410-14430.
6. Papernot N., McDaniel P., Goodfellow I., Jha S., Berkay Celik Z., Swami A. Practical Black-Box Attacks against Machine Learning. ArXiv.org, 2017. Available at: https://arxiv.org/abs/1602.02697 (accessed 15.05.2019).
7. Xu W., Evans D., Qi Y. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. ArXiv.org, 2017. Available at: https://arxiv.org/abs/1704.01155 (accessed 15.05.2019).
8. Goodfellow I., Shlens J., Szegedy C. Explaining and Harnessing Adversarial Examples. ArXiv.org, 2015. Available at: https://arxiv.org/abs/1412.6572 (accessed 15.05.2019).
9. Madry A., Makelov A., Schmidt L., Tsipras D., Vladu A. Towards Deep Learning Models Resistant to Adversarial Attacks. ArXiv.org, 2017. Available at: https://arxiv.org/abs/1706.06083 (accessed 15.05.2019).
10. Ozdag M. Adversarial attacks and defenses against deep neural networks: a survey. Procedia Computer Science, 2018, vol. 140, pp. 152-161.
11. Ericson N. B., Yao Z., Mahoney M. W. JumpReLU: A Retrofit Defense Strategy for Adversarial Attacks. ArXiv.org, 2019. Available at: https://arxiv.org/abs/1904.03750 (accessed 15.05.2019).