DETECTION OF ADVERSARIAL ATTACKS USING HYBRID GENERATIVE ADVERSARIAL NETWORKS (GANS)

  • K. Bala Bhaskar
  • Dr. R. Satya Prasad
Keywords: Generative adversarial networks (GANs), adversarial attacks, Fast Gradient Sign Method (FGSM)

Abstract

Adversarial attacks are a common technique used in machine learning to find vulnerabilities in deep learning models. In the context of hybrid generative adversarial networks (GANs), adversarial attacks perturb the input data so that the generated outputs are manipulated into unexpected or undesirable results. One example of an adversarial attack on GANs is the Fast Gradient Sign Method (FGSM), which adds a small perturbation to the input data to make the GAN generate an output that differs significantly from the desired result. This technique is often used to test the robustness of GANs and to identify weaknesses that malicious actors could exploit. Another type of adversarial attack on GANs is the Boundary Attack, which searches for the boundary between the decision regions of the generator and the discriminator in order to identify inputs that can be manipulated to produce a desired output. This paper introduces a hybrid deep learning (DL) model to overcome various issues in existing models. Experiments are conducted on benchmark datasets, including the MNIST dataset. Overall, adversarial attacks on hybrid GANs are an important area of research, as they help to identify potential vulnerabilities in the models and enable researchers to develop more robust and secure machine learning systems.
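
As a minimal illustration of the FGSM step described in the abstract, i.e. x_adv = x + eps * sign(grad_x J(theta, x, y)), the sketch below uses Python/PyTorch. The names model, x, y, and eps are hypothetical placeholders introduced here for illustration; they do not come from the paper, and the paper's hybrid GAN detection pipeline is not reproduced.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, eps):
        """Minimal FGSM sketch: add eps * sign(gradient of loss w.r.t. input).

        model, x, y, and eps are hypothetical placeholders; this is an
        illustrative single-step attack, not the paper's actual method.
        """
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)       # task loss J(theta, x, y)
        loss.backward()                           # gradient with respect to the input
        x_adv = x + eps * x.grad.sign()           # single-step sign perturbation
        return x_adv.clamp(0.0, 1.0).detach()     # keep pixels in a valid range

Such perturbed samples are typically fed back to the target model (or, in this setting, to a GAN-based detector) to measure how often the attack changes the output, which is one way robustness is evaluated.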

Author Biographies

K. Bala Bhaskar

Research Scholar, Dept. of CSE, Acharya Nagarjuna University

Dr. R. Satya Prasad

Professor and Dean (R&D), Dept. of CSE, Dhanekula Institute of Engineering & Technology, Ganguru, Vijayawada

Published
2024-07-11
How to Cite
K.Bala Bhaskar, & Dr.R.Satya Prasad. (2024). DETECTION OF ADVERSARIAL ATTACKS USING HYBRID GENERATIVE ADVERSARIAL NETWORKS (GANS). Revista Electronica De Veterinaria, 25(1), 643-652. Retrieved from https://veterinaria.org/index.php/REDVET/article/view/626
Section
Articles