Training two models on adversarial samples to increase robustness, and comparing the results.
In this repository, you can find two Jupyter notebooks in which adversarial samples are generated step by step using the FGSM attack and then used to train two different neural network architectures (LeNet-5 and VGG-Net). Each step is explained in detail in the notebooks.
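The core FGSM update perturbs each input in the direction of the sign of the loss gradient with respect to that input. The sketch below is a minimal NumPy illustration of that update, not the notebooks' actual training code; the toy logistic-regression loss and all variable names are assumptions chosen so the input gradient can be written analytically without an autograd framework.

```python
import numpy as np

def fgsm_attack(x, grad, epsilon=0.1):
    """Fast Gradient Sign Method: step the input by epsilon in the
    direction of the sign of the loss gradient, then clip back to the
    valid pixel range [0, 1]."""
    x_adv = x + epsilon * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Toy stand-in for a network: logistic loss L = -log sigmoid(y * w.x),
# whose gradient with respect to the input x is analytic.
def input_gradient(x, w, y):
    s = 1.0 / (1.0 + np.exp(-y * np.dot(w, x)))  # sigmoid(y * w.x)
    return -y * (1.0 - s) * w                    # dL/dx

w = np.array([1.0, -2.0, 0.5])   # hypothetical model weights
x = np.array([0.5, 0.5, 0.5])    # hypothetical input "image"
g = input_gradient(x, w, y=1)
x_adv = fgsm_attack(x, g, epsilon=0.1)
print(x_adv)  # each coordinate moves by +/- epsilon
```

In adversarial training, such `x_adv` samples are mixed into (or substituted for) the clean training batches, which is the setup the two notebooks apply to LeNet-5 and VGG-Net.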
The presentation compares and analyzes the results of the two models.