Browsing by Subject "Adversarial examples"
Now showing items 1-1 of 1
Learning Stochastic Weight Masking to Resist Adversarial Attacks
(2019-12-02) Adding small perturbations to test images can drastically change the classification accuracy of machine learning models. These perturbed examples are called adversarial examples (Szegedy et al., 2013). Studying these ...