Chennai Mathematical Institute

Seminars




12:00 noon, Lecture Hall 1
SVD Universal perturbations

Sandesh Kamath
Chennai Mathematical Institute.
19-11-19


Abstract

Neural network models achieve state-of-the-art results on several image classification tasks. However, these models are known to be vulnerable to adversarial attacks. Many well-known adversarial attacks, such as the gradient-based Fast Gradient Sign Method (FGSM) and DeepFool, are input-dependent: they compute a small pixel-wise perturbation for each image that fools state-of-the-art neural networks into misclassifying it, yet is unlikely to fool any human. An even more interesting phenomenon is the universal adversarial attack, which is input-agnostic: the same single perturbation works for most images. In this talk, we will give a brief introduction to this type of attack. We will then present our generic algorithm, which performs as well as the first known algorithm developed in this area. We will also briefly describe the theory that supports the empirical observations on which our algorithm is based.
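The abstract does not spell out the algorithm, so the following is only an illustrative sketch of one SVD-based construction of a universal perturbation known from the literature: stack per-image loss gradients into a matrix and take its top right singular vector as the shared perturbation direction. A toy linear "classifier" stands in for a neural network here, and the function name and parameters are hypothetical.

```python
import numpy as np

def universal_perturbation_svd(grads, epsilon):
    """Hypothetical sketch of an SVD-based universal perturbation.

    grads: (n_images, d) matrix whose rows are per-image loss gradients.
    Returns a single perturbation of L2-norm epsilon shared by all images.
    """
    # The top right singular vector maximizes the sum of squared
    # projections <g_i, v>^2, i.e. the direction most gradients align with.
    _, _, vt = np.linalg.svd(grads, full_matrices=False)
    v = vt[0]
    return epsilon * v / np.linalg.norm(v)

# Toy demonstration with a linear model in place of a neural network.
rng = np.random.default_rng(0)
d, n = 32, 100
w = rng.standard_normal(d)            # toy linear model weights
X = rng.standard_normal((n, d))       # toy "images"
# For a linear score x -> w.x, the input gradient of a margin-style loss
# is +/- w, so every gradient shares one direction, which SVD recovers.
grads = np.sign(X @ w)[:, None] * w[None, :]
delta = universal_perturbation_svd(grads, epsilon=0.5)
alignment = abs(delta @ w) / (np.linalg.norm(delta) * np.linalg.norm(w))
print(round(float(alignment), 4))  # close to 1.0: delta aligns with w
```

In this contrived setup all gradients point along one axis, so the recovered direction aligns almost perfectly with it; for a real network the alignment is only partial, which is why such a perturbation fools most but not all images.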