Chennai Mathematical Institute


3.00 pm,
Robustness of Neural Network Models

K. Sandesh Kamath
Chennai Mathematical Institute.


Neural network based models, e.g., Convolutional Neural Networks (CNNs), are currently the state-of-the-art classification models. CNNs are by construction translation invariant, i.e., the class ascribed by the model to a small translation of an input image is almost always the same as the class ascribed to the original image. Recent work has tried to incorporate further transformation invariance into CNNs: Group Convolutional Neural Networks (GCNNs) extend traditional CNNs with invariance to transformations such as rotations and flips.

Another important property desired of models deployed in real-world scenarios is robustness to adversarial noise. This is motivated by empirical observations showing that a small, imperceptible noise added to an image can cause a dramatic change in the classifier's prediction.
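As a concrete illustration of such adversarial noise, here is a minimal NumPy sketch of the fast gradient sign method (FGSM), one standard way input-dependent perturbations are constructed. The logistic model, weights, and input below are purely illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def fgsm_perturbation(w, b, x, y, eps):
    """FGSM for a single logistic classifier.

    The gradient of the logistic loss with respect to the input x is
    (sigmoid(w.x + b) - y) * w; the attack takes a small step of size
    eps in the sign of that gradient, increasing the loss.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # predicted probability
    grad_x = (p - y) * w                     # input gradient of the loss
    return x + eps * np.sign(grad_x)

# Illustrative toy example (weights and input are made up).
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.2, -0.1, 0.3])
y = 1.0  # true label
x_adv = fgsm_perturbation(w, b, x, y, eps=0.1)
```

For a true label of 1, the perturbation pushes the logit `w @ x_adv + b` below the clean logit `w @ x + b`, i.e., it moves the input toward misclassification while changing each coordinate by at most `eps`.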

In this thesis we study various aspects of the adversarial robustness of neural network models, and the interaction between the demands of robustness and invariance on CNNs and GCNNs.

We show, both empirically and theoretically, that there is a trade-off between a network becoming invariant to transformations (rotations) and becoming adversarially robust. We show how to construct a universal perturbation: adding this single perturbation to each test input causes the model to misclassify a large fraction of the test data. We back this construction with theory, proving that the top singular vector of the input-dependent perturbations is a good universal perturbation. We empirically analyse the effect of the hyperparameters of stochastic gradient descent on the adversarial robustness of networks. Finally, we give a simple, frugal sampling-based adversarial training algorithm, and show that a model adversarially trained with it performs comparably to a model that undergoes full adversarial training.
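The singular-vector construction mentioned above can be sketched as follows: stack the input-dependent perturbations as rows of a matrix and take its top right singular vector as the shared universal direction. This is a minimal illustration under assumed shapes; the synthetic perturbations below are made up for demonstration and are not the thesis's data.

```python
import numpy as np

def universal_direction(perturbations):
    """Top right singular vector of stacked per-input perturbations.

    `perturbations` is an (n, d) array whose rows are input-dependent
    adversarial perturbations; the returned unit vector captures their
    dominant shared direction.
    """
    # SVD writes the rows as sums of rank-one terms s_k * U[:, k] * Vt[k, :];
    # Vt[0] is the direction of largest shared energy across the rows.
    _, _, vt = np.linalg.svd(perturbations, full_matrices=False)
    return vt[0]

# Synthetic check: perturbations that mostly share one direction plus noise.
rng = np.random.default_rng(0)
shared = np.array([1.0, 1.0, 0.0, -1.0])
shared /= np.linalg.norm(shared)
P = np.outer(rng.normal(1.0, 0.1, size=50), shared)  # shared component
P += 0.05 * rng.normal(size=P.shape)                 # per-input noise
v = universal_direction(P)
```

When the per-input perturbations concentrate around a common direction, as in this synthetic example, the recovered unit vector `v` aligns closely with that direction, which is the intuition behind using it as a single perturbation for all inputs.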