Presenter Information/ Coauthors Information

Gabriel Picioroaga, University of South Dakota

Presentation Type

Oral

Abstract

Reliable neural networks are designed so that their outputs satisfy a Lipschitz condition with respect to small deformations of the inputs. The so-called Lipschitz inequality is controlled by a constant, which implies that a large change in the output can only result from a correspondingly large change in the input. Large Lipschitz constants are hypothesized to be the culprit in classification errors, whether intentional (e.g. adversarial attacks) or not. At each layer in a network one can find a (local) Lipschitz constant depending on the activation function and the transformation by which features are extracted (convolution, matrix multiplication, compression). The total constant is at most the product of all local ones. In our work we showed that at a convolutional layer one can derive a non-Lipschitz-like inequality. We consider particular deformations obtained by convolving with a fixed kernel, e.g. a blurring filter. By taking advantage of mathematical properties of convolution (commutativity and L2-norm estimates), we prove that the convolutional layer's outputs for a signal and its deformation satisfy a non-Lipschitz inequality via a constant that depends only on the layer's parameters and the L1-norm of the blurring kernel. Although the bounding constant is less than 1, the error in the output appears to propagate down the network, which heuristically explains classification errors on blurry images. In our MATLAB experiments with various image classification neural networks, both pre-trained and ad-hoc trained (imported with all parameters except the fully connected layer), we observed consistent errors in classifying blurred images. With ad-hoc trained networks, we observed improved classification of heavily blurred images after the data sets were augmented with slightly blurred copies. This is joint work with Nate Harding.
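The two convolution properties the argument rests on, commutativity and an L1–L2 (Young-type) norm estimate, can be checked numerically. The sketch below is not the authors' MATLAB code; it is an illustrative NumPy reconstruction in which the signal length, the layer weights `w`, and the binomial blur kernel `k` are all arbitrary choices. It verifies that blurring commutes with the layer's convolution, and that the output error introduced by the blur is bounded by a constant involving only the L1-norm of the blurring kernel, as the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.standard_normal(256)          # input signal (illustrative)
w = rng.standard_normal(9)            # convolutional layer weights (illustrative)
k = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
k /= k.sum()                          # normalized blur kernel, so ||k||_1 = 1

# Commutativity: blurring then convolving equals convolving then blurring,
# i.e. w * (k * x) == k * (w * x).
lhs = np.convolve(w, np.convolve(k, x))
rhs = np.convolve(k, np.convolve(w, x))
assert np.allclose(lhs, rhs)

# Young-type estimate: ||k * y||_2 <= ||k||_1 ||y||_2.
# Applied to the layer output y = w * x, this bounds the blurred output's
# norm by a constant depending only on the L1-norm of the blur kernel.
y = np.convolve(w, x)
blurred_out = np.convolve(k, y)
assert np.linalg.norm(blurred_out) <= np.linalg.norm(k, 1) * np.linalg.norm(y)
```

Because `k` is normalized, its L1-norm is 1, matching the abstract's remark that the bounding constant can be at most 1 for such blur filters while errors may still accumulate across layers.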

Start Date

2-7-2023 11:00 AM

End Date

2-7-2023 12:00 PM


Session 7: Robustness Analysis of Convolutional Layers in Image Classification Neural Networks

Herold Crest 253 C
