Detecting adversarial examples of text
Although deep neural networks have achieved state-of-the-art performance in various tasks, many of their decisions are difficult to interpret: high-dimensional feature vectors make the learned functions extremely hard to visualize, and the widely used gradient descent yields an approximate rather than an analytical solution. As a result, deep classifiers exhibit several counter-intuitive and intriguing phenomena, most notably adversarial examples.
In natural language and image classification tasks, research on adversarial examples falls into two major categories: generating adversarial examples to attack neural networks and defending against adversaries to protect them. Adversarial attacks on text classification have boomed in the last three years, using character-level, word-level, phrase-level, and sentence-level perturbations. However, to the best of our knowledge, textual adversarial defence, and in particular the problem of distinguishing adversarial instances from their normal and noisy counterparts, still lacks an effective method. In this thesis, we fill this gap by combining ensemble learning with sentence representations from different transformer models, guided by an understanding of the computational graph of deep neural networks and the causes of adversarial examples. Our technique obtains state-of-the-art results against character-level and word-level attacks on both the IMDB and MultiNLI datasets. We will release our code to support future research.
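To make the approach concrete, the following sketch (in Python, using the Hugging Face transformers library and scikit-learn) illustrates one way such a detector could be assembled: mean-pooled sentence embeddings from several transformer encoders are concatenated into a single feature vector, and a small voting ensemble is trained to separate adversarial inputs from normal ones. The encoder names, pooling strategy, and ensemble members are illustrative assumptions rather than the exact configuration developed in the thesis.

    # Minimal sketch of the detection idea: concatenate sentence embeddings
    # from several transformer encoders, then let an ensemble of simple
    # classifiers vote on whether an input is adversarial.
    # NOTE: encoder names, pooling, and ensemble members are assumptions
    # for illustration, not the thesis's exact configuration.
    import numpy as np
    import torch
    from transformers import AutoModel, AutoTokenizer
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression

    ENCODER_NAMES = ["bert-base-uncased", "roberta-base"]  # assumed encoders

    def embed(sentences, model_name, device="cpu"):
        """Mean-pooled sentence embeddings from one transformer encoder."""
        tokenizer = AutoTokenizer.from_pretrained(model_name)
        model = AutoModel.from_pretrained(model_name).to(device).eval()
        batch = tokenizer(sentences, padding=True, truncation=True,
                          return_tensors="pt").to(device)
        with torch.no_grad():
            hidden = model(**batch).last_hidden_state      # (B, T, H)
        mask = batch["attention_mask"].unsqueeze(-1)       # (B, T, 1)
        pooled = (hidden * mask).sum(1) / mask.sum(1)      # masked mean over tokens
        return pooled.cpu().numpy()

    def featurize(sentences):
        """Concatenate embeddings from all encoders into one feature vector."""
        return np.concatenate(
            [embed(sentences, name) for name in ENCODER_NAMES], axis=1)

    def train_detector(texts, labels):
        """Fit a voting ensemble; labels: 1 = adversarial, 0 = normal/noisy."""
        features = featurize(texts)
        detector = VotingClassifier(
            estimators=[("lr", LogisticRegression(max_iter=1000)),
                        ("rf", RandomForestClassifier(n_estimators=200))],
            voting="soft")
        detector.fit(features, labels)
        return detector

Soft voting averages class probabilities across heterogeneous base classifiers, so the decision does not hinge on any single detector; the same pattern extends naturally to more encoders or more ensemble members.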