IBM Research - Ireland Internship Project: Measuring Adversarial Robustness of Deep Neural Networks - Overview


Abstract:


Deep neural networks (DNNs) achieve state-of-the-art accuracy on a wide range of cognitive tasks, including object recognition and natural language processing. However, DNNs have been shown to be vulnerable to adversarial examples: malicious inputs crafted by an adversary to induce a trained model to produce erroneous outputs that run counter to human interpretation. This is of particular concern in sensitive and safety-critical applications such as healthcare or autonomous vehicles.
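
To make the notion concrete, below is a minimal sketch of one standard way such inputs can be crafted, the Fast Gradient Sign Method (FGSM). The toy model, random inputs, and epsilon value are illustrative assumptions only and are not part of the project description.

# Minimal FGSM sketch (illustrative only): perturb each input pixel by at most
# epsilon in the direction that increases the classification loss.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon):
    """Return x perturbed within an L-infinity ball of radius epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the sign of the input gradient, then clip back to the valid range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy linear classifier and a random "image" batch, purely for illustration.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(4, 1, 28, 28)      # batch of 4 fake 28x28 images in [0, 1]
    y = torch.randint(0, 10, (4,))    # fake labels
    x_adv = fgsm_perturb(model, x, y, epsilon=0.1)
    print("max per-pixel change:", (x_adv - x).abs().max().item())

In practice, stronger iterative attacks (e.g. projected gradient descent) are typically used to probe robustness more thoroughly; FGSM is shown here only because it is the simplest illustration of the idea.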

The scope of this internship is to investigate the vulnerability of DNNs to adversarial inputs and to develop principled methods for assessing, and potentially optimizing, their robustness. This could include computing complexity measures related to learning-theoretic properties of DNNs, geometric measures assessing the structure of decision boundaries, and/or algebraic measures quantifying the stability of DNN training.
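
As a rough illustration of what a geometric measure might look like in practice, the sketch below estimates the distance from an input to a model's decision boundary by bisection along a random direction. The toy model, the search range, and the function name boundary_distance are assumptions made for the sake of the example, not a prescribed method.

# Illustrative geometric measure: distance to the decision boundary along a
# given unit direction, found by bisection on the prediction change.
import torch
import torch.nn as nn

def boundary_distance(model, x, direction, max_dist=10.0, steps=30):
    """Estimate how far x can move along `direction` before the predicted
    class changes; returns max_dist if no change is found in the range."""
    original = model(x).argmax(dim=1)
    lo, hi = 0.0, max_dist
    if model(x + hi * direction).argmax(dim=1).eq(original).all():
        return max_dist                      # boundary not reached within range
    for _ in range(steps):                   # bisect on the crossing distance
        mid = (lo + hi) / 2
        if model(x + mid * direction).argmax(dim=1).eq(original).all():
            lo = mid
        else:
            hi = mid
    return hi

if __name__ == "__main__":
    # Toy model, input, and random direction, purely for illustration.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)
    d = torch.randn_like(x)
    d = d / d.norm()
    with torch.no_grad():
        print("estimated boundary distance:", boundary_distance(model, x, d))

Averaging such estimates over many directions and inputs gives a crude but interpretable summary of how close typical inputs sit to the decision boundary, which is one way a geometric robustness measure could be operationalized.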

Upon successful completion, this internship will have helped to develop principled methodology that advances the understanding of adversarial robustness in particular and of DNNs in general; it will have demonstrated the effectiveness of that methodology through systematic experimental evaluation; and it will have produced a prototype implementation showing how the adversarial robustness of DNNs can be assessed in a "live", real-world setting.

Required skills:


1. Working knowledge of deep neural networks and applications such as image or text classification.
2. Strong Python programming skills.
3. Experience working with deep learning libraries such as TensorFlow, PyTorch, or Keras.
4. Solid understanding of mathematical concepts such as (Bayesian) statistics and linear algebra.