Adversarial Robustness of Deep Neural Networks

Mathieu Sinn, IBM Research
10-11am, 1st Mar 2019

Abstract

Adversarial samples are inputs to Deep Neural Networks (DNNs) that an adversary has tampered with in order to cause specific misclassifications. It is surprisingly easy to create adversarial samples and surprisingly difficult to defend DNNs against them, which poses a potential threat to the deployment of DNNs in security-critical applications. In this lecture I will review the state of the art on adversarial samples and discuss recent progress in developing DNNs that are robust against them. I will also show code examples for experimenting with adversarial attacks and defenses based on our open-source library, the Adversarial Robustness Toolbox: https://github.com/IBM/adversarial-robustness-toolbox
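To illustrate how easily an adversarial sample can be crafted, the sketch below implements the Fast Gradient Sign Method (FGSM) by hand for a toy binary logistic-regression classifier: each input feature is nudged in the direction of the sign of the loss gradient. This is a self-contained, hypothetical example for intuition only; the weights, inputs, and the `fgsm` helper are made up here and are not part of the toolbox's API.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method for a logistic-regression classifier.

    For binary cross-entropy loss, the gradient w.r.t. the input is
    dL/dx = (sigmoid(w.x + b) - y) * w.
    """
    grad = (sigmoid(np.dot(w, x) + b) - y) * w
    return x + eps * np.sign(grad)

# Toy linear classifier and a correctly classified input (score > 0 => class 1).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.2])
y = 1.0

print("clean score:", np.dot(w, x) + b)        # positive: correctly classified

x_adv = fgsm(x, y, w, b, eps=0.5)
print("adversarial score:", np.dot(w, x_adv) + b)  # negative: misclassified
```

The same idea scales to DNNs, where the input gradient is obtained by backpropagation; the resulting perturbation is often small enough to be imperceptible while still flipping the prediction.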

Short Bio

Dr. Mathieu Sinn is a Research Staff Member and Manager of the AI & Machine Learning group at the IBM Research lab in Dublin. He holds a Master's degree in Computer Science and a PhD in Mathematics from the University of Luebeck, Germany. He has worked on a wide variety of fundamental and practical aspects of Machine Learning, most recently focusing on the robustness of AI against adversarial threats. Mathieu is an IBM-certified Data Science Thought Leader, a regular reviewer for top AI conferences, and has served as an external PhD committee member on various occasions.

Venue

Lloyd LB 0.04