Should We Be Perturbed About Deep Learning?

Desmond Higham, The University of Edinburgh

Many commentators are asking whether current AI solutions are sufficiently robust, resilient, and trustworthy, and how such issues should be quantified and addressed. I believe that numerical analysts can contribute to the debate.

In Part 1 of this talk I will look at the common practice of using low-precision floating-point formats to reduce computation time. I will focus on evaluating the softmax and log-sum-exp functions, which play an important role in many classification tools. Across widely used packages we see mathematically equivalent but computationally different formulations of these functions, designed in an effort to avoid overflow and underflow. I will show that classical rounding error analysis gives insight into their floating-point accuracy and suggests a method of choice.

In Part 2 I will look at a bigger-picture question concerning sensitivity to adversarial attacks in deep learning. Adversarial attacks are deliberate, targeted perturbations to input data that have a dramatic effect on the output; for example, a traffic "Stop" sign on the roadside can be misinterpreted as a speed limit sign when a small amount of graffiti is added. The vulnerability of systems to such interventions raises questions around security, privacy and ethics, and there has been a rapid escalation of attack and defense strategies. I will consider a related higher-level question: under realistic assumptions, do adversarial examples always exist with high probability? I will also introduce and discuss the idea of a stealth attack: an undetectable, targeted perturbation to the trained network itself.
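The abstract does not say which reformulations the talk compares, but a minimal sketch of the widely used "max-shift" variant (Python with NumPy; all names here are illustrative) shows the overflow problem that motivates Part 1:

```python
import numpy as np

def logsumexp_naive(x):
    # Direct evaluation: exp overflows in double precision once any x_i exceeds ~709.
    return np.log(np.sum(np.exp(x)))

def logsumexp_shifted(x):
    # Mathematically equivalent reformulation: subtracting the maximum makes every
    # argument of exp non-positive, so overflow cannot occur.
    c = np.max(x)
    return c + np.log(np.sum(np.exp(x - c)))

def softmax_shifted(x):
    # The same shift keeps the softmax numerator and denominator in range;
    # in exact arithmetic the shift cancels and the result is unchanged.
    e = np.exp(x - np.max(x))
    return e / np.sum(e)

x = np.array([1000.0, 1000.5, 999.0])
print(logsumexp_naive(x))    # inf: the naive formula overflows
print(logsumexp_shifted(x))  # approx 1001.10: finite and accurate
print(softmax_shifted(x))    # well-defined probabilities that sum to 1
```

The two log-sum-exp formulas agree in exact arithmetic; the talk's rounding error analysis concerns how such choices behave in floating point.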
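The constructions studied in Part 2 are not given in the abstract; as a generic illustration in the spirit of standard gradient-sign attacks, the following sketch (a toy setup with assumed names and parameters, not the speaker's method) shows how a small, uniformly bounded perturbation flips the output of a linear classifier:

```python
import numpy as np

# Toy linear classifier: predicts class 1 when w.x + b > 0 (illustrative example only).
rng = np.random.default_rng(0)
d = 1000
w = rng.standard_normal(d)
b = 0.1
x = rng.standard_normal(d)          # a "clean" input

def predict(v):
    return int(w @ v + b > 0)

score = w @ x + b
# Moving every component of x by eps in the direction -sign(score)*sign(w) changes
# the score by -sign(score)*eps*||w||_1, so any eps just above |score|/||w||_1
# is enough to flip the predicted class.
eps = 1.01 * abs(score) / np.abs(w).sum()
x_adv = x - np.sign(score) * eps * np.sign(w)

print("clean prediction:    ", predict(x))
print("perturbed prediction:", predict(x_adv))   # flipped
print("largest component change:", eps)          # small relative to the unit-scale data
```

In this toy Gaussian setting the required per-component change shrinks like 1/sqrt(d) as the dimension d grows, which gives a flavour of why the existence question raised in Part 2 is natural in high dimensions.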

Part 1 is joint work with Pierre Blanchard (ARM) and Nick Higham (Manchester).

Part 2 is joint work with Alexander Gorban and Ivan Tyukin (Leicester).

Please use this link to attend the virtual seminar:

https://bluejeans.com/987001813