Scientific Deep Learning: Reproducibility, Interpretability & Uncertainty Quantification

Elise Jennings, Argonne National Laboratory
Lecture (Advanced)

From classifying galaxies and detecting gravitational waves to discovering new materials and new particles in high-energy physics colliders, neural networks are transforming the way science is done and accelerating the pace of progress and discovery. However, these networks can make overly confident or incorrect predictions, both because they overfit and because they cannot correctly assess the uncertainty in their own predictions. We will discuss some key topics in scientific deep learning, including uncertainty quantification, interpretability, reproducibility, and the integration of domain knowledge. This presentation will also showcase some of the exciting data science research at the LCF.