Machine learning is being deployed across a growing number of high-stakes, mission-critical applications, including defense, transportation, and medicine. Yet significant challenges remain for AI systems in safety-critical contexts. The current generation of ML models tends to be greedy, brittle, opaque, and shallow. The systems are greedy because they demand huge sets of training data. They are brittle because they are susceptible to an emerging set of counter-AI attacks. They are opaque because, unlike traditional programs with their formal, debuggable code, AI systems are black boxes whose outputs cannot be explained, raising doubts about their reliability and biases. In this talk, we will explore the AI Assurance challenges that must be overcome for society to fully benefit from advances in machine learning.
Dr. Rodriguez is the director of the Artificial Intelligence and Autonomy Innovation Center at MITRE Labs and leads the AI Red Team for the Department of Defense. As part of a not-for-profit working in the public interest, Dr. Rodriguez and his team can look beyond the bottom line of any particular product or organization and focus on harnessing AI to help address national and global challenges.
For the past twenty years, his research has focused on how artificial intelligence, and in particular computer vision, can be used to help solve problems for a safer world. He obtained his PhD at UCF’s Center for Research in Computer Vision, was a visiting researcher at the Robotics Institute at Carnegie Mellon, and was a post-doctoral fellow at INRIA, based at the Département d’Informatique of the École Normale Supérieure in Paris, France. Dr. Rodriguez chaired the ODNI Video Analytics Research Working Group and is a senior technical advisor for the Pentagon’s Project MAVEN. He has served on the program committees for IEEE Computer Vision and Pattern Recognition, the IEEE International Conference on Computer Vision, and IEEE Transactions on Pattern Analysis and Machine Intelligence.