STORM: STochastic Optimization using Random Models

Matthew Menickelly
Seminar

In this work, we propose a trust-region framework (STORM) for the unconstrained optimization of smooth (nonconvex) stochastic functions. Rather than assuming that function-value (or higher-order derivative) estimates are asymptotically correct with probability 1, as in a sample-average-approximation setup, we assume the following: on each iteration, we can construct, with a fixed probability bounded away from 1, fully-linear models for use in a trust-region subproblem, and we can compute function-value estimates with error bounds dictated by the current trust-region radius. We prove that any algorithm in this framework converges almost surely to a first-order stationary point. Moreover, we prove that algorithms in this framework achieve the canonical rate of convergence for unconstrained nonconvex optimization. Finally, we will demonstrate the performance of two algorithms in this framework: one designed for zeroth-order (derivative-free) stochastic optimization, and one that serves as an analogue of stochastic gradient methods.
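
To make the framework concrete, below is a minimal Python sketch of one iteration of a STORM-style trust-region loop in the zeroth-order setting. Everything named here is an illustrative assumption rather than the paper's construction: the noisy oracle f_noisy, the fixed sample size n_samples used to average observations, and the parameters eta and gamma; in particular, the actual analysis ties estimate accuracy to the trust-region radius more carefully than fixed-size averaging does.

```python
import numpy as np

def storm_step(x, delta, f_noisy, n_samples=30, eta=0.1, gamma=2.0):
    """One STORM-style iteration (sketch): build a random linear model by
    finite differences on averaged noisy observations, solve the
    trust-region subproblem, test the step, and update the radius."""
    n = len(x)

    def avg(y):
        # Averaged zeroth-order observations; with enough samples the
        # resulting model is accurate on the trust region with high
        # (but fixed) probability.
        return np.mean([f_noisy(y) for _ in range(n_samples)])

    fx = avg(x)
    g = np.zeros(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = delta
        g[i] = (avg(x + e) - fx) / delta  # forward-difference model gradient

    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:
        return x, delta / gamma  # flat model: shrink the radius, retry

    # Trust-region subproblem for a linear model: a step of length delta
    # along the negative model gradient.
    s = -delta * g / gnorm

    # Ratio of estimated decrease to the decrease predicted by the model.
    rho = (fx - avg(x + s)) / (delta * gnorm)
    if rho >= eta:
        return x + s, gamma * delta  # successful: accept step, expand radius
    return x, delta / gamma          # unsuccessful: reject step, shrink radius
```

The shape of the iteration is the standard trust-region one: the candidate step is accepted and the radius enlarged only when the estimated decrease is a sufficient fraction of the model-predicted decrease; otherwise the radius shrinks so that models and estimates eventually become accurate enough at the current scale.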
 
Bio:
Matt Menickelly is a PhD candidate in the Department of Industrial and Systems Engineering at Lehigh University. His research interests lie broadly in nonlinear optimization, and more specifically in derivative-free optimization, stochastic optimization, and the application of optimization methods to machine learning. Matt is currently a supplemental research scientist at the IBM T.J. Watson Research Center, and has previously been a Givens Associate at Argonne National Laboratory.