Stochastic First- and Zero-order Methods for Nonconvex Stochastic Programming

Guanghui (George) Lan, Assistant Professor, University of Florida
Seminar

We present a new stochastic approximation (SA) type algorithm, namely the randomized stochastic gradient (RSG) method, for solving a class of nonlinear (possibly nonconvex) stochastic programming problems. We establish the rate of convergence of this method for computing an approximate stationary point (resp., optimal solution) for nonconvex (resp., convex) nonlinear programming problems. We also show that this method can handle stochastic programming problems with endogenous uncertainty, where the distribution and/or support of the random variables depend on the decision variables. A variant, which consists of applying a post-optimization phase to evaluate a short list of solutions generated by a few independent runs of the RSG method, is proposed to significantly improve the large-deviation properties of the aforementioned convergence rates. These methods are then specialized for solving a class of simulation-based optimization problems, where only stochastic zeroth-order information (function values) is available. To the best of our knowledge, this is the first time that convergence rates of SA type algorithms have been established for solving nonconvex nonlinear programming problems (with possibly endogenous uncertainty).
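To illustrate the two ideas the abstract mentions, the following is a minimal sketch, not the authors' implementation: (i) the RSG scheme, which runs stochastic gradient steps and returns an iterate picked at a random index (the randomized output is what makes stationarity guarantees possible in the nonconvex case), and (ii) a Gaussian-smoothing estimator that builds a gradient surrogate from function values only, as used in zeroth-order variants. The test function, noise level, and stepsize below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_grad(x):
    # Noisy gradient oracle for the (assumed) smooth nonconvex test
    # function f(x) = sum(x**2) + sum(cos(x)); the Gaussian noise
    # models the stochastic first-order oracle.
    return 2.0 * x - np.sin(x) + rng.normal(scale=0.1, size=x.shape)

def rsg(x0, stepsize=0.05, n_iters=500):
    """Randomized stochastic gradient sketch: run SGD for n_iters
    steps, then output an iterate drawn at random (uniformly here,
    which corresponds to a constant stepsize policy)."""
    x = np.asarray(x0, dtype=float).copy()
    iterates = [x.copy()]
    for _ in range(n_iters):
        x = x - stepsize * stochastic_grad(x)
        iterates.append(x.copy())
    r = rng.integers(len(iterates))  # the randomized output index
    return iterates[r]

def zeroth_order_grad(f, x, mu=1e-4):
    """Gaussian-smoothing gradient estimate from function values only:
    g = (f(x + mu*u) - f(x)) / mu * u, with u ~ N(0, I). This is the
    kind of estimator used when only zeroth-order information exists."""
    u = rng.normal(size=x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

if __name__ == "__main__":
    x_out = rsg(np.full(5, 3.0))
    print("RSG output:", x_out)
    f = lambda x: np.sum(x**2) + np.sum(np.cos(x))
    print("zeroth-order gradient estimate:", zeroth_order_grad(f, x_out))
```

In the zeroth-order specialization, `zeroth_order_grad` would simply replace `stochastic_grad` inside the same loop; the talk's contribution is establishing convergence rates for such schemes in the nonconvex setting.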