A New Method for Optimizing the Non-Anticipative Lagrangian Dual of a Stochastic Mixed-Integer Program

Event Sponsor: 
Mathematics and Computer Science Division Seminar
Start Date: 
Apr 20 2016 - 10:30am
Building/Room: 
Building 240/Room 4301
Location: 
Argonne National Laboratory
Speaker(s): 
Brian Dandurand
Speaker(s) Title: 
Postdoc Interviewee, MCS
Host: 
Kibaek Kim

We present a new primal-dual algorithm for computing the value of the Lagrangian dual of a stochastic mixed-integer program (SMIP) formed by relaxing its nonanticipativity constraints. The algorithm builds on the well-known progressive hedging (PH) method, but unlike previous PH approaches applied to SMIPs, it can be shown to converge to the optimal Lagrangian dual value under mild assumptions. The key improvement is an inner loop of optimized linearization steps, similar to those taken in the classical Frank-Wolfe (FW) method, which also serve to build an inner approximation of the convexified local constraint sets, which are not known explicitly. The new algorithm, referred to as FW-PH, preserves PH's amenability to parallelization. Iterative convex optimization approaches based on such optimized linearization steps, such as the FW method and its variants, have well-studied convergence analyses. For the purpose of obtaining high-quality Lagrangian dual bounds efficiently, however, relying on the full inner-loop convergence of these methods is impractical because of their slow tail-end convergence. Under mild assumptions on the structure of the SMIP and the initialization of the algorithm, we show that the sequence of dual solutions generated by FW-PH converges to an optimal dual solution even when the main PH primal update subproblems are solved approximately with only one optimized linearization step. The main assumption on the initialization of FW-PH can be relaxed if two optimized linearization steps are taken per PH outer-loop iteration in the tail-end iterations. Numerical results demonstrate that the new algorithm empirically outperforms the standard PH implementation for obtaining Lagrangian bounds of SMIPs.
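To make the notion of an "optimized linearization step" concrete, the following is a minimal sketch of the classical Frank-Wolfe method on a toy problem, not the FW-PH algorithm of the talk. The vertex set, objective, and step-size rule below are all illustrative assumptions: each iteration minimizes a linearization of the objective over the feasible set (here, the convex hull of a small vertex set, standing in for a convexified constraint set that is accessible only through linear subproblems) and moves toward the resulting vertex, implicitly building an inner approximation of the hull from the vertices it discovers.

```python
# Illustrative toy example (NOT the speaker's FW-PH method):
# minimize f(x) = (x1 - 0.3)^2 + (x2 - 0.4)^2 over the convex hull
# of a small vertex set, using classical Frank-Wolfe iterations.

VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # hypothetical feasible-set vertices
TARGET = (0.3, 0.4)  # unconstrained minimizer; lies inside the hull

def grad(x):
    """Gradient of f at x."""
    return (2.0 * (x[0] - TARGET[0]), 2.0 * (x[1] - TARGET[1]))

def linear_minimizer(g):
    """Optimized linearization step: minimize <g, v> over the vertex set."""
    return min(VERTICES, key=lambda v: g[0] * v[0] + g[1] * v[1])

def frank_wolfe(x0, iters=2000):
    x = x0
    for k in range(iters):
        s = linear_minimizer(grad(x))   # solve the linearized subproblem
        gamma = 2.0 / (k + 2.0)         # classical diminishing step size
        # Move toward the vertex; the iterate stays a convex combination
        # of discovered vertices, i.e., inside the inner approximation.
        x = ((1.0 - gamma) * x[0] + gamma * s[0],
             (1.0 - gamma) * x[1] + gamma * s[1])
    return x

x_star = frank_wolfe((1.0, 0.0))
```

Because each iterate is a convex combination of vertices returned by the linear subproblem, the method never needs an explicit description of the feasible region; this is the property the abstract exploits when the convexified local constraint sets are not known explicitly.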

Bio:
Brian Dandurand is currently a postdoctoral research fellow in the mathematical sciences at RMIT University in Melbourne, Victoria, Australia. He works with Prof. Andrew Eberhard of RMIT, Prof. Natashia Boland of Georgia Tech, and other researchers on the design, analysis, and implementation of algorithms related to decomposition and duality in mixed-integer stochastic optimization. In 2013, he received his Ph.D. under the supervision of Prof. Margaret Wiecek in the mathematical sciences department of Clemson University in Clemson, SC, USA. His dissertation research addressed the design, analysis, and implementation of distributed and parallel algorithms for multiobjective optimization.