Newton-Based Methods for Smoothed Risk-Averse PDE-Constrained Optimization Problems
Master of Arts
This thesis introduces a modification of traditional Newton-based methods that improves the efficiency of solving smoothed, risk-averse PDE-constrained optimization problems arising in optimal control applications. Instead of only maximizing the expected performance of a system, risk-averse formulations penalize realizations in which actual performance falls below the expected performance. Solving risk-averse optimization problems is notoriously expensive, as evaluating the objective function often requires sampling in the tail of a complicated, unknown probability distribution that depends on the random variables entering the system through the PDE and through the performance measure. Moreover, many risk measures, such as the Conditional Value-at-Risk, are non-smooth. As a result, smoothing functions are often employed to smooth the objective and allow the use of fast, derivative-based optimization algorithms such as Newton's method. However, at iterates far from the optimal solution, the Hessian of the smoothed problem is rank-deficient, which hinders the performance of standard algorithms and results in unnecessary PDE solves. This thesis presents a modification of standard Newton-based algorithms that eliminates the difficulties arising from the rank-deficient Hessian. The algorithm is further tailored to reduce the number of linear PDE solves arising from adjoint and Hessian-vector-product computations when solving sample-based PDE-constrained optimization problems.
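For context, the non-smoothness referenced above is typically expressed through the standard Rockafellar–Uryasev representation of the Conditional Value-at-Risk; the sketch below pairs that representation with one common smoother (the softplus), where the smoother $p_\varepsilon$, the control $z$, the random inputs $\xi$, the PDE solution operator $S$, and the performance measure $J$ are illustrative notation rather than the thesis's own:
\[
  \mathrm{CVaR}_{\alpha}(X) \;=\; \inf_{t \in \mathbb{R}} \Big\{\, t + \tfrac{1}{1-\alpha}\, \mathbb{E}\big[(X - t)_{+}\big] \,\Big\}, \qquad (x)_{+} = \max\{x,\, 0\},
\]
which is non-smooth in $t$ because of the plus-function. Replacing $(\cdot)_{+}$ with a smooth surrogate such as $p_\varepsilon(x) = \varepsilon \log\!\big(1 + e^{x/\varepsilon}\big)$ with smoothing parameter $\varepsilon > 0$ yields the smoothed risk-averse problem
\[
  \min_{z,\, t}\; t + \tfrac{1}{1-\alpha}\, \mathbb{E}\big[\, p_\varepsilon\big( J(S(z,\xi), z) - t \big) \big],
\]
whose objective is differentiable and hence amenable to Newton-type methods.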
Optimization under uncertainty; Conditional Value-at-Risk; PDE-constrained optimization