A View of Unconstrained Optimization
Dennis, J.E. Jr.
Schnabel, Robert B.
Finding the unconstrained minimizer of a function of more than one variable is an important problem with many practical applications, including data fitting, engineering design, and process control. In addition, techniques for solving unconstrained optimization problems form the basis for most methods for solving constrained optimization problems. This paper surveys the state of the art for solving unconstrained optimization problems and the closely related problem of solving systems of nonlinear equations. First we briefly give some mathematical background. Then we discuss Newton's method, the fundamental method underlying most approaches to these problems, as well as the inexact Newton method. The two main practical deficiencies of Newton's method, the need for analytic derivatives and the possible failure to converge to the solution from poor starting points, are the key issues in unconstrained optimization and are addressed next. We discuss a variety of techniques for approximating derivatives, including finite difference approximations, secant methods for nonlinear equations and unconstrained optimization, and the extension of these techniques to solving large, sparse problems. Then we discuss the main methods used to ensure convergence from poor starting points: line search methods and trust region methods. Next we briefly discuss two rather different approaches to unconstrained optimization, the Nelder-Mead simplex method and conjugate direction methods. Finally we comment on some current research directions in the field: the solution of large problems, the solution of data fitting problems, new secant methods, and the solution of singular problems.
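To make the central ideas of the survey concrete, the following is a minimal sketch of the pure Newton iteration for unconstrained minimization, together with a forward-difference gradient approximation of the kind the paper discusses for when analytic derivatives are unavailable. The test function, starting point, and all names here are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

def fd_gradient(f, x, h=1e-7):
    """Forward-difference approximation to the gradient of f at x
    (illustrative; step size h trades truncation against roundoff error)."""
    g = np.zeros_like(x)
    fx = f(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += h
        g[i] = (f(xp) - fx) / h
    return g

def newton_minimize(grad, hess, x0, tol=1e-10, max_iter=100):
    """Pure Newton iteration: solve H(x) p = -g(x), then step x <- x + p,
    stopping when the gradient norm falls below tol."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = np.linalg.solve(hess(x), -g)
        x = x + p
    return x

# Strictly convex illustrative test problem f(x) = exp(x0) - x0 + x1**2,
# with unique minimizer at (0, 0); analytic gradient and Hessian.
f = lambda x: np.exp(x[0]) - x[0] + x[1] ** 2
grad = lambda x: np.array([np.exp(x[0]) - 1.0, 2.0 * x[1]])
hess = lambda x: np.array([[np.exp(x[0]), 0.0], [0.0, 2.0]])

x_star = newton_minimize(grad, hess, [1.0, 1.0])
```

Near a minimizer with a positive definite Hessian this iteration converges quadratically, but from a poor starting point it may diverge; that failure mode is exactly what the line search and trust region globalizations surveyed in the paper are designed to repair, and `fd_gradient` can stand in for `grad` when analytic derivatives are not available.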
Citable link to this page: https://hdl.handle.net/1911/101633
CAAM Technical Reports