Supermemory gradient method for minimizing unconstrained functions
Cragg, Edward E
Master of Science
A new accelerated gradient method for finding the minimum of a function f(x) whose variables are unconstrained is presented. The new algorithm can be stated as follows:

Δx = -α g(x) + Σ_{i=1}^{k} βᵢ Δxᵢ,

where x is an n-vector, g(x) is the gradient of the function f(x), Δx is the change in the position vector for the iteration under consideration, and Δxᵢ is the change in the position vector for the i-th previous iteration. The quantities α and βᵢ are scalars chosen at each step so as to yield the greatest decrease in the function; the scalar k denotes the number of past iterations remembered. A test problem was considered, that of a quartic involving n = 4 variables. Convergence to the minimum was attained in 18 iterations for k = 1, 12 iterations for k = 2, and 4 iterations for k = 3. The computing time was 4.7 secs for k = 1, 4.5 secs for k = 2, and 2.7 secs for k = 3.
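The iteration described above can be sketched in code. The following is a minimal illustrative implementation, not the author's original program: the step is a combination of the negative gradient and the k most recent steps, and the scalars α and βᵢ are chosen at each iteration by a numerical inner minimization (here a Nelder-Mead search, an assumed stand-in for whatever one-dimensional or multidimensional search the thesis used). The quadratic test function in the usage example is likewise only illustrative; it is not the quartic test problem of the thesis.

```python
import numpy as np
from scipy.optimize import minimize as inner_min

def supermemory_gradient(f, grad, x0, k=1, iters=50, tol=1e-8):
    """Supermemory gradient sketch: at each iteration take the step
    dx = -alpha * g(x) + sum_i beta_i * dx_i, where dx_i are the k most
    recent steps, with (alpha, beta_1, ..., beta_k) chosen to give the
    greatest decrease in f (approximated here by Nelder-Mead)."""
    x = np.asarray(x0, dtype=float)
    memory = []  # past steps dx_i, most recent first
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        m = len(memory)

        # Objective for the inner search over the scalars (alpha, betas).
        def phi(theta):
            alpha, betas = theta[0], theta[1:]
            dx = -alpha * g + sum(b * d for b, d in zip(betas, memory))
            return f(x + dx)

        theta0 = np.zeros(1 + m)
        theta0[0] = 1e-3  # small initial alpha, zero initial betas
        res = inner_min(phi, theta0, method="Nelder-Mead",
                        options={"xatol": 1e-10, "fatol": 1e-12,
                                 "maxiter": 2000})
        alpha, betas = res.x[0], res.x[1:]
        dx = -alpha * g + sum(b * d for b, d in zip(betas, memory))
        x = x + dx
        memory = [dx] + memory[:k - 1]  # remember at most k past steps
    return x
```

For a quadratic f, choosing the scalars optimally at each step makes the method behave like a conjugate-direction method, which is consistent with the abstract's observation that larger k (more remembered steps) reduces the iteration count.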