Specific optimal control using stochastic approximation
Tarver, Loyd Estes
Pearson, J. Boyd
Master of Science
This thesis is concerned with the solution of a specific optimal control problem. Because the initial state of a system affects the solution, it is desired to obtain a specific optimal controller which will be optimum in a well-defined sense for a region of initial states. The problem is formulated by first assuming the initial state to be a random variable. The "stochastic optimal controller" is then defined as the set of constant parameters which minimizes the expected value, taken over the initial state, of a suitably chosen performance index. The method of stochastic approximation is proposed as a means of solution; however, it is shown that the theory of stochastic approximation as it now exists cannot be directly applied to the problem. Modifications of the theory that allow solutions to be obtained are then proposed. The validity of these modifications has been verified by numerical experimentation, although an analytical verification has not been made. Both linear and nonlinear systems may be treated in this formulation. In cases where the expected value of the performance index may not be a good criterion function, the specific optimal controller for the worst-case initial state or input signal can be used as an alternative design. This min-max problem is discussed, and numerical examples are presented.
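The idea of tuning a fixed-parameter controller by stochastic approximation over random initial states can be illustrated with a minimal sketch. This is not the thesis's actual algorithm or system; it assumes a hypothetical scalar linear plant, a quadratic performance index, and a Kiefer-Wolfowitz-style finite-difference update, with all numerical values (plant constants, step sizes, initial-state distribution) chosen purely for illustration.

```python
import random

# Hypothetical setup: scalar linear system x_{k+1} = a*x_k + b*u_k with
# constant-gain control u_k = -theta * x_k, and quadratic performance index
#   J(theta, x0) = sum_k (x_k^2 + r * u_k^2)
# whose expected value over a random initial state x0 is to be minimized.
A, B, R, HORIZON = 0.9, 1.0, 0.1, 30

def cost(theta, x0):
    """Performance index for one initial state under the constant gain theta."""
    x, total = x0, 0.0
    for _ in range(HORIZON):
        u = -theta * x
        total += x * x + R * u * u
        x = A * x + B * u
    return total

def stochastic_approximation(theta0=0.3, iters=2000, seed=1):
    """Finite-difference stochastic approximation over random initial states."""
    rng = random.Random(seed)
    theta = theta0
    c = 0.05  # finite-difference half-width for the gradient estimate
    for n in range(1, iters + 1):
        x0 = rng.uniform(-1.0, 1.0)      # sample a random initial state
        a_n = 0.5 / (n + 10)             # decreasing step-size sequence
        # Two-sided finite-difference gradient estimate; the same sampled
        # x0 is used for both evaluations (common random numbers).
        grad = (cost(theta + c, x0) - cost(theta - c, x0)) / (2 * c)
        theta -= a_n * grad
        theta = max(-0.5, min(2.0, theta))  # projection keeps iterates bounded
    return theta

theta_hat = stochastic_approximation()
```

Because the system here is linear and the index quadratic, each noisy gradient differs from the deterministic one only by the nonnegative factor x0², so the iterates drift toward the gain that minimizes the expected cost even though no single sample reveals that expectation; the decreasing step sizes average the samples out over iterations.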