Journal articles
Spectral gradient methods for linearly constrained optimization
Record in:
Journal of Optimization Theory and Applications, Springer/Plenum Publishers, v. 125, n. 3, pp. 629-651, 2005.
ISSN: 0022-3239
Web of Science: WOS:000229504700008
DOI: 10.1007/s10957-005-2093-3
Authors
Martinez, JM
Pilotta, EA
Raydan, M
Institution
Abstract
Linearly constrained optimization problems with simple bounds are considered in the present work. First, a preconditioned spectral gradient method is defined for the case in which no simple bounds are present. This algorithm can be viewed as a quasi-Newton method in which the approximate Hessians satisfy a weak secant equation. The spectral choice of steplength is embedded into the Hessian approximation, and the whole process is combined with a nonmonotone line-search strategy. The simple bounds are then taken into account by placing them in an exponential penalty term that modifies the objective function. The exponential penalty scheme defines the outer iterations of the process. Each outer iteration involves the application of the previously defined preconditioned spectral gradient method to a linear equality constrained problem. Therefore, an equality constrained convex quadratic programming problem needs to be solved at every inner iteration. The associated extended KKT matrix remains constant unless the process is reinitiated; in ordinary inner iterations, only the right-hand side of the KKT system changes, so suitable sparse factorization techniques can be applied and exploited effectively. Encouraging numerical experiments are presented.
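The two core ingredients named in the abstract, the spectral (Barzilai-Borwein) choice of steplength and a nonmonotone line search, can be sketched for the unconstrained case as follows. This is a minimal illustrative sketch, not the authors' preconditioned algorithm: the test problem, the memory length `M`, the sufficient-decrease constant `gamma`, and all function names are assumptions introduced for the example.

```python
# Hedged sketch: spectral (Barzilai-Borwein) gradient method with a
# nonmonotone Armijo-type line search. Parameters M and gamma, the
# backtracking rule, and the test problem are illustrative assumptions.

def spectral_gradient(f, grad, x0, max_iter=500, tol=1e-8, M=10, gamma=1e-4):
    """Minimize f with BB steplengths and a nonmonotone line search."""
    x = list(x0)
    g = grad(x)
    lam = 1.0                          # initial spectral steplength
    history = [f(x)]                   # recent f-values for nonmonotone test
    for _ in range(max_iter):
        if max(abs(gi) for gi in g) < tol:
            break
        d = [-lam * gi for gi in g]    # spectral direction
        fref = max(history[-M:])       # nonmonotone reference value
        gd = sum(gi * di for gi, di in zip(g, d))   # directional derivative < 0
        alpha = 1.0
        while f([xi + alpha * di for xi, di in zip(x, d)]) > fref + gamma * alpha * gd:
            alpha *= 0.5               # simple backtracking
        x_new = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = grad(x_new)
        s = [xn - xi for xn, xi in zip(x_new, x)]
        y = [gn - gi for gn, gi in zip(g_new, g)]
        sy = sum(si * yi for si, yi in zip(s, y))
        ss = sum(si * si for si in s)
        lam = ss / sy if sy > 1e-12 else 1.0   # BB steplength s's / s'y
        x, g = x_new, g_new
        history.append(f(x))
    return x

# Example: a simple convex quadratic f(x) = 0.5*(x0^2 + 10*x1^2)
f = lambda x: 0.5 * (x[0] ** 2 + 10 * x[1] ** 2)
grad = lambda x: [x[0], 10 * x[1]]
xstar = spectral_gradient(f, grad, [3.0, -2.0])
```

The nonmonotone test compares against the maximum of the last `M` function values rather than the latest one, which is what lets the spectral steplength be accepted without damping on most iterations.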