Preprint
Proximal regularization for the saddle point gradient dynamics
Date
2021
Author
Goldsztajn, Diego
Paganini, Fernando
Institution
Abstract
This paper concerns the solution of a convex optimization
problem through the saddle point gradient dynamics.
Instead of using the standard Lagrangian, as is classical in this
method, we consider a regularized Lagrangian obtained through
a proximal minimization step. We show that, without assumptions
of smoothness or strict convexity in the original problem, the regularized
Lagrangian is smooth and leads to globally convergent
saddle point dynamics. The method is demonstrated through an
application to resource allocation in cloud computing.
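The abstract's idea can be illustrated on a toy problem. The sketch below (a hypothetical example, not drawn from the paper) applies saddle point gradient dynamics to minimize the nonsmooth function f(x) = |x| subject to x = b. The proximal minimization step yields a Moreau-envelope-type regularized Lagrangian whose gradients are evaluated through the proximal operator of f; the dynamics are then discretized with a simple Euler step. All parameter values (rho, dt, b) are illustrative choices.

```python
import numpy as np

# Toy problem (illustrative, not from the paper):
#   minimize f(x) = |x|   subject to  x = b,  with b = 2.0.
# f is nonsmooth, so the plain saddle gradient flow is not well defined;
# the proximal regularization smooths the Lagrangian in x.

b = 2.0
rho = 0.5        # proximal regularization parameter
dt = 0.05        # Euler step for the continuous-time dynamics
x, lam = 0.0, 0.0

def prox_abs(v, t):
    """Proximal operator of t*|.| (soft-thresholding)."""
    return np.sign(v) * max(abs(v) - t, 0.0)

for _ in range(4000):
    # Inner proximal step:
    #   z = argmin_z |z| + lam*(z - b) + (1/(2*rho))*(z - x)^2
    z = prox_abs(x - rho * lam, rho)
    # Gradients of the regularized (smoothed) Lagrangian
    grad_x = (x - z) / rho    # gradient of the Moreau-type envelope in x
    grad_lam = z - b          # constraint residual, evaluated at z
    # Saddle point gradient dynamics: descend in x, ascend in lam
    x -= dt * grad_x
    lam += dt * grad_lam

# The iterates approach the saddle point: x -> b = 2, lam -> -1
# (stationarity of |x| + lam*(x - b) at x = 2 gives sign(2) + lam = 0).
print(x, lam)
```

In this example the regularized gradients are globally Lipschitz even though f is nonsmooth, which is the smoothing effect the abstract refers to; the trajectory spirals into the unique saddle point of the regularized Lagrangian.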