dc.creatorGuo, Xianping
dc.creatorHernández del Valle, Adrián
dc.creatorHernández Lerma, Onésimo
dc.date.accessioned2013-01-21T23:34:07Z
dc.date.available2013-01-21T23:34:07Z
dc.date.created2013-01-21T23:34:07Z
dc.date.issued2011-07
dc.identifierSystems & Control Letters, Vol. 60, No. 7, July 2011
dc.identifier0167-6911
dc.identifierESE
dc.identifierhttp://www.repositoriodigital.ipn.mx/handle/123456789/12040
dc.description.abstractThis paper is about nonstationary nonlinear discrete-time deterministic and stochastic control systems with Borel state and control spaces, with either bounded or unbounded costs. The control problem is to minimize an infinite-horizon total cost performance index. Using dynamic programming arguments we show that, under suitable assumptions, the optimal cost functions satisfy optimality equations, which in turn give a procedure to find optimal control policies. We also prove the convergence of the value iteration (or successive approximations) functions. Several examples illustrate our results under different sets of assumptions.
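As an illustration only (not taken from the article), the sketch below shows value iteration for a stationary, finite-state, discounted-cost problem; the paper's actual setting is far more general (Borel state and control spaces, nonstationary dynamics, possibly unbounded costs). All names and data in the sketch are hypothetical.

```python
# Illustrative value-iteration (successive approximations) sketch.
# A stationary, finite-state, discounted-cost MDP stands in for the
# general Borel-space, nonstationary setting studied in the article.
import numpy as np

def value_iteration(P, c, alpha=0.9, tol=1e-8, max_iter=10_000):
    """Successive approximations of the optimal cost function.

    P     : array of shape (A, S, S); P[a, s, t] = transition probability.
    c     : array of shape (S, A); one-stage cost c(s, a).
    alpha : discount factor in (0, 1).
    Returns an approximate optimal cost vector and a greedy policy.
    """
    S, A = c.shape
    v = np.zeros(S)                        # v_0 = 0: initial approximation
    for _ in range(max_iter):
        # Bellman (optimality) operator: (Tv)(s) = min_a [c(s,a) + alpha * E v]
        q = c + alpha * np.einsum("ast,t->sa", P, v)
        v_new = q.min(axis=1)
        if np.max(np.abs(v_new - v)) < tol:
            v = v_new
            break
        v = v_new
    policy = q.argmin(axis=1)              # greedy actions for the last iterate
    return v, policy

if __name__ == "__main__":
    # Tiny 2-state, 2-action example (hypothetical data).
    P = np.array([[[0.9, 0.1], [0.2, 0.8]],
                  [[0.5, 0.5], [0.7, 0.3]]])
    c = np.array([[1.0, 2.0], [0.5, 3.0]])
    v_star, pi_star = value_iteration(P, c)
    print("optimal cost:", v_star, "policy:", pi_star)
```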
dc.languagees
dc.publisherSystems & Control Letters
dc.subjectDiscrete-time control systems
dc.subjectTime-nonhomogeneous systems
dc.subjectTime-varying systems
dc.subjectNonlinear systems
dc.subjectNonstationary dynamic programming
dc.titleNonstationary discrete-time deterministic and stochastic control systems: Bounded and unbounded cases
dc.typeArticle