dc.contributor: Cumbal Simba, José Renato
dc.creator: Caiza Chafla, Oscar Eduardo
dc.creator: Jami Herrera, Christian Alexander
dc.date.accessioned: 2021-05-15T03:26:07Z
dc.date.accessioned: 2022-10-20T18:05:26Z
dc.date.available: 2021-05-15T03:26:07Z
dc.date.available: 2022-10-20T18:05:26Z
dc.date.created: 2021-05-15T03:26:07Z
dc.date.issued: 2021-05
dc.identifier: http://dspace.ups.edu.ec/handle/123456789/20201
dc.identifier.uri: https://repositorioslatinoamericanos.uchile.cl/handle/2250/4568772
dc.description.abstract: In this paper we propose deploying the RSU infrastructure through a reinforcement learning algorithm that distributes the resources of the vehicular network optimally. The main objective of our study is to use the Q-Learning algorithm to allocate channels from a controller to the RSUs in the planning scenario. Based on this initial deployment and the vehicular mobility, an optimization model is then applied to obtain the minimum number of devices in the simulated VANET infrastructure. The algorithm learns over the scenario dynamically, driven by the vehicular demand and the coverage constraints of V2I communication (see the sketch after this record).
dc.language: spa
dc.rights: http://creativecommons.org/licenses/by-nc-nd/3.0/ec/
dc.rights: openAccess
dc.rights: Attribution-NonCommercial-NoDerivs 3.0 Ecuador (CC BY-NC-ND 3.0 EC)
dc.subject: ELECTRONIC ENGINEERING
dc.subject: STRATEGIC PLANNING
dc.subject: COMPUTER NETWORKS
dc.subject: TEACHING - LEARNING
dc.title: Asignación dinámica de recursos en redes VANET mediante aprendizaje por refuerzo (Dynamic resource allocation in VANET networks through reinforcement learning)
dc.type: bachelorThesis


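The abstract above describes a controller that uses Q-Learning to assign channels to RSUs under vehicular demand and V2I coverage constraints. Below is a minimal, hypothetical Python sketch of such a tabular Q-Learning loop; the state/action sizes (NUM_RSUS, NUM_CHANNELS), the reward function demand_reward, and all hyperparameters are illustrative assumptions, not values taken from the thesis.

import numpy as np

# Illustrative tabular Q-Learning sketch for assigning channels to RSUs.
# Sizes, the reward model, and hyperparameters are assumptions, not values
# from the thesis.
NUM_RSUS = 10        # states: one per RSU requesting resources
NUM_CHANNELS = 4     # actions: candidate channels the controller can assign
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
EPISODES = 500

rng = np.random.default_rng(0)
q_table = np.zeros((NUM_RSUS, NUM_CHANNELS))

def demand_reward(rsu, channel):
    """Hypothetical reward: higher when the assigned channel satisfies the
    vehicular demand seen by this RSU, lower as channel congestion grows."""
    demand = rng.random()                      # stand-in for measured V2I demand
    congestion = rng.random() * (channel + 1) / NUM_CHANNELS
    return demand - congestion

for episode in range(EPISODES):
    rsu = rng.integers(NUM_RSUS)               # pick an RSU needing a channel
    # epsilon-greedy action selection
    if rng.random() < EPSILON:
        channel = rng.integers(NUM_CHANNELS)
    else:
        channel = int(np.argmax(q_table[rsu]))
    reward = demand_reward(rsu, channel)
    next_rsu = rng.integers(NUM_RSUS)          # next RSU request (stateless demand model)
    # standard Q-Learning update
    q_table[rsu, channel] += ALPHA * (
        reward + GAMMA * np.max(q_table[next_rsu]) - q_table[rsu, channel]
    )

# Greedy policy: channel assigned by the controller to each RSU
assignment = q_table.argmax(axis=1)
print(assignment)

After training, the greedy policy q_table.argmax(axis=1) stands in for the controller's per-RSU channel assignment; the thesis additionally feeds the resulting deployment into an optimization model to minimize the number of devices, which is not reproduced here.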