dc.contributorRamos, Gabriel de Oliveira
dc.creatorSchreiber, Lincoln Vinicius
dc.date.accessioned2022-04-13T21:14:14Z
dc.date.accessioned2022-09-09T22:04:58Z
dc.date.accessioned2023-03-13T19:03:45Z
dc.date.available2022-04-13T21:14:14Z
dc.date.available2022-09-09T22:04:58Z
dc.date.available2023-03-13T19:03:45Z
dc.date.created2022-04-13T21:14:14Z
dc.date.created2022-09-09T22:04:58Z
dc.date.issued2022-02-18
dc.identifierhttp://148.201.128.228:8080/xmlui/handle/20.500.12032/39054
dc.identifier.urihttps://repositorioslatinoamericanos.uchile.cl/handle/2250/6146020
dc.description.abstractWith the rapid increase in urbanization, the problem of congestion has become even more evident for society, the environment, and the economy. One practical approach to alleviating this problem is adaptive traffic signal control (ATSC). Deep reinforcement learning algorithms have shown great potential for such control. However, these methods can be viewed as black boxes, since their learned policies are not easily understood or explained. This lack of explainability may be limiting their use in real-world conditions. One framework that can provide explanations for any deep learning model is SHAP. It treats models as black boxes and explains them using post-hoc techniques, deriving explanations from the model's responses to different inputs without inspecting its internals (such as parameters and architecture). The state of the art in applying SHAP to a deep reinforcement learning traffic signal controller can demonstrate consistency in the logic of the agent's decision making and show how the agent reacts to the traffic in each lane. However, it could not intuitively show the relation between some sensors and the chosen action, and it required several figures to convey the impact of the state on the action. This paper presents two approaches based on the Deep Q-Network algorithm whose learned policies are explained through the SHAP framework. The first uses the XGBoost algorithm as the function approximator, and the second uses a neural network. The hyperparameters of each approach were studied and optimized. The environment was characterized as an MDP, which we modeled in two different ways, namely the Cyclic MDP and the Selector MDP. These models allowed us to choose different actions and obtain different representations of the environment.
Both approaches present the impact of each feature on each action through the SHAP framework, which promotes understanding of how the agent behaves under different traffic conditions. This work also describes the application of Explainable AI to intelligent traffic signal control, demonstrating how to interpret the model and the limitations of the approach. Furthermore, as a final result, our methods improved travel time, speed, and throughput in two different scenarios, outperforming the FixedTime, SOTL, and MaxPressure baselines.
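The post-hoc idea the abstract describes can be illustrated with a minimal sketch: exact Shapley values computed for a black-box value function by querying it on feature coalitions only, never inspecting its parameters. The Q-function, lane names, and baseline below are hypothetical stand-ins, not the dissertation's actual models.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for a black-box f, enumerating feature coalitions.
    Features absent from a coalition are replaced by baseline values, so only
    f's outputs are used (the post-hoc treatment SHAP formalizes)."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Shapley kernel weight for a coalition of this size
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Hypothetical Q-value for one signal phase: favors queues on lane 0,
# penalizes queues on lane 2 (illustrative coefficients only).
q = lambda s: 2.0 * s[0] + 0.5 * s[1] - 1.0 * s[2]

state = [4.0, 2.0, 1.0]        # e.g. queue lengths per lane
background = [0.0, 0.0, 0.0]   # empty-intersection baseline
phi = shapley_values(q, state, background)
# Efficiency property: contributions sum to q(state) - q(background)
assert abs(sum(phi) - (q(state) - q(background))) < 1e-9
```

For a linear model like this, each Shapley value reduces to coefficient times feature deviation from the baseline, which is what makes the per-lane attributions in the abstract interpretable.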
dc.publisherUniversidade do Vale do Rio dos Sinos
dc.rightsopenAccess
dc.subjectAprendizado por reforço profundo
dc.subjectDeep reinforcement learning
dc.titleAprendizado por reforço profundo explicável: um estudo com controle semafórico inteligente
dc.typeDissertation