dc.creator | Palombarini, Jorge A. | |
dc.creator | Martínez, Ernesto C. | |
dc.date | 2019-09 | |
dc.date | 2019 | |
dc.date | 2020-02-20T17:09:39Z | |
dc.date.accessioned | 2023-07-14T18:31:00Z | |
dc.date.available | 2023-07-14T18:31:00Z | |
dc.identifier | http://sedici.unlp.edu.ar/handle/10915/89513 | |
dc.identifier | issn:2618-3277 | |
dc.identifier.uri | https://repositorioslatinoamericanos.uchile.cl/handle/2250/7431714 | |
dc.description | In this work, a novel approach is presented for generating rescheduling knowledge that can be used in real time to handle unforeseen events without extra deliberation. To generate such control knowledge, the rescheduling task is modelled and solved as a closed-loop control problem by integrating a schedule state simulator with a rescheduling agent that learns successful schedule-repair policies directly from a variety of simulated transitions between schedule states, using readily available color-rich Gantt chart images of the schedule as input and negligible prior knowledge. The generated knowledge is stored in a deep Q-network, which can be used as a computational tool for closed-loop rescheduling control: it selects repair actions that make progress towards a goal schedule state, without having to compute a solution to the rescheduling problem every time a disruptive event occurs, and it safely generalizes control knowledge to unseen schedule states. | |
dc.description | Sociedad Argentina de Informática e Investigación Operativa | |
dc.format | application/pdf | |
dc.format | 86 | |
dc.language | en | |
dc.rights | http://creativecommons.org/licenses/by-sa/3.0/ | |
dc.rights | Creative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0) | |
dc.subject | Computer Science | |
dc.subject | Control knowledge | |
dc.subject | Schedule state simulator | |
dc.subject | Computational tool | |
dc.title | Closed-loop Rescheduling using Deep Reinforcement Learning | |
dc.type | Conference object | |
dc.type | Abstract | |