Doctoral thesis
Aprendizagem por reforço profundo: uma nova perspectiva sobre o problema dos k-servos
Date
2020-01-28
Record:
LINS, Ramon Augusto Sousa. Aprendizagem por reforço profundo: uma nova perspectiva sobre o problema dos k-servos. 2020. 93f. Tese (Doutorado em Engenharia Elétrica e de Computação) - Centro de Tecnologia, Universidade Federal do Rio Grande do Norte, Natal, 2020.
Author
Lins, Ramon Augusto Sousa
Abstract
The k-server problem in a weighted graph (or metric space) is defined by the need to
efficiently move k servers to fulfill a sequence of requests that arise online at each graph
node. This is perhaps the most influential online computation problem whose solution
remains open, serving as an abstraction for a variety of applications, such as currency trading, process reassignment for load balancing in parallel processing, online transportation services, and probe management on oil production rigs, among others. Its conceptual
simplicity contrasts with its computational complexity, which grows exponentially with the
number of nodes and servers. Prior to this work, the Q-learning algorithm
was used to solve small instances of the k-server problem. The solution was restricted to
small dimensions of the problem because its storage structure grows exponentially with
the increase in the number of nodes and servers. This problem, known as the curse of
dimensionality, makes the algorithm inefficient or even impossible to execute for certain
instances of the problem. To handle larger dimensions, Q-learning was combined with the
greedy algorithm and applied to small groups of nodes separated into different clusters (a hierarchical approach). The local policy obtained for each cluster, together with the
greedy policy, was used to form a global policy that satisfactorily addresses large instances
of the problem. The results were compared with important algorithms from the literature, such as Work Function, Harmonic, and greedy.
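The greedy heuristic used as one of these baselines simply moves the server closest to each request. A minimal sketch, assuming a precomputed distance matrix; the function and variable names are illustrative and not taken from the thesis:

```python
# Minimal sketch of the greedy baseline for the k-server problem.
# Names (serve_greedy, dist) and the distance-matrix representation are
# assumptions for illustration; the thesis does not specify an implementation.

def serve_greedy(servers, requests, dist):
    """Move the closest server to each request and accumulate the cost.

    servers  -- list of node indices currently occupied by the k servers
    requests -- online sequence of requested nodes
    dist     -- dist[u][v] gives the edge weight / metric distance
    """
    total_cost = 0
    for r in requests:
        # Pick the server whose move to the requested node is cheapest.
        i = min(range(len(servers)), key=lambda s: dist[servers[s]][r])
        total_cost += dist[servers[i]][r]
        servers[i] = r  # the chosen server now sits on the requested node
    return total_cost
```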
The solutions proposed so far emphasize increasing the number of nodes, but if we analyze the growth of the storage structure, defined by C(n,k) ≈ O(n^k), it can be seen that the increase in the number of servers is also quickly limited by the curse of dimensionality.
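To make this growth concrete, the tabular storage can be counted with binomial coefficients, assuming (purely for illustration, since the encoding below is not taken verbatim from the thesis) that a state is the pair (configuration of the k servers, requested node) and an action chooses which server to move:

```python
# Rough count of the tabular Q-learning storage for the k-server problem,
# assuming a state is (set of k occupied nodes, requested node) and an action
# picks which of the k servers moves -- an assumed encoding used only to
# illustrate the C(n,k) ~ O(n^k) growth mentioned in the text.
from math import comb

def q_table_entries(n, k):
    # configurations of servers x possible requested nodes x actions
    return comb(n, k) * n * k

for n, k in [(10, 2), (10, 5), (50, 5), (100, 10)]:
    print(f"n={n:>3}, k={k:>2}: ~{q_table_entries(n, k):,} Q-values")
```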
To circumvent this barrier, the k-server problem was modeled as a deep reinforcement learning task whose state-action
value function was defined by a multilayer perceptron neural network capable of extracting information about the environment from images that encode the dynamics of the problem.
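A minimal sketch of such a state-action value network, assuming (again for illustration only) that the image encoding the problem dynamics is flattened and passed through a small PyTorch multilayer perceptron whose k outputs score moving each server; the layer sizes, input shape, and framework are assumptions, not the thesis architecture:

```python
# Sketch of an MLP state-action value function for the k-server problem,
# assuming the state is an image encoding the servers and the current request,
# and each of the k outputs scores moving one server. Layer sizes, input shape,
# and the use of PyTorch are assumptions for illustration.
import torch
import torch.nn as nn

class KServerQNet(nn.Module):
    def __init__(self, image_shape=(1, 10, 10), k=2, hidden=256):
        super().__init__()
        in_features = image_shape[0] * image_shape[1] * image_shape[2]
        self.net = nn.Sequential(
            nn.Flatten(),                  # image -> feature vector
            nn.Linear(in_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, k),          # one Q-value per movable server
        )

    def forward(self, state_image):
        return self.net(state_image)

# Usage: the greedy action is the server with the highest Q-value.
q_net = KServerQNet()
state = torch.zeros(1, 1, 10, 10)          # placeholder image-encoded state
action = q_net(state).argmax(dim=1)
```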
The applicability of the proposed algorithm was illustrated in a case study that considered different problem configurations. The behavior of the agents was analyzed
during the training phase, and their performance was evaluated through tests that
quantified the quality of the generated server displacement policies. The results
provide a promising insight into its use as an alternative solution to the k-server problem.