Dissertation
Dynamic memory management in applications with data reuse on Apache Spark
Date
2020-05-25
Author
Donato, Mauricio Matter
Institution
Abstract
Apache Spark is a framework capable of processing massive amounts of data in memory through its primary abstraction, the Resilient Distributed Dataset (RDD). An RDD is an immutable collection of objects that can be processed in a parallel and distributed fashion across the cluster. Once computed, an RDD can be stored in the cache, allowing its reuse without recomputation. As the application's computations proceed, memory tends to become saturated, and RDD partitions must be evicted according to the Least Recently Used (LRU) algorithm. This algorithm is based on the idea that partitions frequently used in the past will be accessed again shortly; thus, it evicts the partitions whose last access occurred longest ago. However, there are situations in which the LRU algorithm degrades Spark's performance, namely when memory is accessed cyclically and the available space is smaller than the dataset. In those situations, LRU always evicts exactly the block that will be accessed next. Considering these issues, this work proposes a Dynamic Memory Management model for applications with data reuse on Apache Spark. The model extracts metrics from the application's execution and uses that information to decide which data to evict from the cache. The proposed model comprises two main components: (1) an algorithm to manage the RDD partitions stored in memory and (2) a monitoring agent responsible for gathering information about the application. The Dynamic Management model was validated through experiments on the Grid'5000 platform with the PageRank, K-Means, and Logistic Regression benchmarks. The results show that the Dynamic Management model improved the utilization of the available memory, reducing by 23.94% the execution time required to process the Logistic Regression benchmark when compared to LRU. Furthermore, the proposed model made Spark's execution more stable, reducing the frequency of errors during benchmark processing; as a consequence, the time spent processing the PageRank benchmark was reduced by 34.14%. These results allow concluding that dynamic strategies, such as the one proposed in this work, can improve Spark's execution in applications with data reuse.
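The LRU pathology described in the abstract can be reproduced with a small simulation. The sketch below is illustrative only (it is not the dissertation's algorithm and does not use Spark): a cache with room for 3 blocks serves a cyclic workload over 4 blocks, so LRU evicts, on every miss, exactly the block the cycle will request next, and no access ever hits.

```python
from collections import OrderedDict

def lru_hits(capacity, accesses):
    """Simulate an LRU cache and count hits for a sequence of block accesses."""
    cache = OrderedDict()
    hits = 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)       # mark as most recently used
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict the least recently used block
            cache[block] = True
    return hits

# Cyclic access over 4 blocks with room for only 3: every access misses.
accesses = [0, 1, 2, 3] * 5               # 20 accesses, cyclic pattern
print(lru_hits(3, accesses))              # prints 0
print(lru_hits(4, accesses))              # prints 16 (cycle fits: only cold misses)
```

With capacity 4 the whole working set fits and only the first pass misses; with capacity 3 the hit rate collapses to zero, which is the degradation scenario that motivates the dynamic eviction strategy proposed in this work.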