dc.contributorCharao, Andrea Schwertner
dc.creatorScheid, Éder John
dc.date.accessioned2022-06-15T12:56:15Z
dc.date.accessioned2022-10-07T22:04:05Z
dc.date.available2022-06-15T12:56:15Z
dc.date.available2022-10-07T22:04:05Z
dc.date.created2022-06-15T12:56:15Z
dc.date.issued2014-12-09
dc.identifierhttp://repositorio.ufsm.br/handle/1/24858
dc.identifier.urihttp://repositorioslatinoamericanos.uchile.cl/handle/2250/4034190
dc.description.abstractProcessing large volumes of data has always been an obstacle in computing. The emergence of the parallel computing paradigm, combined with the idea of distributing computation across multiple computers, helped to overcome a considerable part of this obstacle. Many frameworks have been created on this premise; one of them is Apache Hadoop. Targeting environments where the data is distributed among several computers, Apache Hadoop provides an efficient solution for processing big data, but the literature on how this framework behaves when the data is allocated on a single machine is still scarce. The focus of this work is to analyze and optimize this framework on a parallel architecture where the data is not distributed, thereby obtaining results that demonstrate its efficiency under those circumstances.
dc.publisherUniversidade Federal de Santa Maria
dc.publisherBrasil
dc.publisherUFSM
dc.publisherCentro de Tecnologia
dc.rightshttp://creativecommons.org/licenses/by-nc-nd/4.0/
dc.rightsAcesso Aberto
dc.rightsAttribution-NonCommercial-NoDerivatives 4.0 International
dc.subjectApache Hadoop
dc.subjectMemória compartilhada
dc.subjectMáquina NUMA
dc.titleAnálise e otimização do Apache Hadoop em arquiteturas paralelas com memória compartilhada
dc.typeTrabalho de Conclusão de Curso de Graduação