Showing items 1-10 of 91
Analysis and optimization of Apache Hadoop on shared-memory parallel architectures
(Universidade Federal de Santa Maria, Brasil, UFSM, Centro de Tecnologia, 2014-12-09)
Processing large volumes of data has always been a challenge in computing. The emergence of the parallel computing paradigm, combined with the idea of distributing computation across multiple computers ...
Development of a context-aware scheduler for Apache Hadoop
(Universidade Federal de Santa Maria, Brasil, UFSM, Centro de Tecnologia, 2014-01-20)
Nowadays the volume of data generated by the services provided to end users is far larger than the processing capacity of a single computer. As a solution to this problem, some tasks can be parallelized. The Apache ...
Adaptive scheduling for Apache Hadoop
(Universidade Federal de Santa Maria, Brasil, Ciência da Computação, UFSM, Programa de Pós-Graduação em Informática, Centro de Tecnologia, 2016-03-11)
Many alternatives have been employed to process all the data generated by current applications in a timely manner. One of these alternatives, Apache Hadoop, combines parallel and distributed processing with ...
Proposal of Apache Spark for querying large amounts of data
(Ediciones Universidad Simón Bolívar, Facultad de Ingenierías, 2022)
This document addresses a problem concerning the evolution of application software and web programs that handle a large flow of data; these were presenting delays at read time, loss ...
Simulation and analysis applied on virtualization to build Hadoop clusters
(IEEE, 2015-01-01)
Data growth increases the need for methods and paradigms able to deal with high scalability, reliability, and fault tolerance over large amounts of data. Big Data is a framework capable of dealing with this need. ...
Simulation and analysis applied to virtualization for building Hadoop clusters
(2015-01-01)
Data growth increases the need for methods and paradigms able to deal with high scalability, reliability, and fault tolerance over large amounts of data. Big Data is a framework capable of dealing with this need. ...
Hadoop cluster deployment: A methodological approach
(2018-05-29)
For a long time, data was treated as a general problem because it merely represented fractions of an event without any relevant purpose. However, the last decade has been all about information and how to obtain it. Seeking ...
Development and performance evaluation of a Raspberry Pi and Apache Hadoop cluster for big data applications
(Pós-Graduação em Ciência da Computação, Universidade Federal de Sergipe (UFS), 2023)