dc.contributor: Lima, João Vicente Ferreira
dc.creator: Trindade, Rafael Gauna
dc.date.accessioned: 2021-12-06T12:50:27Z
dc.date.accessioned: 2022-10-07T22:44:41Z
dc.date.available: 2021-12-06T12:50:27Z
dc.date.available: 2022-10-07T22:44:41Z
dc.date.created: 2021-12-06T12:50:27Z
dc.date.issued: 2017-12-12
dc.identifier: http://repositorio.ufsm.br/handle/1/23152
dc.identifier.uri: http://repositorioslatinoamericanos.uchile.cl/handle/2250/4038191
dc.description.abstract: Deep Learning is a subcategory of machine learning algorithms and a subject of relevant study in the area of Artificial Intelligence. Characterized in most cases as multi-layer Artificial Neural Networks, deep learning networks are a means of achieving improvements in numerous computational tasks, such as speech recognition, natural language processing, and object identification in images, a task in the field of computer vision. Their importance has grown steadily in recent years, and their popularity increases as vast databases and devices with high computational capacity become accessible. Companies invest in the associated research, new applications become available to end users, and there is strong hope for their efficient application in the health area. This work analyzes the performance, and the way the loss values evolve until convergence, in a scenario of unavoidable overfitting, of two Deep Learning libraries popular among developers and researchers: Caffe, developed at the University of California, Berkeley, and TensorFlow, developed by Google. Executions of two well-known convolutional networks (AlexNet and GoogLeNet) were conducted as benchmarks on hybrid architectures with accelerators and on a cluster, varying the networks' hyperparameters. The results lead to the conclusion that TensorFlow performed better in most cases and tends to consume less memory to store network information. However, a portion of this performance is due to the use of vectorized instructions, and in the contrary scenario the Caffe library may outperform its competitor, despite some technical deficiencies. In addition, the Caffe library presents a problem by reaching overfitting with negative loss values, something that should not happen in an artificial neural network.
dc.publisher: Universidade Federal de Santa Maria
dc.publisher: Brasil
dc.publisher: UFSM
dc.publisher: Centro de Tecnologia
dc.rights: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.rights: Open Access
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.subject: Deep learning
dc.subject: Neural networks
dc.subject: Heterogeneous computing
dc.subject: TensorFlow
dc.subject: Benchmarking
dc.subject: Caffe
dc.title: Performance analysis of deep learning libraries on hybrid architectures with accelerators
dc.type: Undergraduate thesis (Trabalho de Conclusão de Curso de Graduação)