dc.contributor: Baggio, José Eduardo
dc.creator: Silva, Gabriel de Jesus Coelho da
dc.date.accessioned: 2022-07-06T19:55:16Z
dc.date.accessioned: 2022-10-07T23:38:48Z
dc.date.available: 2022-07-06T19:55:16Z
dc.date.available: 2022-10-07T23:38:48Z
dc.date.created: 2022-07-06T19:55:16Z
dc.date.issued: 2019-07-15
dc.identifier: http://repositorio.ufsm.br/handle/1/25260
dc.identifier.uri: http://repositorioslatinoamericanos.uchile.cl/handle/2250/4041146
dc.description.abstract: The growing investment in artificial neural networks for end-user services, which demand low latency and high responsiveness, makes dedicated hardware accelerators for inference desirable. FPGAs (Field-Programmable Gate Arrays) offer the flexibility required to deploy artificial neural network accelerators, supporting different network architectures while maintaining performance. A modular artificial neural network design is developed in a hardware description language to enable inference on reconfigurable devices with the desired performance. The modular design can be easily scaled to support new neural network architectures and different activation functions. The design is validated with a hardware implementation of a simple and widely known neural network: the exclusive-OR (XOR) function.
dc.publisher: Universidade Federal de Santa Maria
dc.publisher: Brasil
dc.publisher: UFSM
dc.publisher: Centro de Tecnologia
dc.rights: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.rights: Acesso Aberto (Open Access)
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.subject: Redes neurais artificiais
dc.subject: Inferência
dc.subject: Hardware
dc.subject: VHDL
dc.subject: FPGA
dc.subject: Neural networks
dc.subject: Inference
dc.title: Implementação de redes neurais artificiais em hardware para inferência [Implementation of artificial neural networks in hardware for inference]
dc.type: Trabalho de Conclusão de Curso de Graduação (undergraduate final project)