Undergraduate Final Project (Trabalho de Conclusão de Curso de Graduação)
Implementation of artificial neural networks in hardware for inference
Date
2019-07-15
Author
Silva, Gabriel de Jesus Coelho da
Institution
Abstract
The growing investment in the use of artificial neural networks for end-user services, which
require low latency and high responsiveness, makes dedicated hardware accelerators for
inference desirable. FPGAs (Field-Programmable Gate Arrays) offer the flexibility needed
to deploy artificial neural network accelerators, supporting different network architectures
while maintaining performance. A modular artificial neural network design is developed in
a hardware description language to enable inference on reconfigurable devices with the
desired performance. The modular design can be easily scaled to support new neural
network architectures and different activation functions. The design is validated by a
hardware implementation of a simple and widely known neural network: one that computes
the exclusive-OR (XOR) function.
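
For reference, the sketch below shows the kind of small, well-known network mentioned above: a 2-2-1 multilayer perceptron that computes XOR. The specific weights, biases, and step activation are illustrative assumptions, not the values used in the thesis's hardware implementation.

```python
import numpy as np

def step(x):
    # Threshold activation: 1 if x >= 0, else 0 (assumed for illustration).
    return (x >= 0).astype(int)

# Hidden layer: two neurons implementing OR and NAND of the two inputs.
W_hidden = np.array([[1.0, 1.0],     # OR neuron weights
                     [-1.0, -1.0]])  # NAND neuron weights
b_hidden = np.array([-0.5, 1.5])

# Output layer: AND of the two hidden activations, which yields XOR.
W_out = np.array([1.0, 1.0])
b_out = -1.5

def xor_net(x):
    h = step(W_hidden @ x + b_hidden)
    return int(step(np.array([W_out @ h + b_out]))[0])

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(np.array([a, b])))  # prints a XOR b
```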