dc.contributor: Universidade Federal de São Carlos (UFSCar)
dc.contributor: Universidade Estadual Paulista (UNESP)
dc.contributor: Petrobras
dc.date.accessioned: 2022-05-01T11:23:36Z
dc.date.accessioned: 2022-12-20T03:46:03Z
dc.date.available: 2022-05-01T11:23:36Z
dc.date.available: 2022-12-20T03:46:03Z
dc.date.created: 2022-05-01T11:23:36Z
dc.date.issued: 2020-09-01
dc.identifier: SN Computer Science, v. 1, n. 5, 2020.
dc.identifier: 2661-8907
dc.identifier: 2662-995X
dc.identifier: http://hdl.handle.net/11449/233900
dc.identifier: 10.1007/s42979-020-00295-9
dc.identifier: 2-s2.0-85121264681
dc.identifier.uri: https://repositorioslatinoamericanos.uchile.cl/handle/2250/5413999
dc.description.abstract: Due to their large number of parameters, convolutional neural networks are known to require long training periods and extended inference time. Training can demand so much computational power that it requires costly hardware and, sometimes, weeks to complete. In this context, there is a trend already in motion of replacing pooling layers with a stride operation in the preceding convolutional layer to save time. In this work, we evaluate the speedup of this approach and how it trades off against accuracy loss across multiple computer vision domains, deep neural architectures, and datasets. The results show significant acceleration with a negligible loss in accuracy, if any, which is a further indication that pooling layers in deep convolutional networks perform redundant computation.
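
The pooling-removal idea described in the abstract can be illustrated with a minimal sketch (PyTorch is assumed here; the layer sizes are illustrative and not taken from the paper): a conventional convolution-plus-max-pooling block next to a single strided-convolution block that drops the pooling layer while producing the same output resolution.

    import torch
    import torch.nn as nn

    # Conventional block: 3x3 convolution (stride 1) followed by 2x2 max pooling.
    block_with_pooling = nn.Sequential(
        nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(kernel_size=2, stride=2),
    )

    # Pooling removed: the preceding convolution downsamples via stride=2 instead.
    block_with_stride = nn.Sequential(
        nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1),
        nn.ReLU(inplace=True),
    )

    x = torch.randn(1, 3, 32, 32)
    print(block_with_pooling(x).shape)  # torch.Size([1, 64, 16, 16])
    print(block_with_stride(x).shape)   # torch.Size([1, 64, 16, 16])

Both blocks halve the spatial resolution; the strided variant simply skips the separate pooling computation, which is the source of the speedup evaluated in the paper.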
dc.language: eng
dc.relation: SN Computer Science
dc.source: Scopus
dc.subject: Convolutional neural networks
dc.subject: Gait recognition
dc.subject: Optical character recognition
dc.subject: Pooling
dc.title: Does Removing Pooling Layers from Convolutional Neural Networks Improve Results?
dc.type: Journal articles