dc.contributor: Universidade Estadual Paulista (UNESP)
dc.contributor: Federal University of Ceará
dc.contributor: Federal Institute of Education, Science and Technology of Ceará
dc.date.accessioned: 2022-05-01T04:26:36Z
dc.date.accessioned: 2022-12-20T03:36:52Z
dc.date.available: 2022-05-01T04:26:36Z
dc.date.available: 2022-12-20T03:36:52Z
dc.date.created: 2022-05-01T04:26:36Z
dc.date.issued: 2021-09-01
dc.identifier: Applied Soft Computing, v. 108.
dc.identifier: 1568-4946
dc.identifier: http://hdl.handle.net/11449/233128
dc.identifier: 10.1016/j.asoc.2021.107466
dc.identifier: 2-s2.0-85105581286
dc.identifier.uri: https://repositorioslatinoamericanos.uchile.cl/handle/2250/5413227
dc.description.abstract: Deep learning techniques often face the vanishing gradient problem: the gradient grows progressively weaker as it propagates from one layer to another, until it finally vanishes and no longer contributes to learning. Previous works have addressed this problem by introducing residual connections that assist gradient propagation. However, this issue has received little attention in the context of Deep Belief Networks. In this paper, we propose a weighted layer-wise information reinforcement approach for Deep Belief Networks. Moreover, we introduce metaheuristic optimization to select suitable connection weights that improve the network's learning capabilities. Experiments conducted on public datasets corroborate the effectiveness of the proposed approach in image classification tasks.
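
To make the abstract's idea concrete, below is a minimal illustrative sketch in Python, not the authors' implementation: it assumes equal-width sigmoid RBM layers trained with one step of contrastive divergence (CD-1), models the "weighted layer-wise information reinforcement" as adding an alpha-weighted copy of each layer's input to its output, and uses plain random search as a stand-in for the paper's nature-inspired optimizers; all names, sizes, and parameters here are hypothetical.

# Minimal sketch of a reinforced DBN stack (illustrative only).
# Assumptions: equal-width layers so the residual-style term can be
# added directly; CD-1 training; random search as the metaheuristic.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, dim, lr=0.1):
        self.W = rng.normal(0.0, 0.01, (dim, dim))
        self.b = np.zeros(dim)   # visible bias
        self.c = np.zeros(dim)   # hidden bias
        self.lr = lr

    def hidden(self, v):
        return sigmoid(v @ self.W + self.c)

    def cd1(self, v0):
        # One contrastive-divergence step; returns reconstruction error.
        h0 = self.hidden(v0)
        v1 = sigmoid(h0 @ self.W.T + self.b)
        h1 = self.hidden(v1)
        n = len(v0)
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b += self.lr * (v0 - v1).mean(axis=0)
        self.c += self.lr * (h0 - h1).mean(axis=0)
        return float(((v0 - v1) ** 2).mean())

def train_reinforced_dbn(x, alphas, epochs=5):
    """Greedy layer-wise training where each layer's output is
    reinforced with an alpha-weighted copy of its own input."""
    layers, v, err = [RBM(x.shape[1]) for _ in alphas], x, 0.0
    for rbm, a in zip(layers, alphas):
        for _ in range(epochs):
            err = rbm.cd1(v)
        v = rbm.hidden(v) + a * v   # weighted information reinforcement
    return err  # fitness proxy: last layer's reconstruction error

# Random search over the reinforcement weights (stand-in for the
# nature-inspired optimizers used in the paper).
x = rng.random((256, 64))                  # toy data in [0, 1]
best_alphas, best_fit = None, np.inf
for _ in range(10):
    cand = rng.uniform(0.0, 1.0, size=3)   # one weight per layer
    fit = train_reinforced_dbn(x, cand)
    if fit < best_fit:
        best_alphas, best_fit = cand, fit
print(best_alphas, best_fit)

In the paper's setting, the random-search loop would be replaced by a population-based metaheuristic that treats the alpha vector as a candidate solution and the validation performance as its fitness.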
dc.language: eng
dc.relation: Applied Soft Computing
dc.source: Scopus
dc.subject: Deep Belief Network
dc.subject: Metaheuristic optimization
dc.subject: Optimization
dc.subject: Residual networks
dc.subject: Restricted Boltzmann machines
dc.title: Reinforcing learning in Deep Belief Networks through nature-inspired optimization
dc.type: Journal articles