dc.creatorBoccato, L
dc.creatorLopes, A
dc.creatorAttux, R
dc.creatorVon Zuben, FJ
dc.date2012
dc.dateAUG
dc.date2014-07-30T13:48:40Z
dc.date2015-11-26T18:02:23Z
dc.date.accessioned2018-03-29T00:44:03Z
dc.date.available2018-03-29T00:44:03Z
dc.identifierNeural Networks. Pergamon-Elsevier Science Ltd, v. 32, p. 292-302, 2012.
dc.identifier0893-6080
dc.identifierWOS:000306162600032
dc.identifier10.1016/j.neunet.2012.02.028
dc.identifierhttp://www.repositorio.unicamp.br/jspui/handle/REPOSIP/54407
dc.identifierhttp://repositorio.unicamp.br/jspui/handle/REPOSIP/54407
dc.identifier.urihttp://repositorioslatinoamericanos.uchile.cl/handle/2250/1292219
dc.descriptionFundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
dc.descriptionConselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
dc.descriptionEcho state networks (ESNs) can be interpreted as promoting an encouraging compromise between two seemingly conflicting objectives: (i) simplicity of the resulting mathematical model and (ii) the capability to express a wide range of nonlinear dynamics. By imposing fixed weights on the recurrent connections, the echo state approach avoids the well-known difficulties faced by recurrent neural network training strategies, yet still preserves, to a certain extent, the potential of the underlying structure due to the existence of feedback loops within the dynamical reservoir. Moreover, the overall training process is relatively simple, as it amounts essentially to adapting the readout, which usually corresponds to a linear combiner. However, the linear nature of the output layer may limit the capability to exploit the available information, since higher-order statistics of the signals are not taken into account. In this work, we present a novel architecture for an ESN in which the linear combiner is replaced by a Volterra filter structure. Additionally, the principal component analysis technique is used to reduce the number of effective signals transmitted to the output layer. This idea not only improves the processing capability of the network, but also preserves the simplicity of the training process. The proposed architecture is then analyzed in the context of a set of representative information extraction problems, more specifically supervised and unsupervised channel equalization, and blind separation of convolutive mixtures. The obtained results, compared with those produced by previously proposed ESN versions, highlight the benefits brought by the novel network proposal and characterize it as a promising tool for challenging signal processing tasks. (C) 2012 Elsevier Ltd. All rights reserved.
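dc.descriptionThe pipeline summarized in the abstract — a fixed random reservoir, PCA compression of the reservoir states, and a second-order Volterra readout trained by least squares — can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation; all sizes, scalings, and the toy target signal are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: input, reservoir, retained principal components
n_in, n_res, n_pc = 1, 50, 5

# Fixed (untrained) reservoir weights, rescaled so the spectral radius
# is below 1, a common sufficient condition for the echo state property
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    """Drive the fixed reservoir with input u (T x n_in); return states (T x n_res)."""
    x = np.zeros(n_res)
    states = []
    for t in range(len(u)):
        x = np.tanh(W_in @ u[t] + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy supervised task (illustrative only): a nonlinear function of the input
T = 500
u = rng.uniform(-1, 1, (T, n_in))
d = np.tanh(u[:, 0] + 0.5 * np.roll(u[:, 0], 1))

X = run_reservoir(u)

# PCA: project centered reservoir states onto the n_pc leading components,
# reducing the number of effective signals sent to the readout
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:n_pc].T  # T x n_pc reduced signals

# Second-order Volterra readout: bias, linear terms, and pairwise products,
# so the output can exploit higher-order statistics of the reduced signals
quad = np.stack([Z[:, i] * Z[:, j]
                 for i in range(n_pc) for j in range(i, n_pc)], axis=1)
Phi = np.hstack([np.ones((T, 1)), Z, quad])

# Training stays a simple linear least-squares fit of the readout weights
w, *_ = np.linalg.lstsq(Phi, d, rcond=None)
mse = np.mean((Phi @ w - d) ** 2)
```

Because the Volterra kernel is linear in its coefficients, the readout is still trained in closed form, which is how the proposal keeps the simplicity of standard ESN training while adding nonlinear processing power.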
dc.description32
dc.descriptionSI
dc.description292
dc.description302
dc.languageen
dc.publisherPergamon-Elsevier Science Ltd
dc.publisherOxford
dc.publisherEngland
dc.relationNeural Networks
dc.relationNeural Netw.
dc.rightsclosed access
dc.rightshttp://www.elsevier.com/about/open-access/open-access-policies/article-posting-policy
dc.sourceWeb of Science
dc.subjectEcho state networks
dc.subjectVolterra filtering
dc.subjectPrincipal component analysis
dc.subjectChannel equalization
dc.subjectSource separation
dc.subjectNeural-networks
dc.subjectTime
dc.subjectPrediction
dc.subjectSystems
dc.titleAn extended echo state network using Volterra filtering and principal component analysis
dc.typeJournal articles


This item belongs to the following institution