dc.contributor: Foster, Ian
dc.contributor: Joubert, Gerhard R.
dc.contributor: Kučera, Luděk
dc.contributor: Nagel, Wolfgang E.
dc.contributor: Peters, Frans
dc.creator: Dematties, Dario Jesus
dc.creator: Thiruvathukal, George K.
dc.creator: Rizzi, Silvio
dc.creator: Wainselboim, Alejandro Javier
dc.creator: Zanutto, Bonifacio Silvano
dc.date.accessioned: 2021-05-06T01:07:43Z
dc.date.accessioned: 2022-10-15T01:35:53Z
dc.date.available: 2021-05-06T01:07:43Z
dc.date.available: 2022-10-15T01:35:53Z
dc.date.created: 2021-05-06T01:07:43Z
dc.date.issued: 2020
dc.identifier: Dematties, Dario Jesus; Thiruvathukal, George K.; Rizzi, Silvio; Wainselboim, Alejandro Javier; Zanutto, Bonifacio Silvano; Towards high-end scalability on biologically-inspired computational models; IOS Press; 36; 2020; 497-506
dc.identifier: 978-1-64368-071-2
dc.identifier: http://hdl.handle.net/11336/131407
dc.identifier: CONICET Digital
dc.identifier: CONICET
dc.identifier.uri: https://repositorioslatinoamericanos.uchile.cl/handle/2250/4330654
dc.description.abstract: The interdisciplinary field of neuroscience has made significant progress in recent decades, providing the broader scientific community with a new level of understanding of how the brain works beyond the store-and-fire model found in traditional neural networks. Meanwhile, Machine Learning (ML) based on established models has seen a surge of interest in the High Performance Computing (HPC) community, especially through the use of high-end accelerators such as Graphics Processing Units (GPUs) and HPC clusters built from them. In our work, we are motivated to exploit these high-performance computing developments and to understand the scaling challenges that new, biologically inspired learning models face on leadership-class HPC resources. These emerging models feature sparse and random connectivity profiles that map to more loosely coupled parallel architectures with a large number of CPU cores per node. In contrast to traditional ML codes, these methods exploit loosely coupled sparse data structures rather than the tightly coupled dense matrix computations that benefit from the SIMD-style parallelism found on GPUs. In this paper we introduce a hybrid Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) parallelization scheme to accelerate and scale our computational model, which is based on the dynamics of cortical tissue. We ran computational tests on a leadership-class visualization and analysis cluster at Argonne National Laboratory. We include a study of strong and weak scaling, in which we obtained parallel efficiency measures with a minimum above 87% and a maximum above 97% for simulations of our biologically inspired neural network on up to 64 computing nodes running 8 threads each. This study shows the promise of the hybrid MPI+OpenMP approach for supporting flexible, biologically inspired computational experimental scenarios. In addition, we discuss the viability of applying these strategies on high-end leadership computers in the future.
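For context, the following is a minimal sketch of the kind of hybrid MPI+OpenMP decomposition the abstract describes, not the authors' actual code: MPI ranks partition the model's columns across nodes, while OpenMP threads handle the loosely coupled, sparse per-column updates within each node. All names here (update_column, num_columns, the block decomposition) are illustrative assumptions.

```cpp
// Hypothetical hybrid MPI+OpenMP update loop, assuming columns that can be
// updated independently within one time step (a sketch, not the paper's code).
#include <mpi.h>
#include <omp.h>
#include <cstdio>
#include <vector>

// Placeholder for the sparse, per-column computation (assumed workload).
static double update_column(int column_id, int step) {
    double acc = 0.0;
    for (int k = 0; k < 1000; ++k)
        acc += (column_id % 7 + 1) * 1e-6 * (step + k);
    return acc;
}

int main(int argc, char** argv) {
    // Request FUNNELED threading: only the main thread makes MPI calls.
    int provided = 0;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int num_columns = 4096;            // total columns (illustrative)
    const int local_n = num_columns / size;  // block decomposition across ranks
    const int first = rank * local_n;

    std::vector<double> local(local_n, 0.0);

    for (int step = 0; step < 10; ++step) {
        // Intra-node parallelism: OpenMP threads share this rank's columns.
        #pragma omp parallel for schedule(dynamic)
        for (int i = 0; i < local_n; ++i)
            local[i] += update_column(first + i, step);

        // Inter-node communication: reduce a global summary once per step.
        double local_sum = 0.0, global_sum = 0.0;
        for (double v : local) local_sum += v;
        MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);

        if (rank == 0)
            std::printf("step %d: global activity %.3f\n", step, global_sum);
    }

    MPI_Finalize();
    return 0;
}
```

Built with, for example, `mpicxx -fopenmp` and launched with one MPI rank per node and `OMP_NUM_THREADS` threads per rank, this structure mirrors the 64-node, 8-thread configuration used in the scaling study reported above.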
dc.language: eng
dc.publisher: IOS Press
dc.relation: info:eu-repo/semantics/altIdentifier/url/http://ebooks.iospress.nl/volumearticle/53956
dc.relation: info:eu-repo/semantics/altIdentifier/doi/http://dx.doi.org/10.3233/APC200077
dc.rights: https://creativecommons.org/licenses/by-nc-sa/2.5/ar/
dc.rights: info:eu-repo/semantics/openAccess
dc.source: Parallel computing: technology trends
dc.subject: MPI
dc.subject: OPENMP
dc.subject: CENTRAL PROCESSING UNITS
dc.subject: BIOLOGICAL MODELS
dc.subject: NEUROSCIENCE
dc.subject: IRREGULAR COMPUTATION
dc.title: Towards high-end scalability on biologically-inspired computational models
dc.type: info:eu-repo/semantics/publishedVersion
dc.type: info:eu-repo/semantics/bookPart
dc.type: info:ar-repo/semantics/parte de libro