dc.contributorSegura Quijano, Fredy Enrique
dc.contributorGarcía Cárdenas, Juan José
dc.contributorTirado, Vilma
dc.contributorGiraldo Trujillo, Luis Felipe
dc.contributorGarcía, Elkin
dc.contributorCMUA
dc.creatorSierra Alarcón, Sebastián
dc.date.accessioned2022-07-11T14:36:45Z
dc.date.available2022-07-11T14:36:45Z
dc.date.created2022-07-11T14:36:45Z
dc.date.issued2022
dc.identifierhttp://hdl.handle.net/1992/58722
dc.identifierinstname:Universidad de los Andes
dc.identifierreponame:Repositorio Institucional Séneca
dc.identifierrepourl:https://repositorio.uniandes.edu.co/
dc.description.abstractThis work focuses on building a machine learning system that can be deployed on low-power embedded systems and microcontrollers through a split methodology. Starting from the computational limitations these devices impose, we propose an architecture in which inference is performed collaboratively across multiple devices, sharing resources so that larger and more robust neural models can be deployed on small embedded systems. The aim throughout is to explore the potential of edge computing by processing data in the same place where it is generated.
dc.description.abstractThe present work focuses on creating a distributed machine learning system capable of being deployed on multiple low-power embedded devices. Furthermore, we look to share device resources through collaborative computing in order to deploy more powerful neural models on a network of embedded systems. Accordingly, different stages of the solution address the challenges inherent in creating a collaborative learning model, as well as those posed by adopting this type of solution on embedded systems, such as code optimization, communication protocols, and device synchronization.
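The split inference described in the abstract — early layers running on one embedded node, the remaining layers on another — can be sketched as below. This is a minimal illustration only: the layer shapes, weights, and function names are hypothetical and do not reflect the thesis's actual model, optimizations, or RS-485 transport.

```python
import numpy as np

# Hypothetical random weights for illustration; a real deployment would
# load trained parameters onto each device.
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# --- Device side: the first layers run locally on the embedded node ---
W1 = rng.normal(size=(8, 4))   # input dim 8 -> hidden dim 4
b1 = np.zeros(4)

def device_forward(x):
    """Compute the early layers and return the intermediate activation
    that would be transmitted to the next node (not the raw input)."""
    return relu(x @ W1 + b1)

# --- Second device: the remaining layers finish the forward pass ---
W2 = rng.normal(size=(4, 3))   # hidden dim 4 -> 3 output scores
b2 = np.zeros(3)

def collaborator_forward(h):
    """Complete inference from the received activation."""
    return h @ W2 + b2

x = rng.normal(size=8)          # one local sensor reading
h = device_forward(x)           # only this small tensor crosses the link
scores = collaborator_forward(h)
print(scores.shape)             # (3,)
```

The point of the split is that neither device holds the full model, and only the compact activation `h` travels between them, which is what makes larger networks feasible on small microcontrollers.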
dc.languageeng
dc.publisherUniversidad de los Andes
dc.publisherMaestría en Ingeniería Electrónica y de Computadores
dc.publisherFacultad de Ingeniería
dc.publisherDepartamento de Ingeniería Eléctrica y Electrónica
dc.relationRaffaele Pugliese, Stefano Regondi, and Riccardo Marini. Machine learning-based approach: Global trends, research directions, and regulatory standpoints. In: Data Science and Management 4 (2021), pp. 19-29
dc.relationTimothy Yang et al. Applied federated learning: Improving Google Keyboard query suggestions. In: arXiv preprint arXiv:1812.02903 (2018)
dc.relationBlesson Varghese et al. Revisiting the arguments for edge computing research. In: IEEE Internet Computing 25.5 (2021), pp. 36-42
dc.relationPete Warden and Daniel Situnayake. TinyML: Machine learning with TensorFlow Lite on Arduino and ultra-low-power microcontrollers. O'Reilly Media, 2019
dc.relationMahadev Satyanarayanan. The emergence of edge computing. In: Computer 50.1 (2017), pp. 30-39
dc.relationZiming Zhao et al. Edge computing: platforms, applications and challenges. In: J. Comput. Res. Dev. 55.2 (2018), pp. 327-337
dc.relationXuehai Hong and Yang Wang. Edge computing technology: development and countermeasures. In: Strategic Study of Chinese Academy of Engineering 20.2 (2018), pp. 20-26
dc.relationKeith Bonawitz et al. Towards Federated Learning at Scale: System Design. In: (2019). DOI: 10.48550/ARXIV.1902.01046. URL: https://arxiv.org/abs/1902.01046
dc.relationDinh C. Nguyen et al. Federated Learning Meets Blockchain in Edge Computing: Opportunities and Challenges. In: (2021). DOI: 10.48550/ARXIV.2104.01776. URL: https://arxiv.org/abs/2104.01776
dc.relationH. Brendan McMahan et al. Communication-Efficient Learning of Deep Networks from Decentralized Data. In: (2016). DOI: 10.48550/ARXIV.1602.05629. URL: https://arxiv.org/abs/1602.05629
dc.relationChandra Thapa et al. SplitFed: When Federated Learning Meets Split Learning. In: (2020). DOI: 10.48550/ARXIV.2004.12088. URL: https://arxiv.org/abs/2004.12088
dc.relationOtkrist Gupta and Ramesh Raskar. Distributed learning of deep neural network over multiple agents. In: (2018). DOI: 10.48550/ARXIV.1810.06060. URL: https://arxiv.org/abs/1810.06060
dc.relationWeisong Shi et al. Edge computing: Vision and challenges. In: IEEE Internet of Things Journal 3.5 (2016), pp. 637-646
dc.relationNasir Abbas et al. Mobile edge computing: A survey. In: IEEE Internet of Things Journal 5.1 (2017), pp. 450-465
dc.relationColby R. Banbury et al. Benchmarking TinyML systems: Challenges and direction. In: arXiv preprint arXiv:2003.04821 (2020)
dc.relationLachit Dutta and Swapna Bharali. TinyML meets IoT: A comprehensive survey. In: Internet of Things 16 (2021), p. 100461
dc.relationRS-485 Unit Load and Maximum Number of Bus Connections. TLE 4473 GV55-2. Rev. 1.2. Texas Instruments. May 2004
dc.rightsAtribución-NoComercial 4.0 Internacional
dc.rightshttp://creativecommons.org/licenses/by-nc/4.0/
dc.rightsinfo:eu-repo/semantics/openAccess
dc.rightshttp://purl.org/coar/access_right/c_abf2
dc.titleSplit learning on low power devices for collaborative inference
dc.typeTrabajo de grado - Maestría