| dc.contributor | Segura Quijano, Fredy Enrique | |
| dc.contributor | García Cárdenas, Juan José | |
| dc.contributor | Tirado, Vilma | |
| dc.contributor | Giraldo Trujillo, Luis Felipe | |
| dc.contributor | García, Elkin | |
| dc.contributor | CMUA | |
| dc.creator | Sierra Alarcón, Sebastián | |
| dc.date.accessioned | 2022-07-11T14:36:45Z | |
| dc.date.available | 2022-07-11T14:36:45Z | |
| dc.date.created | 2022-07-11T14:36:45Z | |
| dc.date.issued | 2022 | |
| dc.identifier | http://hdl.handle.net/1992/58722 | |
| dc.identifier | instname:Universidad de los Andes | |
| dc.identifier | reponame:Repositorio Institucional Séneca | |
| dc.identifier | repourl:https://repositorio.uniandes.edu.co/ | |
| dc.description.abstract | This work focuses on the creation of a machine learning system that can be deployed on low-power embedded systems and microcontrollers through a split methodology. Starting from the computational limitations inherent to this type of device, we propose an architecture in which inference is performed collaboratively across multiple devices, sharing resources so that larger and more robust neural models can be deployed on small embedded systems. The aim throughout is to explore the potential of Edge Computing and to process data in the same place where it is generated. | |
| dc.description.abstract | The present work focuses on creating a distributed Machine Learning system capable of being deployed on multiple low-power embedded devices. Furthermore, we seek to share device resources through collaborative computing in order to deploy more powerful neural models in a network of embedded systems. Accordingly, different stages of the solution were designed to address the challenges inherent to creating a collaborative learning model, as well as the challenges of adopting this type of solution on embedded systems, such as code optimization, communication protocols, and device synchronization. | |
| dc.language | eng | |
| dc.publisher | Universidad de los Andes | |
| dc.publisher | Maestría en Ingeniería Electrónica y de Computadores | |
| dc.publisher | Facultad de Ingeniería | |
| dc.publisher | Departamento de Ingeniería Eléctrica y Electrónica | |
| dc.relation | Raffaele Pugliese, Stefano Regondi, and Riccardo Marini. Machine learning-based approach: Global trends, research directions, and regulatory standpoints. In: Data Science and Management 4 (2021), pp. 19-29 | |
| dc.relation | Timothy Yang et al. Applied federated learning: Improving Google keyboard query suggestions. In: arXiv preprint arXiv:1812.02903 (2018) | |
| dc.relation | Blesson Varghese et al. Revisiting the arguments for edge computing research. In: IEEE Internet Computing 25.5 (2021), pp. 36-42 | |
| dc.relation | Pete Warden and Daniel Situnayake. TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers. O'Reilly Media, 2019 | |
| dc.relation | Mahadev Satyanarayanan. The emergence of edge computing. In: Computer 50.1 (2017), pp. 30-39 | |
| dc.relation | Ziming Zhao et al. Edge computing: platforms, applications and challenges. In: J. Comput. Res. Dev 55.2 (2018), pp. 327-337 | |
| dc.relation | Xuehai Hong and Yang Wang. Edge computing technology: development and countermeasures. In: Strategic Study of Chinese Academy of Engineering 20.2 (2018), pp. 20-26 | |
| dc.relation | Keith Bonawitz et al. Towards Federated Learning at Scale: System Design. In: (2019). DOI: 10.48550/ARXIV.1902.01046. URL: https://arxiv.org/abs/1902.01046 | |
| dc.relation | Dinh C. Nguyen et al. Federated Learning Meets Blockchain in Edge Computing: Opportunities and Challenges. In: (2021). DOI: 10.48550/ARXIV.2104.01776. URL: https://arxiv.org/abs/2104.01776 | |
| dc.relation | H. Brendan McMahan et al. Communication-Efficient Learning of Deep Networks from Decentralized Data. In: (2016). DOI: 10.48550/ARXIV.1602.05629. URL: https://arxiv.org/abs/1602.05629 | |
| dc.relation | Chandra Thapa et al. SplitFed: When Federated Learning Meets Split Learning. In: (2020). DOI: 10.48550/ARXIV.2004.12088. URL: https://arxiv.org/abs/2004.12088 | |
| dc.relation | Otkrist Gupta and Ramesh Raskar. Distributed learning of deep neural network over multiple agents. In: (2018). DOI: 10.48550/ARXIV.1810.06060. URL: https://arxiv.org/abs/1810.06060 | |
| dc.relation | Weisong Shi et al. Edge computing: Vision and challenges. In: IEEE Internet of Things Journal 3.5 (2016), pp. 637-646 | |
| dc.relation | Nasir Abbas et al. Mobile edge computing: A survey. In: IEEE Internet of Things Journal 5.1 (2017), pp. 450-465 | |
| dc.relation | Colby R. Banbury et al. Benchmarking TinyML systems: Challenges and direction. In: arXiv preprint arXiv:2003.04821 (2020) | |
| dc.relation | Lachit Dutta and Swapna Bharali. TinyML meets IoT: A comprehensive survey. In: Internet of Things 16 (2021), p. 100461 | |
| dc.relation | RS-485 Unit Load and Maximum Number of Bus Connections. TLE 4473 GV55-2. Rev. 1.2. Texas Instruments. May 2004 | |
| dc.rights | Atribución-NoComercial 4.0 Internacional | |
| dc.rights | http://creativecommons.org/licenses/by-nc/4.0/ | |
| dc.rights | info:eu-repo/semantics/openAccess | |
| dc.rights | http://purl.org/coar/access_right/c_abf2 | |
| dc.title | Split learning on low power devices for collaborative inference | |
| dc.type | Trabajo de grado - Maestría | |