dc.creator: Gutnisky, D. A.
dc.creator: Zanutto, Bonifacio Silvano
dc.date.accessioned: 2017-11-26T00:33:23Z
dc.date.accessioned: 2018-11-06T13:46:20Z
dc.date.available: 2017-11-26T00:33:23Z
dc.date.available: 2018-11-06T13:46:20Z
dc.date.created: 2017-11-26T00:33:23Z
dc.date.issued: 2004
dc.identifier: Gutnisky, D. A.; Zanutto, Bonifacio Silvano; Learning obstacle avoidance with an operant behavioral model; Massachusetts Institute of Technology; Artificial Life; 10; 1; -1-2004; 65-81
dc.identifier: 1064-5462
dc.identifier: http://hdl.handle.net/11336/29109
dc.identifier: 1530-9185
dc.identifier: CONICET Digital
dc.identifier: CONICET
dc.identifier.uri: http://repositorioslatinoamericanos.uchile.cl/handle/2250/1879165
dc.description.abstract: Artificial intelligence researchers have been attracted by the idea of having robots learn how to accomplish a task rather than being told explicitly how to do so. Reinforcement learning has been proposed as an appealing framework for controlling mobile agents. Robot learning research and research on biological systems face many similar problems in achieving high flexibility across a variety of tasks. In this work, the control of a vehicle in an avoidance task by a previously developed operant learning model (a model of animal learning) is studied. An environment is simulated in which a mobile robot with proximity sensors must minimize the punishment received for colliding with obstacles. The results were compared with the Q-Learning algorithm, and the proposed model performed better. In this way, a new artificial intelligence agent inspired by research in neurobiology, psychology, and ethology is proposed.
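The abstract describes a simulated obstacle-avoidance task in which an agent with proximity sensors learns to minimize collision punishment, with Q-Learning used as the baseline for comparison. The sketch below is a minimal, illustrative tabular Q-Learning loop under assumed discretized sensor states and a negative reward for collisions; it is not the authors' operant learning model or their simulation, and the environment stub (toy_step), state encoding, and hyperparameters are hypothetical.

import random
from collections import defaultdict

# Assumed discretization: three proximity sensors, each reading 0 ("far") or 1 ("near").
SENSOR_LEVELS = 2
ACTIONS = ["forward", "turn_left", "turn_right"]

ALPHA = 0.1    # learning rate (assumed)
GAMMA = 0.9    # discount factor (assumed)
EPSILON = 0.1  # exploration probability (assumed)

q_table = defaultdict(float)  # (state, action) -> estimated return

def choose_action(state):
    """Epsilon-greedy action selection over the tabular Q-function."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def q_update(state, action, reward, next_state):
    """One-step Q-Learning update: Q <- Q + alpha * (r + gamma * max_a' Q' - Q)."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, action)])

def toy_step(state, action):
    """Placeholder environment: random sensor readings stand in for the simulated
    robot; a collision (all sensors "near") yields punishment (negative reward)."""
    next_state = tuple(random.randrange(SENSOR_LEVELS) for _ in range(3))
    reward = -1.0 if all(s == 1 for s in next_state) else 0.0
    return next_state, reward

state = (0, 0, 0)
for _ in range(10_000):
    action = choose_action(state)
    next_state, reward = toy_step(state, action)
    q_update(state, action, reward, next_state)
    state = next_state

Because the placeholder dynamics are random, the loop only demonstrates the update mechanics of the Q-Learning baseline, not learned avoidance behavior; a real comparison would plug in the simulated robot described in the paper.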
dc.language: eng
dc.publisher: Massachusetts Institute of Technology
dc.relation: info:eu-repo/semantics/altIdentifier/url/http://www.mitpressjournals.org/doi/abs/10.1162/106454604322875913
dc.relation: info:eu-repo/semantics/altIdentifier/doi/http://dx.doi.org/10.1162/106454604322875913
dc.relation: info:eu-repo/semantics/altIdentifier/url/https://dl.acm.org/citation.cfm?id=982224
dc.relation: info:eu-repo/semantics/altIdentifier/pmid/15035863
dc.rights: https://creativecommons.org/licenses/by-nc-sa/2.5/ar/
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: OPERANT LEARNING
dc.subject: NEURAL NETWORKS
dc.subject: REINFORCEMENT LEARNING
dc.subject: ARTIFICIAL NEURAL NETWORKS
dc.subject: BIOENGINEERING
dc.title: Learning obstacle avoidance with an operant behavioral model
dc.type: Journal articles

