dc.contributorTacla, Cesar Augusto
dc.contributorhttps://orcid.org/0000-0002-8244-8970
dc.contributorhttp://lattes.cnpq.br/2860342167270413
dc.contributorMorveli Espinoza, Miriam Mariela Mercedes
dc.contributorhttps://orcid.org/0000-0002-7376-2271
dc.contributorhttp://lattes.cnpq.br/5351129518161204
dc.contributorTacla, Cesar Augusto
dc.contributorhttps://orcid.org/0000-0002-8244-8970
dc.contributorhttp://lattes.cnpq.br/2860342167270413
dc.contributorMarchi, Jerusa
dc.contributorhttps://orcid.org/0000-0002-4864-3764
dc.contributorhttp://lattes.cnpq.br/0882497234989588
dc.contributorMorveli Espinoza, Miriam Mariela Mercedes
dc.contributorhttps://orcid.org/0000-0002-7376-2271
dc.contributorhttp://lattes.cnpq.br/5351129518161204
dc.contributorNieves Sanchez, Juan Carlos
dc.contributorhttps://orcid.org/0000-0003-4072-8795
dc.creatorJasinski, Henrique Monteiro Rogich
dc.date.accessioned2022-09-05T19:57:22Z
dc.date.accessioned2022-12-06T15:15:58Z
dc.date.available2022-09-05T19:57:22Z
dc.date.available2022-12-06T15:15:58Z
dc.date.created2022-09-05T19:57:22Z
dc.date.issued2022-04-26
dc.identifierJASINSKI, Henrique Monteiro Rogich. Geração de explicações contrastivas para seleção de objetivos em agentes BDI. 2022. Dissertação (Mestrado em Engenharia Elétrica e Informática Industrial) - Universidade Tecnológica Federal do Paraná, Curitiba, 2022.
dc.identifierhttp://repositorio.utfpr.edu.br/jspui/handle/1/29522
dc.identifier.urihttps://repositorioslatinoamericanos.uchile.cl/handle/2250/5263250
dc.description.abstractAs agent-based systems grow in number and reach, more and more people interact with them and are affected by the decisions such systems make. This increases the need for these systems to be able to explain themselves to a lay user. The Belief-Desire-Intention (BDI) model is a commonly used agent model. It can be fairly complex, because an agent runs an internal deliberation process to decide which goals it will pursue based on its beliefs. This deliberative process is called goal selection. When humans explain things to each other, they use several different types of explanation. One very common type is the contrastive explanation, in which two scenarios are compared and the explanation presents the differences between them. In this way, by contrasting a known case with an unexpected one, it is possible to present only the causes that differentiate the two. Interest in explainable artificial intelligence has been increasing in recent years, yet few works are grounded in social and cognitive science studies of how humans generate and evaluate explanations. This dissertation therefore aims to identify, based on findings from the social and cognitive sciences, what information should be part of contrastive explanations, and how to generate such explanations in the context of goal selection in BDI agents. A method for generating contrastive explanations for BDI-based goal selection is proposed, grounded in the works of Bouwel and Weber (2002) and Grice (1975). The structure of contrastive questions proposed by Bouwel and Weber (2002) serves as the foundation for the questions and answers addressed by the method. In turn, Grice's (1975) work provides requirements for communication between two cooperating parties; in the context of this work, an agent and a user.
These requirements, which Grice formulated as four sets of maxims (Quantity, Quality, Relation, and Manner), establish restrictions and good practices concerning the information exchanged between the parties. By basing the method on Grice's maxims, the generated explanations are expected to more closely resemble those of two people conversing and explaining an event to each other. The method generates a set of possible explanations, each representing a possible answer with its respective set of relevant information. A case study shows how the required information is computed and how the requirements based on Grice's work are accounted for. The method addresses three of Grice's four maxims; the maxim of Manner was disregarded because it depends on user interaction, which is outside the scope of this work. The Quality, Relation, and Quantity maxims are addressed by the formulation of each question type used in the first procedure, and the second procedure further contributes to satisfying the Quantity maxim. The selection of a single explanation must be made before the answer is presented to the final user; both the selection and the presentation of the explanation are outside the scope of this work.
dc.publisherUniversidade Tecnológica Federal do Paraná
dc.publisherCuritiba
dc.publisherBrasil
dc.publisherPrograma de Pós-Graduação em Engenharia Elétrica e Informática Industrial
dc.publisherUTFPR
dc.rightshttp://creativecommons.org/licenses/by/4.0/
dc.rightsopenAccess
dc.subjectInteligência artificial
dc.subjectExplicação
dc.subjectComportamento humano
dc.subjectIntenção
dc.subjectProcesso decisório
dc.subjectEmoções e cognição
dc.subjectArtificial intelligence
dc.subjectExplanation
dc.subjectHuman behavior
dc.subjectIntention
dc.subjectDecision-making
dc.subjectEmotions and cognition
dc.titleGeração de explicações contrastivas para seleção de objetivos em agentes BDI
dc.typemasterThesis