dc.contributor: Frederico Gadelha Guimarães
dc.contributor: http://lattes.cnpq.br/2472681535872194
dc.contributor: Tatiane Nogueira Rios
dc.contributor: Sandra Eliza Fontes de Avila
dc.contributor: Jaime Arturo Ramírez
dc.creator: Samara Silva Santos
dc.date.accessioned: 2022-11-23T18:35:28Z
dc.date.accessioned: 2023-06-16T16:44:28Z
dc.date.available: 2022-11-23T18:35:28Z
dc.date.available: 2023-06-16T16:44:28Z
dc.date.created: 2022-11-23T18:35:28Z
dc.date.issued: 2022-07-14
dc.identifier: http://hdl.handle.net/1843/47408
dc.identifier.uri: https://repositorioslatinoamericanos.uchile.cl/handle/2250/6683398
dc.description.abstract: Machine Learning (ML) methods have been widely used in many applications due to their strong generalization power and their ability to capture complex relationships in data. Despite this, such systems usually offer no clear account of why a particular decision was made, nor of how changing the input attributes affects the generated outputs. The need to understand these methods becomes even more pressing in the face of laws guaranteeing a "right to explanation", as provided for in Article 20 of the Brazilian General Data Protection Law (LGPD) and in similar regulations around the world. Accordingly, this work investigates the induction of Oblique Decision Trees (also known as Perceptron Decision Trees, or PDTs) as a local interpretability method for complex ML models. Since the PDT is transparent, it can locally simulate the behavior of more complex models, and information about them can be extracted through it. With this in mind, a local approximation of the predictions of the complex model to be explained is proposed, through the induction of PDTs whose weights are evolved by a heuristic optimization technique based on evolutionary computation. From the grown tree, explanations of the local decisions of opaque models are generated by providing the rules followed to obtain the outputs, exposing the local importance hierarchy of the attributes and the decision boundaries associated with each of them. A new PDT model for regression problems is also presented and used to generate local explanations for this type of problem.
The resulting application was named the Perceptron Decision Tree Explainer (PDTX). In short, PDTX is a model-agnostic local interpretability method that works with structured tabular data and can produce better approximations than some classical methods in the literature, while preserving both the stability and the simplicity of the generated explanations. Additionally, the effect of three local sampling techniques applied together with PDTX on the stability of the generated explanations was studied, as well as the impact of dimensionality reduction by five attribute-reduction methods from the literature on the quality of the local approximation. The results are promising: compared to LIME (Local Interpretable Model-Agnostic Explanations) and Decision Trees (DT), PDTX performed significantly better on known metrics such as fidelity and stability, in both classification and regression, and is comparable to LIME in terms of simplicity.
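The core idea summarized in the abstract, sampling a neighborhood around an instance and evolving the weights of an oblique split so that it mimics the opaque model's local predictions, can be illustrated with a minimal sketch. This is not the thesis's actual PDTX implementation: the `black_box` function, the single-split (depth-1) tree, the Gaussian sampling scale, and the simple elitist evolutionary loop are all illustrative assumptions.

```python
import numpy as np

# Hypothetical opaque model: predicts 1 inside the unit circle (nonlinear boundary).
def black_box(X):
    return (X[:, 0] ** 2 + X[:, 1] ** 2 < 1.0).astype(int)

def fit_oblique_stump(x0, predict_fn, n_samples=400, pop_size=30,
                      generations=50, sigma=0.3, seed=0):
    """Evolve one oblique split w.x + b >= 0 that locally mimics predict_fn
    around the instance x0. Returns (weights_and_bias, fidelity)."""
    rng = np.random.default_rng(seed)
    d = x0.size
    # Local neighborhood: Gaussian perturbations of the instance to explain.
    X = x0 + rng.normal(scale=sigma, size=(n_samples, d))
    y = predict_fn(X)

    def fitness(wb):
        # Fidelity of the split to the black-box labels; symmetric in the
        # split's orientation, so a sign flip does not penalize a candidate.
        side = (X @ wb[:d] + wb[d] >= 0)
        acc = np.mean(side == y)
        return max(acc, 1.0 - acc)

    # Elitist evolutionary loop: keep the best half, mutate it with Gaussian noise.
    pop = rng.normal(size=(pop_size, d + 1))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        elite = pop[np.argsort(scores)[-pop_size // 2:]]
        children = elite + rng.normal(scale=0.1, size=elite.shape)
        pop = np.vstack([elite, children])

    scores = np.array([fitness(p) for p in pop])
    best = pop[np.argmax(scores)]
    return best, float(scores.max())

# Explain the black box near an instance close to its decision boundary.
x0 = np.array([0.9, 0.1])
wb, fid = fit_oblique_stump(x0, black_box)
print(f"oblique split weights: {wb[:2]}, bias: {wb[2]}, fidelity: {fid:.2f}")
```

Near the boundary the circle is locally well approximated by a hyperplane, so a single evolved oblique split reaches high fidelity; the full method grows a tree of such splits and reads explanations off the root-to-leaf rule.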
dc.publisher: Universidade Federal de Minas Gerais
dc.publisher: Brazil
dc.publisher: ENG - DEPARTAMENTO DE ENGENHARIA ELÉTRICA
dc.publisher: Programa de Pós-Graduação em Engenharia Elétrica
dc.publisher: UFMG
dc.rights: Open Access
dc.subject: Explainable artificial intelligence
dc.subject: Interpretability in AI
dc.subject: Artificial intelligence
dc.subject: Machine learning
dc.subject: Oblique decision trees
dc.title: Induction of oblique decision trees as explainers for predictions of machine learning models
dc.type: Master's thesis