dc.contributor: Jefersson Alex dos Santos
dc.contributor: http://lattes.cnpq.br/2171782600728348
dc.contributor: Arnaldo de Albuquerque Araújo
dc.contributor: Mário Fernando Montenegro Campos
dc.contributor: Anísio Mendes Lacerda
dc.contributor: Moacir Antonelli Ponti
dc.contributor: Alexandre Xavier Falcão
dc.creator: Hugo Neves de Oliveira
dc.date.accessioned: 2023-03-29T15:28:13Z
dc.date.accessioned: 2023-06-16T15:40:48Z
dc.date.available: 2023-03-29T15:28:13Z
dc.date.available: 2023-06-16T15:40:48Z
dc.date.created: 2023-03-29T15:28:13Z
dc.date.issued: 2020-07-21
dc.identifier: http://hdl.handle.net/1843/51331
dc.identifier: https://orcid.org/0000-0003-2622-1277
dc.identifier.uri: https://repositorioslatinoamericanos.uchile.cl/handle/2250/6679832
dc.description.abstract: Distinct digitization techniques for biomedical images yield different visual patterns in samples from many radiological exams. These differences may hamper the use of data-driven Machine Learning approaches for inference over these images, such as Deep Learning. Another difficulty in this field is the lack of labeled data, even though in many cases there is an abundance of unlabeled data available. Therefore, an important step toward improving the generalization capabilities of these methods is to perform Unsupervised and Semi-Supervised Domain Adaptation between different datasets of biomedical images. To tackle this problem, in this work we propose an Unsupervised and Semi-Supervised Domain Adaptation method for dense labeling tasks in biomedical images using Generative Adversarial Networks for Unsupervised Image-to-Image Translation. We merge these generative models with well-known supervised deep semantic segmentation architectures to create two semi-supervised methods capable of learning from both unlabeled and labeled data, whenever labeling is available. The first, Domain-to-Domain, method, like most other Image Translation methods in the literature, is limited to a pair of domains: one source and one target. The second proposed methodology takes advantage of conditional dataset training to encourage Domain Generalization across several data sources from the same domain. From this conditional dataset encoding, we also devise a novel pipeline for rib segmentation in X-Ray images that requires no labels at all. We compare our method across a wide range of domains, datasets, and segmentation tasks, and against traditional baselines in the Domain Adaptation literature, such as pretrained models both with and without fine-tuning. We perform both quantitative and qualitative analyses of the proposed method and baselines in the many distinct scenarios considered in our experimental evaluation.
We empirically observe the limitations of pairwise Domain Adaptation approaches to truly generalizable radiograph segmentation, evidencing the better performance of multi-source training methods in this task. The proposed Conditional Domain Adaptation method shows consistently and significantly better results than the baselines in scarce labeled data scenarios (that is, when labeled data is limited or non-existent in the target dataset), achieving Jaccard indices greater than 0.9 in most tasks. Completely Unsupervised Domain Adaptation results were observed to be close to those of the Fully Supervised Domain Adaptation used in the traditional procedure of fine-tuning pretrained Deep Neural Networks.
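The Jaccard index (intersection over union) used as the evaluation metric above can be sketched for binary segmentation masks as follows; this is a minimal illustrative implementation assuming NumPy boolean masks, not code from the thesis itself:

```python
import numpy as np

def jaccard_index(pred, target):
    """Jaccard index (IoU) between two binary segmentation masks.

    Both inputs are boolean (or 0/1) NumPy arrays of the same shape.
    Returns 1.0 when both masks are empty, by convention.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0
    return intersection / union

# Toy example: two 4x4 masks sharing 2 of 6 foreground pixels.
a = np.zeros((4, 4), dtype=bool); a[0, :] = True        # 4 pixels
b = np.zeros((4, 4), dtype=bool); b[0:2, 2:] = True     # 4 pixels
print(jaccard_index(a, b))  # intersection 2 / union 6 ≈ 0.333
```

A Jaccard index above 0.9, as reported for most tasks, means predicted and ground-truth masks overlap in at least 90% of their combined foreground area.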
dc.publisher: Universidade Federal de Minas Gerais
dc.publisher: Brasil
dc.publisher: ICX - DEPARTAMENTO DE CIÊNCIA DA COMPUTAÇÃO
dc.publisher: Programa de Pós-Graduação em Ciência da Computação
dc.publisher: UFMG
dc.rights: http://creativecommons.org/licenses/by/3.0/pt/
dc.rights: Open Access
dc.subject: Deep learning
dc.subject: Domain adaptation
dc.subject: Medical images
dc.subject: Image segmentation
dc.title: Semantic segmentation with multi-source domain adaptation for radiological images
dc.type: Thesis