dc.creatorde Siqueira, FR
dc.creatorSchwartz, WR
dc.creatorPedrini, H
dc.date2013
dc.date2014-08-01T18:40:21Z
dc.date2015-11-26T18:04:47Z
dc.date.accessioned2018-03-29T00:46:57Z
dc.date.available2018-03-29T00:46:57Z
dc.identifierNeurocomputing. Elsevier Science BV, v. 120, p. 336-345, 2013.
dc.identifier0925-2312
dc.identifierWOS:000324847100036
dc.identifier10.1016/j.neucom.2012.09.042
dc.identifierhttp://www.repositorio.unicamp.br/jspui/handle/REPOSIP/82035
dc.identifierhttp://repositorio.unicamp.br/jspui/handle/REPOSIP/82035
dc.identifier.urihttp://repositorioslatinoamericanos.uchile.cl/handle/2250/1292952
dc.descriptionFundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
dc.descriptionConselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
dc.descriptionCoordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
dc.descriptionTexture information plays an important role in image analysis. Although several descriptors have been proposed to extract and analyze texture, developing automatic systems for image interpretation and object recognition remains difficult because of the complex nature of texture. Scale is an important factor in texture analysis, since the same texture can be perceived as different texture patterns at different scales. Gray level co-occurrence matrices (GLCM) have proved to be an effective texture descriptor. This paper presents a novel strategy for extending the GLCM to multiple scales through two different approaches: a Gaussian scale-space representation, constructed by smoothing the image with increasingly large low-pass filters to produce a set of smoothed versions of the original image, and an image pyramid, defined by sampling the image in both space and scale. The performance of the proposed approach is evaluated by applying the multi-scale descriptor to five benchmark texture data sets, and the results are compared to other well-known texture operators, including the original GLCM, which, although faster than the proposed method, is significantly outperformed in accuracy. (c) 2013 Elsevier B.V. All rights reserved.
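A minimal sketch, assuming scikit-image (>= 0.19, for graycomatrix/graycoprops) and SciPy, of the multi-scale GLCM idea described in the abstract: GLCM statistics are computed on increasingly smoothed copies of the image (Gaussian scale-space) and on spatially downsampled copies (image pyramid), then concatenated into a single descriptor. The sigma values, pyramid depth, and Haralick properties used here are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import graycomatrix, graycoprops
from skimage.transform import pyramid_gaussian

# Illustrative choice of Haralick-style GLCM properties (assumption).
PROPS = ("contrast", "homogeneity", "energy", "correlation")


def glcm_features(gray_uint8, distances=(1,), angles=(0.0, np.pi / 2)):
    # Single-scale GLCM statistics for a uint8 grayscale image.
    glcm = graycomatrix(gray_uint8, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel() for p in PROPS])


def multiscale_glcm(gray_uint8, sigmas=(1.0, 2.0, 4.0), pyramid_layers=3):
    feats = [glcm_features(gray_uint8)]  # original image
    # Gaussian scale-space: smooth with larger and larger low-pass filters.
    for sigma in sigmas:
        smoothed = gaussian_filter(gray_uint8.astype(float), sigma)
        feats.append(glcm_features(np.clip(smoothed, 0, 255).astype(np.uint8)))
    # Image pyramid: sample in both space and scale (layer 0 repeats the
    # original resolution; deeper layers are progressively downsampled).
    for layer in pyramid_gaussian(gray_uint8, max_layer=pyramid_layers,
                                  downscale=2):
        feats.append(glcm_features((layer * 255).astype(np.uint8)))
    return np.hstack(feats)
```

The multi-scale descriptor produced this way could then be fed to any standard classifier for texture classification on the benchmark data sets mentioned in the abstract.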
dc.description120
dc.descriptionSI
dc.description336
dc.description345
dc.descriptionFundação de Amparo à Pesquisa do Estado de Minas Gerais (FAPEMIG)
dc.languageen
dc.publisherElsevier Science BV
dc.publisherAmsterdam
dc.publisherNetherlands
dc.relationNeurocomputing
dc.rightsclosed
dc.rightshttp://www.elsevier.com/about/open-access/open-access-policies/article-posting-policy
dc.sourceWeb of Science
dc.subjectMulti-scale feature descriptor
dc.subjectGray level co-occurrence matrix
dc.subjectGLCM
dc.subjectTexture description
dc.subjectImage analysis
dc.subjectLocal Binary Patterns
dc.subjectImage Representation
dc.subjectClassification
dc.subjectFeatures
dc.subjectSegmentation
dc.subjectFilters
dc.subjectGlcm
dc.subjectRecognition
dc.subjectModels
dc.subjectScale
dc.titleMulti-scale gray level co-occurrence matrices for texture description
dc.typeJournal articles


This item belongs to the following institution