dc.creator: Úbeda, Ignacio
dc.creator: Saavedra, José M.
dc.creator: Nicolas, Stéphane
dc.creator: Petitjean, Caroline
dc.creator: Heutte, Laurent
dc.date.accessioned: 2020-04-29T15:07:05Z
dc.date.available: 2020-04-29T15:07:05Z
dc.date.created: 2020-04-29T15:07:05Z
dc.date.issued: 2020
dc.identifier: Pattern Recognition Letters 131: 398-404
dc.identifier: 10.1016/j.patrec.2020.02.002
dc.identifier: https://repositorio.uchile.cl/handle/2250/174224
dc.description.abstract: Pattern spotting consists of locating different instances of a given object (i.e., an image query) in a collection of historical document images. These patterns may vary in shape, size, color, context and even style because they are hand-drawn, which makes pattern spotting a difficult task. To tackle this problem, we propose a Convolutional Neural Network (CNN) approach that uses a Feature Pyramid Network (FPN) as the feature extractor of our system. Using an FPN allows us to extract descriptors of local regions of both the documents to be indexed and the queries, at multiple scales, with just a single forward pass. Experiments conducted on the DocExplore dataset show that the proposed system improves mAP by 73% (from 0.157 to 0.272) in pattern localization compared with state-of-the-art results, even when the feature extractor is not trained with domain-specific data. Memory requirements and computation time also decrease, since the descriptor dimension used for distance computation is reduced by a factor of 16.
dc.language: en
dc.publisher: Elsevier
dc.rights: http://creativecommons.org/licenses/by-nc-nd/3.0/cl/
dc.source: Pattern Recognition Letters
dc.subject: Pattern spotting
dc.subject: Image retrieval
dc.subject: Historical documents
dc.subject: Convolutional neural network
dc.title: Improving pattern spotting in historical documents using feature pyramid networks
dc.type: Journal article
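
The abstract above describes how an FPN backbone yields descriptors of local regions at multiple scales from a single forward pass. Below is a minimal, hypothetical sketch of that idea using torchvision; the backbone choice, layer names, channel sizes, and 256-d output dimension are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: multi-scale feature maps from one forward pass with an FPN.
# Assumes torchvision >= 0.13 (weights=None API); not the paper's code.
import torch
from torchvision.models import resnet50
from torchvision.models.feature_extraction import create_feature_extractor
from torchvision.ops import FeaturePyramidNetwork

# ResNet-50 body returning its four residual stages (C2..C5).
body = create_feature_extractor(
    resnet50(weights=None),
    return_nodes={"layer1": "c2", "layer2": "c3", "layer3": "c4", "layer4": "c5"},
)

# FPN merges the stages into maps with a common channel dimension (here 256).
fpn = FeaturePyramidNetwork(
    in_channels_list=[256, 512, 1024, 2048],  # ResNet-50 stage widths
    out_channels=256,
)

image = torch.rand(1, 3, 512, 512)  # dummy document page
with torch.no_grad():
    pyramid = fpn(body(image))      # one forward pass, all scales at once

for name, fmap in pyramid.items():
    # e.g. c2 -> [1, 256, 128, 128], ..., c5 -> [1, 256, 16, 16]
    print(name, tuple(fmap.shape))
```

Descriptors for candidate regions of the indexed pages and for the query could then be read from these maps at the scale matching the region size, which is what makes the single-pass, multi-scale extraction described in the abstract possible.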

