dc.creator: HUGO JAIR ESCALANTE BALDERAS
dc.creator: CARLOS ARTURO HERNANDEZ GRACIDAS
dc.creator: JESUS ANTONIO GONZALEZ BERNAL
dc.creator: AURELIO LOPEZ LOPEZ
dc.creator: MANUEL MONTES Y GOMEZ
dc.creator: EDUARDO FRANCISCO MORALES MANZANARES
dc.creator: LUIS ENRIQUE SUCAR SUCCAR
dc.creator: LUIS VILLASEÑOR PINEDA
dc.date: 2009
dc.date.accessioned: 2022-10-12T19:48:03Z
dc.date.available: 2022-10-12T19:48:03Z
dc.identifier: http://inaoe.repositorioinstitucional.mx/jspui/handle/1009/1177
dc.identifier.uri: https://repositorioslatinoamericanos.uchile.cl/handle/2250/4122293
dc.description: Automatic image annotation (AIA), a highly popular topic in the field of information retrieval research, has experienced significant progress within the last decade. Yet the lack of a standardized evaluation platform tailored to the needs of AIA has hindered the effective evaluation of its methods, especially for region-based AIA. Therefore, in this paper we introduce the segmented and annotated IAPR TC-12 benchmark: an extended resource for the evaluation of AIA methods, as well as for the analysis of their impact on multimedia information retrieval. We describe the methodology adopted for the manual segmentation and annotation of images and present statistics for the extended collection. The extended collection is publicly available and can be used to evaluate a variety of tasks in addition to image annotation. We also propose a soft measure for the evaluation of annotation performance and identify future research areas in which this extended test collection is likely to make a contribution.
dc.format: application/pdf
dc.language: eng
dc.publisher: Elsevier Inc.
dc.relation: citation:Escalante-Balderas, H.J., et al., (2009). The segmented and annotated IAPR TC-12 benchmark, Computer Vision and Image Understanding (114): 419–428
dc.rights: info:eu-repo/semantics/openAccess
dc.rights: http://creativecommons.org/licenses/by-nc-nd/4.0
dc.subject: info:eu-repo/classification/Data set creation/Data set creation
dc.subject: info:eu-repo/classification/Ground truth collection/Ground truth collection
dc.subject: info:eu-repo/classification/Evaluation metrics/Evaluation metrics
dc.subject: info:eu-repo/classification/Automatic image annotation/Automatic image annotation
dc.subject: info:eu-repo/classification/Image retrieval/Image retrieval
dc.subject: info:eu-repo/classification/cti/1
dc.subject: info:eu-repo/classification/cti/12
dc.subject: info:eu-repo/classification/cti/1203
dc.title: The segmented and annotated IAPR TC-12 benchmark
dc.type: info:eu-repo/semantics/article
dc.type: info:eu-repo/semantics/acceptedVersion
dc.audience: students
dc.audience: researchers
dc.audience: generalPublic