Journal articles
Semi-supervised and active learning through Manifold Reciprocal kNN Graph for image retrieval
Date
2019-05-07
Record in:
Neurocomputing, v. 340, p. 19-31.
1872-8286
0925-2312
10.1016/j.neucom.2019.02.016
2-s2.0-85062293932
Author
Universidade Estadual Paulista (Unesp)
University of Nottingham
Chinese Academy of Sciences
Institution
Abstract
Massive and ever-growing data collections, including visual and multimedia content, are available today. Such content usually carries additional information, such as text or other metadata, forming a sparse and noisy, yet rich and diverse source of annotation. Although text-based retrieval models are well established, they ignore the rich source of information encoded in the visual data itself. In contrast, promising content-based retrieval technologies, capable of considering the multimedia content, still face obstacles in mapping low-level features to high-level semantic concepts. Supervised approaches based on relevance feedback techniques have been employed to mitigate this gap in visual retrieval tasks. Although often quite effective, such methods rely only on labeled data, which can severely impact retrieval effectiveness when the number of user interventions is insufficient. In this scenario, retrieval approaches are ideal candidates for emerging weakly supervised and active learning techniques, which semi-autonomously explore data collections by taking into account the relationships among multimedia objects, thereby saving user effort. In this paper, we discuss a novel semi-supervised learning algorithm for image retrieval tasks. While a manifold learning algorithm uses a reciprocal kNN graph to analyze the unlabeled data, the labeled information obtained through user interactions is represented using similarity sets. Both labeled and unlabeled information are modelled in terms of ranking information to allow a strict link between them. Experimental results obtained on various public datasets with several different visual features demonstrate the effectiveness of the proposed approach.
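The reciprocal kNN graph mentioned in the abstract keeps an edge between two objects only when each appears among the other's k nearest neighbors, which tends to filter out noisy, one-sided neighbor relations. A minimal sketch of this construction in plain Python (the function name, the distance-matrix input, and the toy data are illustrative assumptions, not the paper's implementation):

```python
def reciprocal_knn_graph(dist, k):
    """Build a reciprocal kNN edge set from a square pairwise
    distance matrix (list of lists).

    An edge (i, j) is kept only when j is among the k nearest
    neighbors of i AND i is among the k nearest neighbors of j.
    """
    n = len(dist)
    knn = []
    for i in range(n):
        # Sort all points by distance from i; position 0 is i itself
        order = sorted(range(n), key=lambda j: dist[i][j])
        knn.append(set(order[1:k + 1]))  # top-k neighbors, excluding self
    # Keep only mutual (reciprocal) neighbor pairs
    return {(i, j) for i in range(n) for j in knn[i] if i in knn[j]}

# Toy example: four points on a line; the outlier at 10 ends up isolated
pts = [0.0, 1.0, 2.0, 10.0]
dist = [[abs(a - b) for b in pts] for a in pts]
edges = reciprocal_knn_graph(dist, k=2)
```

In this toy example, the outlier at position 10 lists points 1 and 2 among its nearest neighbors, but neither lists it back, so it receives no edges; the resulting edge set is symmetric by construction.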