dc.creator: Marques, Henrique Oliveira
dc.creator: Campello, Ricardo José Gabrielli Barreto
dc.creator: Zimek, Arthur
dc.creator: Sander, Jörg
dc.date.accessioned: 2016-01-06T18:11:29Z
dc.date.accessioned: 2018-07-04T17:06:19Z
dc.date.available: 2016-01-06T18:11:29Z
dc.date.available: 2018-07-04T17:06:19Z
dc.date.created: 2016-01-06T18:11:29Z
dc.date.issued: 2015-06
dc.identifier: International Conference on Scientific and Statistical Database Management, 27th, 2015, La Jolla.
dc.identifier: 9781450337090
dc.identifier: http://www.producao.usp.br/handle/BDPI/49414
dc.identifier: http://dx.doi.org/10.1145/2791347.2791352
dc.identifier.uri: http://repositorioslatinoamericanos.uchile.cl/handle/2250/1644716
dc.description.abstract: Although there is a large and growing literature that tackles the unsupervised outlier detection problem, the unsupervised evaluation of outlier detection results is still virtually untouched in the literature. The so-called internal evaluation, based solely on the data and the assessed solutions themselves, is required if one wants to statistically validate (in absolute terms) or just compare (in relative terms) the solutions provided by different algorithms or by different parameterizations of a given algorithm in the absence of labeled data. However, in contrast to unsupervised cluster analysis, where indexes for internal evaluation and validation of clustering solutions have been conceived and shown to be very useful, in the outlier detection domain this problem has been notably overlooked. Here we discuss this problem and provide a solution for the internal evaluation of top-n (binary) outlier detection results. Specifically, we propose an index called IREOS (Internal, Relative Evaluation of Outlier Solutions) that can evaluate and compare different candidate labelings of a collection of multivariate observations in terms of outliers and inliers. We also statistically adjust IREOS for chance and extensively evaluate it in several experiments involving different collections of synthetic and real data sets.
dc.language: eng
dc.publisher: University of California
dc.publisher: Association for Computing Machinery – ACM
dc.publisher: La Jolla
dc.relation: International Conference on Scientific and Statistical Database Management, 27th
dc.rights: Copyright ACM
dc.rights: closedAccess
dc.subject: Outlier detection
dc.subject: unsupervised evaluation
dc.subject: validation
dc.title: On the internal evaluation of unsupervised outlier detection
dc.type: Conference proceedings
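The abstract above describes comparing candidate outlier labelings of the same data set by an internal, relative index. The following toy sketch illustrates that general idea only; it is not the paper's actual IREOS formulation (which is based on classifier margins and adjusted for chance), and the function names and the nearest-neighbour separability proxy are assumptions made here for illustration:

```python
import numpy as np

def separability(X, outlier_mask):
    """Toy per-point separability: distance of each flagged outlier to
    its nearest unflagged (inlier) point, normalized by the median
    nearest-neighbour distance among the inliers."""
    inliers = X[~outlier_mask]
    # pairwise distances among inliers, used as a scale reference
    d_in = np.sqrt(((inliers[:, None] - inliers[None]) ** 2).sum(-1))
    np.fill_diagonal(d_in, np.inf)
    scale = np.median(d_in.min(axis=1))
    # distance of each flagged point to the closest inlier
    d_out = np.sqrt(
        ((X[outlier_mask][:, None] - inliers[None]) ** 2).sum(-1)
    ).min(axis=1)
    return d_out / scale

def internal_index(X, outlier_mask):
    """Average separability of the flagged points: a labeling that
    flags genuinely isolated points scores higher than one that
    flags points buried inside the bulk of the data."""
    return separability(X, outlier_mask).mean()

# Two candidate top-2 labelings of the same data: one flags the two
# far-away points, the other flags two points from the dense cluster.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), [[8.0, 8.0], [9.0, -7.0]]])
good = np.zeros(52, bool); good[[50, 51]] = True
bad = np.zeros(52, bool); bad[[0, 1]] = True
print(internal_index(X, good), internal_index(X, bad))
```

Under this proxy the labeling that flags the isolated points receives the higher index, which is the relative comparison the abstract refers to; the real IREOS instead measures how hard each candidate outlier is to separate with a maximum-margin classifier and adjusts the result for chance.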

