dc.creator: Trolle, Thomas
dc.creator: Metushi, Imir G.
dc.creator: Greenbaum, Jason A.
dc.creator: Kim, Yohan
dc.creator: Sidney, John
dc.creator: Lund, Ole
dc.creator: Sette, Alessandro
dc.creator: Peters, Bjoern
dc.creator: Nielsen, Morten
dc.date.accessioned: 2018-03-07T19:41:40Z
dc.date.available: 2018-03-07T19:41:40Z
dc.date.created: 2018-03-07T19:41:40Z
dc.date.issued: 2015-07
dc.identifier: Trolle, Thomas; Metushi, Imir G.; Greenbaum, Jason A.; Kim, Yohan; Sidney, John; et al.; Automated benchmarking of peptide-MHC class I binding predictions; Oxford University Press; Bioinformatics (Oxford, England); 31; 13; 7-2015; 2174-2181
dc.identifier: 1367-4803
dc.identifier: http://hdl.handle.net/11336/38180
dc.identifier: CONICET Digital
dc.identifier: CONICET
dc.description.abstract: Motivation: Numerous in silico methods predicting peptide binding to major histocompatibility complex (MHC) class I molecules have been developed over the last decades. However, the multitude of available prediction tools makes it non-trivial for the end-user to select which tool to use for a given task. To provide a solid basis on which to compare different prediction tools, we here describe a framework for the automated benchmarking of peptide-MHC class I binding prediction tools. The framework runs weekly benchmarks on data newly entered into the Immune Epitope Database (IEDB), giving the public access to frequent, up-to-date performance evaluations of all participating tools. To overcome potential selection bias in the data included in the IEDB, a strategy was implemented that suggests a set of peptides for which different prediction methods give divergent predictions as to their binding capability. Upon experimental binding validation, these peptides are entered into the benchmark study. Results: The benchmark has run for 15 weeks and includes evaluations of 44 datasets covering 17 MHC alleles and more than 4000 peptide-MHC binding measurements. Inspection of the results allows the end-user to make an educated selection among the participating tools. Of the four participating servers, NetMHCpan performed best, followed by ANN, SMM and finally ARB. Availability and implementation: Up-to-date performance evaluations of each server can be found online at http://tools.iedb.org/auto-bench/mhci/weekly. All prediction tool developers are invited to participate in the benchmark. Sign-up instructions are available at http://tools.iedb.org/auto-bench/mhci/join.
dc.language: eng
dc.publisher: Oxford University Press
dc.relation: info:eu-repo/semantics/altIdentifier/doi/http://dx.doi.org/10.1093/bioinformatics/btv123
dc.relation: info:eu-repo/semantics/altIdentifier/url/https://academic.oup.com/bioinformatics/article/31/13/2174/196331
dc.rights: https://creativecommons.org/licenses/by-nc-sa/2.5/ar/
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: MHC
dc.subject: Benchmark
dc.title: Automated benchmarking of peptide-MHC class I binding predictions
dc.type: info:eu-repo/semantics/article
dc.type: info:ar-repo/semantics/artículo
dc.type: info:eu-repo/semantics/publishedVersion