dc.creator | Trolle, Thomas | |
dc.creator | Metushi, Imir G. | |
dc.creator | Greenbaum, Jason A. | |
dc.creator | Kim, Yohan | |
dc.creator | Sidney, John | |
dc.creator | Lund, Ole | |
dc.creator | Sette, Alessandro | |
dc.creator | Peters, Bjoern | |
dc.creator | Nielsen, Morten | |
dc.date.accessioned | 2018-03-07T19:41:40Z | |
dc.date.available | 2018-03-07T19:41:40Z | |
dc.date.created | 2018-03-07T19:41:40Z | |
dc.date.issued | 2015-07 | |
dc.identifier | Trolle, Thomas; Metushi, Imir G.; Greenbaum, Jason A.; Kim, Yohan; Sidney, John; et al.; Automated benchmarking of peptide-MHC class I binding predictions; Oxford University Press; Bioinformatics (Oxford, England); 31; 13; 7-2015; 2174-2181 | |
dc.identifier | 1367-4803 | |
dc.identifier | http://hdl.handle.net/11336/38180 | |
dc.identifier | CONICET Digital | |
dc.identifier | CONICET | |
dc.description.abstract | Motivation: Numerous in silico methods predicting peptide binding to major histocompatibility complex (MHC) class I molecules have been developed over the last decades. However, the multitude of available prediction tools makes it non-trivial for the end-user to select which tool to use for a given task. To provide a solid basis on which to compare different prediction tools, we here describe a framework for the automated benchmarking of peptide-MHC class I binding prediction tools. The framework runs weekly benchmarks on data that are newly entered into the Immune Epitope Database (IEDB), giving the public access to frequent, up-to-date performance evaluations of all participating tools. To overcome potential selection bias in the data included in the IEDB, a strategy was implemented that suggests a set of peptides for which different prediction methods give divergent predictions as to their binding capability. Upon experimental binding validation, these peptides entered the benchmark study. Results: The benchmark has run for 15 weeks and includes evaluation of 44 datasets covering 17 MHC alleles and more than 4000 peptide-MHC binding measurements. Inspection of the results allows the end-user to make educated selections between participating tools. Of the four participating servers, NetMHCpan performed the best, followed by ANN, SMM and finally ARB. Availability and implementation: Up-to-date performance evaluations of each server can be found online at http://tools.iedb.org/auto-bench/mhci/weekly. All prediction tool developers are invited to participate in the benchmark. Sign-up instructions are available at http://tools.iedb.org/auto-bench/mhci/join. | |
dc.language | eng | |
dc.publisher | Oxford University Press | |
dc.relation | info:eu-repo/semantics/altIdentifier/doi/http://dx.doi.org/10.1093/bioinformatics/btv123 | |
dc.relation | info:eu-repo/semantics/altIdentifier/url/https://academic.oup.com/bioinformatics/article/31/13/2174/196331 | |
dc.rights | https://creativecommons.org/licenses/by-nc-sa/2.5/ar/ | |
dc.rights | info:eu-repo/semantics/openAccess | |
dc.subject | MHC | |
dc.subject | Benchmark | |
dc.title | Automated benchmarking of peptide-MHC class I binding predictions | |
dc.type | info:eu-repo/semantics/article | |
dc.type | info:ar-repo/semantics/artículo | |
dc.type | info:eu-repo/semantics/publishedVersion | |
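
The abstract above describes a strategy for countering selection bias in the IEDB data: suggest peptides on which the participating prediction tools disagree most, validate their binding experimentally, and feed the results back into the benchmark. The following is a minimal Python sketch of that idea under stated assumptions; the data layout, function names, and disagreement measure (spread of per-tool normalized rank positions) are illustrative choices, not the authors' or the IEDB's actual implementation.

# Hypothetical sketch: rank candidate peptides by how much several
# prediction tools disagree on their relative binding strength.
from statistics import pstdev


def rank_positions(scores: dict[str, float]) -> dict[str, float]:
    """Map each peptide to a normalized rank (0 = strongest predicted binder)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    n = max(len(ordered) - 1, 1)
    return {pep: i / n for i, pep in enumerate(ordered)}


def suggest_divergent_peptides(
    predictions: dict[str, dict[str, float]], top_n: int = 10
) -> list[str]:
    """Suggest the peptides whose per-tool rank positions diverge most.

    `predictions` maps tool name -> {peptide: binding score}; a higher
    score is assumed here to mean stronger predicted binding.
    """
    per_tool_ranks = {tool: rank_positions(s) for tool, s in predictions.items()}
    shared = set.intersection(*(set(s) for s in predictions.values()))
    disagreement = {
        pep: pstdev(ranks[pep] for ranks in per_tool_ranks.values())
        for pep in shared
    }
    return sorted(disagreement, key=disagreement.get, reverse=True)[:top_n]


if __name__ == "__main__":
    # Toy scores for three hypothetical tools over four well-known epitopes.
    preds = {
        "toolA": {"SIINFEKL": 0.9, "LLFGYPVYV": 0.2, "GILGFVFTL": 0.8, "NLVPMVATV": 0.1},
        "toolB": {"SIINFEKL": 0.1, "LLFGYPVYV": 0.3, "GILGFVFTL": 0.7, "NLVPMVATV": 0.2},
        "toolC": {"SIINFEKL": 0.8, "LLFGYPVYV": 0.9, "GILGFVFTL": 0.6, "NLVPMVATV": 0.3},
    }
    print(suggest_divergent_peptides(preds, top_n=2))

Peptides scored near the top by some tools and near the bottom by others receive the highest disagreement value, so experimentally validating them is maximally informative for separating the tools' performance, which is the rationale the abstract gives for this selection step.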