dc.creatorSandoval Alcocer, Juan
dc.creatorBergel, Alexandre
dc.date.accessioned2016-11-18T15:30:37Z
dc.date.accessioned2019-04-26T01:02:09Z
dc.date.available2016-11-18T15:30:37Z
dc.date.available2019-04-26T01:02:09Z
dc.date.created2016-11-18T15:30:37Z
dc.date.issued2016
dc.identifierACM SIGPLAN Notices Volume: 51 Number: 2 Pages: 129-139 Feb 2016
dc.identifier10.1145/2816707.2816718
dc.identifierhttp://repositorio.uchile.cl/handle/2250/141271
dc.identifier.urihttp://repositorioslatinoamericanos.uchile.cl/handle/2250/2445363
dc.description.abstractLittle is known about how software performance evolves across software revisions. The severity of this situation is high since (i) most performance variations seem to happen accidentally and (ii) addressing a performance regression is challenging, especially when functional code is stacked on it. This paper reports an empirical study on the performance evolution of 19 applications, totaling over 19 MLOC. It took 52 days to run our 49 benchmarks. By relating performance variation with source code revisions, we found out that: (i) 1 out of every 3 application revisions introduces a performance variation, (ii) performance variations may be classified into 9 patterns, (iii) the most prominent cause of performance regression involves loops and collections. We carefully describe the patterns we identified, and detail how we addressed the numerous challenges we faced to complete our experiment.
dc.languageen
dc.publisherACM
dc.rightshttp://creativecommons.org/licenses/by-nc-nd/3.0/cl/
dc.rightsAttribution-NonCommercial-NoDerivs 3.0 Chile
dc.sourceACM SIGPLAN Notices
dc.subjectLanguages
dc.subjectMeasurement
dc.subjectPerformance
dc.subjectExperimentation
dc.subjectPerformance variation
dc.subjectPerformance analysis
dc.subjectPerformance evolution
dc.titleTracking down performance variation against source code evolution
dc.typeJournal articles