dc.creatorSandoval Alcocer, Juan
dc.creatorBergel, Alexandre
dc.creatorValente, Marco Tulio
dc.date.accessioned2018-11-16T12:13:28Z
dc.date.available2018-11-16T12:13:28Z
dc.date.created2018-11-16T12:13:28Z
dc.date.issued2016
dc.identifierIn: ICPE '16 Proceedings of the 7th ACM/SPEC International Conference on Performance Engineering, pages 37-48. Delft, The Netherlands, March 12-16, 2016
dc.identifier10.1145/2851553.2851571
dc.identifierhttps://repositorio.uchile.cl/handle/2250/152651
dc.description.abstractSource code changes may inadvertently introduce performance regressions. Benchmarking each software version is traditionally employed to identify performance regressions. Although effective, this exhaustive approach is hard to carry out in practice. This paper contrasts source code changes against performance variations. By analyzing 1,288 software versions from 17 open source projects, we identified 10 source code changes leading to a performance variation (improvement or regression). We have produced a cost model to infer whether a software commit introduces a performance variation by analyzing the source code and sampling the execution of a few versions. By profiling the execution of only 17% of the versions, our model is able to identify 83% of the performance regressions greater than 5% and 100% of the regressions greater than 50%.
dc.languageen
dc.publisherACM
dc.rightshttp://creativecommons.org/licenses/by-nc-nd/3.0/cl/
dc.rightsAttribution-NonCommercial-NoDerivs 3.0 Chile
dc.subjectPerformance variation
dc.subjectPerformance analysis
dc.subjectPerformance evolution
dc.titleLearning from source code history to identify performance failures
dc.typeBook chapter