dc.creator: Delamaro, Márcio Eduardo
dc.creator: Offutt, Jeff
dc.date.accessioned: 2015-03-02T14:16:43Z
dc.date.accessioned: 2018-07-04T16:52:25Z
dc.date.available: 2015-03-02T14:16:43Z
dc.date.available: 2018-07-04T16:52:25Z
dc.date.created: 2015-03-02T14:16:43Z
dc.date.issued: 2014-03-31
dc.identifier: IEEE International Conference on Software Testing, Verification, and Validation Workshops, 7, 2014, Cleveland, Ohio.
dc.identifier: 9780769551944
dc.identifier: http://www.producao.usp.br/handle/BDPI/48429
dc.identifier: http://dx.doi.org/10.1109/ICSTW.2014.22
dc.identifier.uri: http://repositorioslatinoamericanos.uchile.cl/handle/2250/1641532
dc.description.abstract: Mutation testing is widely used in experiments. Some papers experiment with mutation directly, while others use it to introduce faults to measure the effectiveness of tests created by other methods. There is some random variation in the mutation score depending on the specific test values used. When generating tests to use in experiments, a common, although not universal, practice is to generate multiple sets of tests that satisfy the same criterion or follow the same procedure, and then to compute their average performance. Averaging over multiple test sets is thought to reduce the variation in the mutation score. This practice is extremely expensive when tests are generated by hand (as is common) and as the number of programs increases (a current positive trend in software engineering experimentation). The research reported in this short paper asks a simple and direct question: do we need to generate multiple sets of test cases? That is, how do different test sets influence the cost and effectiveness results? In a controlled experiment, we generated 10 different test sets, each adequate for the Statement Deletion (SSDL) mutation operator, for 39 small programs and functions, and then evaluated how they differ in terms of cost and effectiveness. We found that averaging over multiple programs was effective in reducing the variance in the mutation scores introduced by specific tests.
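To make the averaging practice the abstract describes concrete, here is a minimal illustrative sketch in Python. It is not the authors' experimental tooling, and the mutant counts and kill data are invented; it simply computes the mutation score of each adequate test set and then the average and spread across the sets.

from statistics import mean, pstdev

# Hypothetical kill data: killed[i] is the set of mutant IDs killed by
# adequate test set i. A real experiment would obtain this from a
# mutation tool; these values are invented for illustration.
killed = [
    {1, 2, 3, 5},
    {1, 2, 3, 4, 5},
    {2, 3, 5},
]
total_mutants = 6  # assumed number of non-equivalent mutants for the program

# Mutation score of a test set: fraction of non-equivalent mutants it kills.
scores = [len(k) / total_mutants for k in killed]

print("per-set mutation scores:", [round(s, 3) for s in scores])
print("average over test sets :", round(mean(scores), 3))
print("population std. dev.   :", round(pstdev(scores), 3))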
dc.language: eng
dc.publisher: IEEE Computer Society
dc.publisher: Cleveland, Ohio
dc.relation: IEEE International Conference on Software Testing, Verification, and Validation Workshops, 7
dc.rights: Copyright IEEE
dc.rights: openAccess
dc.subject: Software testing
dc.subject: Mutation testing
dc.subject: Test set selection
dc.title: Assessing the influence of multiple test case selection on mutation experiments
dc.type: Conference proceedings