dc.creator: Murillo Morera, Juan
dc.creator: Quesada López, Christian Ulises
dc.creator: Castro Herrera, Carlos
dc.creator: Jenkins Coronas, Marcelo
dc.date.accessioned: 2019-09-06T14:55:34Z
dc.date.accessioned: 2022-10-20T02:09:12Z
dc.date.available: 2019-09-06T14:55:34Z
dc.date.available: 2022-10-20T02:09:12Z
dc.date.created: 2019-09-06T14:55:34Z
dc.date.issued: 2017
dc.identifier: https://jserd.springeropen.com/articles/10.1186/s40411-017-0037-x
dc.identifier: 2195-1721
dc.identifier: https://hdl.handle.net/10669/79015
dc.identifier: 10.1186/s40411-017-0037-x
dc.identifier: 834-B5-A18
dc.identifier.uri: https://repositorioslatinoamericanos.uchile.cl/handle/2250/4545321
dc.description.abstract: Background: Several prediction models have been proposed in the literature using different techniques and obtaining different results in different contexts. The need for accurate effort predictions for projects is one of the most critical and complex issues in the software industry. Automatically selecting and combining techniques in alternative ways could improve the overall accuracy of prediction models. Objectives: In this study, we validate an automated genetic framework and then conduct a sensitivity analysis across different genetic configurations. We then compare the framework against a random-guessing baseline and an exhaustive framework. Lastly, we investigate the performance of the best learning schemes. Methods: Our search space consists of six hundred learning schemes, combining eight data preprocessors, five attribute selectors, and fifteen modeling techniques. The genetic framework selects the best learning schemes automatically through elitism. In this context, the best learning scheme is the combination of data preprocessing + attribute selection + learning algorithm with the highest possible correlation coefficient. The selected learning schemes are applied to eight datasets extracted from the ISBSG R12 dataset. Results: The genetic framework performs as well as an exhaustive framework. Analysis of the standardized accuracy (SA) measure revealed that all of the best learning schemes selected by the genetic framework outperform the random-guessing baseline by 45–80%. The sensitivity analysis confirms the stability across different genetic configurations. Conclusions: The genetic framework is stable, performs better than random guessing, and is as good as an exhaustive framework. Our results confirm previous findings in the field: simple regression techniques with transformations can perform as well as nonlinear techniques, and ensembles of learning machines such as SMO, M5P, or M5R can optimize effort predictions.
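Note: the following is a minimal, illustrative sketch (not the authors' code) of the idea described in the abstract: a "learning scheme" assembled from a data preprocessor, an attribute selector, and a learning algorithm, scored by its correlation coefficient and compared against a random-guessing baseline via standardized accuracy (SA). The toy scheme space, synthetic data, and scikit-learn components are assumptions made for illustration; the paper's framework searches 600 schemes (8 preprocessors x 5 attribute selectors x 15 learners) over ISBSG R12 data with a genetic algorithm and elitism, whereas this sketch simply sweeps a tiny space exhaustively.

# Illustrative sketch only; component names and data are assumptions, not the paper's setup.
import numpy as np
from itertools import product
from sklearn.datasets import make_regression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, FunctionTransformer
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict

# Synthetic stand-in for an effort dataset (the paper uses ISBSG R12 subsets).
X, y = make_regression(n_samples=200, n_features=10, noise=20.0, random_state=1)
y = np.abs(y)  # effort values are non-negative

preprocessors = {
    "none": FunctionTransformer(),   # identity, i.e. no preprocessing
    "standardize": StandardScaler(),
}
selectors = {
    "all": SelectKBest(f_regression, k="all"),
    "top5": SelectKBest(f_regression, k=5),
}
learners = {
    "linear": LinearRegression(),
    "svr": SVR(),                    # support vector regression (SMO-style learner)
}

def standardized_accuracy(y_true, y_pred):
    """SA = 100 * (1 - MAR / MAR_p0), with MAR_p0 (random guessing) approximated
    here by the mean absolute difference between distinct pairs of actual efforts."""
    mar = np.mean(np.abs(y_true - y_pred))
    diffs = np.abs(y_true[:, None] - y_true[None, :])
    mar_p0 = diffs[~np.eye(len(y_true), dtype=bool)].mean()
    return 100.0 * (1.0 - mar / mar_p0)

results = []
for (p, pre), (s, sel), (l, est) in product(
        preprocessors.items(), selectors.items(), learners.items()):
    scheme = Pipeline([("pre", pre), ("sel", sel), ("learn", est)])
    y_hat = cross_val_predict(scheme, X, y, cv=5)
    r = np.corrcoef(y, y_hat)[0, 1]  # correlation coefficient, the fitness criterion
    results.append((r, standardized_accuracy(y, y_hat), f"{p}+{s}+{l}"))

# Keep the scheme with the highest correlation coefficient (the role elitism
# plays across generations in the genetic framework).
best = max(results, key=lambda t: t[0])
print(f"best scheme: {best[2]}  r={best[0]:.3f}  SA={best[1]:.1f}%")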
dc.language: en_US
dc.rights: http://creativecommons.org/licenses/by/4.0/
dc.rights: Attribution 4.0 International
dc.source: Journal of Software Engineering Research and Development, vol. 5(4), pp. 1-33
dc.subject: Software effort estimation
dc.subject: Machine learning
dc.subject: Effort prediction model
dc.subject: Genetic approach
dc.subject: Learning schemes
dc.subject: Function points
dc.subject: ISBSG dataset
dc.subject: Empirical study
dc.title: A genetic algorithm based framework for software effort prediction
dc.type: scientific article