dc.contributorRestrepo Calle, Felipe
dc.contributorPLaS - Programming Languages and Systems
dc.creatorEchavarría Flórez, Ingrid Sofía
dc.date.accessioned2020-04-22T15:03:27Z
dc.date.available2020-04-22T15:03:27Z
dc.date.created2020-04-22T15:03:27Z
dc.date.issued2020-04-14
dc.identifierhttps://repositorio.unal.edu.co/handle/unal/77441
dc.description.abstractSoftware quality is an important aspect, mainly because of its relation to the future costs associated with maintaining the product. To quantify quality, software quality metrics are used: a set of measures employed to estimate the quality of a software project. Since software quality covers several aspects and must be measured in the same way for all products, standards have been created to guide the evaluation process. As of 2019, the standard in force is the ISO/IEC 25000 family of standards, whose objective is to create a common framework to assess the quality of a software product considering 8 characteristics: functional adequacy, performance, compatibility, usability, reliability, security, maintainability, and portability. The maintainability characteristic has a high impact on the total costs of software projects, consuming between 40% and 80% of the total cost of the life cycle of a software product. Additionally, within maintenance work, developers spend approximately 70% of their time trying to understand the source code, so being able to measure the readability of a code fragment could help estimate the effort required for a maintenance activity. Software readability is defined as the degree of ease with which a person can read and understand a piece of source code written by another person. Despite the efforts and research in the area of software readability, there is still no definitive model to assess the readability level of a source code fragment in real time. It is therefore essential to continue research in the area, but to do so it is necessary to know the studies on software readability published to date, identifying the characteristics and metrics proposed to measure it, the models that allow automatic evaluation, their applications in the software field, and the challenges that future researchers must face. This final master's work presents a synthesis and analysis of software readability metrics. To this end, a systematic literature review of works related to source code readability was carried out, presenting a compilation of the main characteristics and the methods used to measure it. The result of this work aims to serve as a basis for other researchers to propose new readability metrics and, subsequently, to develop strategies integrated into integrated development environments (IDEs) to measure and warn about the readability of source code at development time.
dc.description.abstractLa calidad del software es un aspecto importante, principalmente por su relación con los costos futuros asociados al mantenimiento del producto. Para poder cuantificar la calidad, se utilizan las métricas de calidad de software, las cuales son un conjunto de medidas utilizadas para estimar la calidad de un proyecto de software. Teniendo en cuenta que la calidad del software debe evaluar varios aspectos, y que debe ser medida de la misma forma para todos, se han creado normas que brindan una guía del proceso a realizar. Al año 2019, la norma que se encuentra en vigencia es la familia de normas ISO/IEC 25000, cuyo objetivo es crear un marco de trabajo común para evaluar la calidad de un producto de software teniendo en cuenta 8 características: adecuación funcional, rendimiento, compatibilidad, usabilidad, fiabilidad, seguridad, mantenibilidad y portabilidad. La característica de mantenibilidad, tiene un alto impacto sobre los costos totales de los proyectos de software, consumiendo entre 40% y 80% del costo total del ciclo de vida de un producto de software. Adicionalmente, dentro de la labor de mantenimiento, los desarrolladores gastan aproximadamente 70% del tiempo tratando de comprender el código fuente, por lo que poder medir la legibilidad de un fragmento de código, podría ayudar a estimar el esfuerzo requerido para una actividad de mantenimiento. La legibilidad (readability en inglés) del software, es definida como el grado de facilidad con la que una persona puede leer y comprender un fragmento de código fuente, escrito por otra persona. A pesar de los esfuerzos e investigaciones en el área de legibilidad del software, aún no se tiene un modelo definitivo que permita evaluar el nivel de legibilidad de un fragmento de código fuente en tiempo real. Por ello, es indispensable dar continuidad a las investigaciones en el área, pero para ello, es necesario conocer los estudios que existen hasta la fecha sobre la legibilidad del software, identificando las características y métricas propuestas para medirla, los modelos que permiten la evaluación automática, las aplicaciones que tienen en el área del software y los retos que deben afrontar los futuros investigadores. Este trabajo final de maestría presenta una síntesis y análisis de las métricas de legibilidad de software. Para esto, se realizó una revisión sistemática de la literatura de trabajos relacionados con la legibilidad del código fuente, presentando una recopilación de las características principales y los métodos utilizados para su medición. El resultado de este trabajo pretende servir como base para que otros investigadores puedan proponer nuevas métricas de legibilidad, y posteriormente, puedan desarrollar estrategias que sean integradas a los entornos integrados de desarrollo (IDEs) para medir y alertar sobre la legibilidad del código fuente en tiempo de desarrollo.
dc.languagespa
dc.publisherBogotá - Ingeniería - Maestría en Ingeniería - Ingeniería de Sistemas y Computación
dc.publisherUniversidad Nacional de Colombia - Sede Bogotá
dc.relationJ. L. Martínez Flores, “Métricas de software en lenguajes de cuarta generación,” Universidad Autónoma de Nuevo León, 1994.
dc.relationJ. M. Ruiz, C. D. Pacifico, and M. M. Pérez, “Clasificación y Evaluación de Métricas de Mantenibilidad Aplicables a Productos de Software Libre,” n.d. [Online]. Available: http://sedici.unlp.edu.ar/bitstream/handle/10915/61928/Documento_completo.pdf-PDFA.pdf?s.
dc.relationE. Irrazábal and J. Garzás, “Revista española de innovación, calidad e ingeniería del software,” Rev. española innovación, Calid. e Ing. del Softw., vol. 6, no. 3, 2010.
dc.relationD. Alawad, M. Panta, M. Zibran, and R. Islam, “An Empirical Study of the Relationships between Code Readability and Software Complexity,” 2018.
dc.relationJ. Dorn, “A General Software Readability Model,” 2012.
dc.relationS. Scalabrino, G. Bavota, C. Vendome, M. Linares-Vasquez, D. Poshyvanyk, and R. Oliveto, “Automatically assessing code understandability: How far are we?,” in 2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE), 2017, pp. 417–427.
dc.relationRed de Ingeniería de Software de Latinoamérica, Ó. Pedreira, and C. M. Fernández, Revista latinoamericana de ingeniería de software, vol. 3, no. 3, 2015.
dc.relationI. Barrio, “Legibilidad y salud: Los métodos de medición de la legibilidad y su aplicación al diseño de folletos educativos sobre salud,” Universidad Autónoma de Madrid, 2007.
dc.relationD. Posnett, A. Hindle, and P. Devanbu, “A simpler model of software readability,” in Proceeding of the 8th working conference on Mining software repositories - MSR ’11, 2011, p. 73.
dc.relationU. A. Mannan, I. Ahmed, and A. Sarma, “Towards understanding code readability and its impact on design quality,” in Proceedings of the 4th ACM SIGSOFT International Workshop on NLP for Software Engineering - NL4SE 2018, 2018, pp. 18–21.
dc.relationD. J. F. Novais, M. J. V. Pereira, and P. R. Henriques, “Program analysis for Clustering Programmers’ Profile,” 2017, pp. 701–705.
dc.relationB. Kitchenham, O. Pearl Brereton, D. Budgen, M. Turner, J. Bailey, and S. Linkman, “Systematic literature reviews in software engineering – A systematic literature review,” Inf. Softw. Technol., vol. 51, pp. 7–15, 2008.
dc.relationM. Akour and B. Falah, “Application domain and programming language readability yardsticks,” in 2016 7th International Conference on Computer Science and Information Technology (CSIT), 2016, pp. 1–6.
dc.relationB. Pereira, A. Farid, H. Quintero, I. Granadillo, and J. Bustamante, “Métricas de Calidad de Software.”
dc.relationG. Mena Mendoza, “ISO 9126-3: Métricas Internas de la Calidad del Producto de Software,” 2006. [Online]. Available: http://mena.com.mx/gonzalo/maestria/calidad/presenta/iso_9126-3/. [Accessed: 01-Apr-2018].
dc.relationP. Hegedűs, I. Kádár, R. Ferenc, and T. Gyimóthy, “Empirical evaluation of software maintainability based on a manually validated refactoring dataset,” Inf. Softw. Technol., vol. 95, pp. 313–327, Mar. 2018.
dc.relationJ. Borstler and B. Paech, “The Role of Method Chains and Comments in Software Readability and Comprehension—An Experiment,” IEEE Trans. Softw. Eng., vol. 42, no. 9, pp. 886–898, Sep. 2016.
dc.relationF. Scalone, “Estudio comparativo de los modelos y estándares de calidad del software,” Universidad Tecnológica Nacional, Facultad Regional Buenos Aires, 2006.
dc.relationS. Butler, M. Wermelinger, Y. Yu, and H. Sharp, “Exploring the Influence of Identifier Names on Code Quality: An Empirical Study,” in 2010 14th European Conference on Software Maintenance and Reengineering, 2010, pp. 156–165.
dc.relationG. Rincón, M. Pérez, and S. Hernández, “Modelo de calidad (MOSCA+) para evaluar software de simulación de eventos discretos,” 2003.
dc.relationB. Kitchenham, “Procedures for performing systematic reviews,” Br. J. Manag., vol. 14, no. 0, pp. 207–222, 2004.
dc.relationF. García, “Revisión sistemática de literatura para artículos,” Tecnológico de Monterrey y Universidad de Salamanca, Monterrey, 2017.
dc.relationM. Allamanis, E. T. Barr, C. Bird, and C. Sutton, “Suggesting accurate method and class names,” in Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering - ESEC/FSE 2015, 2015, pp. 38–49.
dc.relationH. Aman, S. Amasaki, T. Sasaki, and M. Kawahara, “Lines of Comments as a Noteworthy Metric for Analyzing Fault-Proneness in Methods,” Okayama, Japan, 2015.
dc.relationH. M. Chen, W. H. Chen, and C. C. Lee, “An automated assessment system for analysis of coding convention violations in Java programming assignments,” J. Inf. Sci. Eng., vol. 34, no. 5, pp. 1203–1221, 2018.
dc.relationS. Choi, S. Kim, J.-H. Lee, J. Kim, and J.-Y. Choi, “Measuring the Extent of Source Code Readability Using Regression Analysis,” Springer, Cham, 2018, pp. 410–421.
dc.relationR. Coleman, “Aesthetics Versus Readability of Source Code,” Int. J. Adv. Comput. Sci. Appl., vol. 9, no. 9, pp. 12–18, 2018.
dc.relationA. De Renzis, M. Garriga, A. Flores, A. Cechich, C. Mateos, and A. Zunino, “A domain independent readability metric for web service descriptions,” Comput. Stand. Interfaces, vol. 50, pp. 124–141, Feb. 2017.
dc.relationR. M. dos Santos and M. A. Gerosa, “Impacts of coding practices on readability,” in Proceedings of the 26th Conference on Program Comprehension - ICPC ’18, 2018, pp. 277–285.
dc.relationS. Fakhoury, Y. Ma, V. Arnaoudova, and O. Adesope, “The effect of poor source code lexicon and readability on developers’ cognitive load,” in Proceedings of the 26th Conference on Program Comprehension - ICPC ’18, 2018, pp. 286–296.
dc.relationL. Frunzio, B. Lin, M. Lanza, and G. Bavota, “RETICULA: Real-time code quality assessment,” in 2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER), 2018, pp. 542–546.
dc.relationY. Liu, X. Sun, and Y. Duan, “Analyzing program readability based on WordNet,” in Proceedings of the 19th International Conference on Evaluation and Assessment in Software Engineering - EASE ’15, 2015, pp. 1–2.
dc.relationQ. Mi, J. Keung, X. Mei, Y. Xiao, and W. K. Chan, “A Gamification Technique for Motivating Students to Learn Code Readability in Software Engineering,” in 2018 International Symposium on Educational Technology (ISET), 2018, pp. 250–254.
dc.relationQ. Mi, J. Keung, Y. Xiao, S. Mensah, and Y. Gao, “Improving code readability classification using convolutional neural networks,” Inf. Softw. Technol., vol. 104, pp. 60–71, Dec. 2018.
dc.relationQ. Mi, J. Keung, Y. Xiao, S. Mensah, and X. Mei, “An Inception Architecture-Based Model for Improving Code Readability Classification,” in Proceedings of the 22nd International Conference on Evaluation and Assessment in Software Engineering 2018 - EASE’18, 2018, pp. 139–144.
dc.relationQ. Mi, J. Keung, and Y. Yu, “Measuring the Stylistic Inconsistency in Software Projects using Hierarchical Agglomerative Clustering,” in Proceedings of the The 12th International Conference on Predictive Models and Data Analytics in Software Engineering - PROMISE 2016, 2016, pp. 1–10.
dc.relationA. Pahal and R. S. Chillar, “Code Readability: A Review of Metrics for Software Quality,” Int. J. Comput. Trends Technol., vol. 46, no. 1, 2017.
dc.relationS. Scalabrino, M. Linares-Vásquez, R. Oliveto, and D. Poshyvanyk, “A comprehensive model for code readability,” J. Softw. Evol. Process, vol. 30, no. 6, p. e1958, Jun. 2018.
dc.relationT. Sedano, “Code Readability Testing, an Empirical Study,” in 2016 IEEE 29th International Conference on Software Engineering Education and Training (CSEET), 2016, pp. 111–117.
dc.relationA. Wulff-Jensen, K. Ruder, E. Triantafyllou, and L. E. Bruni, “Gaze Strategies Can Reveal the Impact of Source Code Features on the Cognitive Load of Novice Programmers,” 2019.
dc.relationW. Xu, D. Xu, and L. Deng, “Measurement of Source Code Readability Using Word Concreteness and Memory Retention of Variable Names,” in 2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC), 2017, pp. 33–38.
dc.relationI. B. Sampaio and L. Barbosa, “Software readability practices and the importance of their teaching,” in 2016 7th International Conference on Information and Communication Systems (ICICS), 2016, pp. 304–309.
dc.rightsAtribución-NoComercial 4.0 Internacional
dc.rightsAcceso abierto
dc.rightshttp://creativecommons.org/licenses/by-nc/4.0/
dc.rightsinfo:eu-repo/semantics/openAccess
dc.rightsDerechos reservados - Universidad Nacional de Colombia
dc.titleMétricas de legibilidad del software: una revisión sistemática de literatura
dc.typeOtro

