Computación y Sistemas

On-line version ISSN 2007-9737, Print version ISSN 1405-5546

Comp. y Sist. vol. 19, no. 3, Ciudad de México, Jul./Sep. 2015

 

Articles

 

All Uses and Statement Coverage: A Controlled Experiment

 

Diego Vallespir1, Silvana Moreno1, Carmen Bogado1, Juliana Herbert2

 

1 Universidad de la República, School of Engineering, Montevideo, Uruguay. dvallesp@fing.edu.uy, smoreno@fing.edu.uy, cmbogado@gmail.com

2 Herbert Consulting, Porto Alegre, Brazil. juliana@herbertconsulting.com

Corresponding author: Diego Vallespir.

 

Article received on 06/06/2014.
Accepted on 05/06/2015.

 

Abstract

This article presents a controlled experiment that compares the behavior of the testing techniques Statement Coverage and All Uses. The experiment uses a typical one-factor design with two alternatives. A total of 14 subjects tested a single program. The results provide enough statistical evidence to conclude that executing All Uses costs more than executing Statement Coverage, a result we expected to find. However, no statistically significant difference was found in the effectiveness of the two techniques.

Keywords: Empirical software engineering, testing techniques, test effectiveness, test cost.

 


 

