Traceability in Acceptance Testing

DOI: 10.4236/jsea.2013.610A005


Regardless of whether a model-centric or a code-centric development process is adopted, industrial software production ultimately requires the delivery of an executable implementation, and it is generally accepted that the quality of this implementation is of utmost importance. Yet current verification techniques, including software testing, remain problematic. In this paper, we focus on acceptance testing, that is, on validating the actual behavior of the implementation under test against the requirements of its stakeholder(s), a task that should be as objective and automated as possible. Our first goal is to review existing code-based and model-based testing tools in light of what such an objective and automated approach to acceptance testing entails. We contend that the difficulties we identify originate mainly in a lack of traceability between a testable model of the stakeholders' requirements and the test cases used to validate those requirements. We then investigate whether such traceability is addressed in other relevant specification-based approaches.
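The traceability the abstract calls for can be made concrete: each acceptance test is explicitly linked back to the stakeholder requirement it validates, so that untested requirements are detectable mechanically rather than by inspection. The following is a minimal sketch of that idea, not the authors' tooling; all names (`REQUIREMENTS`, `requirement`, `traceability_matrix`) and the toy banking requirements are illustrative assumptions.

```python
# Illustrative sketch of requirement-to-test traceability.
# Each test case is tagged with the ID of the stakeholder requirement
# it validates; a traceability matrix then exposes untested requirements.

REQUIREMENTS = {
    "REQ-1": "A withdrawal must never exceed the current balance",
    "REQ-2": "A deposit increases the balance by exactly the deposited amount",
}

def requirement(req_id):
    """Decorator linking a test case back to a stakeholder requirement."""
    def tag(test):
        test.req_id = req_id
        return test
    return tag

@requirement("REQ-1")
def test_withdraw_over_balance_rejected():
    balance, amount = 50, 100
    assert not (amount <= balance)  # over-balance withdrawal is rejected

@requirement("REQ-2")
def test_deposit_adds_amount():
    balance = 50
    assert balance + 25 == 75

def traceability_matrix(tests):
    """Map every requirement to the tests validating it; a requirement
    left with an empty list is a coverage gap in the acceptance suite."""
    matrix = {rid: [] for rid in REQUIREMENTS}
    for test in tests:
        matrix[test.req_id].append(test.__name__)
    return matrix

matrix = traceability_matrix([test_withdraw_over_balance_rejected,
                              test_deposit_adds_amount])
```

In this toy setup the matrix maps each requirement ID to the names of its validating tests; in a real project the same linkage would be maintained between a testable requirements model and generated or hand-written test cases.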

Share and Cite:

J. Corriveau and W. Shi, "Traceability in Acceptance Testing," Journal of Software Engineering and Applications, Vol. 6, No. 10A, 2013, pp. 36-46. doi: 10.4236/jsea.2013.610A005.

Conflicts of Interest

The authors declare no conflicts of interest.




Copyright © 2020 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.