Fundamentals of Software Testing

Fall 2006

Course Number: CMSC737.

Meeting Times: Tue. Thu. - 9:30AM - 10:45AM (CSIC 2120)

Office Hours: Tue. Thu. - 10:45AM - 12:00PM (4115 A. V. Williams Building)

Catalog Course Description: This course will examine fundamental software testing and related program analysis techniques. In particular, the important phases of testing will be reviewed, emphasizing the significance of each phase when testing different types of software. The course will also include concepts such as test generation, test oracles, test coverage, regression testing, mutation testing, program analysis (e.g., program-flow and data-flow analysis), and test prioritization.

Course Summary: This course will examine fundamental software testing and program analysis techniques. In particular, the important phases of testing will be reviewed, emphasizing the significance of each phase when testing different types of software. Students will learn the state of the art in testing technology for object-oriented, component-based, concurrent, distributed, graphical-user interface, and web software. In addition, closely related concepts such as mutation testing and program analysis (e.g., program-flow and data-flow analysis) will also be studied. Emerging concepts such as test-case prioritization and their impact on testing will be examined. Students will gain hands-on testing/analysis experience via a multi-phase course project. By the end of this course, students should be familiar with the state of the art in software testing and aware of the major open research problems in the field.

The course grade will be determined as follows: 25% midterm exam, 25% final exam, 50% project.

Credits: 3

Prerequisites: Software Engineering (CMSC435) or equivalent.

Status with respect to graduate program: MS qualifying course (Midterm+Final exam), PhD core (Software Engineering).

Syllabus: The following topics will be discussed. The reading lists are expected to evolve as new papers in these areas emerge.

  1. Introduction to software testing (1 week)
    • [Aug. 31, Sep. 5, 7, 14] Contents: The need for testing; testing as an integral part of software engineering; software engineering processes and testing.
    • Slides: 1.pdf, 2.pdf
    • Reading List
      1. Testing: a roadmap, Mary Jean Harrold, Proceedings of the Conference on the Future of Software Engineering, May 2000.
      2. Introduction to special section on software testing, R. Hamlet, Communications of the ACM June 1988, Volume 31 Issue 6.
      3. Testing: principles and practice, Stephen R. Schach, ACM Computing Surveys, (CSUR) March 1996, Volume 28 Issue 1.
      4. Software safety: why, what, and how, Nancy G. Leveson, ACM Computing Surveys (CSUR) June 1986, Volume 18 Issue 2.
      5. Validation, Verification, and Testing of Computer Software, W. Richards Adrion, Martha A. Branstad, John C. Cherniavsky, ACM Computing Surveys (CSUR) June 1982, Volume 14 Issue 2.
    • [Sep. 13] Guest Lecture: Qing Xie, Developing Cost-Effective Techniques for GUI Testing.
  2. The overall testing process (2 weeks)
    • [Sep. 19] Contents: Test case generation; test oracles; test coverage.
    • Slides: 3.pdf.
    • Reading List
      1. The category-partition method for specifying and generating functional tests, T. J. Ostrand, M. J. Balcer, Communications of the ACM June 1988, Volume 31 Issue 6.
    • Tools
      1. Contents: JUnit; Bugzilla (see the JUnit sketch below).
      2. Reading List
        1. Various documents at http://www.junit.org.
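      3. A minimal JUnit sketch (not from the course materials; the Calculator class is invented for illustration), in the JUnit 3.x style current in Fall 2006, where every public void test* method is discovered and run by the JUnit runner:

        import junit.framework.TestCase;

        // A trivial class under test, defined here only to keep the
        // example self-contained; it stands in for real application code.
        class Calculator {
            int add(int a, int b) { return a + b; }
        }

        // Each public void test* method is one test case; the inherited
        // assert* methods act as the test oracle.
        public class CalculatorTest extends TestCase {
            public void testAddPositives() {
                assertEquals(5, new Calculator().add(2, 3));
            }
            public void testAddNegatives() {
                assertEquals(-5, new Calculator().add(-2, -3));
            }
        }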
  3. Test case generation (3 weeks)
    • General test-case generation concepts
      1. Slides: 5.pdf, 7.pdf, 8.pdf.
      2. Contents: Sampling the program's input space; path-testing; branch and predicate testing (a small pair-wise coverage sketch closes this sub-list).
      3. Reading List
        1. [Sep. 21, 28] A test generation strategy for pairwise testing, Kuo-Chung Tai and Yu Lei, IEEE Transactions on Software Engineering, Volume 28, Issue 1, Jan. 2002, pp. 109-111.
        2. [Sep. 28] Predicate-based test generation for computer programs, Kuo-Chung Tai, Proceedings of the 15th International Conference on Software Engineering, 1993, pp. 267-276.
        3. [Sep. 28] A heuristic approach for test case generation, Kai-Hsiung Chang, W. Homer Carlisle, James H. Cross II, David B. Brown, Proceedings of the 19th Annual Conference on Computer Science, 1991, pp. 174-180.
      4. Web Testing: [Sep. 26] Guest Lecture: Cyntrica Eaton
        1. "An Empirical Approach to Testing Web Applications Across Diverse Client Platform Configurations," Cyntrica Eaton and Atif M. Memon, International Journal on Web Engineering and Technology (IJWET), Special Issue on Empirical Studies in Web Engineering, Inderscience Publishers. (Accepted for publication; to appear; a slightly older version is available here.)
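      5. A small pair-wise coverage sketch (an invented example; this checks pair coverage, it is not Tai and Lei's generation algorithm): with three parameters of two choices each, the 8 exhaustive combinations collapse to 4 test frames that still cover every pair of choices across parameters.

        import java.util.*;

        public class PairwiseCheck {
            public static void main(String[] args) {
                // Each row is one test frame: one choice for each of
                // parameters A, B, C.
                String[][] suite = {
                    {"a1", "b1", "c1"},
                    {"a1", "b2", "c2"},
                    {"a2", "b1", "c2"},
                    {"a2", "b2", "c1"},
                };
                // Record every (parameter pair, choice pair) combination
                // exercised by some frame.
                Set<String> covered = new HashSet<String>();
                for (String[] t : suite)
                    for (int i = 0; i < t.length; i++)
                        for (int j = i + 1; j < t.length; j++)
                            covered.add(i + ":" + t[i] + "," + j + ":" + t[j]);
                // 3 parameter pairs x (2 x 2) choice combinations = 12 pairs.
                System.out.println("pairs covered: " + covered.size() + " of 12");
            }
        }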
    • GUI testing
      1. Contents: event-flow model; event-space.
      2. Slides: 6.pdf.
      3. Reading List
        1. [Oct. 5] Hierarchical GUI test case generation using automated planning, Atif M. Memon, Martha E. Pollack, Mary Lou Soffa, IEEE Transactions on Software Engineering, Volume 27, Issue 2, Feb. 2001, pp. 144-155.
        2. [Oct. 5] Using a goal-driven approach to generate test cases for GUIs, Atif M. Memon, Martha E. Pollack, Mary Lou Soffa, Proceedings of the 21st International Conference on Software Engineering, May 1999.
        3. [Oct. 10] Studying the Fault-Detection Effectiveness of GUI Test Cases for Rapidly Evolving Software, Atif M. Memon and Qing Xie, IEEE Transactions on Software Engineering, Volume 31, Issue 10, pp. 884-896, October 2005.
  4. Test oracles (1 week)
    • Contents: The need for name spaces; the specification name space; the implementation name space.
    • Slides: 13.pdf.
    • Reading List
      1. [Oct. 10] Automated test oracles for GUIs, Atif M. Memon, Martha E. Pollack, Mary Lou Soffa, ACM SIGSOFT Software Engineering Notes, Proceedings of the eighth international symposium on Foundations of software engineering for twenty-first century applications, November 2000, Volume 25 Issue 6.
      2. [Oct. 10] Specification-based test oracles for reactive systems, Debra J. Richardson, Stephanie Leif Aha, T. Owen O'Malley, Proceedings of the 14th international conference on Software engineering June 1992.
      3. [Oct. 12, 16] Designing and Comparing Automated Test Oracles for GUI-based Software Applications, Qing Xie and Atif Memon, ACM Transactions on Software Engineering and Methodology, to appear. (A slightly longer but older version is available here.)
  5. Regression testing (2 weeks) [Oct. 16, 24]
    • Contents: Repairing test cases; obsolete test cases.
    • Slides: 15.pdf
    • Reading List
      1. An empirical study of regression test selection techniques, Todd L. Graves, Mary Jean Harrold, Jung-Min Kim, Adam Porter, Gregg Rothermel, ACM Transactions on Software Engineering and Methodology (TOSEM) April 2001, Volume 10 Issue 2.
      2. Regression testing of GUIs, Atif M. Memon, Mary Lou Soffa, September 2003, Proceedings of the 9th European software engineering conference held jointly with 10th ACM SIGSOFT international symposium on Foundations of software engineering.
  6. Data-flow testing [Oct. 26, Nov. 2]
    • Contents: Data definitions; data uses; def-use chains (a def-use sketch follows this topic's reading list).
    • Slides: 12.pdf.
    • Reading List
      1. An applicable family of data flow testing criteria, P. G. Frankl and E. J. Weyuker, IEEE Transactions on Software Engineering, Volume 14, Issue 10, Oct. 1988, pp. 1483-1498.
      2. Interprocedural data flow testing, M. Harrold, M. Soffa, ACM SIGSOFT Software Engineering Notes, Proceedings of the ACM SIGSOFT '89 third symposium on Software testing, analysis, and verification November 1989, Volume 14 Issue 8.
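    • Sketch: a def-use illustration (the method is hypothetical, for discussion only). A "def" assigns a variable, a "use" reads it, and data-flow criteria such as all-uses require tests exercising each def-use pair.

      public class DefUseExample {
          static int classify(int x) {
              int y = 0;          // def d1 of y
              if (x > 0) {
                  y = x * 2;      // def d2 of y (kills d1 on this path); use of x
              }
              return y + 1;       // use of y, reached by d1 (x <= 0) and d2 (x > 0)
          }
          // Covering both def-use pairs (d1, return) and (d2, return) needs
          // at least one test with x <= 0 and one with x > 0, e.g.,
          // classify(-1) and classify(3).
      }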
  7. Test coverage (2 weeks)
    • Contents: Code instrumentation; test prioritization (a probe-insertion sketch follows this topic's reading list).
    • Slides: 9.pdf, 10.pdf, 11.pdf, 14.pdf.
    • Reading List
      1. [Nov. 7, 9] Software unit test coverage and adequacy, Hong Zhu, Patrick A. V. Hall, John H. R. May, ACM Computing Surveys (CSUR) December 1997, Volume 29 Issue 4.
      2. [Nov. 7, 9] The evaluation of program-based software test data adequacy criteria, E. J. Weyuker, Communications of the ACM June 1988 Volume 31 Issue 6.
      3. [Nov. 7, 9] Coverage criteria for GUI testing, Atif M. Memon, Mary Lou Soffa, Martha E. Pollack, ACM SIGSOFT Software Engineering Notes, Proceedings of the 8th European software engineering conference held jointly with 9th ACM SIGSOFT symposium on Foundations of software engineering September 2001, Volume 26 Issue 5.
      4. [Nov. 7, 9] Experiments on the effectiveness of dataflow- and control-flow-based test adequacy criteria, M. Hutchins, H. Foster, T. Goradia, T. Ostrand, Proceedings of the 16th International Conference on Software Engineering (ICSE-16), 1994, pp. 191-200.
      5. [Nov. 14, 16] Call Stack Coverage for Test Suite Reduction, Scott McMaster and Atif Memon, International Conference on Software Maintenance (ICSM 2005), Budapest, Hungary, Sept. 25-30, 2005.
      6. [Nov. 14, 16] Call Stack Coverage for GUI Test-Suite Reduction, Scott McMaster and Atif M. Memon, Proceedings of the 17th IEEE International Symposium on Software Reliability Engineering (ISSRE 2006), Raleigh, NC, USA, Nov. 6-10 2006.
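    • Sketch: the probe-insertion idea behind statement coverage (a generic illustration, not any particular tool's implementation). An instrumenter plants a numbered probe at each statement or branch; after the suite runs, unset probes mark the uncovered code.

      import java.util.BitSet;

      public class CoverageProbe {
          private static final BitSet hits = new BitSet();

          // Calls to hit(id) are inserted mechanically by the instrumenter,
          // one per statement (or branch), each with a unique id.
          public static void hit(int id) { hits.set(id); }

          // Fraction of probes executed by the test suite so far.
          public static double coverage(int totalProbes) {
              return (double) hits.cardinality() / totalProbes;
          }
      }

      // Instrumented form of a two-branch method (probes 0-2):
      //     int abs(int x) {
      //         CoverageProbe.hit(0);
      //         if (x < 0) { CoverageProbe.hit(1); return -x; }
      //         CoverageProbe.hit(2); return x;
      //     }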
  8. MC/DC Coverage [Nov. 13] Presented by Christopher Ackermann [Slides]
  9. Test Scheduling Strategies (2.5 weeks)
  10. Mutation and Fault Seeding
  11. Web Testing [Nov. 30] Presented by Yee Lin Tan [Slides]
  12. Model-based Testing [Dec. 5] Presented by William Schwartz [Slides]
    • Ibrahim K. El-Far and James A. Whittaker, Model-based Software Testing, Encyclopedia of Software Engineering, Wiley, 2001.
    • C. Bertolini, A. G. Farina, P. Fernandes, F. M. Oliveira, "Test case generation using stochastic automata networks: quantitative analysis," Proceedings of the Second International Conference on Software Engineering and Formal Methods (SEFM 2004), pp. 251-260, 28-30 Sept. 2004.
    • J. A. Whittaker and M. G. Thomason, "A Markov chain model for statistical software testing," IEEE Transactions on Software Engineering, vol. 20, no. 10, pp. 812-824, Oct. 1994.
    • P. M. Maurer, "Generating test data with enhanced context-free grammars," IEEE Software, vol. 7, no. 4, pp. 50-55, Jul. 1990.
    • Hyoung Seok Hong, Young Gon Kim, Sung Deok Cha, Doo Hwan Bae, and Hasan Ural, "A test sequence selection method for statecharts," Software Testing, Verification and Reliability, 2000; 10: 203-227.
  13. Analysis of Different Oracles [Dec. 7] Presented by Mudit Agrawal [Slides]
  14. Testing Component-Based Software [Dec. 12] Presented by Sharath Srinivas [Slides]
    • "A Process and Role-Based Taxonomy of Techniques to Make Testable COTS Components," Atif M. Memon, in Testing Commercial-off-the-shelf Components and Systems (S. Beydeda and V. Gruhn, eds.), Springer, pp. 109-140, 2004.
    • "An Integrated Testing Technique for Component-Based Software," by Volker Gruhn.

Course Project

Phase 1

Goal: Black-box test-case generation.

Procedure: Take subject applications from the TerpOffice website; select five methods, each with at least five parameters; create JUnit test cases for these methods using the category-partition method. Reduce the number of test cases using pair-wise testing. Compare the statement coverage of the original and reduced suites.
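A hypothetical sketch of this workflow (the Padder class, its five-parameter method, and the categories below are all invented; actual submissions must use TerpOffice methods):

    import junit.framework.TestCase;

    // Invented stand-in for a selected five-parameter TerpOffice method.
    // Category-partition analysis might yield choices such as:
    //   text: empty | non-empty        width: 0 | positive
    //   fill: space | non-space        rightAlign: true | false
    //   repeat: 0 | positive
    // Each test frame fixes one choice per category; pair-wise reduction
    // then keeps only enough frames so every pair of choices appears
    // in some test.
    class Padder {
        static String pad(String text, int width, char fill,
                          boolean rightAlign, int repeat) {
            StringBuilder s = new StringBuilder(text);
            for (int i = 0; i < repeat; i++)
                while (s.length() < width)
                    if (rightAlign) s.insert(0, fill); else s.append(fill);
            return s.toString();
        }
    }

    public class PadderTest extends TestCase {
        // Frame: non-empty text, positive width, space fill,
        // right-aligned, positive repeat.
        public void testNonEmptyRightAligned() {
            assertEquals("  ab", Padder.pad("ab", 4, ' ', true, 1));
        }
        // Frame: empty text, positive width, non-space fill,
        // left-aligned, positive repeat.
        public void testEmptyLeftAligned() {
            assertEquals("xxx", Padder.pad("", 3, 'x', false, 1));
        }
    }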

Deliverables: Source code of the five methods and all the JUnit test cases; a document describing the categories/choices and constraints; at least two sets of test cases, each sufficient to satisfy the pair-wise testing criterion; and coverage reports. Also include a one-page document describing the difficulties you faced in using the subject applications (downloading, installing, etc.) and how you handled them.

Due on Oct. 5 in class.

Phase 2

Goal: Make each JUnit test fail by seeding artificial faults into the method source code.

Procedure: Examine each JUnit test case and the method source code. Obtain a set of source-code changes (from this paper) that will cause each test case to fail. Insert, in the method, a comment /*FAULT## FAILURE INDUCING CODE */ at line N; a simple string replacement of line N with "FAILURE INDUCING CODE" should then cause the JUnit test case to fail. If a change requires edits to multiple lines, replace ## with an integer, using the same integer for all lines related to one failure. Write a script that performs the string replacement automatically, one fault at a time, to avoid fault interaction. Use the fault classification from this paper.
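To make the convention concrete, here is a hypothetical seeded method (the Account class and the fault are invented; the tag shows ## instantiated with the integer id 1):

    class Account {
        private int balance;

        void deposit(int amount) {
            balance = balance + amount;   /*FAULT1## balance = balance - amount; */
        }

        int getBalance() { return balance; }
    }

With fault 1 OFF, a JUnit assertion such as assertEquals(10, account.getBalance()) passes after deposit(10); with fault 1 ON (the tagged line string-replaced by the code inside its comment), the balance becomes -10 and the test fails.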

Deliverables: The modified source of the methods, the script, and the JUnit test cases. A standalone executable that demonstrates the entire process automatically.

Due on Oct. 19 in class. [Late submission policy – you lose 10% (of the maximum points) per day]

Phase 3

Goal: Run the pair-wise test cases on the fault-seeded code; also run the application (via its GUI) to detect the seeded faults.

Procedure: Execute all your pair-wise test cases on the methods seeded with faults. Report the number of faults you detected and compare with the full suite. For the second part of this phase, compile and run the application (with one fault turned ON at a time) and devise an interaction (a sequence of GUI events) that reveals the fault, i.e., produces an outcome different from that of the original code.
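A minimal sketch of turning one fault ON at a time (an illustration, not the required deliverable; the FaultToggler name, the .faulty output file, and the command-line interface are invented, while the tag format follows the Phase 2 convention):

    import java.io.*;
    import java.util.regex.*;

    public class FaultToggler {
        public static void main(String[] args) throws IOException {
            String file = args[0];                   // e.g., a seeded .java file
            int faultId = Integer.parseInt(args[1]); // which fault to turn ON
            Pattern tag = Pattern.compile(".*/\\*FAULT" + faultId + "##(.*)\\*/.*");

            BufferedReader in = new BufferedReader(new FileReader(file));
            StringBuilder out = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                Matcher m = tag.matcher(line);
                // Replace each tagged line with the failure-inducing code in
                // its comment; leave all other lines (and other faults) alone.
                out.append(m.matches() ? m.group(1).trim() : line).append('\n');
            }
            in.close();

            Writer w = new FileWriter(file + ".faulty");
            w.write(out.toString());
            w.close();
        }
    }

Compiling the .faulty source in place of the original and re-running the JUnit suite then shows which test cases detect that one fault.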

Deliverables: The failure report and the event sequences.

Due on Nov. 2 in class. [Late submission policy – you lose 10% (of the maximum points) per day]