Fundamentals of Software Testing
Fall 2006
Course Number: CMSC737.
Meeting Times: Tue. Thu. - 9:30AM - 10:45AM (CSIC 2120)
Office Hours: Tue. Thu. - 10:45AM - 12:00PM (4115 A. V. Williams
Building)
Catalog Course Description: This course will examine fundamental
software testing and related program analysis techniques. In particular, the
important phases of testing will be reviewed, emphasizing the significance of
each phase when testing different types of software. The course will also
include concepts such as test generation, test oracles, test coverage,
regression testing, mutation testing, program analysis (e.g., program-flow and
data-flow analysis), and test prioritization.
Course Summary: This course will examine fundamental software testing
and program analysis techniques. In particular, the important phases of testing
will be reviewed, emphasizing the significance of each phase when testing
different types of software. Students will learn the state of the art in testing
technology for object-oriented, component-based, concurrent, distributed,
graphical-user interface, and web software. In addition, closely related
concepts such as mutation testing and program analysis (e.g., program-flow and
data-flow analysis) will also be studied. Emerging concepts such as test-case
prioritization and their impact on testing will be examined. Students will gain
hands-on testing/analysis experience via a multi-phase course project. By the
end of this course, students should be familiar with the state-of-the-art in
software testing. Students should also be aware of the major open research
problems in testing.
The course grade will be determined as follows: 25% mid-term exam, 25%
final exam, 50% project.
Credits: 3
Prerequisites: CMSC435 (Software Engineering) or equivalent.
Status with respect to graduate program: MS qualifying course
(Midterm+Final exam), PhD core (Software Engineering).
Syllabus: The following topics will be discussed. The reading lists are expected to evolve as new papers in
these areas emerge.
- Introduction to software
testing (1 week)
- [Aug. 31, Sep. 5, 7, 14]
Contents: The need for testing; testing as an integral part of
software engineering; software engineering processes and testing.
- Slides: 1.pdf, 2.pdf
- Reading List
- Testing: a roadmap,
Mary Jean Harrold, Proceedings of the Conference on the Future of
Software Engineering, May 2000.
- Introduction to
special section on software testing, R. Hamlet, Communications of the
ACM June 1988, Volume 31 Issue 6.
- Testing: principles
and practice, Stephen R. Schach, ACM Computing Surveys (CSUR), March
1996, Volume 28 Issue 1.
- Software safety: why,
what, and how, Nancy G. Leveson, ACM Computing Surveys (CSUR) June 1986,
Volume 18 Issue 2.
- Validation,
Verification, and Testing of Computer Software, W. Richards Adrion,
Martha A. Branstad, John C. Cherniavsky, ACM Computing Surveys (CSUR)
June 1982, Volume 14 Issue 2.
- [Sep. 13] Guest Lecture: Qing Xie, Developing Cost-Effective
Techniques for GUI Testing.
- The overall testing process
(2 weeks)
- [Sep. 19] Contents:
Test case generation; test oracles; test coverage.
- Slides: 3.pdf.
- Reading List
- The
category-partition method for specifying and generating functional
tests, T. J. Ostrand, M. J. Balcer, Communications of the ACM June 1988,
Volume 31 Issue 6.
- Tools
- Contents:
JUnit; Bugzilla.
- Reading List
- Various documents at http://www.junit.org.
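JUnit itself (available from junit.org) supplies the test runner and assertion library used throughout the course project. As a minimal sketch of the xUnit pattern it automates, assuming a hypothetical SimpleStack class written inline for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical class under test, inlined so the example is self-contained.
class SimpleStack {
    private final List<Integer> items = new ArrayList<>();
    void push(int x) { items.add(x); }
    int pop() {
        if (items.isEmpty()) throw new IllegalStateException("empty stack");
        return items.remove(items.size() - 1);
    }
    boolean isEmpty() { return items.isEmpty(); }
}

public class StackTest {
    // Each method checks one behavior, as a JUnit @Test method would.
    static boolean testPushThenPop() {
        SimpleStack s = new SimpleStack();
        s.push(42);
        return s.pop() == 42 && s.isEmpty();
    }
    static boolean testPopOnEmptyFails() {
        try {
            new SimpleStack().pop();
            return false;              // expected an exception but got none
        } catch (IllegalStateException e) {
            return true;               // the oracle: pop on empty must fail
        }
    }
    public static void main(String[] args) {
        System.out.println("testPushThenPop: " + (testPushThenPop() ? "PASS" : "FAIL"));
        System.out.println("testPopOnEmptyFails: " + (testPopOnEmptyFails() ? "PASS" : "FAIL"));
    }
}
```

In real JUnit these methods would be annotated @Test (or named testXxx in JUnit 3), and the runner would collect, execute, and report them automatically.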
- Test case generation (3
weeks)
- General test-case
generation concepts
- Slides: 5.pdf, 7.pdf, 8.pdf.
- Contents:
Sampling the program's input space; path-testing; branch and predicate
testing.
- Reading List
- [Sep. 21, 28] A test generation strategy for
pair-wise testing, Kuo-Chung Tai; Yu Lei, Software Engineering, IEEE
Transactions on, Volume: 28 Issue: 1, Jan. 2002, Page(s): 109 -111.
- [Sep. 28] Predicate-based test generation for
computer programs, Kuo-Chung Tai, Software Engineering, 1993.
Proceedings of the 15th International Conference on, 1993, Page(s): 267
-276.
- [Sep. 28] A heuristic approach for test
case generation, Kai-Hsiung Chang, W. Homer Carlisle, James H. Cross,
II, David B. Brown, Proceedings of the 19th annual conference on
Computer Science, 1991, Page(s): 174 – 180.
- Web Testing: [Sep. 26] Guest
Lecture: Cyntrica Eaton
"An Empirical Approach to Testing Web
Applications Across Diverse Client Platform Configurations," Cyntrica Eaton
and Atif M. Memon, International Journal on Web Engineering and Technology
(IJWET), Special Issue on Empirical Studies in Web Engineering, Inderscience
Publishers. (accepted for publication; to appear; a slightly older version is
available here.)
- GUI testing
- Contents:
event-flow model; event-space.
- Slides: 6.pdf.
- Reading List
- [Oct. 5] Hierarchical GUI test case generation
using automated planning, Memon, A.M.; Pollack, M.E.; Soffa, M.L.,
Software Engineering, IEEE Transactions on, Volume: 27 Issue: 2, Feb.
2001, Page(s): 144 -155.
- [Oct. 5] Using a goal-driven approach to generate
test cases for GUIs, Atif M. Memon, Martha E. Pollack, Mary Lou Soffa,
Proceedings of the 21st international conference on Software
engineering May 1999.
- [Oct. 10] Studying the Fault-Detection
Effectiveness of GUI Test Cases for Rapidly Evolving Software, Atif M.
Memon and Qing Xie, IEEE Transactions on Software Engineering, vol. 31,
no. 10, Pages. 884-896, October, 2005.
- Test oracles (1 week)
- Contents: The
need for name spaces; specification name space; implementation name
space.
- Slides: 13.pdf.
- Reading List
- [Oct. 10] Automated test oracles for GUIs, Atif M.
Memon, Martha E. Pollack, Mary Lou Soffa, ACM SIGSOFT Software
Engineering Notes, Proceedings of the eighth international symposium on
Foundations of software engineering for twenty-first century
applications, November 2000, Volume 25 Issue 6.
- [Oct. 10] Specification-based test oracles for
reactive systems, Debra J. Richardson, Stephanie Leif Aha, T. Owen
O'Malley, Proceedings of the 14th international conference on Software
engineering June 1992.
- [Oct. 12, 16]
"Designing and Comparing Automated Test Oracles for GUI-based Software
Applications," Qing Xie and Atif Memon, ACM Transactions on
Software Engineering and Methodology, to appear. (A slightly longer (but
older) version is available here.)
- Regression testing (2 weeks) [Oct. 16, 24]
- Contents:
Repairing test cases; obsolete test cases.
- Slides: 15.pdf
- Reading List
- An empirical study of
regression test selection techniques, Todd L. Graves, Mary Jean Harrold,
Jung-Min Kim, Adam Porter, Gregg Rothermel, ACM Transactions on Software
Engineering and Methodology (TOSEM) April 2001, Volume 10 Issue 2.
- Regression testing of
GUIs, Atif M. Memon, Mary Lou Soffa, September 2003, Proceedings of the
9th European software engineering conference held jointly with 10th ACM
SIGSOFT international symposium on Foundations of software engineering.
- Data-flow testing [Oct. 26, Nov.
2]
- Contents: Data
definitions; data uses; def-use chains.
- Slides: 12.pdf.
- Reading List
- An applicable family
of data flow testing criteria, Frankl, P.G.; Weyuker, E.J., Software
Engineering, IEEE Transactions on, Volume: 14 Issue: 10, Oct. 1988,
Page(s): 1483 -1498.
- Interprocedural data
flow testing, M. Harrold, M. Soffa, ACM SIGSOFT Software Engineering
Notes, Proceedings of the ACM SIGSOFT '89 third symposium on Software
testing, analysis, and verification November 1989, Volume 14 Issue 8.
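To make the def-use terminology concrete, here is a toy Java method with its definitions and uses of x annotated; the method and the du-pair labels are illustrative, not drawn from the readings:

```java
public class DefUseDemo {
    static int absPlusOne(int a) {
        int x = a;            // def of x (d1)
        if (x < 0) {          // p-use of x in a predicate
            x = -x;           // c-use of x, then a new def (d2) that kills d1
        }
        return x + 1;         // c-use of x; du-pairs: (d1, return), (d2, return), (d1, predicate)
    }
    public static void main(String[] args) {
        // A data-flow-adequate suite must exercise every du-pair:
        System.out.println(absPlusOne(3));   // covers (d1, predicate) and (d1, return)
        System.out.println(absPlusOne(-3));  // covers (d2, return)
    }
}
```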
- Test coverage (2 weeks)
- Contents: Code instrumentation;
test prioritization.
- Slides: 9.pdf, 10.pdf, 11.pdf, 14.pdf.
- Reading List
- [Nov. 7, 9] Software unit
test coverage and adequacy, Hong Zhu, Patrick A. V. Hall, John H. R.
May, ACM Computing Surveys (CSUR) December 1997, Volume 29 Issue 4.
- [Nov. 7, 9] The evaluation of
program-based software test data adequacy criteria, E. J. Weyuker,
Communications of the ACM June 1988 Volume 31 Issue 6.
- [Nov. 7, 9] Coverage criteria
for GUI testing, Atif M. Memon, Mary Lou Soffa, Martha E. Pollack, ACM
SIGSOFT Software Engineering Notes, Proceedings of the 8th European software
engineering conference held jointly with 9th ACM SIGSOFT symposium on
Foundations of software engineering September 2001, Volume 26 Issue 5.
- [Nov. 7, 9] Experiments on
the effectiveness of dataflow- and control-flow-based test adequacy
criteria, Hutchins, M.; Foster, H.; Goradia, T.; Ostrand, T., Software
Engineering, 1994. Proceedings. ICSE-16., 16th International Conference
on, 1994, Page(s): 191-200.
- [Nov. 14, 16] "Call
Stack Coverage for Test Suite Reduction," Scott McMaster and Atif
Memon, International Conference on Software Maintenance (ICSM 2005), Budapest, Hungary, Sept. 25-30, 2005.
- [Nov. 14, 16] Call
Stack Coverage for GUI Test-Suite Reduction, Scott McMaster and Atif
M. Memon, Proceedings of the 17th IEEE International Symposium on
Software Reliability Engineering (ISSRE 2006), Raleigh, NC, USA, Nov. 6-10 2006.
- MC/DC Coverage [Nov. 13]
Presented by Christopher Ackermann [Slides]
- J.J. Chilenski and
S.P. Miller, "Applicability
of Modified Condition/Decision Coverage to Software Testing,"
Software Engineering Journal, vol. 9, no. 5, pp. 193-200, 1994.
- D. Richard Kuhn,
"Fault
classes and error detection capability of specification-based testing,"
ACM Transactions on Software Engineering and Methodology, 8(4):411-424,
October 1999.
- Jones, J. and Harrold,
M., "Test-Suite
Reduction and Prioritization for Modified Condition/Decision Coverage,"
Proceedings of the IEEE International Conference on Software Maintenance
(ICSM'01), Florence, Italy, 7-9 November 2001, pp. 92-101.
- Dupuy, A. and Leveson,
N., "An empirical
evaluation of the MC/DC coverage criterion on the HETE-2 satellite
software," Proceedings of the Digital Aviation Systems
Conference (DASC), Philadelphia, USA, October 2000.
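As a rough illustration of the criterion the papers above study: MC/DC requires, for each condition in a decision, a pair of tests that hold every other condition fixed while flipping that condition changes the decision's outcome. The sketch below uses the hypothetical decision a && (b || c), not an example from the cited papers, and checks such independence pairs mechanically:

```java
public class McdcDemo {
    // Hypothetical three-condition decision.
    static boolean decision(boolean a, boolean b, boolean c) {
        return a && (b || c);
    }
    // True if t1/t2 show condition `cond` independently affecting the outcome:
    // only that condition differs, and the decision value flips.
    static boolean independencePair(boolean[] t1, boolean[] t2, int cond) {
        for (int i = 0; i < t1.length; i++)
            if (i != cond && t1[i] != t2[i]) return false;   // others must be fixed
        return t1[cond] != t2[cond]
            && decision(t1[0], t1[1], t1[2]) != decision(t2[0], t2[1], t2[2]);
    }
    public static void main(String[] args) {
        boolean[] ttf = {true,  true,  false};
        boolean[] ftf = {false, true,  false};
        boolean[] tft = {true,  false, true};
        boolean[] tff = {true,  false, false};
        // Four tests cover all three independence pairs:
        System.out.println("a: " + independencePair(ttf, ftf, 0));
        System.out.println("b: " + independencePair(ttf, tff, 1));
        System.out.println("c: " + independencePair(tft, tff, 2));
    }
}
```

Four tests suffice here, matching the usual n+1 bound for n conditions, versus eight for exhaustive truth-table testing of the decision.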
- Test Scheduling Strategies
(2.5 weeks)
- [Nov. 16] Skoll: Distributed Continuous Quality
Assurance, Atif M. Memon, Adam Porter, Cemal Yilmaz, Adithya
Nagarajan, Douglas C. Schmidt, and Bala Natarajan, International Conference
on Software Engineering, 2004. Presented by Morimichi Nishigaki [Slides]
- [Nov. 28] Main Effects Screening: A Distributed
Continuous Quality Assurance Process for Monitoring Performance
Degradation in Evolving Software Systems, Cemal Yilmaz, Arvind
Krishna, Atif Memon, Adam Porter, Douglas C. Schmidt, Aniruddha Gokhale,
and Bala Natarajan, The 27th International Conference on Software
Engineering (ICSE 2005), St. Louis, MO, May 15-21, 2005. Presented by
Walaa El-Din Moustafa [Slides]
- Mutation and Fault Seeding
- Web Testing [Nov. 30]
Presented by Yee Lin Tan [Slides]
- Sara Sprenkle,
Sreedevi Sampath, Emily Gibson, Amie Souter, Lori Pollock, "An Empirical
Comparison of Test Suite Reduction Techniques for User-session-based
Testing of Web Applications," International Conference on
Software Maintenance (ICSM), September 2005.
- Sreedevi Sampath,
Valentin Mihaylov, Amie Souter, Lori Pollock, "A
Scalable Approach to User-session-based Testing of Web Applications
through Concept Analysis," Automated Software Engineering
Conference (ASE), September 2004.
- Model-based Testing [Dec. 5]
Presented by William Schwartz [Slides]
- Ibrahim K. El-Far and
James A. Whittaker, Model-based Software Testing, Encyclopedia on
Software Engineering, Wiley, 2001.
- Bertolini, C.; Farina,
A.G.; Fernandes, P.; Oliveira, F.M., "Test case generation using
stochastic automata networks: quantitative analysis," Software
Engineering and Formal Methods, 2004. SEFM 2004. Proceedings of the
Second International Conference on, pp. 251-260, 28-30 Sept. 2004.
- Whittaker, J.A.;
Thomason, M.G., "A Markov chain model for statistical software
testing," Software Engineering, IEEE Transactions on, vol. 20,
no. 10, pp. 812-824, Oct. 1994.
- Maurer, P.M.,
"Generating test data with enhanced context-free grammars,"
Software, IEEE, vol. 7, no. 4, pp. 50-55, Jul. 1990.
- Hyoung Seok Hong,
Young Gon Kim, Sung Deok Cha, Doo Hwan Bae and Hasan Ural, A test
sequence selection method for statecharts, Software Testing,
Verification and Reliability, 2000; 10:203-227.
- Analysis of Different Oracles
[Dec. 7]
Presented by Mudit Agrawal [Slides]
- Testing Component-Based
Software [Dec.
12] Presented by Sharath Srinivas [Slides]
Course Project
Phase 1
Goal: Black-box test-case generation.
Procedure: Take subject applications from the TerpOffice website;
select five methods, each with at least five parameters; create JUnit test cases
for these methods using the category-partition method. Reduce the number of
test cases using pair-wise testing. Compare the statement coverage of the
original and reduced suites.
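The reduction step can be approximated with a greedy pair-covering loop. The sketch below is deliberately tiny (three parameters with two choices each, encoded as integers) rather than the five-plus-parameter methods this phase requires; a real suite would substitute the category-partition choices. It shows why the pairwise suite is smaller than the full cross product:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class PairwiseSketch {
    // Greedy selection of tests until every pair of (parameter, choice)
    // values appears in at least one selected test.
    static List<int[]> reduce(int params, int choices) {
        // Enumerate the full cross product of choices.
        List<int[]> full = new ArrayList<>();
        int total = (int) Math.pow(choices, params);
        for (int n = 0; n < total; n++) {
            int[] t = new int[params];
            int v = n;
            for (int p = 0; p < params; p++) { t[p] = v % choices; v /= choices; }
            full.add(t);
        }
        // Every parameter-pair value combination that must be covered.
        Set<String> uncovered = new HashSet<>();
        for (int p = 0; p < params; p++)
            for (int q = p + 1; q < params; q++)
                for (int i = 0; i < choices; i++)
                    for (int j = 0; j < choices; j++)
                        uncovered.add(p + ":" + i + "," + q + ":" + j);
        // Repeatedly pick the test covering the most still-uncovered pairs.
        List<int[]> suite = new ArrayList<>();
        while (!uncovered.isEmpty()) {
            int[] best = null; int bestGain = -1;
            for (int[] t : full) {
                int gain = 0;
                for (int p = 0; p < params; p++)
                    for (int q = p + 1; q < params; q++)
                        if (uncovered.contains(p + ":" + t[p] + "," + q + ":" + t[q])) gain++;
                if (gain > bestGain) { bestGain = gain; best = t; }
            }
            suite.add(best);
            for (int p = 0; p < params; p++)
                for (int q = p + 1; q < params; q++)
                    uncovered.remove(p + ":" + best[p] + "," + q + ":" + best[q]);
        }
        return suite;
    }
    public static void main(String[] args) {
        List<int[]> suite = reduce(3, 2);
        System.out.println("full suite: " + (int) Math.pow(2, 3)
            + " tests, pairwise suite: " + suite.size() + " tests");
    }
}
```

For three two-choice parameters the greedy loop selects four tests instead of the eight in the full cross product; the gap widens quickly as parameters and choices grow.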
Deliverables: Source code of the five methods and all the JUnit
test cases; a document describing the categories/choices and constraints;
at least two sets of test cases, each of which is sufficient to satisfy the pair-wise
testing criterion; and coverage reports. Also include a one-page document describing the
difficulties you faced using the subject applications (downloading,
installing, etc.) and how you resolved them.
Due on Oct. 5 in class.
Phase 2
Goal: Make each JUnit test fail by seeding artificial faults into
the method source code.
Procedure: Examine each JUnit test case and the method source
code. Obtain a set of source-code changes (from this paper)
that will cause each test case to fail. Insert, in the method, a comment /*FAULT##
FAILURE INDUCING CODE */ at line N. Simple string replacement of line N
with "FAILURE INDUCING CODE" should cause the JUnit test case
to fail. If the change spans multiple lines, replace ##
with an integer; use the same integer for all lines
related to one failure. Write a script to perform the string replacement
automatically, one fault at a time, to avoid fault interaction. Use the
classification of the faults from this paper.
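One possible shape for the replacement script, operating on a string rather than a file for brevity. The /*FAULT## ... */ marker format follows the phase description; the regex and method names are this sketch's own assumptions:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FaultSeeder {
    // Activate fault number k: any line containing /*FAULTk <code> */ is
    // replaced wholly by <code>. The "\\s+" after the number keeps FAULT1
    // from also matching FAULT10. Lines tagged with other fault numbers
    // keep their (inert) comments, so only one fault is active at a time.
    static String activate(String source, int k) {
        Pattern marker = Pattern.compile("/\\*FAULT" + k + "\\s+(.*?)\\s*\\*/");
        StringBuilder out = new StringBuilder();
        for (String line : source.split("\n")) {
            Matcher m = marker.matcher(line);
            if (m.find()) out.append(m.group(1));   // whole line -> faulty code
            else out.append(line);
            out.append('\n');
        }
        return out.toString();
    }
    public static void main(String[] args) {
        String original =
            "int add(int a, int b) {\n" +
            "    return a + b; /*FAULT1 return a - b; */\n" +
            "}\n";
        System.out.println(activate(original, 1));
    }
}
```

Run once per fault number so only one fault is active in each compiled build, as the procedure requires.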
Deliverables: The modified source of the methods, the script, and the JUnit
test cases. A standalone executable that demonstrates the entire process
automatically.
Due on Oct. 19 in class. [Late submission policy – you lose 10%
(of the maximum points) per day]
Phase 3
Goal: Run the pair-wise test cases on the fault-seeded code; also run
the application (via the GUI) to detect the seeded faults.
Procedure: Execute all your pair-wise test cases on the methods
seeded with faults. Report the number of faults that you detected, and compare
with the full suite. For the second part of this phase, compile and run the application
(with one fault turned ON at a time) and construct an interaction (sequence
of GUI events) that reveals the fault, i.e., produces an outcome that is
different from that of the original code.
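The first part of this phase amounts to filling in a fault-detection matrix: with one fault on at a time, a fault counts as detected if any test's outcome differs from the correct version's. A toy sketch with hypothetical stand-ins for the method under test, the seeded variants, and the reduced suite:

```java
import java.util.function.IntBinaryOperator;

public class FaultMatrix {
    // Hypothetical stand-ins: the correct method and two seeded variants.
    static final IntBinaryOperator CORRECT = (a, b) -> a + b;
    static final IntBinaryOperator[] SEEDED = {
        (a, b) -> a - b,        // fault 1: wrong operator
        (a, b) -> a + b + 1,    // fault 2: off-by-one
    };
    static final int[][] SUITE = { {2, 3}, {0, 0} };   // reduced pair-wise inputs

    // Count faults whose activation changes some test's outcome.
    static int detected() {
        int count = 0;
        for (IntBinaryOperator faulty : SEEDED) {       // one fault ON at a time
            for (int[] t : SUITE) {
                if (faulty.applyAsInt(t[0], t[1]) != CORRECT.applyAsInt(t[0], t[1])) {
                    count++;
                    break;                              // detected; next fault
                }
            }
        }
        return count;
    }
    public static void main(String[] args) {
        System.out.println("faults detected: " + detected() + " of " + SEEDED.length);
    }
}
```

Note that the (0, 0) test alone would miss the a - b fault (both versions return 0), which is exactly why the phase asks you to compare the reduced suite's detection against the full suite's.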
Deliverables: The failure report and the event-sequences.
Due on Nov. 2 in class. [Late submission policy – you lose 10%
(of the maximum points) per day]