Regression Testing

In practice, the cost of maintenance and testing is enormous. Consider the problem of validating modified software. One common way to do this is to rerun tests from existing test suites, a practice called regression testing. Although valuable, regression testing is often very expensive. For instance, we know one company that regression tests every third weekend for 24 to 36 hours per testing session. In this case, management would like to test more frequently, but the time required is too great.
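The core idea of regression testing can be sketched in a few lines: rerun a stored test suite against the modified software and report any tests that no longer produce their previously validated outputs. This is a minimal illustration, not any particular company's process; the function and suite names (`modified_sort`, `TEST_SUITE`, `run_regression_tests`) are hypothetical.

```python
def modified_sort(items):
    """Stands in for a modified version of the software under test."""
    return sorted(items)

# Each regression test pairs an input with the output expected
# from the previous, validated version of the software.
TEST_SUITE = [
    ([3, 1, 2], [1, 2, 3]),
    ([], []),
    ([5], [5]),
]

def run_regression_tests(func, suite):
    """Rerun every test in the suite; return the failing cases."""
    failures = []
    for test_input, expected in suite:
        actual = func(list(test_input))
        if actual != expected:
            failures.append((test_input, expected, actual))
    return failures

failures = run_regression_tests(modified_sort, TEST_SUITE)
print(f"{len(failures)} of {len(TEST_SUITE)} tests failed")
```

The expense discussed above comes from the size of real suites: when a suite holds thousands of tests and each run of the software is slow, rerunning the whole loop after every change quickly becomes infeasible.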

To address these problems, we are participating in a long-term, collaborative research project with the following objectives:

  • Construct a program-analysis infrastructure. We are building an extensible infrastructure to implement and evaluate program-analysis-based testing and maintenance techniques. Because the infrastructure must support large-scale experimentation, we are collecting a repository of artifacts, including programs with multiple versions, test suites, test scripts, and fault data, that will serve as benchmark suites for use in experimentation.
  • Develop scalable program-analysis techniques.  We are developing and evaluating several analysis approaches, including demand-driven and layered approaches. We are also evaluating the trade-offs between storing intermediate program representations on secondary storage and recomputing this information. 
  • Perform large-scale experimentation. We are conducting a family of experiments to compare the cost-benefits of existing approaches, evaluate the gains offered by our new approaches, and determine the value of experimental features of our infrastructure.
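One family of program-analysis-based techniques alluded to above is regression test selection: rather than rerunning an entire suite, rerun only the tests whose coverage touches code affected by a modification. The sketch below assumes a precomputed coverage map from test names to the program entities each test exercises; all names in it (`COVERAGE`, `select_tests`, the dotted entity names) are illustrative inventions, not part of our infrastructure.

```python
# Hypothetical coverage map: which program entities each test exercises.
# In practice such a map would be produced by program analysis or
# instrumented test runs.
COVERAGE = {
    "test_login": {"auth.check", "auth.hash"},
    "test_report": {"report.render"},
    "test_export": {"report.render", "io.write"},
}

def select_tests(coverage, changed_entities):
    """Return, in sorted order, the tests that exercise at least one
    entity modified in the new version of the software."""
    return sorted(
        test for test, covered in coverage.items()
        if covered & changed_entities
    )

# If only report.render changed, test_login need not be rerun.
print(select_tests(COVERAGE, {"report.render"}))
```

A technique like this is safe only if the coverage information is sound for the modified program, which is one reason scalable, precise program analysis matters for the approaches described above.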

This project is sponsored by the National Science Foundation’s Experimental Software Systems program. The research is being performed by myself (Mary Jean Harrold) and Renee Miller of the Ohio State University, and by Gregg Rothermel of Oregon State University. Our industrial partner is the Microsoft Corporation.