Scenarios

During individual analysis, reviewers normally use Ad Hoc or Checklist defect detection methods, both of which are nonsystematic techniques that give every reviewer the same general responsibilities. Along with Drs. Basili and Votta, I hypothesized that such approaches lead to overlap and gaps in defect coverage, and are therefore less effective than systematic techniques with specific, distinct responsibilities.

To conduct this experiment, we prototyped a set of defect-specific techniques we called Scenarios: collections of procedures for detecting particular classes of defects. Each reviewer executes a single Scenario, and the reviewers are coordinated so that together they achieve broad coverage of the document. We then conducted a controlled experiment using 48 graduate students in computer science (Spring and Fall 1993) and 21 professional software developers at Lucent Technologies (Fall 1995) as subjects, assembled into 23 three-member teams.
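The coordination idea can be illustrated with a small sketch. The defect classes, scenario questions, and data structures below are hypothetical stand-ins, not taken from the original experimental materials; the point is only that each reviewer receives one defect-class-specific scenario, so the team's responsibilities are distinct rather than identical.

```python
# Illustrative sketch only: hypothetical defect classes and questions,
# showing how Scenario assignments divide responsibilities across a team.

from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    defect_class: str     # the class of defects this scenario targets
    questions: list[str]  # the procedure the reviewer follows

# One scenario per reviewer; together they cover distinct defect classes.
scenarios = [
    Scenario("Data type consistency", "data-type",
             ["Are all data objects declared?", "Do units and types agree?"]),
    Scenario("Incorrect functionality", "functionality",
             ["Does each function's output match its specification?"]),
    Scenario("Missing or ambiguous functionality", "omission",
             ["Is every required behavior specified unambiguously?"]),
]

def assign(reviewers: list[str], scenarios: list[Scenario]) -> dict[str, Scenario]:
    """Give each reviewer exactly one scenario, so responsibilities
    are specific and distinct rather than general and identical."""
    return dict(zip(reviewers, scenarios))

team = assign(["reviewer_1", "reviewer_2", "reviewer_3"], scenarios)
for reviewer, s in team.items():
    print(f"{reviewer} -> {s.name} (targets {s.defect_class} defects)")
```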

The experimental results showed that the Scenario method had a higher defect detection rate than either the Ad Hoc or Checklist methods; that Scenario reviewers were more effective at detecting the defects their scenarios were designed to uncover, yet no less effective at detecting other defects; and that Checklist reviewers were no more effective than Ad Hoc reviewers.

After completing the experiment, I compiled an extensive set of notes describing the experiment. Several research groups around the world have used them to replicate and extend this study, making it one of the most widely replicated experiments in the software engineering literature (see Basili et al., Brooks et al., Fusaro et al., and Jeffery and Cheng). These studies are consistent with our original findings. The Scenario method has since been formalized by Heitmeyer et al. and extended in new directions by Basili et al.

The most complete descriptions of this work appear in Porter et al. and Porter and Votta.
