Inputs

This experiment’s data shows considerable variation, which may obscure some significant effects. This has been a chronic limitation of previous software engineering experiments. Therefore, I, along with Audris Mockus and Drs. Votta and Siy, investigated the possibility that differences in the process inputs (e.g., reviewers, authors, and code quality) masked the effects of process structure. Our approach was to use regression techniques to model variation in the data as a function of process inputs and process structure.
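
To make that approach concrete, the sketch below shows one way such a regression could be set up in Python with statsmodels. Everything here is illustrative: the column names (defects, size, reviewer, structure), the treatment labels, and the synthetic data are assumptions for the example, not the experiment’s actual variables or model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data for 100 hypothetical inspections.
rng = np.random.default_rng(0)
n = 100
df = pd.DataFrame({
    "defects": rng.poisson(5, n),                       # response: defects found
    "size": rng.lognormal(5.0, 1.0, n),                 # size of the code unit
    "reviewer": rng.choice(list("ABCDEF"), n),          # who inspected the code
    "structure": rng.choice(["1x1", "2x2", "4x1"], n),  # sessions x reviewers
})

# Model variation as a function of process inputs and process structure.
inputs_only = smf.ols("defects ~ np.log(size) + C(reviewer)", data=df).fit()
full = smf.ols("defects ~ np.log(size) + C(reviewer) + C(structure)",
               data=df).fit()

# Compare the share of variation explained by the inputs with the
# extra share contributed by the structure terms.
print(f"inputs R^2:     {inputs_only.rsquared:.2f}")
print(f"structure adds: {full.rsquared - inputs_only.rsquared:.2f}")
```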

For effectiveness data, we found that the code’s size, its functionality, and the reviewers who inspected the code explained 50% of the variation, while process structure explained only 2%. We also found that even when the variation due to these inputs was factored out, process structure did not have a significant effect on effectiveness.

For interval data, we found that the code’s author and the presence of certain reviewers explained 36% of the variation, while process structure explained only 3%. The model for pre-meeting interval was similar, but it also included the structure variable Repair. That is, by factoring out the variation due to process inputs, we discovered an effect due to process structure (namely, Repair). Our interpretation is that process inputs (mostly the code unit’s author) have a greater influence on interval than process structure does. Nevertheless, we found that inspection processes involving multiple sequential inspections significantly lengthen interval.
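
The “factoring out” step can be read as a nested-model comparison: fit a model with the process inputs alone, add the structure term, and test whether the increment is significant. Below is a minimal sketch of that partial F-test, again with invented data and hypothetical column names (author, rev_k, repair); it is not the authors’ actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in data: interval (days) for 100 hypothetical inspections.
rng = np.random.default_rng(1)
n = 100
df = pd.DataFrame({
    "interval": rng.gamma(2.0, 7.0, n),      # days from start to collection meeting
    "author": rng.choice(list("ABCDE"), n),  # the code unit's author
    "rev_k": rng.integers(0, 2, n),          # 1 if a particular reviewer took part
    "repair": rng.integers(0, 2, n),         # 1 if the process includes a Repair step
})

# Reduced model: process inputs only.  Full model: inputs plus the structure term.
reduced = smf.ols("interval ~ C(author) + rev_k", data=df).fit()
full = smf.ols("interval ~ C(author) + rev_k + repair", data=df).fit()

# Partial F-test: does the structure term still matter once the
# variation due to process inputs has been factored out?
print(sm.stats.anova_lm(reduced, full))
```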

These results reinforce our previous findings: recently proposed changes to the inspection process structure are largely ineffective at improving effectiveness and, in some cases, require substantially more effort and interval.

See Porter et al. [10] for more details.

In our follow-on work, we concentrated on the techniques by which artifacts are inspected.
