Many companies are developing software using multiple, geographically separated teams. When this happens, dependencies between tools, processes, and people can substantially increase development interval. We used this situation as an opportunity to perform a field test of the results of the experiments described above. That is, if our results are reasonably general, causal, and actionable, then they should enable us to change the inspection process to reduce its interval, without sacrificing effectiveness.
To validate our theories, I, along with Jim Perpich, Dewayne Perry, Lawrence Votta, and Michael Wade of Lucent Technologies, identified several contributors to inspection delays. The most interesting of these is blocking due to the synchronization and sequencing of inspection subtasks. To reduce these delays, we considered three strategies: reducing paper, automatically generating necessary reports, and changing the process to reduce synchronization and coordination. The first two strategies are straightforward, but the best way to reduce synchronization was less obvious. We considered three approaches: sharing preparation results among the review team, eliminating the inspection meeting, and overlapping preparation and repair.
Sharing preparation results. In the manual process, reviewers perform their individual analyses privately; each reviewer's findings remain unknown to the other reviewers until the inspection meeting occurs. Our approach was to make each reviewer's findings public in near real time.
Eliminating the inspection meeting. Our previous research suggests that meetings significantly lengthen inspection interval, but contribute little to effectiveness. Therefore, we eliminated the inspection meeting.
Overlapping preparation and repair. Because we eliminated the meeting, the process has only two major phases, preparation and repair. Although these two phases are normally performed sequentially, we allowed them to overlap. That is, the author may begin repairs as soon as defects are found.
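Taken together, the three changes above define a small workflow: findings are published as soon as they are reported, the author may repair any published defect while other reviewers are still preparing, and the inspection closes when preparation and repair are both complete. The following is a minimal illustrative sketch of that workflow; all names here are hypothetical, and the actual HyperCode system is a web-based tool whose internals are not described in the text.

```python
# Hypothetical sketch of the overlapped inspection workflow described above.
# Not the HyperCode implementation; names and structure are illustrative only.

class Inspection:
    def __init__(self, reviewers):
        self.preparing = set(reviewers)  # reviewers still performing analysis
        self.open_defects = []           # findings, public as soon as reported
        self.repaired = []               # defects the author has already fixed

    def report_defect(self, reviewer, description):
        # Shared preparation results: a finding becomes visible to the
        # whole team (and to the author) immediately, not at a meeting.
        self.open_defects.append((reviewer, description))

    def finish_preparation(self, reviewer):
        self.preparing.discard(reviewer)

    def repair(self, defect):
        # Overlapped preparation and repair: the author may fix any
        # published defect while other reviewers are still preparing.
        self.open_defects.remove(defect)
        self.repaired.append(defect)

    def done(self):
        # With no inspection meeting, the process ends when every reviewer
        # has finished preparation and every reported defect is repaired.
        return not self.preparing and not self.open_defects
```

The point of the sketch is that no step waits on a scheduled meeting: the only synchronization left is the closing condition itself.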
To enforce these process changes and to collect performance data, I designed a web-based workflow system called TkDAF. This system was later extended into the HyperCode system used in the actual experiment. HyperCode is currently being used by several development groups at Lucent. Our main experiment involves two development teams: one in Naperville, IL, and the other in Whippany, NJ.
Our hypothesis is that the HyperCode process will have a smaller interval than the manual one, but be no less effective. The experiment is currently running and the initial results suggest that HyperCode reduces inspection interval by about 25% with no apparent reduction in effectiveness. However, we will have to continue running the experiment to better support these findings.
HyperCode is intriguing in that it lacks many of the features (e.g., support for work groups) that other systems such as FTARM, Icicle, AISA, and Scrutiny have. This is because our empirical work suggests that these features create unacceptable cost-benefit tradeoffs.
The most complete description of this work appears in Perpich et al.