The design process for the next version of Navigator has been open to the public from its beginning. Source code is made available, free of charge, to developers and users outside of Netscape; they, in turn, are encouraged to test the code and report errors to Netscape's bug database. This is a very different style of product design from the usual "closed shop" method, and its efficacy has been much debated. This "open" technique is the main focus of our investigation, which shifts attention away from the more technical issues involved in software design.
The ordinary interface to this data consists of an HTML wrapper around
a query engine. Once a query has been submitted, a table of response data
is generated. This is not very useful for gaining an overall picture
of the data.
As can be seen from these screen shots, this interface has all of the traditional
limitations of textual information: few records can be seen per screen,
there is no simple way to navigate among them, and they have no graphical
representation.
The Results
In
the screen capture to the right, we have hidden the data that comes from
within Netscape. As we can see, there is a definite increase over time
in the number of bugs being reported from outside. This could indicate
that as the project becomes more usable (and therefore more interesting),
more people are contributing.
A total of 1451 out of 5487 bugs in the data were reported by outside sources.
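The share of externally reported bugs implied by these counts can be computed directly; a minimal sketch (the counts come from the text, everything else is illustrative):

```python
# Counts taken from the analysis above: 1451 of 5487 bugs
# were reported by sources outside Netscape.
external = 1451
total = 5487

share = external / total
print(f"External share: {share:.1%}")
```

This works out to roughly 26.4% of all reports coming from outside sources.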
Another question concerns the quality of the bugs being reported:
are the errors reported by sources outside Netscape important ones, or are they
simply trivial? The next two figures show pie charts of the distribution of
the severity
of the bugs; again, the information is divided into Netscape and outside sources.
Errors classified as "normal" are excluded because their overwhelming majority
in both cases makes it difficult to see the skew. Here, one can see
that outside sources identify far more minor and trivial errors, while people
inside Netscape identify far more blockers. It is interesting to note,
however, that the percentages of major and critical errors are roughly the
same for both sources. The chart on the right shows the internal Netscape data; the one
on the left, the external.
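The per-source severity breakdown behind these charts amounts to counting severities for each source while hiding the dominant "normal" class. A minimal sketch, using hypothetical records in place of the real bug database:

```python
from collections import Counter

# Hypothetical (source, severity) records standing in for the real bug data.
bugs = [
    ("internal", "blocker"), ("internal", "critical"), ("internal", "major"),
    ("internal", "normal"), ("internal", "normal"),
    ("external", "trivial"), ("external", "minor"), ("external", "minor"),
    ("external", "major"), ("external", "normal"),
]

def severity_distribution(records, source, exclude=("normal",)):
    """Fraction of each severity for one source, hiding the 'normal' class."""
    counts = Counter(sev for src, sev in records
                     if src == source and sev not in exclude)
    total = sum(counts.values())
    return {sev: n / total for sev, n in counts.items()}

print(severity_distribution(bugs, "external"))
```

Excluding "normal" before computing the fractions mirrors the charts above, where that class would otherwise swamp the skew being examined.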
A final, perhaps surprising observation can be made about the correlation between
the severity of a bug and the priority of its repair. This is unrelated to the
efficacy of the "open source" method. There were few bugs at
all in the lower priorities. In every category of severity, including major and
critical, the majority of bugs were assigned a middle priority; the high
priorities likewise contained comparatively few bugs, even of the higher severities.
This is shown below, with the data of normal severity (which overwhelms the rest)
hidden. This is somewhat counter-intuitive: one would expect a stronger
correlation between greater severity and higher priority.
Note that this is a two-dimensional bar chart: it would have
been possible to use a scatter plot instead, but that made the axes
more difficult to see.
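The bar chart just described is, in effect, a cross-tabulation of severity against priority. A minimal sketch of that tabulation, with hypothetical pairs in place of the real data (the priority labels P1-P4 are assumed, not taken from the source):

```python
from collections import Counter

# Hypothetical (severity, priority) pairs illustrating the cross-tabulation.
pairs = [
    ("critical", "P3"), ("critical", "P3"), ("critical", "P1"),
    ("major", "P3"), ("major", "P3"), ("major", "P2"),
    ("minor", "P3"), ("minor", "P4"),
]

crosstab = Counter(pairs)
severities = sorted({s for s, _ in pairs})
priorities = sorted({p for _, p in pairs})

# Print one row per severity, one column per priority.
for sev in severities:
    row = "  ".join(f"{p}:{crosstab[(sev, p)]}" for p in priorities)
    print(f"{sev:10s} {row}")
```

Even in this toy data, each severity's largest count sits in the middle priority, which is the counter-intuitive pattern observed in the real reports.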
Spotfire proves amply useful in examining much of the data about the bug reports, including the sources and the severity of the bugs being reported; this is clear from the results gleaned above. Furthermore, and perhaps more importantly, it makes exploration of the data easier: its ability to hide irrelevant data is very useful. Above, we hid the data of normal severity in one set, and the Netscape internal data in another.

However, as a tool for examining the meaning of an individual bug, it leaves much to be desired. For example, it would be impossible to determine whether a given bug is an error in the program's specification or an error in the implementation. It also cannot encapsulate dependencies: if one bug's resolution depends on the resolution of another, Spotfire cannot see it.

In spite of these limitations, Spotfire is clearly a great improvement over a simple table of results for viewing and sorting through this information. Rather than submitting multiple queries to the web interface and comparing the results by hand, the user can filter through the information quickly and compare it visually.
Visit the Netscape corporate home page
Visit the web server for the next version of Netscape Navigator
Visit the web server for the bug data
Visit Spotfire's web server