Skoll

A Process & Infrastructure for Distributed, Continuous Quality Assurance

 

Software testing techniques and tools have historically been designed to be applied to a single system. Today, however, there is rarely a single system. Instead, there is a complex assembly of components organized around large and complex design spaces. A system’s design space refers to the dimensions of variation it supports: there is typically a core system that supports controlled variation in its features, versions, algorithms, platforms, architectures, standards implementations, and so forth. We believe that design spaces have identifiable structure, and that this structure can be leveraged to define powerful testing and analysis algorithms.

The Skoll project is building and validating tools, techniques and processes to support the following key activities:

  1. Explicitly modeling the system design space. Component providers create design space models for their individual components. Each model exposes dimensions of variability for that component, such as hardware platforms, operating systems, feature sets, compile- and run-time options, etc. Infrastructure tools then automatically integrate each individual component model with the models of its dependent components to create an integrated model of the system under test. A minimal sketch of such a model appears after this list.

  2. Defining test coverage criteria and generating test plans. System models implicitly define all configurations of the system to be tested. Since exhaustive testing is infeasible, we define sampling strategies over the design space. Applying a sampling strategy to a model yields the specific set of configurations to be tested, called the test plan (the second sketch after this list illustrates one simple strategy).

  3. Executing the test plan across a large-scale computing grid. Executing the test plan involves decomposing it into independent test jobs, where each job typically focuses on one configuration or a group of equivalent configurations. Numerous optimizations can be applied at this stage to limit duplicated effort and to coordinate the activities of multiple test plans. Tools then distribute the jobs to client machines on the grid (see the third sketch after this list).

  4. Executing and measuring the system under test. As the test jobs execute, Skoll collects and stores execution data in a community-accessible information repository. The data can include test results, coverage information, detailed crash reports, etc. Depending on how the test plan is defined, incremental results may be merged and analyzed to guide subsequent iterations of the test process.

  5. Analyzing and publishing results. Data collected from ongoing test processes are analyzed and published via standardized visualizations, which show, for example, the stability of particular configurations, the test status of the latest system version, the effect of particular configuration parameters on standard performance benchmarks, etc. Skoll also supports tools that enable community members to develop and share their own data analyses.
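
As a concrete illustration of activity 1, the sketch below shows one way a component's dimensions of variability and an inter-option constraint might be modeled, and how the valid configurations are enumerated. It is written in Python; the dimension names, option values, and constraint are hypothetical examples, not Skoll's actual modeling notation.

    from itertools import product

    # Hypothetical design space model for one component: each dimension of
    # variability lists its allowed settings.
    DIMENSIONS = {
        "os":        ["linux", "windows", "solaris"],
        "compiler":  ["gcc", "vc++"],
        "threading": ["single", "multi"],
        "logging":   ["on", "off"],
    }

    def is_valid(cfg):
        """Illustrative inter-option constraint: vc++ builds require Windows."""
        return not (cfg["compiler"] == "vc++" and cfg["os"] != "windows")

    def valid_configurations(dimensions=DIMENSIONS):
        """Enumerate every configuration the model permits."""
        names = list(dimensions)
        for values in product(*(dimensions[n] for n in names)):
            cfg = dict(zip(names, values))
            if is_valid(cfg):
                yield cfg

    if __name__ == "__main__":
        print(sum(1 for _ in valid_configurations()), "valid configurations")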

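Following on from that model, activity 2 amounts to applying a sampling strategy to the enumerated space. The sketch below uses plain uniform random sampling purely for illustration; it reuses valid_configurations() from the previous sketch, and more sophisticated strategies could be substituted without changing the surrounding process.

    import random

    def random_test_plan(configurations, budget, seed=0):
        """Select `budget` configurations uniformly at random as the test plan.

        `configurations` is any iterable of configuration dicts, e.g. the
        output of valid_configurations() above; exhaustive testing would use
        them all, which is usually infeasible.
        """
        pool = list(configurations)
        rng = random.Random(seed)
        return rng.sample(pool, min(budget, len(pool)))

    # Example: a test plan that covers 50 configurations from the model above.
    # plan = random_test_plan(valid_configurations(), budget=50)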

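For activity 3, the decomposition of a test plan into independent jobs can be pictured as follows. The job fields and the in-process queue are illustrative assumptions only; a real Skoll grid distributes jobs to remote client machines and collects their results over the network.

    import json
    from queue import Queue

    def make_jobs(plan, suite="regression"):
        """Wrap each configuration in the plan as a self-contained test job."""
        return [{"job_id": i, "suite": suite, "configuration": cfg}
                for i, cfg in enumerate(plan)]

    def serve_jobs(jobs):
        """Queue serialized jobs; each client builds, tests, and reports back."""
        queue = Queue()
        for job in jobs:
            queue.put(json.dumps(job))   # serialized so it could cross the wire
        while not queue.empty():
            yield queue.get()            # in practice, handed out per client request
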
Previous Applications

  1. A Distributed Continuous Testing Process for MySQL



 
