We developed an automated design methodology to work hand-in-hand with real-time design tools [TSE95]. The objective is to guarantee a system's end-to-end real-time requirements by automatically assigning intermediate timing constraints, and by restructuring the code.

We believe that this type of strategy can significantly streamline the design process, since it supports a variety of resource-specific considerations early in the life-cycle. The method is applicable to control systems, image-processing, and multimedia applications, and it supports a variety of system topologies and real-time constraints.

In our real-time CAD tool, end-to-end constraints are entered as properties such as propagation delay, temporal input-sampling correlation, and allowable separation times between updated output values. These requirements are then automatically transformed into a set of rate constraints on the tasks. At this point new tasks are created to correlate related inputs. The constraints are solved by an optimization algorithm, whose objective is to minimize CPU utilization. If the algorithm fails, our program-slicing tool attempts to eliminate bottlenecks by transforming the application. The final result is a set of schedulable tasks, which collaboratively maintain the end-to-end constraints.
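As a simplified illustration of the rate-assignment step (not the tool's actual algorithm), suppose an end-to-end delay budget D must be split into per-task periods along a chain of tasks with known compute times c_i, and that total CPU utilization sum(c_i / T_i) is to be minimized subject to the periods summing to D. A Lagrange-multiplier argument then makes each period proportional to sqrt(c_i). The function and task values below are hypothetical:

```python
import math

def assign_periods(compute_times, end_to_end_delay):
    """Assign each task in a chain a period so that the periods sum to the
    end-to-end delay budget while total utilization sum(c_i/T_i) is
    minimized.  A Lagrange-multiplier argument gives T_i proportional to
    sqrt(c_i).  This sketch ignores scheduling interference, jitter, and
    any harmonicity constraints a real tool would also have to satisfy.
    """
    total = sum(math.sqrt(c) for c in compute_times)
    return [end_to_end_delay * math.sqrt(c) / total for c in compute_times]

def utilization(compute_times, periods):
    """Total CPU utilization of the chain under the assigned periods."""
    return sum(c / t for c, t in zip(compute_times, periods))

# Example: three tasks with compute times 1, 4, and 9 ms sharing a
# 60 ms end-to-end budget; periods come out as 10, 20, and 30 ms.
periods = assign_periods([1.0, 4.0, 9.0], 60.0)
```

If the minimized utilization still exceeds the processor's capacity, no feasible assignment exists, which is the situation where the method falls back on program slicing to restructure the application.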

We recently generalized this method to handle distributed real-time systems with statistical quality constraints and underlying stochastic resource requirements [RTAS97]. Computations flow through distributed pipelines of tasks, which communicate in pairwise producer-consumer relationships. We use the tasking abstraction to represent any activity requiring nonzero load from some CPU or network resource.

A pipeline has two performance requirements, postulated between its input and output points. First, a delay constraint bounds the maximum time a computation may take to flow through the system, from input to output; outputs that exceed this time are dropped. Second, a pipe's quality constraint mandates a minimum allowable success rate for outputs that meet their delay constraint.
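The interplay between the two requirements can be sketched with a small Monte Carlo estimate. The exponential stage delays and the specific numbers below are assumptions made for this sketch only; they are not taken from the published model:

```python
import random

def simulate_pipeline(stage_delay_means, deadline, n_samples=100_000, seed=1):
    """Estimate a pipeline's output success rate by Monte Carlo.

    Each stage's per-computation delay is drawn from an exponential
    distribution (an assumption of this sketch); an output 'succeeds'
    if its summed end-to-end delay meets the deadline, and is dropped
    otherwise.  The returned fraction is compared against the pipe's
    quality constraint.
    """
    rng = random.Random(seed)
    ok = 0
    for _ in range(n_samples):
        delay = sum(rng.expovariate(1.0 / m) for m in stage_delay_means)
        if delay <= deadline:
            ok += 1
    return ok / n_samples

# Hypothetical quality constraint: at least 90% of outputs must meet a
# 40 ms deadline in a three-stage pipeline with mean stage delays of
# 5, 5, and 10 ms.
rate = simulate_pipeline([5.0, 5.0, 10.0], deadline=40.0)
meets_quality_constraint = rate >= 0.90
```

A design tool, of course, needs such success rates analytically rather than by simulation, which is where the Markovian analysis below comes in.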

The objective, then, is to meet every pipe's quality constraint with low variance in output loss. But since a task's resource requirements can be arbitrarily distributed, and since a single resource can host tasks from many chains, meeting all of the system's delay and quality constraints is a nontrivial problem.

We attacked this problem by (1) automatically generating a proportional load assignment for every task and network link, and (2) assigning a fixed sample-to-output rate to each pipeline. The assignment algorithm draws on techniques from real-time scheduling theory and Markovian analysis, and it remains tractable by exploiting approximations afforded by the system's pipeline structure.
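Step (1) can be sketched as follows. This is a minimal illustration, not the published algorithm: each task hosted on a shared resource receives a load share proportional to its mean demand, and the task names and demand values are hypothetical:

```python
def proportional_load_assignment(tasks, capacity=1.0):
    """Give each task on a shared resource a load share proportional to
    its mean demand, scaled so the shares exactly fill the resource's
    capacity.  'tasks' maps task name -> mean demand.  An illustrative
    sketch only: it ignores the stochastic demand distributions and the
    cross-chain interactions the real assignment algorithm accounts for.
    """
    total = sum(tasks.values())
    return {name: capacity * d / total for name, d in tasks.items()}

# Two hypothetical pipelines sharing one CPU; shares come out
# proportional to demand and sum to the CPU's capacity.
shares = proportional_load_assignment({"pipeA.filter": 2.0,
                                       "pipeA.merge": 1.0,
                                       "pipeB.encode": 3.0})
```

Given such shares, each task's effective service rate on its resource is fixed, which is what makes the per-pipeline delay and success-rate analysis separable.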

After the pipeline parameters are generated, the resulting solution is validated: the analytically estimated success rates are compared with those observed in a simulation of the system's on-line behavior.