Multimedia Systems

One of the principal challenges in building a multimedia system lies in balancing the platform's resources against the demands of the presentation. Real-time digital video applications present unique difficulties in this regard, since a video stream's transfer requirements can easily exceed the capability of most workstations. Part of the problem stems from the asymmetry between the high-end platform on which the video is produced and the lower-end platform on which it is played. Adjusting a video to a particular target platform therefore demands a "load-balancing" solution, applied with both quantitative and qualitative metrics.

To date, we have investigated several aspects of this problem. In [SIGMETRICS96], we examined the trade-offs between what we call static tuning and dynamic tuning. Static tuning takes place during the production phase; it is the process of adjusting the video's intrinsic quality before it is exported. We studied the results of different static-tuning alternatives by altering key parameters (e.g., codec type, frame size, digitized rate, spatial quality, keyframe distribution) and then charting their effects at playback.
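The static-tuning study can be pictured as a sweep over the production-time parameter space. The sketch below enumerates such a parameter grid; the parameter names and value sets are illustrative placeholders, not the ones used in the actual experiments.

```python
import itertools

# Hypothetical static-tuning parameter grid; names and values are
# illustrative, not those from the SIGMETRICS96 study.
PARAMS = {
    "codec": ["cinepak", "indeo", "mjpeg"],
    "frame_size": [(160, 120), (320, 240)],
    "capture_rate_fps": [10, 15, 30],
    "spatial_quality": [0.5, 0.75, 1.0],
    "keyframe_interval": [1, 5, 15],
}

def tuning_alternatives(params):
    """Enumerate every combination of static-tuning parameters."""
    keys = list(params)
    for values in itertools.product(*(params[k] for k in keys)):
        yield dict(zip(keys, values))

# Each alternative would be encoded once, then its playback charted.
alternatives = list(tuning_alternatives(PARAMS))
print(len(alternatives))  # 3 * 2 * 3 * 3 * 3 = 162 combinations
```

In practice only a subset of such a grid is encoded and measured, since each alternative requires a full production pass.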

Dynamic tuning occurs during playback itself; the idea is to process a video stream as smoothly as possible and to achieve a controlled, deterministic coordination between the various system components. To accomplish this, we built our own system-level support for dynamic tuning, which attempts to use system resources as efficiently as possible. The software periodically estimates the playback requirements of a particular video and allocates buffers, prefetch window sizes, I/O bandwidth, and CPU cycles so that the computer can best meet those requirements. Our software significantly outperformed commercially available APIs, with improvements of over 300% in measured playout rates.
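The periodic re-estimation step described above can be sketched as a function that maps a stream's current requirements to a set of resource reservations. This is a minimal illustration assuming hypothetical inputs (per-frame size, frame rate, per-frame decode cost); the formulas and names are placeholders, not the system's actual allocation policy.

```python
from dataclasses import dataclass

@dataclass
class Allocation:
    buffers: int          # decode/display buffers to reserve
    prefetch_frames: int  # prefetch window size, in frames
    io_bandwidth: float   # bytes/second reserved for stream reads
    cpu_share: float      # fraction of CPU reserved for decoding

def estimate_allocation(frame_bytes, fps, decode_cost, cpu_capacity):
    """One re-estimation step: derive resource reservations from the
    stream's current playback requirements (illustrative formulas)."""
    io_bandwidth = frame_bytes * fps                 # sustain transfer rate
    cpu_share = min(1.0, decode_cost * fps / cpu_capacity)
    prefetch = max(2, int(fps // 2))                 # ~0.5 s of lookahead
    buffers = prefetch + 2                           # plus display buffers
    return Allocation(buffers, prefetch, io_bandwidth, cpu_share)

# E.g., a 15 fps stream with 16 KB frames costing 20 ms each to decode:
alloc = estimate_allocation(frame_bytes=16_000, fps=15,
                            decode_cost=0.02, cpu_capacity=1.0)
print(alloc)
```

In the real system such a routine would run periodically, so the reservations track changes in the stream (e.g., keyframe bursts) rather than being fixed at open time.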

We augmented this software with a multi-platform simulation package [Simulate97], which allows developers to predict playout performance on a wide spectrum of end-user systems. The simulator uses eleven deterministic and stochastic time-generating functions, which capture key parts of the playout datapath. These distribution functions are obtained with a set of profiling tools and then, at simulation time, composed into a virtual end-user system. This scheme lets one extend the range of target platforms by benchmarking new components and postprocessing the results into distribution functions, which are stored in the simulator's library. (Each component model need only be profiled once, after which it can be shared.) Using this system, a developer can quickly estimate a video's performance on a wide spectrum of target platforms -- without possessing the actual system, or even any of its component devices.
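The composition of time-generating functions into a virtual datapath can be sketched as a small Monte Carlo simulation. The stage models below (names, distributions, and parameters) are hypothetical stand-ins for the profiled distribution functions the real simulator draws from its library.

```python
import random

# Illustrative per-stage frame-time generators, in seconds; the real
# simulator would use eleven profiled deterministic/stochastic models.
def disk_read():
    return random.gauss(0.012, 0.003)      # stochastic transfer time

def decode():
    return 0.020                           # deterministic decode time

def display():
    return random.expovariate(1 / 0.004)   # stochastic display time

def simulate_playout(stages, n_frames, period):
    """Compose stage models into a virtual datapath and estimate the
    fraction of frames that finish within one frame period."""
    on_time = 0
    for _ in range(n_frames):
        latency = sum(max(0.0, stage()) for stage in stages)
        if latency <= period:
            on_time += 1
    return on_time / n_frames

random.seed(1)
rate = simulate_playout([disk_read, decode, display],
                        n_frames=10_000, period=1 / 15)
print(f"{rate:.1%} of frames met the 15 fps deadline")
```

Swapping in a different set of stage models (say, a slower disk profile) re-targets the same simulation to a different end-user platform, which is the point of keeping the profiled models in a shared library.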