As Artificial Intelligence (AI) techniques mature, there has been growing interest in applying them to the control of complex real-world systems that involve hard deadlines. Unfortunately, many AI techniques exhibit unpredictable or high-variance performance, making them unsuited to real-time control systems that require performance guarantees. Most research on real-time AI (RTAI) focuses on restricting AI techniques to make them more predictable.
Our research to date has focused on a new approach, the Cooperative Intelligent Real-time Control Architecture (CIRCA). In this architecture, an AI subsystem reasons about task-level problems that require its powerful but unpredictable reasoning methods, while a cooperating, parallel real-time subsystem uses its predictable performance characteristics to deal with control-level problems that require guaranteed response times. We are investigating several aspects of this architecture, including planning for real-time control tasks, interfacing real-time and non-real-time subsystems, explicitly making performance tradeoffs when resources are overconstrained, and utilizing resources that become available dynamically.
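The division of labor described above can be sketched in code. The following is a minimal illustrative sketch, not CIRCA's actual implementation; the class and method names (`RealTimeSubsystem`, `install`, `step`) are hypothetical. The key idea shown is that the real-time subsystem runs a bounded control cycle over a fixed table of test-action pairs, while the AI subsystem hands over new reaction schedules atomically, so planning never blocks the control loop.

```python
import threading

class RealTimeSubsystem:
    """Illustrative sketch of a predictable reactive control loop.

    Holds a table of (test, action) pairs. The AI subsystem can swap in a
    new table atomically via install(), so each control cycle does a fixed,
    bounded amount of work and never waits on the planner.
    """

    def __init__(self, reactions):
        self._reactions = reactions  # list of (test, action) pairs
        self._lock = threading.Lock()

    def install(self, reactions):
        """Called by the AI subsystem to hand over a new reaction schedule."""
        with self._lock:
            self._reactions = reactions

    def step(self, state):
        """One bounded control cycle: evaluate every test, fire matches."""
        with self._lock:
            reactions = list(self._reactions)
        return [action(state) for test, action in reactions if test(state)]

# Usage: the real-time loop keeps running its current schedule while the
# planner (hypothetically running in a parallel thread) installs a new one.
rt = RealTimeSubsystem([(lambda s: s["temp"] > 100, lambda s: "open-vent")])
print(rt.step({"temp": 120}))   # the overheat reaction fires
rt.install([(lambda s: True, lambda s: "idle")])
print(rt.step({"temp": 120}))   # new schedule is now in effect
```

Swapping the whole table under a lock, rather than editing it in place, keeps each `step` call's worst-case cost proportional to the table size, which is the kind of property a real-time guarantee can be built on.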
Some of this work is being done in conjunction with the Real-Time Intelligent Control project in the Autonomous Mobile Robotics Lab.
In domains where failure to take appropriate and timely action is potentially catastrophic, the behavioral adequacy of a control system cannot be established by testing alone. In addition to logical correctness, which is desirable for any program, such mission-critical systems typically have strict temporal constraints as well. Hard real-time systems have been developed to address these requirements, but achieving intelligent behavior in this context has proven problematic. The requirement for hard real-time response is clearly incompatible with the fundamentally unbounded, high-variance techniques of classical AI, and the inability to precisely characterize the performance and resource requirements of current reactive systems makes them equally unsuitable for use in hard real-time systems.
We propose to develop a system for representing the semantics of low-level competences and for reasoning about their use in isolation and in combination. Such a representation will allow principled proofs of the correctness of reaction-based systems, as well as provide a formal basis for automated reasoning about the use of reactive competences. This will support the use of engineered reactive systems with guaranteed logical performance features in a hard real-time context, while at the same time providing a link to classical AI methodologies. In short, this formal semantics for reaction will bridge the gap between mission-critical domains and deliberative AI techniques.
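One way to make "reasoning about competences in isolation and in combination" concrete is to give each competence a declarative interface. The sketch below is a hypothetical, minimal representation (the names `Competence` and `composable` are ours, not from any existing system): each competence declares the conditions it requires and the conditions it guarantees, and sequential composition is valid when the first competence establishes everything the second requires.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Competence:
    """A low-level reactive competence with declarative semantics.

    pre:  conditions that must hold before the competence runs
    post: conditions the competence guarantees afterward
    """
    name: str
    pre: frozenset
    post: frozenset

def composable(first: Competence, second: Competence) -> bool:
    """Check sequential composition: does `first` establish all of
    `second`'s preconditions? A stand-in for the kind of automated
    reasoning about combined use that a formal semantics enables."""
    return second.pre <= first.post

# Usage: grasping establishes the condition that lifting requires,
# so grasp-then-lift is a valid combination; the reverse is not.
grasp = Competence("grasp", frozenset({"at-object"}), frozenset({"holding"}))
lift = Competence("lift", frozenset({"holding"}), frozenset({"raised"}))
print(composable(grasp, lift))   # valid ordering
print(composable(lift, grasp))   # invalid ordering
```

With such a representation, proofs about a reaction-based system reduce to checks over declared pre- and postconditions, which is what makes both principled correctness arguments and automated reasoning tractable.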