Device interfaces are Janus-faced: one face looks outward, interacting with the user, while the other looks inward, controlling the device. The tasks that a device performs determine the inner face of the interface. One of the most common strategies for designing the outer face is to focus on these tasks, providing a familiar control for each one. For instance, simple video players have controls for play, rewind, fast-forward and stop, corresponding to the four functions the player can perform. The advantage of this strategy is that the capabilities of the device are readily accessible, so in effect the user controls the device directly. However, the number of controls on the outer face grows in proportion to the number of capabilities. Therefore, as devices become more complex, it becomes increasingly difficult to display all their capabilities without making the interface resemble an airplane cockpit.
Another common strategy for designing the outer face is to focus on user needs, providing controls for what the designer perceives to be the most common ones. In this strategy there is an explicit mapping layer that maps the outer-face controls to the inner-face device functions; a single outer-face control may map to more than one capability of the device. Thus, some video players have a control that automatically rewinds the tape before playing it. The advantage of this strategy is that users have easy access to some commonly required behavior. However, the behavior that the designer perceives to be most common may not match some users' requirements, since each user is unique and the behaviors that users require from a device differ. For the same reason, it is impossible to foresee all possible user-specific needs. And even if a designer succeeded in foreseeing every user-specific need and provided controls to trigger behaviors satisfying them, the resulting outer face would be too complex for any practical use.
A third strategy is to allow flexible outer faces that users can adapt to suit their needs. For instance, some CD players allow users to program the tracks to be played repeatedly. Here, the mapping layer between the outer face and the inner face is not fixed by the designer; rather, it is programmed by the user. The user chooses the inner-face device functions to be mapped to an outer-face control, and the interface permits and keeps track of such user-programmed mappings.
In fact, when flexible outer faces are allowed, the interface as a whole can be thought of as an agent that (i) keeps track of the current mappings in the mapping layer, (ii) translates outer-face user selections into appropriate sets of device capabilities, and (iii) allows changes to the mapping layer through the outer face. The amount of dynamic remapping that is facilitated depends on how sophisticated the particular interfacing agent is; the more sophisticated the agent, the more intelligent the resulting device is considered to be. Examples of such intelligent devices include wheelchairs [GG98,LBJ$^+$99], construction vehicles [GI99] and the Portable Satellite Assistant (PSA) [GBWT00].
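The three behaviors (i)–(iii) above can be sketched in code. The following is a minimal illustration, not an implementation from the cited systems; the class and method names are assumptions chosen for clarity.

```python
class InterfacingAgent:
    """Illustrative sketch of an interfacing agent with a mapping layer.

    The inner face is a dictionary of named device functions; the
    mapping layer maps each outer-face control to a sequence of
    inner-face function names.
    """

    def __init__(self, device_functions):
        # Inner face: the capabilities the device actually exposes.
        self.device_functions = device_functions  # name -> callable
        # (i) The mapping layer the agent keeps track of.
        self.mapping = {}

    def remap(self, control, function_names):
        # (iii) Allow the user to change the mapping layer.
        unknown = [f for f in function_names if f not in self.device_functions]
        if unknown:
            raise ValueError(f"unknown device functions: {unknown}")
        self.mapping[control] = list(function_names)

    def invoke(self, control):
        # (ii) Translate an outer-face selection into a set of
        # device capabilities, invoking each in turn.
        for name in self.mapping.get(control, []):
            self.device_functions[name]()
```

For example, the video-player behavior from the previous section is obtained by remapping a single "play" control to the sequence rewind-then-play.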
The advantage of flexible outer faces is that users gain additional flexibility to operate the device in a manner that suits their needs. As Kay [Kay90] points out, interfacing agents can revolutionize computing, since a user need not manipulate a system directly but can control it indirectly by interacting with the agent. The disadvantage is that one has to learn to program a device in the manner one desires before one can operate it in that fashion.
Today, each device interface is designed as part of the device itself. The disadvantage of this is two-fold. First, there is very little in common between the outer faces of different device interfaces. Hence, as users acquire more electronic devices (cameras, cell phones, PDAs, etc.), they may find learning each new interface (outer face) increasingly burdensome. Second, the inner face of an interface is so closely knit with the underlying device functionality that the two are more or less inseparable. Therefore, an interfacing agent designed for one device cannot easily be reused to interface with another.
Hence, as consumer electronics become more complex and varied, designers will need to consider employing a single, shared, general and flexible interfacing agent: one with an outer face that users can easily learn to interact with, a mapping layer that can be modified easily, and an inner face that can be adapted to control many different devices. An interfacing agent equipped with a natural language outer face, together with appropriate mechanisms for altering the mapping layer and the inner face, would serve this goal. It could be used to control not only hardware devices but also task-oriented software applications. With the help of such an agent, users could tailor the behavior of task-oriented systems to suit their needs without learning specialized vocabularies to do so. Crangle and Suppes [CS94] describe these two features--tailoring the behavior of an application to suit a user's needs, and freeing users from having to learn specialized vocabularies--as the principles that govern human-computer interaction. Such a universal interfacing agent could revolutionize the use of task-oriented, specialized systems in the same manner that the Windows Operating System has revolutionized the use of personal computers.
For example, consider a pool controller that accepts commands to heat the pool, to stop heating, and to report the temperature of the water. To maintain the temperature of the pool between 8:00 pm and 9:00 pm on a working day, and between 10:00 am and 1:00 pm on a non-working day, a user interacting directly with the pool controller has to (i) keep track of the time, (ii) keep track of the temperature by observing the temperature reading periodically during the interval in which the temperature is to be maintained, and (iii) issue commands to heat and to stop heating based on the observed temperature. On the other hand, a user interacting with an interfacing agent can simply instruct the agent to maintain the temperature between 8:00 pm and 9:00 pm on working days and between 10:00 am and 1:00 pm on non-working days; the agent then issues the heat and stop-heating commands at the appropriate times, based on the temperature, the time, and whether the user is working on a particular day. Thus, by integrating a rational interfacing agent with a task-oriented system, the user gains the flexibility to adapt the system to his/her unique requirements.
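The agent's observe-decide-act cycle for this example can be sketched as follows. The controller interface, the determination of working days, and the target temperature band are all illustrative assumptions, since the text specifies only the three commands and the two time windows.

```python
from datetime import datetime, time

# Time windows from the example above.
WORKDAY_WINDOW = (time(20, 0), time(21, 0))      # 8:00 pm - 9:00 pm
NONWORKDAY_WINDOW = (time(10, 0), time(13, 0))   # 10:00 am - 1:00 pm

# Target band in degrees Celsius (an assumed value; the text does
# not specify the desired temperature).
TARGET_LOW, TARGET_HIGH = 26.0, 28.0


def in_maintenance_window(now: datetime, working_day: bool) -> bool:
    """Is `now` inside the interval during which temperature is maintained?"""
    start, end = WORKDAY_WINDOW if working_day else NONWORKDAY_WINDOW
    return start <= now.time() <= end


def agent_step(controller, now: datetime, working_day: bool) -> None:
    """One cycle of the interfacing agent.

    `controller` is any object exposing the three commands the text
    names: temperature(), heat(), and stop_heating().
    """
    if not in_maintenance_window(now, working_day):
        controller.stop_heating()            # outside the interval: do nothing
        return
    temp = controller.temperature()          # (ii) observe the reading
    if temp < TARGET_LOW:
        controller.heat()                    # (iii) too cold: start heating
    elif temp > TARGET_HIGH:
        controller.stop_heating()            # (iii) warm enough: stop heating
```

Running `agent_step` periodically (e.g. once a minute) reproduces the bookkeeping in steps (i)-(iii) that the user would otherwise perform by hand.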