... agents[*]
The intelligent interfacing agents in use today are mainly for web-related applications. For instance, intelligent information agents such as search engines (e.g., Google, HotBot), news watchers [BP99] and browsing assistants [Lie97] find, evaluate and filter information based on the user's personal interests. Another class of intelligent interfaces concentrates on cooperating with users or other agents to find solutions to complex problems. Such agents have been applied to provide help in organizing email [LMM94], shopping online [CM96], scheduling meetings [HSAN97], automating tasks [HL97] and providing advice [T.S94].
... agent[*]
Therefore, the interfacing agent cannot manipulate, monitor or control the TOS domain directly; it can access the domain only indirectly, through the TOS.
... reasoning[*]
Audi [Aud89] argues that practical reasoning might exhibit the structure of acting for a reason, so that actions performed for a reason can be considered rational in view of the agent's reason(s) for performing them.
... subintervals,[*]
e.g., although ``February 2005 has 28 days'' holds over the whole month, the statement ``February 1, 2005 has 28 days'' is not true.
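This can be made concrete with a small sketch (not part of the thesis): a property that holds over an interval need not hold over its subintervals.

```python
from datetime import date

def days_in_interval(start, end):
    """Number of calendar days in the closed interval [start, end]."""
    return (end - start).days + 1

# Over the whole of February 2005 the property "has 28 days" holds...
print(days_in_interval(date(2005, 2, 1), date(2005, 2, 28)))  # 28
# ...but over the subinterval February 1, 2005 alone it does not.
print(days_in_interval(date(2005, 2, 1), date(2005, 2, 1)))   # 1
```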
... hold.[*]
See Section 2.2.5.
... axiom[*]
Note the similarity between this axiom and axiom (3.7) in Section 3.4.1.
... K[*]
$ \mathbf{B} \phi \wedge \mathbf{B}(\phi \implies \psi) \implies \mathbf{B} \psi$
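A minimal sketch, not from the thesis, of what axiom K licenses: an agent's beliefs are closed under modus ponens. Formulas are modeled as plain strings and implications as (antecedent, consequent) pairs.

```python
def close_under_modus_ponens(beliefs, implications):
    """Apply B(phi) and B(phi -> psi) entails B(psi) to a fixpoint."""
    beliefs = set(beliefs)
    changed = True
    while changed:
        changed = False
        for phi, psi in implications:
            if phi in beliefs and psi not in beliefs:
                beliefs.add(psi)
                changed = True
    return beliefs

# Believing "rain" and "rain -> wet" forces belief in "wet", and then
# "wet -> slippery" forces belief in "slippery" as well.
print(close_under_modus_ponens({"rain"}, [("rain", "wet"), ("wet", "slippery")]))
```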
... necessitation[*]
From $ \models \phi$ infer $ \models \mathbf{B}\phi$
... D[*]
$ \mathbf{B} \phi \implies \neg \mathbf{B} \neg \phi$
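A hypothetical sketch of what axiom D rules out: a belief set containing both a formula and its negation. Negation is modeled here by a leading "~" on a plain-string formula.

```python
def negate(phi):
    """Return the negation of phi, cancelling a double negation."""
    return phi[1:] if phi.startswith("~") else "~" + phi

def satisfies_axiom_d(beliefs):
    """True iff no formula is believed together with its negation."""
    return all(negate(phi) not in beliefs for phi in beliefs)

print(satisfies_axiom_d({"rain", "wet"}))    # True
print(satisfies_axiom_d({"rain", "~rain"}))  # False
```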
... rule''[*]
Inheritance and disinheritance are directly related to belief revision [Gär88] and to the frame problem [MH69]; see [NKMP97,Nir94] for further discussion.
... capabilities[*]
The importance of real-time capabilities in AI systems is discussed in [MHA$^+$95].
...$ closed$[*]
with no variables
... action(s)[*]
Note that the TOS readings do not include all the effects of actions. In fact, the set of TOS readings is a proper subset of the set of effects.
... intentions[*]
Note that the achievability of a desire does not imply that the corresponding intention is achievable. A desire may be achievable and yet the corresponding intention may not be, because some precondition is not met or because the intention interferes with other active intentions.
... belief[*]
Contrast this with an observation $ \observed(Obj, Prop, Val, T)$.
... do[*]
A human can desire to be on Saturn before the year 2010 without ever intending to achieve it. An agent based on ALFA that adopts the same desire, however, will intend to reach Saturn, realize that it lacks knowledge of the actions to be executed to achieve that result, and hence mark that intention as unachievable until it gathers more information on how to go to Saturn.
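The behaviour just described can be sketched as follows (all names here are invented for illustration, not taken from ALFA): every desire is turned into an intention, and an intention for which no plan is known is marked unachievable rather than discarded.

```python
def adopt_intention(desire, known_plans):
    """Create an intention for a desire; mark it unachievable if the
    agent knows no sequence of actions that would realize it."""
    plan = known_plans.get(desire)
    status = "active" if plan is not None else "unachievable"
    return {"intention": desire, "status": status, "plan": plan}

# The agent knows how to make tea, but not how to reach Saturn.
known_plans = {"make_tea": ["boil_water", "steep_leaves"]}
print(adopt_intention("make_tea", known_plans)["status"])      # active
print(adopt_intention("reach_saturn", known_plans)["status"])  # unachievable
```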