TO DREAM THE UNBREAKABLE DREAM

A well-known and particularly pesky difficulty in AI is the so-called brittleness problem: automated systems tend to "break" when confronted with even slight deviations from the situations specifically anticipated by their designers. Of course, this is no great surprise. Why would something "work" in a given situation if it was not built to do the right thing for that situation? We would not expect Deep Blue to be able to play even mediocre checkers -- it was built specifically for chess and only chess.

And yet this is what we strive for in AI -- a system whose approach works in a wide variety of situations, including ones not anticipated by the designer. We want a robot which, if it were to bump into a chain-link fence that it could not see or feel, would realize that its forward motion is impeded and that it is a waste of time to spin its wheels forever; at the very least it could shut down, or, better still, it could try to free itself from whatever is in the way. We want a satellite that, if it receives no Earth signal within a certain amount of time, realizes that something is wrong: perhaps for some reason no signals are being sent (so it could send one itself, to see whether an answer comes back), or perhaps for some reason it cannot receive signals and so should try moving its antenna until it again gets Earth signals. And yet we do not want to have to think of chain-link fences as we design the robot, nor of unreceived signals as we design satellites. We do not even want to have to build in rules such as "if forward motion does not occur while the wheels rotate forward, then do this or that," or "if signals are not received within a certain time, send a query or change antenna direction." There are far too many specific failures that we would have to plan for in our design. But then how can this be done? Is general-purpose artificial intelligence a will-o'-the-wisp, an impossible dream?

And yet people are (biological) systems that do precisely this "impossible" thing. We are very good at dealing with surprise, anomalies, deviations from the expected, situations we have no training or experience with. A mysterious sort of "commonsense" guides us; while we often do not find optimal or even completely satisfactory results, we do tend to avoid disaster and also tend to achieve something useful, even if only part of -- or even different from -- the initial aims. We would not have any problem at all if we were in the situation of the fence-impeded robot or the lonely satellite. We would very quickly see that something is wrong, and come up with various attempts to right things. Even if in the end we failed (to achieve forward motion or Earth signals), we would do reasonable things pertaining to the difficulty, rather than wasting energy spinning wheels or listening forever for a signal that does not come.

So, what is the secret of our success? Is it that evolution has simply "built in" -- over millions of years and with millions of mostly failed ancestral experiments -- appropriate responses for vast numbers of particular failures, so that we now simply have a huge repertoire of such responses, paired up with those failures, ready to use at a moment's notice? Is that why we so effortlessly deal with glitches in our everyday plans, often seemingly without even having to ponder?
If so, then perhaps AI is doomed to failure, or perhaps to reduction to "adaptive" systems, ones that -- over millions or billions or more of trials -- learn millions of special-case strategies for special-case problems, with no discernible principles or architecture. Then we would have to build a system and simply let it "evolve" for a long, long time until it had encountered (and eventually resolved) virtually all conceivable anomalies. In the interim, the system would be highly breakable; and when "done" it would afford us little insight into its workings. Moreover, it seems implausible on the face of it that a system could ever encounter even close to all anomalous situations and thus be ready for whatever comes next.

But it is also possible that, instead, we have a small but very powerful set of general-purpose anomaly-handling strategies that work over a vast range of situations -- that there is a rhyme to our reason. This is our COMMONSENSE-CORE HYPOTHESIS (CCH):

1. Commonsense may usefully be defined as unbreakability in the face of a very large, unpredicted, real-world-encountered set of situations: disaster is avoided, (some) goals are met, and the system is ready for another day. Damage may occur, but not total breakdown (other than from outright destruction of the system by external means).

2. This unbreakability rests on a modest number of general-purpose anomaly-handling strategies.

3. There is an overall simple architecture for deployment of those strategies, consisting of three phases:
- Note an anomaly
- Assess its type and possible responses
- Guide one or more responses into place

Now, this may sound glib: of course these three steps must happen (so a counter-argument might go), but the second phase -- assess the anomalous situation and come up with an appropriate array of fixes -- is precisely the original problem dressed in new terminology and no more solved than before. But this is where the CCH makes a major claim: that anomalies can be usefully classified into a small number of types such that each type has a small set of appropriate responses, and that this set is nevertheless powerful enough to provide the desired commonsense behavior: unbreakability under virtually all situations other than outright destruction of the system by external means (i.e., breaking is not due to the system's own blunders).

In fact, we have designed preliminary sets of anomaly types and of response types -- with an in-between type as well: explanations (of anomalies) that suggest appropriate responses -- and are in the process of implementing and testing successively more general versions. We call this architecture the MetaCognitive Loop (MCL); a schematic sketch of such a loop is given below.

One very exciting feature that we have barely begun to explore is the use of MCL to decide what, when, and how to learn a new skill. Thus a particular anomaly might lead to the response (conclusion) that training is needed in order to be better equipped for the given situation. The guide phase then amounts to initiating a training program and monitoring its progress.
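To make the three-phase note-assess-guide architecture concrete, the following is a minimal illustrative sketch in Python; it is not the MCL implementation, and the expectations, anomaly types, and responses it uses (effector-failure, missing-input, initiate-learning, and so on) are hypothetical placeholders chosen to echo the fence-impeded robot and the silent satellite, standing in for the preliminary type sets described above.

    # Illustrative note-assess-guide loop. A sketch only: the names and the
    # anomaly/response tables below are hypothetical, not the MCL taxonomy.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Expectation:
        name: str                       # e.g. "wheels turning implies forward motion"
        holds: Callable[[dict], bool]   # test the expectation against observations

    # Note: an anomaly is a violated expectation.
    def note_anomalies(expectations: List[Expectation], obs: dict) -> List[str]:
        return [e.name for e in expectations if not e.holds(obs)]

    # Assess: classify the anomaly into one of a small number of types, each of
    # which carries a small set of candidate responses -- the CCH claim in miniature.
    RESPONSES: Dict[str, List[str]] = {
        "effector-failure": ["stop-current-action", "try-alternative-action", "shut-down"],
        "missing-input":    ["probe-the-source", "reorient-sensor", "wait-and-retry"],
        "unknown":          ["fall-back-to-safe-state", "initiate-learning"],
    }

    def assess(anomaly: str) -> str:
        if "motion" in anomaly:
            return "effector-failure"
        if "signal" in anomaly:
            return "missing-input"
        return "unknown"

    # Guide: put a response into place; a fuller system would monitor its progress
    # and escalate to the next candidate response if the anomaly persists.
    def guide(anomaly: str, kind: str) -> None:
        response = RESPONSES[kind][0]
        print(f"anomaly: {anomaly!r}; type: {kind}; trying response: {response!r}")

    if __name__ == "__main__":
        expectations = [
            Expectation("wheels turning implies forward motion",
                        lambda o: (not o["wheels_turning"]) or o["moving_forward"]),
            Expectation("Earth signal heard within the time limit",
                        lambda o: o["seconds_since_signal"] < 600),
        ]
        observations = {"wheels_turning": True, "moving_forward": False,
                        "seconds_since_signal": 30}
        for anomaly in note_anomalies(expectations, observations):
            guide(anomaly, assess(anomaly))

In a fuller loop the guide phase would itself be monitored, so that a repair which fails becomes a new anomaly on the next pass; and a response such as initiate-learning corresponds to the training response discussed above, where guiding amounts to launching a training program and tracking its progress.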