I'm a senior here in the Computer Science Department at the University of Maryland, College Park. If everything goes as planned, I'll graduate this Spring ("Out of this dive in '95!"), and go to work for TRW Space & Defense. I could tell you what I do there, but it's CLASSIFIED, so I'd have to kill you.
Since January of 1994, I've been working as an Undergraduate Research Assistant to Dr. James Hendler on a variety of projects run out of the Parallel Understanding Systems Laboratory, an eclectic group of lads and lasses working on various things artificially intelligent using massively parallel supercomputers.
In particular, I've worked on CaPER, a massively parallel case-based planning system being developed by Brian Kettler. My work on CaPER has included designing and encoding an extended transport logistics planning domain, as well as developing a system that randomly generates problems in this domain, solves them using Nonlin, a generative planning system, and then converts the problems and solutions into a form that CaPER can read and use. This lets us test CaPER's algorithms against a large case-base of solved problems, so that we can better fine-tune the retrieval, merging, and adaptation phases of the planner.
The other major project I've been involved with is the conversion of large relational databases into knowledge bases. Using a database supplied by the MITRE Corporation, we have successfully developed an ontology (knowledge hierarchy) which sits on top of the database, and encoded the database into Parka, a massively parallel knowledge representation system developed here at Maryland. Moving the database to Parka affords us two significant advantages:
Another discipline in AI that I'm considering as a graduate school research area is Data Mining (a.k.a. Knowledge Discovery in Databases). The driving commercial force in Data Mining is GTE's Knowledge Discovery Mine.
Data Mining encompasses a variety of research efforts centered on extracting useful information from extremely large databases. These databases are far too large and complex for people to draw inferences from directly, and searching them exhaustively is computationally intractable. The challenge, then, is to generate and select a useful subset of the database to serve as a training set for one or more machine learning systems.
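One common way to draw a useful subset from a database too large to hold in memory is a single pass with reservoir sampling. The sketch below is just an illustration of that general idea, not part of any system described above; the record names are invented.

```python
import random

def reservoir_sample(records, k):
    """Draw a uniform random sample of k records from a stream
    of unknown length, using a single pass (Algorithm R)."""
    sample = []
    for i, rec in enumerate(records):
        if i < k:
            sample.append(rec)
        else:
            # Keep each later record with probability k/(i+1),
            # replacing a random element of the current sample.
            j = random.randint(0, i)
            if j < k:
                sample[j] = rec
    return sample

# Toy usage: pick 5 "rows" from a stream of 10,000 without
# ever materializing the whole stream.
rows = (f"row-{i}" for i in range(10_000))
subset = reservoir_sample(rows, 5)
print(len(subset))  # 5
```

The resulting subset can then be handed to a machine learning system as its training set.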
As an example of the type of problem that Data Mining attempts to address, consider databases of astronomical information maintained by NASA. These databases receive new data from satellites and other observatories at such a phenomenal rate that they have grown to many terabytes in size. It is impossible for scientists to recognize trends in this data by inspection, and machine learning systems cannot process the full data quickly enough to produce results. By carefully selecting a small portion of the database to use as a training set, and then fine-tuning the resulting classification rules, new data can be categorized accurately and new trends tracked in real time.
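To make the train-on-a-subset, classify-the-stream idea concrete, here is a deliberately tiny sketch: a one-attribute "classifier" fitted to a handful of labeled records, then applied to new data. The attribute and label names are invented for illustration, and the threshold learner stands in for a real machine learning system.

```python
def fit_threshold(training):
    """Learn a single brightness cutoff separating 'star' from
    'galaxy' records by trying each midpoint between adjacent
    training values and keeping the one with the fewest errors."""
    points = sorted(training, key=lambda r: r["brightness"])
    candidates = [(points[i]["brightness"] + points[i + 1]["brightness"]) / 2
                  for i in range(len(points) - 1)]
    best_cut, best_err = None, len(points) + 1
    for cut in candidates:
        err = sum(1 for r in training
                  if (r["brightness"] >= cut) != (r["label"] == "star"))
        if err < best_err:
            best_cut, best_err = cut, err
    return best_cut

def classify(record, cut):
    # Apply the learned rule to a new, unlabeled record.
    return "star" if record["brightness"] >= cut else "galaxy"

# A small labeled training subset drawn from the "database".
training = [
    {"brightness": 2.0, "label": "galaxy"},
    {"brightness": 3.0, "label": "galaxy"},
    {"brightness": 8.0, "label": "star"},
    {"brightness": 9.5, "label": "star"},
]
cut = fit_threshold(training)
print(classify({"brightness": 7.5}, cut))  # star
```

Once the rules are tuned, each new record needs only one cheap comparison, which is what makes it feasible to keep up with data arriving at terabyte scale.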
This same methodology can be applied to other forms of scientific data, to financial data, and to any other domain where very large databases are maintained. The significance of Data Mining technology is that it can be applied to existing databases and used to categorize and track trends in new data as it is inserted.
A few of my more consuming pastimes include...
I also belong to the following organizations...
For more information on anything listed here, just send mail to... email@example.com.