SRL2004: Statistical Relational Learning and
its Connections to Other Fields

Statistical machine learning is in the midst of a "relational revolution". After many decades of focusing on independent and identically distributed (iid) examples, many researchers are now studying problems in which the examples are linked together into complex networks. These networks can be as simple as sequences and 2-D meshes (such as those arising in part-of-speech tagging and remote sensing) or as complex as citation graphs, the World Wide Web, and relational databases.

Statistical relational learning raises many new challenges and opportunities. Because the statistical model depends on the domain's relational structure, parameters in the model are often tied. This tying has the advantage of making parameter estimation feasible, but it complicates the model search. Because the "features" involve relationships among multiple objects, there is often a need to intelligently construct aggregates and other relational features. Problems arising from linkage and autocorrelation among objects must also be taken into account. Because instances are linked together, classification typically requires complex inference to arrive at a "collective classification", in which the labels predicted for the test instances are determined jointly rather than individually. Finally, unlike iid problems, where the result of learning is a single classifier, relational learning often involves heterogeneous instances, so the result of learning is a set of components (classifiers, probability distributions, etc.) that predict the labels of objects and the relationships between objects.
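To make the collective classification idea concrete, here is a minimal Python sketch that jointly infers labels on a toy linked data set by repeatedly assigning each unlabeled node the majority label of its neighbors. The graph, the observed labels, and the function name are purely illustrative assumptions, not part of any workshop system.

    # Minimal sketch of collective classification on a toy graph.
    # All data and names below are illustrative only.

    import random

    # Toy citation-style graph: node -> list of neighbors
    graph = {
        "a": ["b", "c"],
        "b": ["a", "c"],
        "c": ["a", "b", "d"],
        "d": ["c", "e"],
        "e": ["d"],
    }

    # Observed labels for a few "training" nodes; the rest are unknown.
    observed = {"a": 1, "e": 0}

    def collective_classify(graph, observed, iterations=10):
        """Infer labels jointly: each unlabeled node repeatedly takes the
        majority label of its neighbors (ties resolved arbitrarily)."""
        labels = dict(observed)
        for node in graph:
            # Initialize unknown nodes with a random guess.
            labels.setdefault(node, random.choice([0, 1]))
        for _ in range(iterations):
            for node in graph:
                if node in observed:      # keep observed labels fixed
                    continue
                votes = [labels[nbr] for nbr in graph[node]]
                labels[node] = max(set(votes), key=votes.count)
        return labels

    print(collective_classify(graph, observed))

In practice, the joint inference step is usually carried out with approximate probabilistic inference (e.g., loopy belief propagation or Gibbs sampling) rather than simple neighborhood voting; that is the subject of tutorial (e) below.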

There have been several workshops on relational learning in recent years. The goal of this workshop is to reach out to related fields that have not participated in previous workshops. Specifically, we seek to invite researchers in computer vision, spatial statistics, social network analysis, language modeling and probabilistic inference to attend the workshop and give tutorials on the relational learning problems and techniques developed in their fields.

FORMAT

Because our goal is to build links with other fields, a significant amount of time in the workshop will be devoted to invited tutorials and discussion. Tutorials will include (a) an overview of relational learning, (b) relational learning in spatial statistics, (c) relational learning in social network analysis, (d) relational learning in computer vision, and (e) approximate probabilistic inference for large networks. Focus topics for discussion will include (a) methodology (e.g., how to evaluate machine learning research on linked data when there are connections between the test set and training set, in a principled manner), (b) barriers to progress (e.g., the cost of inference, the need for benchmark data sets), and (c) new application directions (short talks describing interesting new applications). To give participants an opportunity to share their research with others, we plan to have at least one poster session.

Participation is open. Participants are encouraged to submit papers (maximum 6 pages). Accepted papers may be presented orally or as posters.

IMPORTANT DATES

April 2 Workshop papers due
April 19 Author notification
May 7 Final papers
July 8 Workshop

ORGANIZERS

Tom Dietterich, Oregon State University
Lise Getoor, University of Maryland, College Park
Kevin Murphy, MIT AI Lab

PROGRAM COMMITTEE

James Cussens, University of York, UK
Luc De Raedt, Albert-Ludwigs-University, Germany
Pedro Domingos, University of Washington, USA
David Heckerman, Microsoft, USA
David Jensen, University of Massachusetts, Amherst, USA
Michael Jordan, University of California, Berkeley, USA
Daphne Koller, Stanford University, USA
Andrew McCallum, University of Massachusetts, Amherst, USA
Foster Provost, NYU, USA
Stuart Russell, University of California, Berkeley, USA
Taisuke Sato, Tokyo Institute of Technology, Japan
Padhraic Smyth, University of California, Irvine, USA
Ben Taskar, Stanford University, USA
Lyle Ungar, University of Pennsylvania, USA