JavaMemoryModel: Dealing with scopes in real-time Java

From: Doug Lea (dl@cs.oswego.edu)
Date: Thu Jul 06 2000 - 09:24:07 EDT


Bear with me. It takes some set-up to get to the point of this posting.
(Also, this is a little tangential to both java-genericity and jmm, but
I thought people on these lists might be interested.)

The new Real Time Java spec (see the new book, or full text on-line at
http://www.rtj.org) introduces the notion of ScopedMemory areas. The
main goal here is to permit things like stack-allocation in real-time
tasks that cannot afford unpredictability in GC-based memory
allocation. But the constructions seen here introduce some intriguing
possibilities for isolating objects in more general contexts,
especially those used in multithreaded programs and in
multiple-application servers (e.g., those running multiple independent
applets, agents, etc).

The basic idea in RTJ is that you can create MemoryAreas that will be
used for all object allocations performed within a given Runnable's
run method. The most interesting kind of MemoryArea is
ScopedMemory. ScopedMemory areas nest under each other in just the way
you think they should.

In order to maintain safety, RTJ imposes rules that preclude the
possibility of dangling references to extinct memory areas. However,
scopes are not syntactic constructs (although they interact with
them). Instead, you enter a scope by starting the run() method of a
Runnable (via MemoryArea.enter()) or by starting a new RealTimeThread
associated with the memory area. Here's an example that I hope
illustrates everything you need to know for present purposes:

class X {
  X ref;

  static void f() {
    X hX = new X(); // normal, heap-allocated

    MemoryArea a = new VTScopedMemory(...); // a's parent scope is heap

    a.enter(new Runnable() { // sequential scope-entry
      public void run() {

        X aX = new X(); // allocated in scope a

        MemoryArea b = new VTScopedMemory(...); // b's parent scope is a

        new RealTimeThread(b, new Runnable() { // thread-based scope-entry
          public void run() {
            X bX = new X(); // allocated in scope b

            bX.ref = hX; // OK -- scope b can ref heap
            bX.ref = aX; // OK -- scope b can ref outer scope a

            aX.ref = hX; // OK -- scope a can ref heap
            aX.ref = bX; // Error -- scope a cannot ref inner scope b

            hX.ref = aX; // Error -- heap cannot ref scope a
            hX.ref = bX; // Error -- heap cannot ref scope b
          }
        }).start();
      }
    });
  }
}
       
The spec mandates that the above OK/Error checks be present. The rules
amount to:
  p.f = q;
is legal if (in addition to normal type and accessibility checks)
  q's memory area is referenceable from p's memory area.
where
   HeapMemory is referenceable from every memory area,
   so can be treated as the root of all scopes.
and
  Scoped memory area a is referenceable from scoped memory area b
  if b is within a's scope (i.e., a and b are same or a is a scope-ancestor).
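
For concreteness, the referenceability rule can be sketched as a
runtime check over a parent-linked hierarchy of areas. This is a
hypothetical stand-in, not the RTJ API -- the Area class and
referenceable() method below are illustrative names only, with the
heap modeled as the root (parent == null):

```java
// Hypothetical sketch (not the RTJ API): each memory area records a
// link to its parent scope; the heap acts as the root of all scopes.
final class Area {
    final String name;
    final Area parent; // null only for the heap/root area

    Area(String name, Area parent) {
        this.name = name;
        this.parent = parent;
    }

    // q's area is referenceable from p's area iff it is p's own area
    // or one of its scope-ancestors (walking toward the heap/root).
    static boolean referenceable(Area pArea, Area qArea) {
        for (Area s = pArea; s != null; s = s.parent)
            if (s == qArea) return true;
        return false;
    }
}
```

With heap as root, a nested in heap, and b nested in a,
referenceable(b, heap) and referenceable(b, a) hold, while
referenceable(a, b) and referenceable(heap, a) do not -- exactly the
OK/Error pattern in the example above.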

Notes and asides:

  * RTJ "ImmortalMemory" has the same referenceability properties
    as HeapMemory, but this can be safely ignored for now.

  * Conspicuously missing here are MemoryArea subclasses
    that aren't tied to stack-allocation rules -- ones supporting,
    for example, access-controlled "extrusion". Adding these would
    make for some interesting complications.

  * Also surprisingly missing is a MemoryArea that is guaranteed to
    be strictly local to a Thread. (One would expect that the vast
    majority of them would be, but there is no way to enforce this.)

  * MemoryAreas may introduce a couple of minor interactions with Java
    Memory Model visibility rules across threads. (For example,
    MemoryAreas used by Threads must be visible to them. I'm
    not yet sure whether all cases of this are covered.)
      
  * The "public review" period of the RTJ spec is already over, but
    the authors have said that a few changes may be expected before final
    approval later this year, in part to reflect experiences in the
    current attempt to build a Reference Implementation.

All of this leads to some new challenges. The need to perform dynamic
scope-checks on every reference assignment in a program could cripple
performance in some applications. And perhaps more importantly,
relying on dynamic checks is not particularly helpful for RealTime
programmers who require predictable performance, and so cannot afford
the surprises that occur when such checks fail at run time.

The text of the RTJ spec mentions the resulting need for tools. But
in Java and most languages, the most productive approach to solving
such problems has been to somehow rely on types and type checking.
Type (and related accessibility) checks are the only kinds of checks
that Java has been designed for. Integrating scopes and types looks
far from easy, and might turn out to be a bad idea, but seems worth
exploring.

Consider an approach metaphorically similar to GJ. Here, every class
is actually treated as if it were parameterized under a given
MemoryArea, as in class X above appearing as ...

class X[MemoryArea scope] {
  X[scope] ref;
  ...
}

... where you impose the above rules for assignments across different
scope parameters. It doesn't look hard to devise a type hierarchy here
that transforms these into more ordinary-looking typecheck rules for
fields, although method arguments and results are still problematic.
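
As a rough illustration of how such scope parameters could reduce to
ordinary subtyping -- purely my sketch, using hypothetical marker
interfaces that exist nowhere in RTJ -- suppose each inner scope's
marker type extends its parent's. Then "same scope or a
scope-ancestor" becomes an ordinary `? super` bound:

```java
// Hypothetical encoding (nothing here is RTJ API): scope instances
// promoted to marker types, each inner scope extending its parent.
interface Scope {}
interface Heap extends Scope {}    // root of all scopes
interface ScopeA extends Heap {}   // a's parent scope is heap
interface ScopeB extends ScopeA {} // b's parent scope is a

class Obj<S extends Scope> {
    // A field may reference the same scope or any scope-ancestor,
    // i.e. any T with S <: T -- which is exactly "? super S".
    Obj<? super S> ref;
}
```

With hX of type Obj<Heap>, aX of type Obj<ScopeA>, and bX of type
Obj<ScopeB>, the assignments bX.ref = aX, bX.ref = hX, and
aX.ref = hX typecheck, while aX.ref = bX and hX.ref = aX are rejected
at compile time -- mirroring the dynamic rules above.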

But there are of course some big differences from ordinary generic
types, mainly stemming from the fact that scopes are instances of
memory areas, not types themselves. In some cases (like the example
above), it would be fairly easy to use a little bit of flow analysis
to "promote" scope instances to types, but in many other cases this
would not be possible.

The only way out is some kind of "soft-typing" (alternatively viewed as
an "extended static checking") approach that falls back on dynamic
checks when necessary. The "when necessary" is troublesome, since it
will depend in part on how much static analysis you can afford to do
here. And the current Java verifier does not understand at all how to
accept some static judgements while including dynamic checks for
others.

However, as a first approximation, this seems to fit well under the GJ
approach of how to evolve Java in a backwards-compatible manner, which
seems like the only viable plan of attack. Without static analysis, it
amounts to a form of type-erasure, requiring full dynamic checks. But
it also admits all sorts of future improvements in compile-time,
load-time, and JIT-time analysis and support.

These issues are related to work on Region analysis in ML, and escape
analysis in Java. (Aside: note that scopes offer several ways to
improve accuracy and reduce analysis times of escape analysis.) They
are also very closely related to work on aliasing and access
control. Of these, the most similar approach I know is Peter Mueller's
"Universes" type system for an extended form of Java. See
  http://www.informatik.fernuni-hagen.de/import/pi5/veroeffentlichung/techreport263/poe_report.html
Universes relies on additional programmer-specified declarations that
wouldn't be available here.

But I'm posting this message because, even though I find the issues
very interesting and pragmatically important, I don't have any
particularly good concrete ideas about how to go about dealing with
them. Maybe you do!

-- 
Doug Lea, Computer Science Department, SUNY Oswego, Oswego, NY 13126 USA
dl@cs.oswego.edu 315-341-2688 FAX:315-341-5424 http://gee.cs.oswego.edu/  
-------------------------------
JavaMemoryModel mailing list - http://www.cs.umd.edu/~pugh/java/memoryModel
