Re: JavaMemoryModel: Why CnC

From: victor.luchangco@sun.com
Date: Tue Jul 29 2003 - 10:55:27 EDT


First, I also want to thank Jeremy and Bill for their work pushing the
whole JSR-133 effort, and especially in sorting through what kinds of
behaviors should or should not be allowed. I feel very ill-equipped
for that task, and I'm glad that they have worked so diligently on it.

Second, I think the JMM proposal should be conservative, so it can be
refined in the future, maybe in "minor" releases, as the Java
community builds up a wealth of experience from both programmers and
implementors. Unfortunately, "conservative" means different things
for programmers than for implementors. Specifically, a conservative
model for programmers (what Sarita has been pushing) makes minimal
guarantees, on the theory that most programmers should not write
incorrectly synchronized programs at all, and no programmer should
have to do so very often. On the
other hand, a conservative model for implementors is stronger, giving
programmers as many guarantees as possible.
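
To make "incorrectly synchronized" concrete, here is the sort of
fragment I have in mind (a made-up example, not taken from any of the
proposals): two threads communicate through plain fields, with no
locks and no volatiles, so there is a data race on both fields.

    class Racy {
        static int data = 0;
        static boolean ready = false;

        static void writer() {          // run in thread 1
            data = 42;
            ready = true;
        }

        static void reader() {          // run in thread 2
            if (ready) {
                // Under a weak programmer model this read may see 0.
                // Declaring `ready` volatile (or guarding both fields
                // with a common lock) would make the program correctly
                // synchronized and guarantee it sees 42.
                System.out.println(data);
            }
        }
    }

A conservative programmer model would simply tell programmers not to
write such code; the question is how much the implementor model should
nonetheless promise about it.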

Would it be possible to propose two models, one "for programmers" and
the other "for implementors"? Such a proposal could have a flavor
like that of deprecation: Although Java-compliant implementations
will need to guarantee the stronger model, programmers should rely
only on the weaker model if they want their code to be compatible
with future versions of Java.

A big advantage of this approach is that it allows us to gain
experience in this area, where I think our ideas about what properties
are hard to guarantee (for implementors) and what properties are hard
to live without (for programmers) are mostly speculative. (No offense
is intended to the many who, I'm sure, know a lot more and have thought
a lot more about implementing the memory model than I have.) Building
on this experience, we can narrow the gap between the models in the
future, and we can do so without significantly disrupting either the
implementors or the programmers because old code (both Java programs
and implementations) will not be broken. So these changes can be made
on an as-needed basis in minor releases.

There are two main disadvantages to this approach: First, it requires
that we come up with two models, and that we check carefully that one
is stronger than the other. Second, having two models may be confusing
to many (though programmers should only see the programmer model, so
perhaps the confusion can be contained), and it invites abuse.

The first disadvantage is actually an advantage in a way: We don't
need to agree about whether a property is necessary or not. If we
cannot agree, then it should be guaranteed by the implementor model
but not by the programmer model. Of course, this solution has the
effect of making both sides unhappy. But they may be comforted that,
if it turns out the property is really too difficult to guarantee, or
conversely, if it is too hard to live without, it can be changed in
the future. (I do *not* advocate leaving too many of these decisions
unmade, just those for which there are legitimate arguments on either
side that may be resolved with more experience.) For example, Bill
thinks that "the only limitation on compilers in synchronization-free
code is that they may not introduce additional reads or writes of
shared variables if the additional reads/writes could be detected."
Sarita says, "I can think of a future compiler that [does some
optimization]. Sounds hard to believe, but how can we predict the
future?" I have no clue how to resolve this conflict.

To lessen the difficulties of having two models, and to make future
refinements easier, it would be good if the two models were similar
in form, and had a few dimensions in which they could be "tweaked".
I fully expect that some of these dimensions would be tweaked fairly
quickly (e.g., if trying to guarantee certain properties turns out to
destroy performance), but other issues, especially those concerning
programmer ease of use, will take longer to resolve.

Making the models similar also reduces the second difficulty. In
any case, I'm not sure how severe this difficulty will be, because,
as Doug points out, most people will read his A document (for all
programmers), and the two models should be identical at that level.
The B documents would describe as much as appropriate of the
programmer model, and the C document would do the same for the
implementor model. D would probably be a careful description of
the programmer model, and E would have both.

As an aside, although I agree that documents A, B and C are the most
important, I believe they must be built on a solid foundation that
would be a good E document. All my experience with memory models thus
far indicates that we won't get it right if we don't have a fully
formal spec (and even then, we'll probably still have some mistakes).
Thus, although probably almost no one will read it, I think E should
be paramount in our consideration. However, I think Doug's main point
supports my proposal to have two models--most people will only need to
look at one.

As another aside, a further advantage of having two models is that
the requirements for simplicity differ for the two intended
audiences. For the most part, we want the programmer's model to be
as simple as possible. Of course, any difficulty can be masked by
"guidelines" that describe only the easy cases (e.g., correctly
synchronized code). But we can allow the implementor's model to be
more complicated and subtle, as the audience for that model should
be more sophisticated. (In fact, this argument suggests that even
the style of the two models might be different, as certain
formalizations may be more conducive to being embodied in an
implementation than others. But following this route would forfeit
the advantage of similarity mentioned above, and would require a
careful proof of the relation between the two models.)

If we do choose to have two models, one desirable (but perhaps
unrealistic) tool would be a program that could check whether
a Java program has behaviors that are allowed by the programmer
model but not by the implementor model. A perfect check is probably
at least NP-hard, if not undecidable, but a conservative one may
not be too bad--in fact, data-race detectors do essentially this.
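
To sketch what I mean (purely my own illustration, not an existing
tool): a lockset-style detector tracks, for each shared location, the
set of locks held on every access, and warns when that set becomes
empty. Programs it passes are consistently locked, hence data-race
free, and on data-race-free programs the two models should agree, so
the check is conservative.

    import java.util.*;

    // Minimal lockset-style (Eraser-like) sketch.  The location and
    // lock names stand for whatever an instrumented VM would report;
    // this illustrates the idea, it is not a tool.
    class LocksetChecker {
        private final Map<String, Set<String>> candidates = new HashMap<>();

        // Called on every access to a shared location, with the set of
        // locks the accessing thread holds at that moment.
        void onAccess(String location, Set<String> locksHeld) {
            Set<String> locks = candidates.get(location);
            if (locks == null) {
                // First access: all currently held locks are candidates.
                candidates.put(location, new HashSet<>(locksHeld));
                return;
            }
            locks.retainAll(locksHeld);        // keep only common locks
            if (locks.isEmpty()) {
                System.out.println("Possible data race on " + location);
            }
        }
    }

    // Two accesses with no lock in common are flagged:
    //   checker.onAccess("Counter.value", Set.of("lockA"));
    //   checker.onAccess("Counter.value", Set.of("lockB"));  // warning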

Finally (this mail is much longer than I intended!), to reiterate the
answer to the objection that the two models are really the worst of
both worlds (i.e., weak guarantees to the programmer and strong
requirements for the implementor), I think it's important to get
something out there so that people can start to actually use and
implement the model, and being conservative allows us to correct
our mistakes relatively painlessly in the future. I don't think
anyone is trying to propose a model that cannot be implemented,
or that would be completely unworkable for programmers, but rather
we are negotiating where in the middle to put the stake. I'm
suggesting that perhaps we can delay that decision, rather than
make it in haste.

                                Victor
-------------------------------
JavaMemoryModel mailing list - http://www.cs.umd.edu/~pugh/java/memoryModel


