First off, as Bill Pugh referred to our CRF-based Java Memory Model
paper in his mail on Volatile semantics, I thought I should note that
we (Jan-Willem Maessen, Arvind, and Xiaowei Shen) are in the process
of revising the paper. We'll be making further changes to our
proposed model based on some of the current discussion. Here's a
pointer to a constantly-changing revision. Don't link to this on your
web page; it'll go away and be replaced by a permanent publication
link when we're happy with the model we've got:
The "official" (but currently outdated) version is still at:
There'll be a revised version there soon.
Our focus is on capturing potential instruction reordering at the
source code level. This actually makes reasoning about most of the
litmus tests pretty easy once you've become comfortable with our
approach.
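As a concrete illustration (my own sketch, not taken from the paper), here is the classic reordering litmus test in Java; the class and method names are mine:

```java
// Classic reordering litmus test (names are illustrative).
// Question for any proposed memory model: if thread1() and thread2()
// run concurrently, can they BOTH return 0?  That outcome requires
// each thread's write to be reordered after its subsequent read.
class LitmusTest {
    int x = 0;
    int y = 0;

    int thread1() {   // intended to run concurrently with thread2()
        x = 1;        // write x
        return y;     // r1 = read y
    }

    int thread2() {
        y = 1;        // write y
        return x;     // r2 = read x
    }
}
```

If the model lets each write be delayed past the following read, the outcome r1 == r2 == 0 is observable; a model that forbids store-load reordering rules it out.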
Because we give the model by translation into CRF, it's very easy for
us to make changes. If you don't understand CRF, read the paper; it
describes what you'll need to know. I'm currently encoding a spectrum
of possible models; hopefully this will give us an idea of what the
tradeoffs really are, and we can settle on one we're happy with.
One particular attraction for us is the ability for the compiler (and
possibly the architecture) to do fine-grained memory synchronization.
David Bacon alluded to this idea in his message on finalization. The
advantage is that the compiler has much greater scope for reordering
around memory fences.
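The post long predates any standard Java API for this, but the idea of fine-grained fences can be sketched with the modern VarHandle fence methods (Java 9+); the class and field names here are mine, purely for illustration:

```java
import java.lang.invoke.VarHandle;

// Sketch of fine-grained fences (names are illustrative).
// releaseFence() only orders prior loads/stores before later stores;
// acquireFence() only orders prior loads before later loads/stores.
// Neither is a full barrier, so a compiler retains more freedom to
// reorder unrelated operations around them than around fullFence().
class Publisher {
    int payload;
    boolean published;

    void publish(int v) {
        payload = v;
        VarHandle.releaseFence(); // weaker than VarHandle.fullFence()
        published = true;
    }

    int consume() {
        if (published) {
            VarHandle.acquireFence();
            return payload;
        }
        return -1; // not yet published
    }
}
```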
It pains me a great deal to see discussion of memory barriers which
assumes that no operations may ever pass them at all. This may be
true for a machine-language memory barrier on some architectures, but
it's not at all true for the compiler which emits them, or for
architectures with finer-grained barriers such as PowerPC, or for DSM
implementations of Java. It's a huge mistake to let this
architectural view cloud our thinking.
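To make the point concrete, here is a hedged sketch (my own example, again using the Java 9+ VarHandle fences) of an operation that a fine-grained barrier does not block:

```java
import java.lang.invoke.VarHandle;

// Illustrative only: a fence that is NOT a wall for all operations.
class FenceFreedom {
    int a;
    int b;

    int example() {
        int r = a;                  // load a
        VarHandle.loadLoadFence();  // constrains only load-load ordering
        b = 1;                      // independent store: a compiler that
                                    // understands the fence's semantics may
                                    // move it above the fence, whereas a
                                    // "nothing passes" barrier would pin it
        return r;
    }
}
```

Either scheduling produces the same result here; the point is that treating every barrier as impassable forfeits that freedom.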
JavaMemoryModel mailing list - http://www.cs.umd.edu/~pugh/java/memoryModel
This archive was generated by hypermail 2b29 : Thu Oct 13 2005 - 07:00:24 EDT