I just wanted to give everyone an update on what has been happening
the past two months.
As you know, we had decided to relax volatile variables so that they
weren't quite sequentially consistent. In particular, consider the
following example (Ex1), where a and b are volatile and initially 0:

Thread 1:
a = 1

Thread 2:
u = a
v = b

Thread 3:
b = 1

Thread 4:
w = b
x = a
We allowed the behavior u = 1 and v = 0 (suggesting the write to a
occurred before the write to b) and w = 1 and x = 0 (suggesting that
the write to b occurred before the write to a). This relaxation was
made because some people believed it was needed for efficient
implementation on future platforms, and while it complicates the
memory model, it didn't seem to cause any problems for programmers
trying to use volatiles to build reliable and efficient software.
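As a concrete sketch of Ex1, the four threads can be written against volatile fields in Java. The class and method names here are ours, not from the JSR; the point is only to show the shape of the test. Under the relaxed rules, u=1,v=0 and w=1,x=0 could both show up in a single run, a combination that full sequential consistency forbids.

```java
// Hypothetical Java sketch of Ex1 (names are ours, not from the JSR).
// Two writer threads and two reader threads share the volatile fields a, b.
public class IRIW {
    static volatile int a = 0, b = 0;
    static int u, v, w, x;

    // Run the four-thread example once and wait for all threads to finish.
    static void runOnce() throws InterruptedException {
        Thread t1 = new Thread(() -> a = 1);
        Thread t2 = new Thread(() -> { u = a; v = b; });
        Thread t3 = new Thread(() -> b = 1);
        Thread t4 = new Thread(() -> { w = b; x = a; });
        t1.start(); t2.start(); t3.start(); t4.start();
        t1.join(); t2.join(); t3.join(); t4.join();
    }

    public static void main(String[] args) throws InterruptedException {
        runOnce();
        // Under the relaxed semantics discussed above, u=1,v=0 and w=1,x=0
        // could both occur in the same run; sequential consistency would
        // forbid that combination.
        System.out.println("u=" + u + " v=" + v + " w=" + w + " x=" + x);
    }
}
```

Any single run only samples one interleaving, so a test harness would have to run this many times to probe for the relaxed outcome.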
Sarita Adve is unconvinced that this relaxation is needed for
efficient implementation on future platforms, and is talking with
Rick Hudson and Hans Boehm about the issue.
We've put together a nice simulator, but we haven't gotten feedback
from anyone who has used it; we're not sure if anyone else has even
tried it. We really need feedback on the simulator, because we feel
it is important that other people be able to use it to understand
parts of our semantics. Please give it a try.
TEMPORAL LOOPS AND OTHER PARADOXES
First, to get yourself in the mood for thinking about this stuff, go
see the movie Minority Report. Thinking about temporal loops,
precognition and predestination will get you in the right mood.
In talking with Sarita Adve, we found a problem with our semantics
for prescient writes. We also found a problem with her proposal for
"not out of thin air" safety.
(In all of these examples, assume variables are normal variables and
initially 0 unless otherwise specified).
The problem with our semantics is shown by the following example (Ex2):

Thread 1:
x = 1

Thread 2:
r1 = z
r2 = x
y = r2

Thread 3:
r3 = y
z = r3
The question is: can this program result in r1=1?
We need this result to be possible. A compiler could easily decide to
reorder the read of z in thread 2 to be the last action in thread 2,
and then a sequentially consistent execution would allow r1 = 1.
Under our previous semantics, however, it could not. For that to
happen, we would need to perform an initWrite of y=1 in thread 2
before reading z. But at that point, we can't guarantee that a 1 will
in fact be written to y. We can't use a guaranteed read because
guaranteed reads are restricted to see properly synchronized reads.
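One way to see why r1 = 1 has to be allowed is to trace, in a single thread, one sequentially consistent interleaving of the transformed program, with thread 2's read of z moved to the end. This trace is our own illustration of the argument, not part of the spec:

```java
// Single-threaded trace of one SC interleaving of the *transformed*
// version of Ex2, where the compiler has moved thread 2's read of z
// to be the last action in thread 2. Our own illustration.
public class Ex2Trace {
    static int trace() {
        int x = 0, y = 0, z = 0;
        int r1, r2, r3;

        x = 1;      // Thread 1: write x
        r2 = x;     // Thread 2: reads 1
        y = r2;     // Thread 2: writes y = 1
        r3 = y;     // Thread 3: reads 1
        z = r3;     // Thread 3: writes z = 1
        r1 = z;     // Thread 2's deferred read of z: sees 1
        return r1;
    }

    public static void main(String[] args) {
        System.out.println("r1 = " + trace());  // prints r1 = 1
    }
}
```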
Sarita's "not out of thin air" proposal has the problem that it
depends upon being able to statically determine when one statement is
dependent upon another, without any reference to the memory model or
to compiler analysis. For example, consider (Ex3):
Thread 1:
r1 = x
r2 = r1*r1 - r1 + 1
y = r2

Thread 2:
r3 = y
r4 = r3*r3 - r3 + 1
x = r4
Can this program result in r1 = r3 = 1?
If the compiler performs an analysis that shows that x and y can only
have the value 0 or 1, then the compiler can determine that r2 and r4
are always 1, reorder the writes to x and y, and get the result r1 =
r3 = 1.
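The arithmetic behind that analysis is easy to check: for f(r) = r*r - r + 1, both f(0) and f(1) equal 1, so if x and y only ever hold 0 or 1, both threads always compute 1 and the writes can be hoisted. A small check of this claim (our own illustration, names ours):

```java
public class Ex3Check {
    // f(r) = r*r - r + 1, the expression used in both threads of Ex3.
    static int f(int r) { return r * r - r + 1; }

    public static void main(String[] args) {
        // If x and y can only hold 0 or 1, both threads always compute 1,
        // so a compiler may treat the writes y = r2 and x = r4 as y = 1
        // and x = 1, and reorder them before the reads.
        System.out.println("f(0) = " + f(0) + ", f(1) = " + f(1));
        // prints f(0) = 1, f(1) = 1
    }
}
```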
Jeremy and I have been banging our heads on this for the past month.
We have solutions that fix these problems, but we need to think about
them for another week or so before we feel really comfortable with
them.
These are the really hard cases, and the cases where pretty much
every other approach has fallen down. I don't know of any other
multithreaded semantics that would allow r1=r3=1 for Ex3. CRF does
not, and neither does the Utah reformulation of our semantics.
We are making a few changes to the final field semantics, both to
allow for the possibility that final fields can change, and to allow
for some important use cases where references to objects are stored
during their construction. We had been pursuing this, but then got
distracted into dealing with temporal loops and predestination.
ODDS AND ENDS
We cleaned up a few details, such as a missing action in
initVolatileWrite (Yue Yang noted this in his paper; we had actually
already found that bug when we did our simulator implementation).
A FORMAL PROOF
We are working on something that has been missing: a formal proof
that our memory model allows standard compiler reordering
transformations. Working out this proof has been very useful in
helping us put the memory model on a firmer footing.
EXPECTED DATE FOR PUBLIC REVIEW
Jeremy and I are focused on this now, and we hope to have the JSR
ready for public review by the end of July.
JavaMemoryModel mailing list - http://www.cs.umd.edu/~pugh/java/memoryModel