Thanks Jeremy. I'm actually fairly happy with those answers. My real goal here is
to come up with an alternate description which I can understand more easily, and
which makes the implementation consequences a bit clearer. Here's an attempt:
1) We define the relation ddd as follows:
We say b ddd1 a ("b is definitely data dependent in one step on a") if action a either
- copies a value into a register r1 which is one of the inputs to b, or
- loads a value into a register r1 which is one of the inputs to b, and this is
the first time that value is read in the thread. (????)
The relation ddd is the reflexive-transitive closure of ddd1. (More on the terminology later.)
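To make the chain concrete, here is a hedged Java-level sketch (class and field names are invented for illustration); the comments map each source-level load onto a ddd1 edge from the definition:

```java
class T { int x; }

class C {
    final int f;
    C(int f) { this.f = f; }
}

class DddChain {
    static T[] A = new T[8];

    // Each load consumes a value produced by the previous load, so each
    // step is a ddd1 edge; by transitivity, the read of t.x is ddd on the
    // load of c.
    static int read(C c) {
        int i = c.f;   // ddd1 on the load of c (c's value is an input here)
        T t = A[i];    // ddd1 on the read of c.f (i feeds the array index)
        return t.x;    // ddd1 on the load of t; hence this read ddd the load of c
    }
}
```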
2) A write action w in t0 is seen by a read action r in t1 through a final
field x.f if there are write actions wf and wx in thread t0 writing x.f and x respectively
and read actions rx and rf in a different thread t1 reading x and x.f respectively, such that
- wf and w happen before the constructor for x (i.e. the freeze action on x.f) completes,
and the freeze happens before wx.
- r ddd rf and rf ddd rx, i.e. t1 reads x (the published pointer), then x.f (the final field),
and then performs r, which is definitely data dependent on the final field x.f.
- rx reads the value written by wx.
3) If a write action w is seen by a read r through a final field, then r must see w or
another write that happened no earlier. This is the only additional constraint imposed by final fields.
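As a sanity check on the description above, here is a hedged Java sketch (names invented) labeling each source-level operation with the action it plays in (2):

```java
class T {
    int x;
    T() { x = 42; }            // w: an ordinary write that happens before the freeze
}

class C {
    final T f;
    C(T t) { f = t; }          // wf: write to the final field x.f; freeze at constructor exit
}

class Publish {
    static C q;                // deliberately not volatile: only final-field rules apply

    static void writer() {     // runs in thread t0
        q = new C(new T());    // wx: publishes the object after the freeze
    }

    static Integer reader() {  // runs in thread t1
        C c = q;               // rx: reads the published pointer (sees wx, or null)
        if (c == null) return null;
        T t = c.f;             // rf: ddd on rx
        return t.x;            // r: ddd on rf, so r must see w (i.e. 42)
    }
}
```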
Is this correct? If not, can it be tweaked to become correct?
If it were, that would have several nice consequences:
1) I can understand it. Of course that may not apply to anyone else. (I find the recursive
definitions in the current documents very hard to follow. The informal version is nice and clear,
but isn't very precise about the case in which there is a dereference chain between the
final field and the field I'm interested in.)
2) I think the implementation consequences become clearer. On a machine like IA64 where data
dependency implies memory ordering, the optimizer must ensure that if b ddd a, then the generated
code contains a data dependency of b on a. (I suspect this is mostly automatic, but I can
certainly think of speculative techniques that break it.) And I need a barrier at the end of
a constructor, which I may sometimes be able to optimize into release stores of all published
pointers to the object.
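As a sketch of that second point (the barrier placement here is conceptual; the JIT, not the programmer, inserts it, and the class names are invented):

```java
class Box {
    final int v;
    Box(int v) {
        this.v = v;            // the final-field write (wf)
        // Conceptually, freeze Box.v occurs here, at constructor exit.
        // Even on hardware like IA64 where data dependency orders the
        // reader's loads, the generated code still needs a StoreStore
        // barrier at this point -- or, alternatively, the publishing
        // store below can be compiled as a release store.
    }
}

class Pub {
    static Box shared;
    static void publish() { shared = new Box(7); }  // the published pointer (wx)
}
```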
3) It points out that this model can be somewhat arbitrarily tweaked by adjusting the definition
of ddd. If we generalize it, we get stronger guarantees for the programmer, but place a
stronger burden on the optimizer (at least for machines that respect data dependency for memory
ordering), since the optimizer must now preserve a larger set of dependencies as data dependencies
in the object code. For example, if we wanted to adjust the definition to include the array
case below, it would preclude the optimizer from eliminating the subscript operation if the array
size is known to be one. (I'm not convinced this knob needs to be used at all. But it helps
me to understand that it's there.)
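A hedged sketch of that knob (names invented): with the current ddd, an optimizer that proves A.length == 1 may legally replace A[c.f] with A[0], which removes the data dependence of the load on c.f; a generalized ddd covering array indexing would forbid that rewrite.

```java
class C {
    final int f;
    C(int f) { this.f = f; }
}

class Knob {
    static int[] A = new int[1];   // the optimizer knows the length is 1

    static int dependent(C c)   { return A[c.f]; } // load address depends on c.f
    static int independent(C c) { return A[0];   } // same result, dependence gone
}
```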
4) If I'm right so far, there seems to be an issue in the definition of ddd with precisely what
constitutes a prior read of the value along the dereference chain, and under what circumstances
the compiler is allowed to do CSE to instead use previously read values. I haven't yet fully
understood the answer you propose there. And as esoteric as it is, I think this actually
matters in real programs.
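For what it's worth, here is the kind of case I have in mind (hedged, names invented): if the compiler can prove two references are equal, common-subexpression elimination might reuse an earlier load and silently drop the dependence chain.

```java
class C {
    final int f;
    C(int f) { this.f = f; }
}

class Cse {
    static C q;

    static int reader(C alias) {
        int a = alias.f;   // an earlier read of the same object via another path
        C c = q;           // rx: the read of the published pointer
        int b = c.f;       // rf: if CSE proves alias == c and substitutes 'a',
                           // the chain r ddd rx is gone at the machine level
        return b;
    }
}
```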
> -----Original Message-----
> From: Jeremy Manson [mailto:firstname.lastname@example.org]
> Sent: Tuesday, July 29, 2003 5:23 PM
> To: Boehm, Hans
> Cc: email@example.com
> Subject: Re: JavaMemoryModel: final fields, again
> > Assume T has a field x, and o and p are of class C with final field f.
> > Initially q is null.
> > Assume thread 1 does
> > o.f = new T();
> > freeze o.f;
> > q = o;
> > If I understand correctly, thread 2 is guaranteed to either see q ==
> > null, or to see q.f.x as written by the T constructor?
> > Now assume I have a static array A of Ts, and a static index variable
> > next_t. Assume final field f now holds an integer index instead of a
> > reference. Effectively I'm just replacing a reference by an integer
> > array index.
> > Assume thread 1b does:
> > o.f = next_t++; // reserve slot.
> > A[o.f] = new T();
> > freeze o.f;
> > q = o;
> > Now if thread 2b sees a non-null q, there is no guarantee that the
> > initialization by T() of A[q.f].x is visible, since A[q.f].x is not
> > reached by following pointers from q.f?
> Sad, but true. Otherwise, it gets bad to implement on extremely weak
> memory orders and it kills some potential compiler optimizations.
> On the up side, if you want more guarantees, you can:
> 1) Set a final field (call it final T[] arr) to point to A, and
> 2) Make sure Thread 2b never looks at A through any reference other than arr.
> This ensures that you will see the up to date values as of the freeze.
> Note that if Thread 2b ever looks at A through the reference A instead of
> the reference arr, then you don't have your guarantees anymore.
JavaMemoryModel mailing list - http://www.cs.umd.edu/~pugh/java/memoryModel
This archive was generated by hypermail 2b29 : Thu Oct 13 2005 - 07:00:48 EDT