At 02:13 PM 30/04/2004 -0400, Bill Pugh wrote:
>I don't remember any strong justification for this either way.
>The guarantees you get from forbidding r = 0 are so weak that
>it is difficult to use them to build correct finalization code.
>On the other hand, it is hard to imagine compiler or processor
>optimizations that could result in r = 0.
>If we did want to forbid r = 0, we could add:
>>If an object $X$ is marked as unreachable at $d_i$,
>> and there are two conflicting writes $w_1$ and $w_2$ to a field
>> or element of $X$, both $w_1$ and $w_2$ come-before $d_i$,
>> and $w_1 \hb w_2$, then no read that comes-after $d_i$
>> sees the write $w_1$.
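Concretely, the r = 0 case under discussion looks like the sketch below (hypothetical class and field names). The default-value write of 0 to $x$ is $w_1$, the constructor's write is $w_2$, $w_1 \hb w_2$, yet without the proposed rule nothing orders the finalizer's plain read after $w_2$:

```java
// Hypothetical sketch of the r = 0 case the proposed rule would forbid.
class Unreachable {
    int x;                          // default-value write w1: x = 0
    Unreachable() { x = 42; }       // constructor write w2; w1 hb w2
    @Override
    protected void finalize() {
        int r = x;                  // plain read in the finalizer thread:
                                    // no hb edge orders it after w2, so
                                    // absent the rule, r = 0 is allowed
        System.out.println("finalizer saw r = " + r);
    }
}

public class RZeroDemo {
    public static void main(String[] args) throws InterruptedException {
        new Unreachable();          // immediately unreachable
        System.gc();                // request collection (best effort;
                                    // the finalizer may never run)
        Thread.sleep(100);
        System.out.println("done");
    }
}
```

In practice one expects to observe r = 42, which matches the observation below that it is hard to imagine real optimizations producing r = 0; the point is only that the model as stated does not forbid it.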
If I'm reading this right, for the non-volatile case it amounts to saying
that the finalizer sees all of the writes to its object, but not
necessarily writes that _hb_ them. So it is better to make clear to
programmers that the non-volatile case is not drf. Guaranteeing the writes
to the object probably doesn't make the program correct in most interesting
cases, and would simply engender a false sense of security. It also
complicates the model for little tangible benefit.
As a further observation, a finalizer that accessed only fields of its own
object would still not strictly be drf according to the rules, because of
the lack of a _hb_ edge between the writes and the finalizer's reads.
We'd effectively be saying that this is a case where programmers are
expected to write code that contains a data race. Not a good policy, IMHO.
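For contrast, a sketch of how such a finalizer could be made drf under the ordinary rules (illustrative only, with hypothetical names; the thread itself recommends no idiom): route both the writes and the finalizer's reads through the same lock, so the monitor unlock/lock pair supplies the _hb_ edge that the plain-field version lacks.

```java
// Sketch: a finalizer that touches only its own fields, made drf by
// performing the writes and the finalizer's reads under one lock.
// The constructor's unlock happens-before the finalizer thread's
// subsequent lock of the same monitor, giving an hb edge to the read.
class LockedResource {
    private final Object lock = new Object();
    private int state;

    LockedResource(int s) {
        synchronized (lock) { state = s; }  // write under the lock
    }

    int peek() {
        synchronized (lock) { return state; }
    }

    @Override
    protected void finalize() {
        synchronized (lock) {               // read under the same lock
            System.out.println("finalizer saw state = " + state);
        }
    }
}

public class DrfFinalizer {
    public static void main(String[] args) {
        LockedResource r = new LockedResource(7);
        System.out.println("peek = " + r.peek());
    }
}
```

Without the synchronized blocks, the finalizer's read of `state` races with the constructor's write, which is exactly the situation described above.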
JavaMemoryModel mailing list - http://www.cs.umd.edu/~pugh/java/memoryModel
This archive was generated by hypermail 2b29 : Thu Oct 13 2005 - 07:01:06 EDT