Re: JavaMemoryModel: New approach to defining the Java Memory Model

From: Yue Yang
Date: Thu Jul 25 2002 - 16:57:29 EDT

I like the new approach better! I think the notation using happens-before
edges, or equivalently, ordering constraints, offers people more direct
intuition regarding the underlying requirement. I have several suggestions
as follows.

* Directed graph vs. partial orders

Fundamentally, these two notations express the same thing, although the
latter might be more appealing to a general audience.

A partial order (strictly speaking, an irreflexive partial order) for
Location Consistency, represented by =>, can be defined roughly as
follows:

Instructions i1 and i2 are ordered, i.e., i1 => i2, iff:
- i1 and i2 are ordered by the program order of some thread, or
- i1 and i2 are synchronized by the same lock or the same volatile
variable, or
- there exists another instruction i such that i1 => i => i2.

Note that a similar definition has been used to define Causal
Consistency, except that a "write-into" order is used in the second case.

This ordering requirement sets up all the ordering paths, or
happens-before edges, for an execution trace. The requirement of
_serialization_ can then be applied to filter out overwritten writes,
i.e., a read gets its value from the most recent previous write to the
same variable along the ordering path.

* Reordering Lock->ReadNormal

I understand that figure 3 was not exhaustive. The following observation
addresses the "completeness" aspect in case it might be overlooked by some
readers.

Since operations are only synchronized through the same lock, thread-local
or nested locks have no effect and can be removed. Therefore, a later
write instruction should not be "blocked" by such "redundant" locks. As a
result, the entry for Lock->ReadNormal (and similarly,
ReadVolatile->ReadNormal) should be conditional on whether the lock is
redundant. For any implementation, a global analysis is needed to take
full advantage of the relaxations allowed by the spec.
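A small sketch of what "redundant" means here (names are mine): a lock on
an object that never escapes its creating thread can never order
operations across threads, so an implementation is free to move later
accesses to normal variables past it.

```java
// A lock acquired only by one thread ("thread-local") creates no
// cross-thread ordering, so the JVM may elide it and reorder around it.
public class RedundantLock {
    static int compute(int x) {
        Object local = new Object();    // never escapes this thread
        synchronized (local) {          // redundant: no other thread can contend
            x += 1;
        }
        return x;                       // a later read of a normal variable may
                                        // legally be moved above the lock
    }

    public static void main(String[] args) {
        System.out.println(compute(41));
    }
}
```

Detecting that `local` never escapes is exactly the kind of global
(escape) analysis the paragraph above refers to.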

* Validating PUR vs. Specifying bypassing policy

It seems to me that the mechanism of validating PUR against a valid
execution trace is less intuitive than specifying the memory model
directly with a bypassing table similar to Table 3, because it is really
hard to reason about whether a trace is legal when its legality relies on
the validity of another possible trace. I would suggest making the
reordering table part of the spec; people can then reason about all the
interleavings based on it. This is probably going to be a major shift in
the notation, but as demonstrated by our UMM framework, it is quite
elegant. (For those of you who are not aware of our work on UMM, please
check our web site at

* Non-operational vs. operational spec

You mentioned that you've moved away from an operational model. I think
both styles have their merits. A non-operational spec, if done properly,
can be more concise and easier to understand at a high level. An
operational spec, especially one based on guarded commands, can be more
precise and easier to verify, and it doesn't necessarily have to be more
complicated. So an operational spec could at least serve as a useful
companion for system designers and programmers.
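For readers unfamiliar with the guarded-command style, here is a
hypothetical sketch of its flavor (this is my own toy transition rule,
not UMM's actual rules): each rule pairs a guard, stating when a
transition is enabled, with an action that updates the abstract machine
state, and the spec is the set of such rules.

```java
// Toy guarded-command transition: commit a buffered write to memory.
// Guard:  the write buffer is non-empty.
// Action: move the oldest buffered write into shared memory.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class GuardedCommit {
    static class Write {
        final String var; final int value;
        Write(String var, int value) { this.var = var; this.value = value; }
    }

    static final Deque<Write> buffer = new ArrayDeque<>();
    static final Map<String, Integer> memory = new HashMap<>();

    static boolean guard() { return !buffer.isEmpty(); }   // enabling condition

    static void action() {                                 // state update
        Write w = buffer.removeFirst();
        memory.put(w.var, w.value);
    }

    public static void main(String[] args) {
        buffer.add(new Write("x", 1));
        buffer.add(new Write("x", 2));
        while (guard()) action();      // fire the rule while it is enabled
        System.out.println(memory.get("x"));
    }
}
```

An executable spec in this style can be run against litmus tests
directly, which is part of why it can be easier to verify.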


-- Jason

JavaMemoryModel mailing list -

This archive was generated by hypermail 2b29 : Thu Oct 13 2005 - 07:00:40 EDT