JavaMemoryModel: My final summary of strong vs. weak volatile semantics

From: Sarita Adve
Date: Wed Mar 24 2004 - 11:33:45 EST

I thought I'd summarize one final time (my view of) the issues of weak vs.
strong synchronization semantics, and move to put this up for a final vote.

First, both off-list and on-list, people have brought up the notion of
whether the stronger or weaker semantics are more or less intuitive and/or
fragile. In general, I think multithreaded Java programmers should be wary
of using intuition. Not because the JMM proposals are non-intuitive, but
because most programmers have their intuition honed through SC (sequential
consistency). With the wrong intuition, neither the weak nor the strong
volatile semantics is intuitive. With the right intuition, both are intuitive.
So until
enough people program with the JMM and new intuitions start to develop, my
opinion is that we shouldn't bring intuition into this choice.

Next issue is related to the use of volatiles as memory barriers. It is my
understanding that most programmers' view of a memory barrier instruction
is: if a thread's instructions I1 and I2 are separated by a memory barrier
instruction (with I1 before I2 in program order), then I1 is seen before I2
by all processors. We have spent a lot of time in the last few years trying
to get rid of this perception, and neither the strong nor the weak semantics
gives this type of memory barrier semantics. The strong semantics does allow
use of volatiles as a *sort of* memory barrier, but again it ain't the
memory barrier that (I think) most people have in mind. And in fact, if you
think of it as the memory barrier that most people think of, you would get
unexpected results for some programs. To truly understand what kind of
memory barrier is given by the strong semantics, we simply have to
understand happens-before. To put it another way, multithreaded Java
programmers can't get around understanding happens-before, regardless of
whether we choose strong or weak semantics - there's no way to correctly
explain the semantics through barriers or roach motels.
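Since there is no substitute for understanding happens-before, a minimal Java sketch may help (class and variable names are mine, purely illustrative). It shows the basic guarantee: when a volatile read returns the value written by a volatile write, there is a happens-before edge from that write to that read, so data writes program-ordered before the volatile write are visible after the read.

```java
// Illustrative sketch (names invented): the write-read synchronization
// edge that puts a volatile write into happens-before when a volatile
// read returns its value.
public class Main {
    static volatile boolean flag = false; // synchronization (volatile) variable
    static int data = 0;                  // ordinary data variable

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            data = 42;   // data write, program-ordered before the synch write
            flag = true; // volatile write W
        });
        Thread reader = new Thread(() -> {
            while (!flag) { } // spin until a read of flag returns W's value
            // That read returned the value of W, so W happens-before this
            // point; the write data = 42 is therefore visible here.
            System.out.println(data); // prints 42
        });
        writer.start();
        reader.start();
        writer.join();
        reader.join();
    }
}
```

Note that the reasoning goes through the value-returning write-read edge, not through any notion of a full barrier flushing everything to all processors.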

I fear though that once we do start talking about using volatiles as a *sort
of* memory barrier with whatever caveats added in, many people will start
thinking again in terms of conventional, full (wrong) memory barriers. So I
guess one way to set this decision up is: convenience for very few
programmers vs. confusion for very many?

Ok, so it is not intuition and it is not memory barriers as most people know
them. What is the real issue then?

The real issue is quite simply this: when determining whether a write-read
conflicting synchronization edge is part of happens-before, do we allow
ourselves to use information about a total order on conflicting
synchronization writes?

Now both semantics do guarantee the existence of a total order on
write-write conflicting synchs (to guarantee SC for synchronization
accesses). The difference between the two semantics is in the inferences
allowed from that ordering for data accesses, particularly for
happens-before. Note that there are other synch-synch orderings guaranteed
in order to ensure SC synchs, but we all agree these should not be used for
making
inferences about data accesses. For example, read-write conflicting synch
edges are not part of happens-before and neither are write-write edges
themselves. (There are reasonable models where these edges could also be
included in happens-before.) So again, it is not clear or intuitive whether
we should or shouldn't include inferences from write-write conflicting
synchs in happens-before, and therein lies the debate.

The weak semantics does not consider inferences from write-write conflicting
synch order when defining happens-before - it simply says a write-read
synchronization edge is in happens-before if the read returns the value of
the write. The strong semantics allows us to consider the write-write
synchronization total order. Given such an order, we can totally order all
writes with respect to a conflicting synchronization read, and it starts to
make sense to talk about a write-read edge where the write came before the
read (but not necessarily supplied the value for the read). Practically,
this allows the following type of reasoning (as would be required for Bill's
earlier example): given two conflicting synch writes W1 and W2, we know that
one will come before the other (although we don't necessarily know which
one). So if one read is guaranteed to see W1 (e.g., because W1 is program
ordered before the read or the read returns the value of W1) and the other
read is guaranteed to see W2, then for at least one of these reads, we can
reason that both writes must come before that read. So with the stronger
semantics, we can be sure that there is a happens-before edge from both
writes to at least one of these reads. Note that to exploit the strong
semantics, there is no substitute for going through this type of reasoning.
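As a hypothetical sketch of the W1/W2 pattern above (names invented), consider two threads that each perform a conflicting volatile write and then a volatile read. Because both proposals totally order conflicting synchronization writes (for SC synchs), the outcome r1 == 2 && r2 == 1 is impossible: it would require each write to precede the other in that total order. The strong semantics lets us go further: whichever read follows both writes in the total order has a happens-before edge from both writes, even from the write that did not supply its value.

```java
// Hypothetical sketch of the W1/W2 reasoning (names invented).
// Both proposals totally order conflicting volatile writes, so the
// outcome (r1 == 2 && r2 == 1) can never occur: it would require
// W1 before W2 and W2 before W1 in that total order.
public class Main {
    static volatile int v = 0;
    static int r1, r2; // published to main() via Thread.join()

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            v = 1;  // W1
            r1 = v; // guaranteed to see W1, or W2 if W2 is later
        });
        Thread t2 = new Thread(() -> {
            v = 2;  // W2
            r2 = v; // guaranteed to see W2, or W1 if W1 is later
        });
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Whichever write is second in the total order, the read in that
        // thread follows both writes; under the strong semantics, both
        // writes then happen-before that read.
        System.out.println(r1 == 2 && r2 == 1 ? "forbidden" : "ok");
    }
}
```

The exploitable consequence under the strong semantics would be that data writes program-ordered before W1 and W2 are both visible after that one read; the weak semantics licenses no such inference.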

So the real issue then is whether this type of reasoning is useful to
programmers and what is its impact on implementations. We already conceded
that this type of reasoning is useful only to a handful of programmers. We
also already conceded that the total ordering on conflicting synch writes is
already required of implementations (for SC synchs). So the impact on
implementations is in the impact of making inferences via this total
ordering for data accesses. For software DSMs, this will certainly add some
complexity. But how much more relative to everything else? It's hard to
quantify. Just like it's hard to quantify the impact of the weaker semantics
on the handful of programmers who want to use the stronger reasoning. This
is the summary of the real issues.

Overall, as I said before, this is not a huge deal. But given the
possibility of confusion in using the barrier terminology, the potential for
impact on software DSMs (even though we have already given software DSMs
other baggage to handle), and the marginally simpler definition of
happens-before
with the weak semantics (since we don't have to define a total order to
understand happens-before), I am inclined towards the weak semantics. But in
the grand scheme of things, neither semantics is worth fighting for in my
opinion - the important thing is that the JMM is happens-before based and
that the causality issues finally have a reasonable (if not 100% perfect)
solution. I'd say take a quick vote and flip a coin in case of a tie (the
public review argument doesn't hold much water in my view since many of us
were paying attention to more important things until that point, and it is
unclear how many people realized the subtleties of this decision).

For reference purposes, my next email will include an exchange with Sandhya
Dwarkadas from the Treadmarks group on this issue.

Finally, apologies for yet another long message, but as you can see, this
issue is pretty subtle. It seemed worthwhile to put all the issues down
together so we could make an informed decision.

Oops, one more thing - regardless of the final choice, I propose we
obliterate the term memory barrier from the allowed vocabulary for this
topic (other than to say that JMM doesn't provide a memory barrier) :-)



JavaMemoryModel mailing list

This archive was generated by hypermail 2b29 : Thu Oct 13 2005 - 07:01:01 EDT