JavaMemoryModel: More ordering constraints do not imply more reliable software

From: Bill Pugh
Date: Tue Jul 10 2001 - 14:31:39 EDT

With regards to "roach motel" ordering (we really need to come up
with a better name) and other topics, some people arguing for
stronger ordering constraints are essentially making the argument

* If the JMM imposes more ordering constraints, it is more likely that software
   will perform the way the author intended, and therefore be more reliable.

While this is true about any one piece of existing software, I don't
think that it follows that

  - more ordering constraints will lead to more reliable multithreaded software

because it ignores the impact of the JMM on the way people will write software.

I think there is a lot to be said for a JMM that encourages/imposes
discipline on how people write concurrent software. The more rope
you give people, the more rope people will use to hang themselves.

For example, I suspect that having a readMemoryBarrier() function
call available would be a horrible mistake. 99% of all programmers
who tried to use it would use it incorrectly. I'm not sure I could
use it correctly (what is a "read" memory barrier, as opposed to a
"write" memory barrier? Does it prevent reordering of previous reads
with following reads, but allow arbitrary reordering of writes?)
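
To make the pitfall concrete, here is a sketch of the flag-then-data
publication pattern such a primitive would be aimed at. The calls
readMemoryBarrier()/writeMemoryBarrier() are the hypothetical functions
under discussion, not real Java APIs, so they appear only as comments
marking where they would have to go:

```java
// Sketch: where a hypothetical read barrier would have to be placed.
// The barrier calls are imaginary and shown only as comments.
class Publisher {
    private int data;
    private boolean ready;

    void publish(int v) {
        data = v;
        // writeMemoryBarrier();  // writer would need a fence HERE,
        ready = true;             // before setting the flag...
    }

    Integer tryRead() {
        if (ready) {
            // readMemoryBarrier();  // ...and the reader a fence HERE,
            // between reading the flag and reading the data. Misplace
            // either call and the code compiles, runs, and fails rarely
            // and unreproducibly.
            return data;
        }
        return null;
    }
}
```

Under acquire/release semantics, simply declaring ready volatile would
provide the equivalent ordering automatically -- exactly the kind of
encapsulated discipline being argued for here.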

Similarly, I suspect that, even if it were feasible, sequential
consistency would be a horrible mistake. Most programmers who tried
to write code that depended on sequential consistency would get bitten
by subtle bugs.
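
A sketch of the sort of code that is correct only under sequential
consistency (the classic Dekker-style fragment; the class and field
names are just illustrative):

```java
// Under sequential consistency, if thread1() and thread2() run
// concurrently, at least one of r1, r2 must observe 1. Under weaker
// models, with plain (non-volatile) fields, both may observe 0 --
// precisely the kind of subtle bug that bites programmers who assumed
// sequential consistency.
class DekkerFragment {
    int x, y;    // shared, deliberately not volatile
    int r1, r2;

    void thread1() { x = 1; r1 = y; }
    void thread2() { y = 1; r2 = x; }
}
```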

I think the acquire/release ordering (i.e., "roach motel" ordering)
provides an intuitive model that can be easily encapsulated into a
number of design patterns for concurrent software and is appropriate
for the new JMM. I haven't seen any arguments to convince me otherwise.
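
The synchronized keyword already exposes this discipline; a minimal
sketch (the class name and the tag computation are illustrative):

```java
// Acquire/release ("roach motel") ordering as seen through synchronized:
// monitor entry acts as an acquire, monitor exit as a release. Ordinary
// statements may legally be moved INTO the region by the compiler or
// JIT, but never OUT of it, so everything written inside stays protected.
class Counter {
    private int count;  // guarded by this

    int incrementAndReturnTag() {
        int tag = 41 + 1;       // may legally be hoisted into the block
        synchronized (this) {   // acquire: later actions can't float above
            count++;            // stays inside: cannot escape the region
        }                       // release: earlier actions can't sink below
        return tag;
    }

    synchronized int get() { return count; }
}
```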

JavaMemoryModel mailing list -

This archive was generated by hypermail 2b29 : Thu Oct 13 2005 - 07:00:33 EDT