I've had this post sitting around unfinished for a while now.
Time to just send it...
1. In distributed systems, it has long been acknowledged that different
programs or subsystems have different consistency requirements.
Here though, consistency most commonly refers to message ordering
across nodes of a system. There are several strengths available
for maintaining ordering consistency among sets and sequences of
(normally multicasted) oneway messages, ranging from none (as in
UDP) to per-node FIFO, to per-group ordered, to globally
consistent. Different costs are associated with each of these
(and other) protocols. So, choices among them become important
design decisions by developers. The choices are not usually all
that hard. For example, a hot-standby system requires per-group
ordering; no other choice would make much sense. (See, among
others, papers on the Isis and Horus systems.) Developers
sometimes start out believing that they want global consistency,
but later find that weaker protocols are not only much faster and
scale better, but also make more sense. Similarly, programmers
sometimes start out using RPC-style communication that provides
pair-wise message ordering and then later discover that the cool
things you can do with oneway messages and multicasts both
outperform RPCs and help simplify designs, although at the
conceptual cost of needing to understand the surrounding design
issues and tradeoffs.
2. These days, multiprocessor consistency issues are becoming
increasingly indistinguishable from distributed consistency
issues. (I think the main source of difference is just that MP
models generally do not deal with failure.) This is especially
evident in Location Consistency (LC), which seems very amenable
to implementation via distributed shared memory (DSM) protocols,
which are in turn closely related to strongly ordered distributed
multicast protocols.
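To make the weaker protocols from point 1 concrete, here is a minimal sketch (all names invented for illustration) of per-sender FIFO delivery: each sender stamps its oneway messages with a sequence number, and the receiver holds back out-of-order arrivals until the gap fills.

```java
import java.util.*;

// Hypothetical sketch of per-node FIFO ordering: messages from one
// sender are delivered in sequence-number order, even if they arrive
// out of order.
class FifoChannel {
    private long nextExpected = 0;                  // next seq to deliver
    private final SortedMap<Long, String> pending = new TreeMap<>();
    private final List<String> delivered = new ArrayList<>();

    // Called when a message from this sender arrives, possibly early.
    synchronized void receive(long seq, String payload) {
        pending.put(seq, payload);
        // Deliver as many consecutive messages as are now available.
        while (pending.containsKey(nextExpected)) {
            delivered.add(pending.remove(nextExpected));
            nextExpected++;
        }
    }

    synchronized List<String> deliveredSoFar() {
        return new ArrayList<>(delivered);
    }

    public static void main(String[] args) {
        FifoChannel ch = new FifoChannel();
        ch.receive(1, "b");   // arrives early; held back
        ch.receive(0, "a");   // fills the gap; both now delivered
        ch.receive(2, "c");
        System.out.println(ch.deliveredSoFar()); // [a, b, c]
    }
}
```

Per-group or globally consistent ordering requires coordination across receivers too, which is where the extra protocol cost comes from.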
Together, these observations might help explain why no single abstract
memory model discussed on this list (or elsewhere) seems to gain wide,
unreserved acceptance. Upon investigation, each has been shown to
contain some unexpected property or anomaly with respect to
programmers, compilers, and/or VMs, that would make certain programs
perform in surprising ways, certain optimizations invalid, or certain
VM implementation strategies fail to work. One memory model does not
seem to fit all programs or programmers.
The parallels to distributed systems argue that the most productive
way out of this problem is to support multiple memory consistency
models, in the same way that distributed systems support multiple
ordering protocols. Momentarily ignoring how unrealistic this would
be, imagine a language/VM that would allow, for example, code to
declare/require that it relies on the strongest forms of SC (sometimes
due to naivety, sometimes after careful thought), or on the weakest
forms of LC, or anything in between.
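Nothing like this exists, of course; but purely as a thought experiment, such a declaration might look like a hypothetical annotation (every name below is invented):

```java
import java.lang.annotation.*;

// Hypothetical, nonexistent annotation letting code declare which
// memory consistency model it relies on -- a thought experiment only.
@Retention(RetentionPolicy.CLASS)
@Target({ElementType.TYPE, ElementType.METHOD})
@interface Consistency {
    Model value();
}

// Invented model names spanning the strong-to-weak range.
enum Model { SEQUENTIAL, RELEASE_ACQUIRE, LOCATION }

// A class written under naive SC assumptions could say so explicitly,
// and a VM could (in principle) honor or reject that requirement.
@Consistency(Model.SEQUENTIAL)
class NaiveCounter {
    int count;                       // plain field; SC assumed by fiat
    void increment() { count++; }
}
```

The hard part, as discussed below, is not declaring the requirement but giving it any enforceable meaning.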
It is revealing here that RMI uses RPC-based pairwise
strong ordering as a default. Well, "default" is too weak a term.
If you can live with weaker consistency, you probably want
to be using some other Java distributed framework such as Ninja.
Distributed programming using weak ordering protocols has about
the same macho reputation as does concurrent programming using weak
memory models. Even though both can be accused of being very
simple, both seem to require a lot of experience to do
well. I suppose this is the simple != intuitive issue.
But, of course, we have less flexibility here than do distributed
frameworks. While any given hardware platform and VM is capable of
supporting any given model (although sometimes at great cost), it's
hard to imagine a language/system where you can suddenly switch
models/modes in the same way that you can switch protocols for a given
message in a distributed framework.
This is the place where I should be listing the perfect solution to
the problem I have drawn you in to. Unfortunately, I don't know what
that is, so I am reduced to rambling, hoping that someone else might
see somewhere to go with this....
Note that the problem is not so much at the strong-model end, but
at the weak-model end. Right now, you can essentially get SC by
relying solely on volatiles (although you'd have to somehow
mark all the fields in JDK classes as volatile too). But
once you get past this, how can you express the fact that you
need or don't need other consistency guarantees?
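As a sketch of the volatile-only route at the strong end (setting aside the JDK-fields caveat): when every shared field is volatile, accesses to them are totally ordered, so the classic reordering surprise below cannot occur.

```java
// Sketch: all shared fields volatile, giving sequentially consistent
// access to them. Under SC, running thread1 and thread2 in parallel
// can never end with r1 == 0 && r2 == 0; with plain (non-volatile)
// fields, that outcome is permitted.
class VolatileOnly {
    volatile int x = 0;
    volatile int y = 0;
    volatile int r1, r2;

    void thread1() { x = 1; r1 = y; }
    void thread2() { y = 1; r2 = x; }
}
```

The cost, of course, is paying volatile-access overhead on every read and write, which is exactly what the weaker models try to avoid.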
Both common sense and experience suggest that the very weakest memory
consistency models are only tolerable using process-style
message-passing rather than thread-style shared-memory
constructions. (If you never share memory, you are never surprised by
memory rules). So one attractive general line of attack would be to
somehow enable support for strong consistency when dealing with shared
memory, but weak consistency when dealing with message-passing. Part
of this attraction is that synch rules for the weakest models are
essentially identical to typical message passing rules -- i.e.,
synchronization is used as a means to communicate values across
threads, even when not conceptually necessary as an exclusion
mechanism.
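A sketch of that usage: the lock below is never contended and excludes nothing of interest; it exists only so that the reader's lock acquisition sees everything written before the writer's release.

```java
// Sketch: synchronization used purely to communicate a value across
// threads, not for mutual exclusion. The lock on `this` creates the
// cross-thread ordering that makes the plain write to `data` visible.
class HandOff {
    private int data;               // plain, non-volatile field
    private boolean ready;          // guarded by `this`

    synchronized void publish(int v) {     // release on monitor exit
        data = v;
        ready = true;
    }

    synchronized int readWhenReady() {     // acquire on monitor entry
        return ready ? data : -1;
    }
}
```

Squint and this is a one-slot message channel: publish is send, readWhenReady is receive.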
But Java has no interthread message-passing constructs, and doesn't
have built-in actor/process-style constructs to conveniently represent
the entities exchanging messages. Arguably, it doesn't need any of
this since it is relatively easy to build the associated frameworks
(for example, even the AWT Event framework qualifies, as do CSP-style
packages such as Peter Welch's JCSP), where the frameworks themselves
ensure consistency by synchronizing message exchanges etc., across
threads. Since there is so much variation in exactly how various
message-passing styles work, you cannot impose just one. (Note that by
adopting message passing, you now have all the choices available in
distributed systems, plus others, for example involving hand-offs as
opposed to copying, that wouldn't make sense or would be impractical for
remote messages). On the other hand, lack of standardization makes it
impossible to support underlying differences in how plain sequential
aspects of method code are translated with respect to memory barriers.
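For instance, a minimal hand-built exchange in the same spirit as those frameworks, sketched here with the later java.util.concurrent BlockingQueue, whose internal synchronization supplies the cross-thread consistency:

```java
import java.util.concurrent.*;

// Minimal message-passing sketch: threads never touch shared mutable
// state directly. The queue's internal synchronization is what makes
// each message's contents visible to the receiver.
class Exchange {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<int[]> mailbox = new ArrayBlockingQueue<>(16);

        Thread producer = new Thread(() -> {
            int[] msg = {1, 2, 3};          // built with plain writes...
            try {
                mailbox.put(msg);           // ...published by the hand-off
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        int[] received = mailbox.take();    // take() sees put()'s writes
        System.out.println(received[0] + received[1] + received[2]); // 6
        producer.join();
    }
}
```

The point is that only the framework author needs to get the synchronization right; client code just sends and receives.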
And in any case, you would not gain many friends by insisting that
people ONLY use message-passing constructs when dealing with very weak
consistency. Shared memory can almost always be tuned to have better
performance, although sometimes at great cost in human effort and
likelihood of error.
I don't know of any other tolerable options. So, right now I don't
know of any tactic for supporting multiple models, at least not for a
language that you could still call Java.
JavaMemoryModel mailing list - http://www.cs.umd.edu/~pugh/java/memoryModel