Re: JavaMemoryModel: Most (all?) JVM's incorrectly handle volatile reads-after-writes

From: Doug Lea (dl@altair.cs.oswego.edu)
Date: Sun Nov 28 1999 - 07:38:37 EST


> In order for removing memory barriers associated with useless locks to be
> a legal optimization, synchronized(new Object()) { } _must_ be a no-op.
>

Sorry, now I see why synchronized(new Object()) { ... } cannot have
any effect under your proposal. That new object's set of
previous/overwritten values will always be empty. That rules this out.
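
To make this concrete (just a trivial sketch):

  class UselessLockExample {
    int shared;

    void update() {
      // No other thread can ever reach, much less synchronize on, this
      // freshly allocated object, so its set of previous/overwritten
      // values is empty.  Under the proposal the block therefore has no
      // memory effects, and eliding the lock is a legal optimization.
      synchronized (new Object()) {
        shared++;
      }
    }
  }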

> The guidelines I'm keeping in mind at the moment are that we should be
> designing a memory model for the masses, not for experts like Doug.

As I've mentioned, I DO absolutely agree that the basic rules must be
simple and generally easy to check. But I've been trying to make the
case lately that they must also be complete. I've been offering
possible solutions that I think have the least impact on the
simplicity and error-proneness stories, yet can be used in those few
cases where they are needed.

I do not want to see a set of rules that drives people to use native
code to obtain required functionality or performance, or, worse, to
give up on Java as a concurrent programming language. Any reasonable
concurrent algorithm, data structure, or design should somehow be
expressible in Java. This is not a matter of advocacy. Considering
that the only practical alternative for multithreaded programming
these days is C, people abandoning Java in these contexts are pretty
much guaranteed to produce software with more errors and portability
limitations.

> Doug seems worried that volatile will be expensive, but I'm not.

No, I think volatile (with strong read-after-write guarantees) is
fine. I'm instead worried that the collection of memory effects
associated with {synchronized, volatile, final} does not hit all the
needs that arise in practice.

Perhaps a better way to capture the main case I've been hitting on is
to resurrect the idea of "scoped volatile": This applies in cases
where you cannot have a lock (for the sake of liveness), or do not
want one (for the sake of performance), and you can live with looser
coupling among threads, yet do need the memory effects associated with
synchronization in order to bound otherwise infinitely stale reads or
infinitely unflushed writes. My first idea about this (long ago) was a
syntactic extension:

  volatile(anObject) {
    // ... reads and/or writes ...
  }

That means exactly: perform all the memory effects of
synchronized(anObject) { ...}, but without acquiring
and releasing the lock.
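
For comparison, the only way to get exactly those memory effects in
Java as it stands is to take the lock for real, paying for exclusion
that the algorithm may not want or be able to tolerate (a trivial
sketch):

  // Same memory effects, but bundled with acquiring and releasing
  // anObject's monitor -- exactly the cost (and liveness hazard) the
  // scoped form would strip away.
  synchronized (anObject) {
    // ... reads and/or writes ...
  }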

As I've been saying, you need something like this in order to support,
among other things, most concurrently readable data structures. Plain
volatiles do not work, since the reads and writes involve an arbitrary
collection of variables, possibly including array elements -- to which
volatile does not even apply.
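
For instance (a made-up miniature example), in a concurrently readable
table whose entries live in an array, declaring the array field
volatile only makes the *reference* volatile; the element reads and
writes that actually matter cannot be declared volatile at all:

  class ConcurrentlyReadableTable {
    // 'volatile' here applies to the reference, not to the elements.
    private volatile Object[] slots = new Object[16];

    Object get(int i) {
      return slots[i];   // unsynchronized read; nothing bounds its staleness
    }

    void put(int i, Object x) {
      slots[i] = x;      // unsynchronized write; nothing forces a flush
      // A scoped volatile(...) block around this write (and the read
      // above) would supply exactly the missing memory effects, without
      // a lock that readers must never be forced to wait on.
    }
  }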

I've always thought this syntax was sorta nice, and now think that it
is even nicer since it fits in well with your LC-based proposal. Note
that, for example, the volatile qualifier for variables could then be
defined as automatically placing each read and write inside such a
block. And maybe "final" could even be defined similarly? And even
"synchronized", as scoped-volatile-plus-exclusion. This would then
lead to a very short definition of the full memory model.
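
For example (sketching the desugaring only in comments, since no such
syntax exists, and leaving open which object the block would be scoped
on), a volatile field access would just be the plain access wrapped in
such a block:

  class Desugared {
    volatile int v;

    void set(int x) {
      v = x;      // i.e., volatile(<some associated object>) { v = x; }
    }

    int get() {
      return v;   // i.e., volatile(<some associated object>) { return v; }
    }
  }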

But I had given up trying to pursue syntactic extensions, especially
those that might require adding new bytecodes. I've instead tried
proposing special meanings for unused constructions or special library
functions. None of these are very pretty, or even plausible, but I'm
otherwise out of ideas.

...

BTW, for those curious about it: I finally found a cheap way to make
FJTasks work even when read-after-write of volatiles is not
implemented correctly on JVMs. The result (to appear in the next
release of util.concurrent) is just about as fast as the previous
version.

-Doug
