RE: JavaMemoryModel: fusing synch blocks

From: David Holmes (dholmes@dstc.edu.au)
Date: Sun Feb 04 2001 - 21:49:32 EST


> I think we all agree that merging synch blocks is always legal
> with respect to our new proposed memory model.

I most certainly do not agree! If merging kills liveness (i.e. causes
deadlock, not merely affects "fairness") then it is not legal. This is not a
"quality of implementation" issue - killing liveness should *never* be
considered legal.
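To make the liveness point concrete, here is a minimal sketch (my own
illustration; the class and field names are invented, not from the post) of
two separate synchronized blocks whose fusion would deadlock. The main
thread releases the lock between its two blocks, and the second thread
needs exactly that window to make progress:

```java
// If a compiler fused the main thread's two synchronized blocks into
// one region, the main thread would spin on 'ready' while still holding
// the lock, t2 could never acquire it to set 'ready', and the program
// would hang forever.  As written (unfused), it terminates.
public class FusionDeadlock {
    static final Object lock = new Object();
    static volatile boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread t2 = new Thread(() -> {
            synchronized (lock) {   // needs the lock main released
                ready = true;
            }
        });

        synchronized (lock) {       // first block: some work under the lock
            t2.start();
        }
        // Window where the lock is free -- fusion would eliminate it.
        while (!ready) { Thread.yield(); }
        synchronized (lock) {       // second block: more work under the lock
            System.out.println("done");
        }
        t2.join();
    }
}
```

The unfused program always terminates; the fused one never can. No
amount of "fairness" hand-waving covers that gap.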

The more general question of how merging may impact the general runtime
behaviour of a program is so open I don't see how you can possibly hope to
come up with rules for making it "reasonable". This is one case where, in my
view, the compiler can never have sufficient knowledge to understand the
implications of its code changes and so should *just leave things alone*!

Seriously, some of the follow-ons from the JMM update make the current Ch17
rules seem almost trivial by comparison. Compilers are now going to try and
figure out what's best for the synchronization in my program based on some
(what?) tenuous notion of "optimization"?

What about the poor engineer who now has to try and figure out why random
lock-ups and poor performance plague some systems but not others? How are
they supposed to deal with compiler/JIT effects that may never be
encountered in a debugging environment?

Consider a simple example - using a read/write lock to allow concurrent
reads of a shared data structure for searching:

    rw.acquireReadLock();
    // lengthy search code
    rw.releaseReadLock();

The only real sync blocks occur in the acquire/release methods of the
read/write lock. If those methods get inlined then the "smart" compiler may
decide to merge the two sync blocks and totally kill the whole point of
having read/write locks. Does this make sense to you? It certainly makes no
sense to me! What prevents this from happening? Do we need rules to prevent
this?
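For concreteness, here is the kind of hand-rolled read/write lock the
example assumes (the class and method names are illustrative, not from any
particular library). The only synchronized regions are inside the
acquire/release methods; the lengthy search itself runs with no monitor
held, which is the entire point:

```java
// If a JIT inlined acquireReadLock() and releaseReadLock() into the
// caller and then fused their two sync blocks, every reader would hold
// the lock object's monitor for the whole search, serializing all
// readers and defeating the purpose of a read/write lock.
class SimpleReadWriteLock {
    private int readers = 0;
    private boolean writing = false;

    public synchronized void acquireReadLock() throws InterruptedException {
        while (writing) wait();     // readers wait only for a writer
        readers++;
    }

    public synchronized void releaseReadLock() {
        if (--readers == 0) notifyAll();   // last reader wakes writers
    }

    public synchronized void acquireWriteLock() throws InterruptedException {
        while (writing || readers > 0) wait();   // writer needs exclusivity
        writing = true;
    }

    public synchronized void releaseWriteLock() {
        writing = false;
        notifyAll();
    }
}
```

The caller's code is exactly the shape shown above: acquire, lengthy
search, release - with concurrency *between* the two sync blocks being
the whole reason the lock exists.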

I do not agree that fusing sync blocks is a desirable goal; in fact it
should be prohibited unless the sync itself can be shown to be redundant.
Any "optimisations", such as loop unrolling, must respect the placement of
sync blocks and maintain them. Could this lead to sub-optimal code? Probably
in some cases. But I'd rather see new idioms for synchronization that
account for possible loop unrolling by the compiler, than see compilers
think they know how to deal with application level synchronisation!
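As a sketch of the loop-unrolling point (again my own illustration, with
invented names): each iteration below takes and drops the lock, so other
threads can interleave between iterations. Unrolling the loop four times
and fusing the four sync blocks into one would hold the lock four times
longer per acquisition; a correct unrolling keeps one acquire/release pair
per original iteration.

```java
// The short critical section per item is deliberate: the gap between
// iterations, where the lock is free, is what lets other threads in.
class Worker {
    private final Object lock = new Object();
    private int shared = 0;

    void process(int n) {
        for (int i = 0; i < n; i++) {
            synchronized (lock) {   // one short critical section per item
                shared++;
            }
            // Lock is free here: other threads may run between iterations.
            // Fusing unrolled iterations' sync blocks would close this gap.
        }
    }

    int value() {
        synchronized (lock) { return shared; }
    }
}
```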
 
David Holmes
-------------------------------
JavaMemoryModel mailing list - http://www.cs.umd.edu/~pugh/java/memoryModel



This archive was generated by hypermail 2b29 : Thu Oct 13 2005 - 07:00:29 EDT