RE: JavaMemoryModel: JMM and caches.

From: Doug Lea (dl@cs.oswego.edu)
Date: Mon Oct 27 2003 - 20:39:30 EST


On Mon, 2003-10-27 at 19:27, Boehm, Hans wrote:
> Jerry -
>
> Actually, I was referring to the Java semantics, though I may
> have misunderstood Sylvia's question. By "volatile read" I
> meant a read of a volatile Java variable, which I would expect
> to be implemented purely as a compiler reordering restriction
> on most X86 processors.
>
> (See http://gee.cs.oswego.edu/dl/jmm/cookbook.html, x86-PO.
> There is a likely additional cost of a memory barrier when the
> Java volatile is written, but only to prevent reordering with
> a subsequent load of another volatile. LoadLoad, LoadStore
> and StoreStore barriers are no-ops on x86-PO.)
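To make the quoted point concrete, here is a minimal sketch (not from the original post; the class and field names are illustrative) of the kind of code being discussed: a plain field published through a Java volatile. Per the cookbook mapping above, on x86 the volatile read compiles to an ordinary load (only compiler reordering is restricted), while the volatile write is where a StoreLoad barrier may be emitted.

```java
// Illustrative sketch of safe publication via a Java volatile.
class Publish {
    int data;                 // plain field, published via the volatile below
    volatile boolean ready;   // Java volatile flag

    void writer() {
        data = 42;
        ready = true;         // volatile store: on x86 this is where the JVM
                              // may emit a StoreLoad barrier (e.g. a locked
                              // instruction), to order it before any later
                              // volatile load
    }

    Integer reader() {
        if (ready) {          // volatile read: on x86 just a plain load plus
                              // a compiler reordering restriction
            return data;      // the JMM guarantees this sees 42
        }
        return null;          // publication not yet visible
    }
}
```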

Further, you almost always need something (a lock or atomic update)
that entails a StoreLoad barrier in the code that does writing (to
provide exclusion for or coordination of multiple writers), and you can
usually arrange that any volatile write you also perform is done in
a way (by not requiring intervening loads) that a good JVM will
optimize to "share" that barrier.
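A sketch of that "sharing" pattern (my example, not Doug's; the class is hypothetical): writers coordinate through an atomic update, which already entails the full barrier, and the volatile store immediately follows it with no intervening loads, so a good JVM need not emit a second barrier.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical writer-side structure: multiple writers coordinate
// through an atomic update (which entails a StoreLoad barrier), and
// the adjacent volatile store can "share" that barrier.
class SharedBarrierCounter {
    private final AtomicLong count = new AtomicLong();
    volatile long lastUpdated;       // volatile also written by writers

    void record(long timestamp) {
        count.incrementAndGet();     // atomic update: barrier emitted here
        lastUpdated = timestamp;     // volatile store right after, with no
                                     // intervening loads, so a good JVM can
                                     // let it ride on the same barrier
    }

    long total() {
        return count.get();
    }
}
```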

The need for extra loads and stores associated with these volatiles CAN
cause more traffic between CPU caches and main memory, but the effects
seem to be small in practice.

-Doug

-------------------------------
JavaMemoryModel mailing list - http://www.cs.umd.edu/~pugh/java/memoryModel



This archive was generated by hypermail 2b29 : Thu Oct 13 2005 - 07:00:52 EDT