RE: JavaMemoryModel: JMM and caches.

From: Boehm, Hans (hans_boehm@hp.com)
Date: Mon Oct 27 2003 - 19:27:16 EST


Jerry -

Actually, I was referring to the Java semantics, though I may
have misunderstood Sylvia's question. By "volatile read" I
meant a read of a volatile Java variable, which I would expect
to be implemented purely as a compiler reordering restriction
on most X86 processors.

(See http://gee.cs.oswego.edu/dl/jmm/cookbook.html, x86-PO.
There is a likely additional cost of a memory barrier when the
Java volatile is written, but only to prevent reordering with
a subsequent load of another volatile. LoadLoad, LoadStore
and StoreStore barriers are no-ops on x86-PO.)
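For concreteness, here is a minimal sketch (not from this thread; the class and field names are mine) of the volatile handshake being discussed: thread A writes an ordinary field, then a volatile flag; once thread B reads the flag as true, the JMM's happens-before guarantee makes A's earlier ordinary write visible to B. On x86-PO this pattern needs no barrier on the reader side, only the compiler-reordering restriction described above.

```java
// Hypothetical example of the JMM volatile handshake.
public class VolatileHandshake {
    static int data;               // ordinary (non-volatile) field
    static volatile boolean ready; // volatile flag establishes happens-before

    public static void main(String[] args) {
        Thread writer = new Thread(() -> {
            data = 42;    // ordinary write...
            ready = true; // ...published by the subsequent volatile write
        });
        writer.start();
        while (!ready) {  // volatile read; spin until the flag is set
            Thread.yield();
        }
        // The read of ready == true happens-after the volatile write,
        // so the earlier ordinary write to data is visible here.
        System.out.println(data); // prints 42
    }
}
```

Without the volatile modifier on ready, the JMM would permit the reader to loop forever or to observe a stale value of data.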

Hans

> -----Original Message-----
> From: Jerry Schwarz [mailto:jerry.schwarz@oracle.com]
> Sent: Monday, October 27, 2003 3:54 PM
> To: Boehm, Hans; 'Sylvia Else'
> Cc: javaMemoryModel@cs.umd.edu
> Subject: RE: JavaMemoryModel: JMM and caches.
>
>
>
> The semantics of reading a volatile variable in the new JMM
> are not the same as what Hans means by a "volatile read"
> (which I think is essentially the C/C++ meaning). If thread
> A writes a volatile variable x, and thread B subsequently
> reads x, then the semantics of the JMM say that any
> subsequent read of any variable in thread B will see the
> last value written to that variable in A (subject, of
> course, to the possibility of reading a value written by
> another thread).
>
> As far as I can figure out, there is no way in the new JMM
> to designate certain reads and writes to be "volatile
> reads" and "volatile writes", and it is this lack that is
> bothering Sylvia.
>
> At 02:44 PM 10/27/2003, Boehm, Hans wrote:
> >The cost of a volatile read is highly architecture dependent.
> >On an architecture that doesn't normally reorder reads (as far
> >as we know, this includes all current X86 implementations), it
> >should affect only the allowable compiler transformations, and
> >should not require a memory barrier instruction. On some other
> >architectures, it does require a memory barrier of some kind.
> >But the cost of that barrier is again highly variable. (On
> >Itanium, it may be essentially zero, depending on context, at
> >least on the reader side. On most machines, the cost should be
> >less than a full cache miss to memory.)
> >
> >Hans
> >
> > > -----Original Message-----
> > > From: owner-javamemorymodel@cs.umd.edu
> > > [mailto:owner-javamemorymodel@cs.umd.edu]On Behalf Of Sylvia Else
> > > Sent: Monday, October 27, 2003 11:59 AM
> > > To: javaMemoryModel@cs.umd.edu
> > > Subject: Re: JavaMemoryModel: JMM and caches.
> > >
> > >
> > > Yes.
> > >
> > > The issue I'm trying to resolve is that any code that keeps
> > > running into a
> > > memory barrier is going to compromise the gains it should be
> > > getting from
> > > memory caches. This is a price that must be paid if the code
> > > is really in a
> > > continuous interaction with other threads. In my
> > > configuration example,
> > > though, the barrier occurs simply because of the extremely
> > > rare situations
> > > where the cache may be out of date. In addition, a short
> > > period of using
> > > stale configuration data would be of no consequence.
> > >
> > > I've discarded the ThreadLocal subcache approach because
> accessing a
> > > ThreadLocal is even slower than synchronization (on a
> > > Pentium). This may
> > > be an artefact of the Pentium's memory model, which
> eliminates cache
> > > flushes in this case, but at the moment I have no other types
> > > of system to
> > > try it on.
> > >
> > > In this type of problem, data races seem to be inherent
> > > (exactly when did
> > > the configuration change anyway?), but potentially
> > > manageable. The existing
> > > mechanisms for doing the management are unnecessarily
> > > expensive to use.
> > >
> > > Sylvia.
> > >
> > > >Doug Lea wrote
> > > > > What I'm looking for is a mechanism that introduces a
> > > happens-before
> > > > > relationship between some defined action in a thread, and
> > > some other
> > > > > defined action that will occur at a more-or-less
> > > arbitrary future time,
> > > > and
> > > > > in a different thread.
> > > >
> > > >I might be misinterpreting your intent here, but I think
> you may be
> > > >making this out to be harder than it is. If you need
> > > >ordering/visibility without locking, make sure that reader
> > > threads read
> > > >a volatile field that is written by writer threads. Some
> variant of
> > > >this is used (sometimes in conjunction with locking or
> atomic updates
> > > >to coordinate writers) in most "concurrently readable data
> > > structures",
> > > >for example in JSR-166 ConcurrentHashMap and
> ConcurrentLinkedQueue.
> > > >(See http://gee.cs.oswego.edu/dl/concurrency-interest/index.html)
> > > >
> > > >-Doug
> > >
> > >
-------------------------------
JavaMemoryModel mailing list - http://www.cs.umd.edu/~pugh/java/memoryModel



This archive was generated by hypermail 2b29 : Thu Oct 13 2005 - 07:00:52 EDT