RE: JavaMemoryModel: JMM and caches.

From: Boehm, Hans (hans_boehm@hp.com)
Date: Mon Oct 27 2003 - 17:44:43 EST


The cost of a volatile read is highly architecture dependent.
On an architecture that doesn't normally reorder reads (as far as we
know, this includes all current X86 implementations), it should
affect only the allowable compiler transformations, and should not
require a memory barrier instruction. On some other architectures,
it does require a memory barrier of some kind. But the cost of that
barrier is again highly variable. (On Itanium, it may be essentially
zero, depending on context, at least on the reader side.
On most machines, the cost should be less than a full cache miss to memory.)
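
As a concrete illustration (a minimal sketch; the Config class and the
field names are hypothetical, not from this thread), the volatile read in
get() below is an ordinary load on current X86 implementations, but it
still constrains what the compiler may reorder around it:

    // Readers poll a volatile reference; a writer replaces it rarely.
    final class Config {
        final int timeoutMillis;
        Config(int timeoutMillis) { this.timeoutMillis = timeoutMillis; }
    }

    final class ConfigHolder {
        // The volatile load in get() is ordered with respect to later
        // reads; on X86 it needs no barrier instruction, elsewhere it may.
        private volatile Config current;

        ConfigHolder(Config initial) { current = initial; }

        Config get() { return current; }              // hot path: one volatile load
        void update(Config next) { current = next; }  // rare path: one volatile store
    }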

Hans

> -----Original Message-----
> From: owner-javamemorymodel@cs.umd.edu
> [mailto:owner-javamemorymodel@cs.umd.edu]On Behalf Of Sylvia Else
> Sent: Monday, October 27, 2003 11:59 AM
> To: javaMemoryModel@cs.umd.edu
> Subject: Re: JavaMemoryModel: JMM and caches.
>
>
> Yes.
>
> The issue I'm trying to resolve is that any code that keeps running
> into a memory barrier is going to compromise the gains it should be
> getting from memory caches. This is a price that must be paid if the
> code is really in continuous interaction with other threads. In my
> configuration example, though, the barrier occurs simply because of the
> extremely rare situations where the cache may be out of date. In
> addition, a short period of using stale configuration data would be of
> no consequence.
>
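
(A rough, hypothetical sketch of the trade-off Sylvia describes above: the
reader keeps a plain copy of the configuration and takes the volatile
read, and hence any barrier, only at a bounded refresh interval, accepting
briefly stale data. The names and the one-second interval are invented
for illustration; ConfigHolder is the sketch from earlier in this message.)

    // Intended to be read by a single thread; writers still go through
    // ConfigHolder.update() as in the sketch above.
    final class CachedConfigReader {
        private final ConfigHolder holder;  // shared holder from the earlier sketch
        private Config cached;              // plain field: no barrier on the fast path
        private long lastRefreshMillis = System.currentTimeMillis();
        private static final long REFRESH_MILLIS = 1000L;  // invented refresh interval

        CachedConfigReader(ConfigHolder holder) {
            this.holder = holder;
            this.cached = holder.get();
        }

        Config get() {
            long now = System.currentTimeMillis();
            if (now - lastRefreshMillis >= REFRESH_MILLIS) {
                cached = holder.get();      // the only volatile read
                lastRefreshMillis = now;
            }
            return cached;
        }
    }
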
> I've discarded the ThreadLocal subcache approach because accessing a
> ThreadLocal is even slower than synchronization (on a Pentium). This
> may be an artefact of the Pentium's memory model, which eliminates
> cache flushes in this case, but at the moment I have no other types of
> system to try it on.
>
> In this type of problem, data races seem to be inherent (exactly when
> did the configuration change anyway?), but potentially manageable. The
> existing mechanisms for doing the management are unnecessarily
> expensive to use.
>
> Sylvia.
>
> >Doug Lea wrote
> > > What I'm looking for is a mechanism that introduces a happens-before
> > > relationship between some defined action in a thread, and some other
> > > defined action that will occur at a more-or-less arbitrary future
> > > time, and in a different thread.
> >
> >I might be misinterpreting your intent here, but I think you may be
> >making this out to be harder than it is. If you need
> >ordering/visibility without locking, make sure that reader threads read
> >a volatile field that is written by writer threads. Some variant of
> >this is used (sometimes in conjunction with locking or atomic updates
> >to coordinate writers) in most "concurrently readable data structures",
> >for example in JSR-166 ConcurrentHashMap and ConcurrentLinkedQueue.
> >(See http://gee.cs.oswego.edu/dl/concurrency-interest/index.html)
> >
> >-Doug
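
(To make the variant Doug describes concrete: a minimal sketch in which
writers coordinate through a lock and publish with a single volatile
write, while readers take only a volatile read and no lock. The Settings
class and the map-of-strings example are hypothetical, not something
taken from JSR-166.)

    import java.util.HashMap;
    import java.util.Map;

    final class Settings {
        // Readers see a fully constructed map: every write to "next" below
        // happens-before the volatile store that publishes it.
        private volatile Map<String, String> snapshot = new HashMap<String, String>();

        String get(String key) {
            return snapshot.get(key);        // one volatile read, no lock
        }

        synchronized void put(String key, String value) {
            Map<String, String> next = new HashMap<String, String>(snapshot);
            next.put(key, value);            // mutate only the private copy
            snapshot = next;                 // publish with a single volatile write
        }
    }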
>
>
JavaMemoryModel mailing list - http://www.cs.umd.edu/~pugh/java/memoryModel


