Re: JavaMemoryModel: Idiom for safe, unsynchronized reads

From: Bill Pugh (pugh@cs.umd.edu)
Date: Mon Jun 28 1999 - 15:50:38 EDT


At 12:17 PM -0700 6/28/99, Paul Haahr wrote:
>
>Right now, it appears to me that the only feasible answer is memory
>barriers around almost all reads. Which scares me a lot, because I have
>a customer who wants to ship my technology on SMP Alpha hardware in the
>near future, and (1) we haven't done the work to support this yet and
>(2) the performance would be much worse than uniprocessor.
>
>Now, please, tell me where I'm wrong.
>
>--p
>-------------------------------

This is what we are worried about. While nobody is thrilled about the
prospect of slowing down Java code, having your JVM core dump because
you got a null vtbl is much worse. And getting surprised by
uninitialized objects is almost as bad.
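
For concreteness, here is a small Java sketch of one common form of the
unsynchronized-read idiom this thread is about (a double-checked-locking
style lazy initializer). The class and field names are mine, not anything
from Paul's code; it is only meant to show where the reader can go wrong.

   class Helper { int value = 42; }

   class Cache {
       private Helper helper;           // written by one thread, read by many

       Helper getHelper() {
           Helper h = helper;           // unsynchronized read
           if (h == null) {
               synchronized (this) {
                   if (helper == null) {
                       // publish the reference; nothing orders the caller's
                       // later loads after these initializing stores
                       helper = new Helper();
                   }
                   h = helper;
               }
           }
           // On a weakly ordered multiprocessor the caller can now see a
           // non-null reference whose fields (or even its header/vtbl)
           // are not yet visible -- the core-dump scenario above.
           return h;
       }
   }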

CALLING ALL COMPUTER ARCHITECTS....

I'm not an expert on processor memory models. I know we have some
people on the mailing list who are. Help us out....

On what processors can this sequence of events happen? Will the fact
that processor 1 did a memory barrier help us out?

                                  Processor 2 has cached the
                                  contents of address 0x1234

   Processor 1 allocates a new
   object at address 0x1234

   Processor 1 initializes the
   object at address 0x1234

   Processor 1 does a memory
   barrier

   Processor 1 stores 0x1234
   into address 0x5678

                                  Processor 2 reads address 0x5678,
                                  gets 0x1234

                                  Processor 2 reads contents of object at
                                  address 0x1234 out of stale cache line
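
Here is the same interleaving written as two Java thread bodies (names are
mine; the "memory barrier" is something the JVM would have to emit, since
it cannot be expressed in Java source):

   class StaleReadScenario {
       static class Node { int payload; }

       static Node published;            // plays the role of address 0x5678

       static void processor1() {
           Node n = new Node();          // allocate the object ("0x1234")
           n.payload = 42;               // initialize it
           // JVM-emitted memory barrier here, ordering the stores above
           published = n;                // store the reference into "0x5678"
       }

       static void processor2() {
           Node n = published;           // reads "0x5678", gets the reference
           if (n != null) {
               // Nothing forces Processor 2 to invalidate or refresh its own
               // cache, so on a sufficiently weak machine this load can be
               // satisfied from the stale line and return pre-initialization
               // contents, despite Processor 1's barrier.
               int v = n.payload;
               System.out.println(v);
           }
       }
   }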

At 12:17 PM -0700 6/28/99, Paul Haahr wrote:
>(Blue skying for a second, I guess it's possible to envision a GC design
>where all processors ensure that newly allocated objects are not cached
>in any stale lines. For that to work, it probably prohibits allocation
>of two objects from the same cache line.)

This is the fallback position. You would still be able to allocate
multiple objects on a cache line if the first object doesn't escape
between the two allocations. Essentially, before "first publication" of
an object (writing a reference to the object into the heap for the very
first time), you would have to bump the allocation pointer up to the
next multiple of a cache line.
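
A rough sketch of that bookkeeping, with illustrative names and an assumed
64-byte line size (the real logic would of course live inside the JVM's
allocator, not in Java code):

   class BumpAllocator {
       static final long CACHE_LINE = 64;   // assumed line size in bytes
       long allocPtr;                       // current bump-allocation pointer

       long allocate(long size) {
           long obj = allocPtr;             // ordinary bump allocation
           allocPtr += size;
           return obj;
       }

       // Called just before a reference to a newly allocated object is
       // written into the heap for the first time ("first publication").
       // Rounding up to a cache-line boundary guarantees that no later
       // allocation shares a cache line with the object being published.
       void beforeFirstPublication() {
           allocPtr = (allocPtr + CACHE_LINE - 1) & ~(CACHE_LINE - 1);
       }
   }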

   You might be able to do a little better than that; it is an
interesting research question.

        Bill

-------------------------------
This is the JavaMemoryModel mailing list, managed by Majordomo 1.94.4.

To send a message to the list, email JavaMemoryModel@cs.umd.edu
To send a request to the list, email majordomo@cs.umd.edu and put
your request in the body of the message (use the request "help" for help).
For more information, visit http://www.cs.umd.edu/~pugh/java/memoryModel


