Re: JavaMemoryModel: How relaxed should our memory model be?

From: Jerry Schwarz
Date: Fri Oct 05 2001 - 18:22:09 EDT

Part of the problem is that I don't think there is a consensus on what it
is we're trying to give a semantics to, or in what a semantics for that
thing would look like.

My answer to those questions is that we're trying to give semantics to two
things:

A. Threads: We want to give semantics to individual threads in terms of
memory operations that the threads might perform. However, you can't
specify the allowed memory operations of a thread in isolation; you need to
know what other threads might be doing. So the meaning of a thread can be
specified only as a map from contexts to sequences of memory operations
(and ordering between the memory operations of the thread and the
context). What is a context? Part of it is the state of the memory when
the thread begins, and another part of it looks a lot like a thread.

In order to specify thread-meanings we will start with the naive semantics
of Java code and allow certain reorderings (and replacements). Mostly the
allowed reorderings will be arbitrary subject to certain data dependencies
and synchronization dependencies.
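As a sketch of what "subject to data dependencies" means (my own illustration, not from the original discussion), consider two pairs of statements within a single thread:

```java
public class ReorderDemo {
    static int a, b;

    // Two independent writes: neither reads the other's result, so a
    // compiler (or CPU) may legally issue them in either order.
    static void independentWrites() {
        a = 1;  // no data dependence on the write below
        b = 2;  // may be moved above a = 1 without changing this thread's meaning
    }

    // Here the second statement reads the first's result, so the data
    // dependence forbids reordering within this thread.
    static void dependentWrites() {
        a = 1;
        b = a + 1;  // must observe a = 1; cannot move above it
    }

    public static void main(String[] args) {
        independentWrites();
        dependentWrites();
        System.out.println(a + "," + b);  // prints 1,2
    }
}
```

Another thread observing a and b concurrently could see the independent writes in either order; the dependent pair is constrained in every context.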

B. Programs: The semantics of a Java program are the I/O operations (or
possibly calls to other native methods) that it performs. It does not
include memory operations. In order to define program-meanings based on
thread-meanings we need an operation that puts together thread-meanings in
a fashion where each provides (part of) the context of the other. When
we've combined all the threads in a program, we can throw away the memory
operations.

The basic requirement on the system (including compilers) is that they
preserve the meaning of programs. The simplest way to do that is to
generate code for methods in such a way that it will work in any thread in
any context. However, they're free to do something better if they can tell
that the particular contexts in which the threads will execute this code
are limited in some fashion.

At 11:00 AM 10/05/2001, Bill Pugh wrote:
>I'd like to try once again to come to some understanding on the issues
>I've been raising over the past few weeks.
>We've been trying to devise a denotational and executable semantics for
>multithreaded Java that is as relaxed/weak as possible, with the idea of
>allowing more compiler transformations.
>Jan-Willem suggests not allowing the use of any information inferred about
>the heap in reordering past potential control flow boundaries.
>At 6:38 PM -0400 9/12/01, Jan-Willem Maessen wrote:
>>2) A write may be reordered across a control flow boundary iff it can
>> be shown that the particular write will occur in any execution,
>> regardless of the existence or behavior of other threads in the
>> system.

In order to make sense of this rule in my framework I need to recast it,
not as part of the semantics, but as a rule for the compiler that follows
as a consequence of the semantics. As stated, the rule allows a certain
reordering only if a certain thing (the write) occurs in "any
execution". But control flow boundaries aren't visible in thread-meanings,
so the reordering is allowed in any context in which the write occurs. So
if the write occurs in all contexts, then the compiler can perform the
reordering. If it occurs only in some contexts, but the compiler knows that
the code will only be executed in contexts in which the write occurs, then
it can also do the reordering.
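To illustrate the compiler-rule reading (a sketch of mine; the proof that the parameter is non-null is assumed, not shown), consider a write that sits below an implicit control-flow boundary:

```java
public class WriteHoist {
    static class Point { int x; }
    static int sink;

    // The load p.x carries an implicit null check -- a control-flow
    // boundary, since it may throw.  The rule permits moving the write
    // to sink above it only when the write is guaranteed to occur in
    // every (relevant) execution, i.e. only when p is provably non-null.
    static void original(Point p) {
        int tmp = p.x;  // potential NullPointerException
        sink = 1;       // must not become visible early if p might be null
    }

    // Legal transformed version *when* the compiler has proven p != null
    // in every context that reaches this code (a hypothetical proof):
    static void transformed(Point p) {
        sink = 1;       // hoisted above the now provably dead null check
        int tmp = p.x;
    }

    public static void main(String[] args) {
        Point p = new Point();
        original(p);
        transformed(p);
        System.out.println(sink);  // prints 1
    }
}
```

If p could be null in some reachable context, the transformed version would make the write to sink visible in an execution where the original would have thrown first, changing the program's meaning.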

>Consider the following example:
>Initially, p = new Point(0,0), q = new Point(0,0);
>Thread 1:
>r1 = p
>r2 = q
>r3 = r1.x
>r2.y = 1
>Thread 2:
>r4 = p
>r5 = q
>r6 = r5.y
>r4.x = 2
>Can this program result in r3 = 2 and r6 = 1? If a VM determined that
>reads of p and q never returned null, and thus the reads of r1.x and r5.y
>could never throw an exception, it might decide to reorder the last two
>statements in each thread. This reordering would allow the result r3 = 2
>and r6 = 1. But as far as I can tell, any semantics that respected
>Jan-Willem's proposed rule (2) would make this execution illegal.
>Therefore, any compiler analysis that inferred that p and q always held
>non-null values could only be used with great care.
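For concreteness, the reordered execution Bill describes can be simulated deterministically (a sketch of mine: it replays one specific interleaving in a single thread, since with real threads the outcome would be scheduler-dependent):

```java
public class ReorderedInterleaving {
    static class Point {
        int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    public static void main(String[] args) {
        Point p = new Point(0, 0), q = new Point(0, 0);

        // After the compiler reorders the last two statements of each
        // thread, one possible interleaving is:
        Point r1 = p, r2 = q;  // Thread 1: reads of p and q
        Point r4 = p, r5 = q;  // Thread 2: reads of p and q
        r2.y = 1;              // Thread 1: write, moved above its read
        r4.x = 2;              // Thread 2: write, moved above its read
        int r3 = r1.x;         // Thread 1: now sees Thread 2's write
        int r6 = r5.y;         // Thread 2: now sees Thread 1's write

        System.out.println("r3=" + r3 + " r6=" + r6);  // prints r3=2 r6=1
    }
}
```

Without the reordering this outcome is impossible: each thread's read would precede its write, and the four orderings needed for r3 = 2 and r6 = 1 would form a cycle.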

Consider the low level operations that appear (implicitly) in the example.

    if (r1 == null) throw new NullPointerException();
    r3 = FETCH(r1.x);

As I read Jan-Willem's rule, it says that if the compiler knows that the
condition is false in all contexts, it can move the FETCH. This is
certainly correct.

But we can go further. If the compiler can prove that the condition is
false in all relevant contexts (where it determines some properties of the
relevant contexts by analyzing the code of the program), then it can also
apply the rule.
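One way such a proof might go (my own sketch, relying on the fact that Thread.start() establishes ordering between the starting thread's prior writes and the new thread's actions):

```java
public class NonNullProof {
    static class Point { int x; }

    // Assigned exactly once, before any other thread is started.
    static Point p;

    public static void main(String[] args) throws InterruptedException {
        p = new Point();  // this write precedes the start() below

        Thread t = new Thread(() -> {
            // Because the write to p happens-before everything this
            // thread does, no execution can observe p == null here, so
            // a compiler analyzing the whole program may treat the
            // implicit null check on p.x as dead in this context.
            int r = p.x;
            System.out.println(r);  // prints 0
        });
        t.start();
        t.join();
    }
}
```

The point is that the non-nullness is not a property of the thread's code in isolation; it is a property of the limited set of contexts this program actually creates for that thread.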

We leave this determination up to the ingenuity of the compiler writers,
where it belongs.

>At 8:48 PM -0700 8/12/01, Cliff Click wrote:
>>This all leads down a big black hole: you define
>>the memory semantics based on some reasonably
>>sane dataflow analysis. Now everybody has to do
>>analysis that's strictly weaker (so they don't
>>accidentally remove a required dependence and float
>>things around illegally) OR do an exact analysis
>>to come up with the required dependencies, then
>>use the AS-IF rule and stronger analyses to try
>>and improve things by removing required dependencies
>>that do not lead to observable program behavior
>>I'd prefer you Don't Go There as it leads to a
>>nightmare of conflicting very slightly broken
>
>A weaker semantics is intended to provide more options for compiler
>writers. If you want to perform analysis/transformations under a model
>where you make no assumptions/inferences about the heap, you are perfectly
>free to do so. That will be completely legal.
>I will admit that if you want to do compiler analysis that makes
>inferences about the heap, it will be tricky. Among other things, you will
>have to take into account the memory model. For example, in proving that p
>and q are never null, you would have to prove that the reads of p and q
>could not see the initial null values stored in those variables.
>On the other hand, if we argue that you can't make inferences about the
>heap, it isn't clear we should be doing things like determining that
>objects are thread local.
>So life is hard. But a lot of people have been arguing for the freedom to
>perform "all reasonable" compiler optimizations. Although they may not be
>used or important now, I don't think we should be ruling out large
>categories of seemingly reasonable compiler optimizations.
>For the example above, I believe r3 = 2 and r6 = 1 should be a legal
>result.
>Does anyone wish to argue that it should not?
> Bill

JavaMemoryModel mailing list -

This archive was generated by hypermail 2b29 : Thu Oct 13 2005 - 07:00:35 EDT