Not a problem.
I tried to convince Sarita that this was a problem with her
understanding of the informal intuition behind our model of
causality, but I guess I wasn't successful.
At 5:58 PM -0500 7/30/03, Sarita Adve wrote:
>I don't think we can argue that r1 == r2 == 1 does not violate any intuitive
>notion of causality.
Sorry, but it does not violate my intuitive notion of causality.
Informal intuitions are of course subject to misunderstandings, which
is why we have formal specifications.
Here is our informal intuition behind causality:
At 7:10 AM -0700 7/26/03, Bill Pugh wrote:
>It must be possible to explain how an execution occurred, without
> there existing an action x requiring that x be assumed to occur in order to
> show that x did occur.
OK, for us, an explanation is a causal order.
We use r1 = A (seeing 1) as the first action in the causal order.
We can explain this by noting that in all executions, a write of 1 to
A is guaranteed to occur and be visible to the read of A by thread 1.
We do not need to suppose that r1 = A saw 1 in order to explain how
r1 = A could see 1.
Given that we know r1 = A (seeing 1) occurs, we can then explain that
the following actions occur, without any need to reason about the future:
B = 1
r2 = B (seeing 1)
A = 1
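The program under discussion isn't quoted in this message. A program with the shape described above (a second thread that writes 1 to A on every path, so the write is guaranteed in all executions), in the style of the JMM causality test cases, might look like the following sketch; the class name and harness are mine, not from the thread:

```java
// Hypothetical reconstruction -- the original test case is not quoted here.
// Initially A = B = 0, and all fields are plain (non-volatile).
//
//   Thread 1: r1 = A; if (r1 == 1) B = 1;
//   Thread 2: r2 = B; if (r2 == 1) A = 1; else A = 1;
//
// Thread 2 writes A = 1 on every path, so "a write of 1 to A is guaranteed
// to occur" in every execution. The debated question is whether the racy
// outcome r1 == r2 == 1 is allowed.
public class CausalityExample {
    static int A = 0, B = 0;   // shared, non-volatile
    static int r1, r2;

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            r1 = A;                 // may or may not see 1
            if (r1 == 1) B = 1;
        });
        Thread t2 = new Thread(() -> {
            r2 = B;                 // may or may not see 1
            if (r2 == 1) A = 1;
            else A = 1;             // A = 1 happens on every path
        });
        t1.start(); t2.start();
        t1.join(); t2.join();
        // After both joins, A == 1 is guaranteed (the write is unconditional
        // in thread 2, and join() provides the happens-before edge).
        System.out.println("A=" + A + " r1=" + r1 + " r2=" + r2);
    }
}
```

The causal order in the text then reads off this program: since A = 1 occurs in every execution, r1 = A is allowed to see 1; that justifies B = 1, which justifies r2 = B seeing 1, which justifies the (already unconditional) A = 1.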
The problem is that Sarita is using the wrong causal order. If you
start by trying to explain why A = 1 happened, then you do indeed get
into trouble, and find that you cannot validate the causal order.
>I believe Bill's argument is going to be that this isn't their intuitive
>notion of causality.
No, it isn't.
I think Sarita's mental model of our intuition is that for each
action x in an execution, you can justify how that action occurred
without needing to assume that x occurred.
But that isn't our model. We long ago determined that such a model won't work.
Our model is that you have an explanation, a story, a script, a court
case, in which you show a series of facts. As you show each fact, you
are allowed to rely upon the things you have already shown.
Of course, informal descriptions are never precise and I can see how
Sarita might have misinterpreted our previous description. We can
lengthen our informal description to make it more precise, but
informal descriptions can never be as precise as formal descriptions,
and the point of an informal description is to be short.
Also, the type of causality we are arguing about is similar to the
notion of causality in a world with time machines. I've told people
that our memory model is:
* You can't kill your father before he met your mother (consistency)
* You can't give yourself the time machine blueprints, unless you
would have obtained them anyway (causality)
I'm sure if you put 10 science fiction authors in a room, you would
come up with 6-10 different notions of causality in a world with time machines.
So perhaps there is more than one notion of causality for memory
models. Sarita certainly seems to have a different one than Jeremy does.
At any rate, I don't think that arguing about whose interpretation of
whose informal intuition is valid is very useful at this point.
Instead, we should work to come up with a series of examples to
illustrate the differences in the models of causality allowed by
different memory models.
JavaMemoryModel mailing list - http://www.cs.umd.edu/~pugh/java/memoryModel