Keith Randall <firstname.lastname@example.org> writes:
> In any case, keep in mind that any rule for merging synch blocks at
> compile time necessarily relates to the run-time rule for
> reacquisition of locks. There is no point in making the merging rules
> more restrictive or less restrictive than the corresponding run-time
> rules you want to enforce.
Exactly. The run-time lock behavior precisely defines the permissible
scope of optimization. For example, the more complex lock semantics
in our CRF paper posit that a monitorEnter has no fence behavior if
the last monitorExit on that lock was in the same thread. In effect,
we explicitly merge the lock regions at run time and permit reordering
between them.
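A hedged sketch of what such merging permits (the class and field names here are mine, not from the CRF paper): two back-to-back synchronized blocks on the same lock, where the second monitorEnter immediately follows this thread's own monitorExit, may be treated as a single region.

```java
// Illustrative sketch only; names are hypothetical.
class MergeSketch {
    final Object lock = new Object();
    int a, b;

    // As written: exit 'lock', then immediately re-enter it.
    void original() {
        synchronized (lock) { a = 1; }
        synchronized (lock) { b = 2; }
    }

    // Under the semantics above, the second monitorEnter needs no fence
    // (the same thread performed the last monitorExit on this lock), so
    // a compiler may effectively produce one merged region and reorder
    // across the former block boundary:
    void merged() {
        synchronized (lock) {
            b = 2;   // reordering within the merged region is allowed
            a = 1;
        }
    }
}
```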
That said, we didn't say anything about lock behavior in the presence
of contention. I'd always been thinking of this in terms of
temporal-logic-style assertions ("If a thread tries to enter a
monitor, then it will always eventually succeed") but quickly realized
that many simple conditions degenerate into liveness assertions about
the program itself. We don't want to solve the halting problem.
Keith's assertions are the simplest ones I could come up with as
well. We have to measure progress in terms of monitorEnter/Exit
actions in the original source code:
> The weakest non-empty liveness condition I can think of is the
> following: no thread can wait forever to acquire a lock while a single
> other thread acquires that lock infinitely often. A slightly stronger
> rule would replace the phrase "a single other thread acquires" with
> "other threads acquire".
Keith mentions that his two conditions are identical from the
compiler's perspective: either way, we may merge only finite numbers of
synch blocks. However, this equivalence may not hold if the compiler is
statically interleaving thread executions, something I know Martin
Rinard and his students are working on.
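To make the liveness condition concrete, here is a hypothetical sketch (names are mine, not from the thread) of the transformation it rules out: collapsing an unbounded number of lock regions into one.

```java
// Illustrative sketch only; names are hypothetical.
class StarvationSketch {
    final Object lock = new Object();
    volatile boolean done;
    int steps;

    // As written: the lock is released on every iteration, so a
    // competing thread blocked on 'lock' eventually gets in -- this
    // thread acquires the lock infinitely often if 'done' stays false.
    void worker() {
        while (!done) {
            synchronized (lock) { steps++; }
        }
    }

    // Ruled out by the liveness condition: merging every iteration's
    // region into one means a competing thread may wait forever, even
    // though the source program acquires the lock infinitely often.
    void workerMergedIllegally() {
        synchronized (lock) {
            while (!done) { steps++; }
        }
    }
}
```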
I think the strongest assertion we might want to make beyond the ones
above is "the number of consecutive monitorExit actions that may be
elided by the compiler is bounded by some fixed but
implementation-dependent number". I see two real benefits from such
an assertion. First, it makes clear that a limited amount of unrolling
and so forth is perfectly kosher, but that the limits must be
established ahead of time. Second, it makes clear that programs
needing stronger progress guarantees should rely on high-level
libraries rather than on raw monitors.
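A sketch of the bounded case (my example, not from the thread): unrolling by a factor fixed at compile time elides at most a fixed number of consecutive monitorExit/monitorEnter pairs, which the proposed assertion would permit.

```java
// Illustrative sketch only; names are hypothetical.
class UnrollSketch {
    final Object lock = new Object();
    int sum;

    // One lock region per element.
    void original(int[] data) {
        for (int i = 0; i < data.length; i++) {
            synchronized (lock) { sum += data[i]; }
        }
    }

    // Unrolled by a factor of 2: each merged region elides at most one
    // consecutive monitorExit/monitorEnter pair -- a bound established
    // ahead of time, as the proposed assertion requires.
    void unrolled(int[] data) {
        int i = 0;
        for (; i + 1 < data.length; i += 2) {
            synchronized (lock) {
                sum += data[i];
                sum += data[i + 1];
            }
        }
        for (; i < data.length; i++) {       // leftover element, if any
            synchronized (lock) { sum += data[i]; }
        }
    }
}
```

Both versions compute the same result; only the number of lock acquisitions differs.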
I suspect any stronger assertion starts to encode a policy for
fairness (or even a policy for unfairness, if we consider priority).
I've seen enough work on (work,time,space)-efficient scheduling models
to be leery of such an endeavor; I suspect it would preclude efficient
scheduling for some interesting class of programs.
JavaMemoryModel mailing list - http://www.cs.umd.edu/~pugh/java/memoryModel
This archive was generated by hypermail 2b29 : Thu Oct 13 2005 - 07:00:29 EDT