Array data dependence analysis is really alias analysis: it asks whether two references to an array might refer to the same memory location. Alternatively, we can compute value-based data flow dependences, which determine whether two accesses to an array access the same value (i.e., they reference the same memory location and there are no intervening writes).
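The distinction can be made concrete by brute force. The sketch below (a toy loop and naming of our own, not any published algorithm) enumerates both kinds of dependence for a loop in which every iteration overwrites and then reads the same cell `a[0]`:

```python
# Toy illustration: memory-based vs. value-based flow dependences for
#   for i:  S1: a[0] = i
#           S2: r[i] = a[0]
n = 4
a = [None]
r = [None] * n

mem_deps = set()   # (write iter, read iter): same location, write not after read
val_deps = set()   # (write iter, read iter): the read returns that write's value

last_write = None  # iteration of the most recent write to a[0]
for i in range(n):
    a[0] = i                        # S1 writes a[0]
    last_write = i
    r[i] = a[0]                     # S2 reads a[0]
    for w in range(i + 1):          # every earlier or same-iteration write aliases
        mem_deps.add((w, i))
    val_deps.add((last_write, i))   # but only the last write supplies the value

print(sorted(mem_deps))  # includes loop-carried pairs such as (0, 3)
print(sorted(val_deps))  # only (i, i): no value actually flows across iterations
```

Memory-based analysis reports loop-carried dependences from every earlier write, while the value-based dependences are all within a single iteration, so the loop could run in parallel once `a[0]` is privatized.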

Programs often make use of temporary work arrays that are reused in every iteration of a loop. To execute such a loop in parallel, we must give each iteration of the loop its own copy of the temporary array via array privatization or expansion [MAL93][Fea91][Li92][PW86]. To determine when array privatization is legal, we need to have information about the value-based flow dependences of a program. Array privatization/expansion has been shown to be essential for achieving significant parallelism in many important programs [EHLP91]. Value-based dependences can also be used for optimizing communications in distributed memory multicomputers [AL93].
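As an illustration (a toy example of our own, not drawn from the cited papers), the following sketch shows a loop whose reused work cell forces serial execution, and its expanded form in which each iteration owns a private copy:

```python
def serial(xs):
    # The work cell work[0] is reused by every iteration, so as written
    # the iterations cannot safely run in an arbitrary order.
    work = [0]
    out = []
    for x in xs:
        work[0] = x * x          # temporary reused each iteration
        out.append(work[0] + 1)
    return out

def expanded(xs):
    # After expansion, iteration i owns work_copy[i]; the loop body has
    # no cross-iteration memory reuse and is safe to parallelize.
    work_copy = [[0] for _ in xs]
    out = [None] * len(xs)
    for i, x in enumerate(xs):   # could now be a parallel map
        work_copy[i][0] = x * x
        out[i] = work_copy[i][0] + 1
    return out

print(serial([1, 2, 3]))    # [2, 5, 10]
print(expanded([1, 2, 3]))  # same results, but iterations are independent
```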

Several papers describe algorithms for array privatization [DGS93][MAL93][Li92][Ros90][GS90][Fea91][Rib90][Bra88]. Many of these techniques are conservative and cannot exactly characterize value-based dependences: they can determine which loop levels carry value-based dependences, but not which iterations receive values from which sources.

Paul Feautrier [Fea91][Fea88] invented a
method for computing exact information about which iterations receive
values from which source. This method is exact only within the domain
of programs with affine loop bounds, subscripts and `if` guards.
Feautrier's technique incorporates two distinct
mechanisms: one for computing value-based dependences from a single write
to a single read, and one that combines these
results to derive the dependences from multiple writes to a single read.
Feautrier's prototype implementation of his technique is very
slow, requiring 82 seconds to analyze a 50-line program.

Monica Lam et al. [MAL93] developed a variant of Feautrier's first mechanism that is guaranteed to be fast in certain commonly occurring cases; in other cases, they use Feautrier's algorithm. They do not describe any method for improving what is potentially the most expensive aspect of Feautrier's technique: combining the results from multiple writes.

We [PW93b] describe a method of computing value-based dependences that achieves the same accuracy as Feautrier's. Our method derives from a single equation that formalizes the intuitive definition of value-based dependences, as opposed to the two-phase approaches of Feautrier and of Lam et al. However, simplifying this equation is non-trivial, since it generally involves formulas of the form

    ∃α₀ : C₀ ∧ ¬(∃α₁ : C₁) ∧ ⋯ ∧ ¬(∃αₙ : Cₙ)

Here, the αᵢ's are sets of variables and the Cᵢ's are conjunctions of linear constraints. Such formulas also arise in, but are not addressed by, the second phase of Feautrier's and of Lam et al.'s work.
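To make the shape of such formulas concrete, the following toy sketch (our own encoding over a small bounded integer domain, not the simplification method itself) evaluates a formula of the shape C₀ ∧ ¬(∃a : C₁) ∧ ¬(∃b : C₂) by exhaustive search:

```python
# Toy evaluation of  C0 ∧ ¬(∃a: C1) ∧ ¬(∃b: C2)  by enumeration.
# The constraints are arbitrary illustrative choices, and bounding the
# domain makes exhaustive search terminate; real dependence analysis
# must simplify such formulas symbolically.
N = 5

def C0(i):    return 0 <= i < N
def C1(i, a): return 0 <= a < N and a == i - 1      # ∃a : a = i - 1
def C2(i, b): return 0 <= b < N and 2 * b == i + 1  # ∃b : 2b = i + 1

solutions = [
    i for i in range(N)
    if C0(i)
    and not any(C1(i, a) for a in range(N))   # ¬(∃a : C1)
    and not any(C2(i, b) for b in range(N))   # ¬(∃b : C2)
]
print(solutions)  # [0]: the only i satisfying C0 with no witness for C1 or C2
```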

If the terms are nonconvex, evaluating the negation is tricky. We [PW93b] showed that the technique used by Feautrier, based on quasilinear constraints [AI91], is incomplete. Also, a naive conversion of such a formula to disjunctive normal form (which most techniques for checking satisfiability require) can produce a huge explosion in the number of terms.
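The size of this blow-up is easy to quantify in a toy model (our own illustration; the constraint counts below are assumptions, not figures from the cited work): negating a conjunction of k atomic constraints yields a disjunction of k negated constraints, and distributing n such negated conjuncts into disjunctive normal form multiplies their sizes together:

```python
def dnf_terms(conjunct_sizes):
    """Number of DNF terms after distributing ¬C1 ∧ ... ∧ ¬Cn,
    where Ci is a conjunction of conjunct_sizes[i] atomic constraints:
    each ¬Ci becomes a disjunction of conjunct_sizes[i] terms, and
    distribution takes the product over all i."""
    total = 1
    for k in conjunct_sizes:
        total *= k
    return total

print(dnf_terms([3, 3, 3]))  # 27 terms from three negated 3-constraint conjuncts
print(dnf_terms([3] * 12))   # 531441 terms from twelve such conjuncts
```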

We [PW93b] give an exact method for simplifying and verifying such formulas, along with heuristics that make the simplification efficient. In one example arising from the INTERF code, our techniques allow us to handle a problem with 12 terms without any increase in the number of terms; a naive translation to disjunctive normal form would produce an exponential number of terms.

Feautrier's algorithms are currently implemented only in prototype form, but our algorithms are 20-60 times faster than his prototype on simple, realistic examples [PW93a], and take only 2-8 times as long as the Omega test requires for standard array alias/data-dependence analysis.