5 Floyd-Hoare Logic
In this lecture, we will explore the mathematical foundations of the concepts introduced in the previous lecture. We will use mathematical formulas to formally describe the semantics of a program.
5.1 Program State
The values of the local variables and of the input and output parameters define the state of a program at a program point.
Dafny
method MyMethod(x: int) returns (y: int)
  requires 10 <= x
  ensures 25 <= y
{
  var a, b;
  a := x + 3;
  if x < 20 {
    b := 32 - x;
  } else {
    b := 16;
  }
  y := a + b;
}
Dafny
method MyMethod_forward(x: int) returns (y: int)
  requires 10 <= x
  ensures 25 <= y
{
  // We start with the precondition.
  assert 10 <= x;
  var a, b; // Variable declaration. The variables could store anything!
  a := x + 3;
  assert 10 <= x && a == x + 3;
  // We use "and" to record that we learned the value of 'a'.
  if x < 20 {
    // Important: here we know that the 'if' test succeeded!
    assert 10 <= x && a == x + 3 && x < 20;
    b := 32 - x;
    assert 10 <= x && a == x + 3 && x < 20 && b == 32 - x;
  } else {
    // Now we know that the test failed.
    assert 10 <= x && a == x + 3 && !(x < 20);
    b := 16;
    assert 10 <= x && a == x + 3 && !(x < 20) && b == 16;
  }
  // What is true after the 'if'?
  // We must have followed one of the two paths, which is
  // well-expressed with 'or'.
  assert (10 <= x && a == x + 3 && x < 20 && b == 32 - x)
      || (10 <= x && a == x + 3 && !(x < 20) && b == 16);
  y := a + b;
  assert ((10 <= x && a == x + 3 && x < 20 && b == 32 - x)
       || (10 <= x && a == x + 3 && !(x < 20) && b == 16))
      && y == a + b;
  // At this point, we simply check that the formula we calculated
  // implies the method postcondition! That check can be phrased
  // as a formula itself:
  // assert (((10 <= x && a == x + 3 && x < 20 && b == 32 - x)
  //       || (10 <= x && a == x + 3 && !(x < 20) && b == 16))
  //      && y == a + b)
  //     ==> 25 <= y;
}
Dafny
method Swap(x: int, y: int) returns (a: int, b: int)
  ensures a == y && b == x
{
  a, b := x, y;
  assert a == x && b == y;
  var tmp := a;
  assert a == x && b == y && tmp == a;
  // OK, the next one is going to be trouble.
  a := b;
  // assert a == x && b == y && tmp == a && a == b;
  // Nope, can't prove that one!
  b := tmp;
}
Dafny
method Swap_backward(x: int, y: int) returns (a: int, b: int)
  ensures a == y && b == x
{
  // The final formula we calculate is obviously true, which is
  // necessary, since we have no precondition/'requires'.
  // This rule easily generalizes to parallel assignment (multiple
  // variables to left of ':=').
  assert y == y && x == x;
  a, b := x, y;
  assert b == y && a == x;
  var tmp := a;
  assert b == y && tmp == x;
  a := b;
  // The rule for assignment may now be surprising: replace all
  // mentions of the variable being assigned, turning them into
  // copies of the expression being assigned!
  assert a == y && tmp == x;
  b := tmp;
  // Our easy starting point: copy in the postcondition.
  assert a == y && b == x;
}
Dafny
method MyMethod_backward(x: int) returns (y: int)
  requires 10 <= x
  ensures 25 <= y
{
  // We need to confirm that our precondition truly implies that
  // formula, which proof goal we restate here explicitly.
  assert 10 <= x ==> ((x < 20 && 25 <= (x + 3) + (32 - x))
                   || (!(x < 20) && 25 <= (x + 3) + 16));
  var a, b;
  assert (x < 20 && 25 <= (x + 3) + (32 - x))
      || (!(x < 20) && 25 <= (x + 3) + 16);
  a := x + 3;
  // Now we mechanically create an "or" of the two top formulas
  // of the blocks, also "and"ing in the 'if' expression or its
  // negation.
  assert (x < 20 && 25 <= a + (32 - x))
      || (!(x < 20) && 25 <= a + 16);
  if x < 20 {
    assert 25 <= a + (32 - x);
    b := 32 - x;
    assert 25 <= a + b;
  } else {
    assert 25 <= a + 16;
    b := 16;
    assert 25 <= a + b;
  }
  // For an 'if', copy the current formula into
  // the ends of the two blocks.
  assert 25 <= a + b;
  y := a + b;
  // Easy start: copy in the postcondition.
  assert 25 <= y;
}
It turns out there is actually a complete rule we can use to calculate formulas in the forward direction! In the case of assignment, however, that rule has to use an ’exists’ quantifier, which makes it more complex. In this class, we will focus only on backward reasoning.
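For concreteness, here is a small sketch of our own (the method name is made up, not from the lecture code) contrasting the two directions. The forward rule for assignment has the shape { P } x := e { exists x0 :: P[x := x0] && x == e[x := x0] }, quantifying over the old value of x, while the backward rule is plain substitution, so Dafny only has to check an implication.

Dafny
method ForwardVsBackward(y: int) returns (x: int)
  requires y == 1
  ensures x == 3
{
  // Backward from the postcondition: (x == 3)[x := y + 2] is
  // y + 2 == 3, which the precondition y == 1 implies.
  assert y + 2 == 3;
  x := y + 2;
  assert x == 3;
}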
See Chapter 2 of Program Proofs for a more formal take on all of these matters.
Instead of going into those formalities, let’s meet some of the useful built-in, immutable container types of Dafny, which turn out to work very well with this calculational style, because we are allowed to mention containers and operations on them in formulas.
5.1.1 Sequence
Dafny
method Seq1(a: int, b: int) returns (r: int)
  ensures r == a + b
{
  // Great: we're down to a simple arithmetic formula that is easy
  // to check.
  assert 0 <= 0 < 3 && 0 <= 2 < 3 && 0 <= 1 < 3
      && a + 3 * b - 2 * b == a + b;
  // That last assertion can be simplified using algebraic
  // properties of sequence operations, to the above:
  assert 0 <= 0 < |[a, 2 * b, 3 * b]|
      && 0 <= 2 < |[a, 2 * b, 3 * b]|
      && 0 <= 1 < |[a, 2 * b, 3 * b]|
      && [a, 2 * b, 3 * b][0] + [a, 2 * b, 3 * b][2] - [a, 2 * b, 3 * b][1] == a + b;
  var s : seq<int> := [a, 2 * b, 3 * b];
  assert 0 <= 0 < |s| && 0 <= 2 < |s| && 0 <= 1 < |s|
      && s[0] + s[2] - s[1] == a + b;
  r := s[0];
  assert 0 <= 2 < |s| && 0 <= 1 < |s| && r + s[2] - s[1] == a + b;
  r := r + s[2];
  // Note that we add a requirement of in-bounds access, in
  // processing a read from a sequence.
  assert 0 <= 1 < |s| && r - s[1] == a + b;
  r := r - s[1];
  assert r == a + b;
}
Dafny
method Seq2(s: seq<int>, i: int) returns (r: seq<int>)
  requires 0 <= i < |s|
  ensures r == s
{
  // This top assertion trivially simplifies into the precondition.
  assert 0 <= i < |s| && s == s;
  // Going from the line below to the one above, we use the fact
  // that concatenating these two particular slices rebuilds the
  // original sequence.
  assert 0 <= i < |s[..i] + s[i..]| && s[..i] + s[i..] == s;
  r := s[..i] + s[i..];
  assert 0 <= i < |r| && r == s;
  assert 0 <= i < |r| && 0 <= i < |r| && r[i := r[i] + 10 - 10] == s;
  // OK, that last formula is getting unwieldy!
  // Let's do two rounds of simplification.
  assert 0 <= i < |r| && 0 <= i < |r[i := r[i] + 10]|
      && r[i := r[i] + 10][i := r[i := r[i] + 10][i] - 10] == s;
  r := r[i := r[i] + 10];
  assert 0 <= i < |r| && r[i := r[i] - 10] == s;
  r := r[i := r[i] - 10];
  assert r == s;
}
By the way, strings are just a special kind of sequence in Dafny: seq<char>.
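As a quick illustration (a hypothetical example, not from the lecture code), the sequence operations above apply to strings directly:

Dafny
method StringExample() {
  var s: string := "hoare";
  assert |s| == 5;                      // length
  assert s[0] == 'h' && s[4] == 'e';    // indexing
  assert |s + " logic"| == |s| + 6;     // concatenation
}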
5.1.2 Set
Dafny
method Set1(a: int, b: int) returns (r: bool)
  ensures r == (a == b)
{
  return |{a, b}| == 1;
  // Tricky way to compute equality, huh?
}
Dafny
method Set2(s: set<int>, t: set<int>, u: set<int>)
  ensures s + t == t + s                       // '+' is union.
  ensures s * t == t * s                       // '*' is intersection.
  ensures s * (t + u) == s * t + s * u         // distributivity
  ensures 13 !in s ==> |s + {13}| == |s| + 1   // membership
  ensures s <= s + t + u                       // subset
{
  // Dafny automatically proves the postconditions.
  // We don't actually need to write any code here.
  // We could still calculate formulas using set operators!
}
Dafny
method Multiset1(a: int, b: int) returns (r: bool)
  ensures r == false
{
  return |multiset{a, b}| == 1;
  // Why does this expression indeed always return 'false'?
  // Because this multiset always has two elements, never one!
  // It just might be that the two elements are equal.
}

method Multiset2(m : multiset<int>) returns (m': multiset<int>)
  requires |m| > 0
  ensures m + m != m  // Negation of a set property that doesn't
                      // hold for multisets!
  ensures m * m == m  // But this one does.
  ensures |m'| > |m|
{
  m' := m[42 := m[42] + 8];
  // This notation adjusts the count of element 42, to 8 more
  // than it started out. If 42 was not in the original multiset,
  // 'm[42]' evaluates to 0.
}
5.1.3 Map
Dafny
method Map1(ak: int, bk: int, av: int, bv: int) returns (m: map<int, int>)
  requires ak != bk
  ensures |m| == 2
  ensures ak in m
  ensures m[ak] == av
  ensures bk in m
  ensures m[bk] == bv
  ensures m.Keys == {ak, bk}
  ensures m.Values == {av, bv}
  ensures (ak, av) in m.Items
  ensures (bk, bv) in m.Items
  // We snuck in an example of tuples!
{
  m := map[];
  m := m[ak := av];
  m := m[bk := bv];
}
Dafny
method IsDuplicateAt(s: seq<int>, n: int) returns (b: bool)
  requires 0 <= n < |s|
  ensures b == (s[n] in s[..n] || s[n] in s[n+1..])
{
  // This assertion helps Dafny figure out why the method meets
  // its specification. Think of it like proposing a useful lemma
  // on the way to a theorem. Dafny doesn't invent the lemmas on
  // its own, but it is able to check them and then use them to
  // prove later facts.
  assert s == s[..n] + [s[n]] + s[n+1..];
  return multiset(s)[s[n]] > 1;
}
5.2 Hoare Triples
A Hoare triple is a claim about the state before and after executing a statement. The standard notation is
{P} s {Q}
If statement s begins execution in a state satisfying assertion P,
and if s eventually terminates in some final state,
then that final state will satisfy the assertion Q.
Assertion P is called the precondition of the triple, and Q is the postcondition.
For example,
{x == 0} x := x + 1 {x == 1}
forall m, {x == m} x := x + 1 {x == m + 1}
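In Dafny terms (a sketch with made-up method names), a Hoare triple reads as a method whose 'requires' clause is P and whose 'ensures' clause is Q; since parameters are immutable, the updated value is returned as x':

Dafny
method IncrementFromZero(x: int) returns (x': int)
  requires x == 0
  ensures x' == 1
{
  x' := x + 1;
}

method IncrementFromM(x: int, m: int) returns (x': int)
  requires x == m
  ensures x' == m + 1
{
  x' := x + 1;
}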
5.3 Proof Rules
The goal of Hoare logic is to provide a compositional method for proving the validity of specific Hoare triples. That is, we want the structure of a program’s correctness proof to mirror the structure of the program itself. To this end, in the sections below, we’ll introduce a rule for reasoning about each of the different syntactic forms of statements in Dafny – one for assignment, one for sequencing, one for conditionals, etc. – plus a couple of "structural" rules for gluing things together. We will then be able to prove programs correct using these proof rules.
5.3.1 Empty statement blocks
Since empty statement blocks don’t change the state, they preserve any assertion P:
------------------- (hoare skip)
{ P } {} { P }
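A tiny instance of this rule, checked by Dafny (a sketch with a made-up method name):

Dafny
method SkipExample(x: int)
  requires x == 7
{
  assert x == 7;  // P holds before the empty block
  {
  }
  assert x == 7;  // and still holds after it
}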
5.3.2 Sequencing
If statement s1 takes any state where P holds to a state where Q holds, and if s2 takes any state where Q holds to one where R holds, then doing s1 followed by s2 will take any state where P holds to one where R holds:
{ P } s1 { Q }
{ Q } s2 { R }
---------------------- (hoare sequence)
{ P } s1; s2 { R }
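Here is a small instance as a sketch (the method name is hypothetical): the intermediate assertion plays the role of Q, gluing the triple for the first statement to the triple for the second.

Dafny
method SequenceExample(x: int) returns (y: int)
  requires x == 1   // P
  ensures y == 4    // R
{
  var z := x + 1;   // s1
  assert z == 2;    // Q
  y := z + z;       // s2
}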
5.3.3 Assignment
The rule for assignment is the most fundamental of the Hoare logic proof rules. Here’s how it works.
Consider this incomplete Hoare triple:
{ ??? } x := y { x == 1 }
We want to assign y to x and finish in a state where x is 1. What could the precondition be?
One possibility is y == 1, because if y is already 1 then assigning it to x causes x to be 1. That leads to a valid Hoare triple:
{ y == 1 } x := y { x == 1 }
It may seem as though coming up with that precondition must have
taken some clever thought. But there is a mechanical way we could
have done it: if we take the postcondition x == 1 and in it
replace x with y, we get exactly the precondition y == 1.
That same idea works in more complicated cases. For example:
{ ??? } x := x + y { x == 1 }
If we replace the x in x == 1 with x + y, we get x + y == 1. That again leads to a valid Hoare triple:
{ x + y == 1 } x := x + y { x == 1 }
Why does this technique work? The postcondition identifies some property P that we want to hold of the variable x being assigned. In this case, P is "equals 1". To complete the triple and make it valid, we need to identify a precondition that guarantees that property will hold of x. Such a precondition must ensure that the same property holds of _whatever is being assigned to_ x. So, in the example, we need "equals 1" to hold of x + y. That’s exactly what the technique guarantees.
In general, the postcondition could be some arbitrary assertion Q, and the right-hand side of the assignment could be some arbitrary arithmetic expression a:
{ ??? } x := a { Q }
The precondition would then be Q, but with any occurrences of x in it replaced by a.
Let’s introduce a notation for this idea of replacing occurrences: Define Q[x := a] to mean "Q where a is substituted in place of x".
This yields the Hoare logic rule for assignment:
---------------------------- (hoare assign)
{ Q[x := a] } x := a { Q }
To many people, this rule seems "backwards" at first, because it proceeds from the postcondition to the precondition. Actually, it makes good sense to go in this direction: the postcondition is often what is more important, because it characterizes what we can assume after running the code.
Here are some valid instances of the assignment rule:
{ x + 1 <= 5 }
  x := x + 1
{ x <= 5 }

{ (x == 3)[x := 3] }    (that is, 3 == 3)
  x := 3
{ x == 3 }
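Rendered as Dafny methods (a sketch with hypothetical names; since parameters are immutable, the old value of x in the first instance is passed in as a separate parameter x0):

Dafny
method AssignInstance1(x0: int) returns (x: int)
  requires x0 + 1 <= 5   // (x <= 5)[x := x + 1], with x0 standing for the old x
  ensures x <= 5
{
  x := x0 + 1;
}

method AssignInstance2() returns (x: int)
  requires 3 == 3        // (x == 3)[x := 3]
  ensures x == 3
{
  x := 3;
}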
5.3.4 Consequence
Sometimes the preconditions and postconditions we get from the Hoare rules won’t quite be the ones we want in the particular situation at hand – they may be logically equivalent but have a different syntactic form that fails to unify with the goal we are trying to prove, or they actually may be logically weaker (for preconditions) or stronger (for postconditions) than what we need.
For instance,
{3 == 3} x := 3 {x == 3}
follows directly from the assignment rule, but
{true} x := 3 {x == 3}
does not, since true and 3 == 3 are not syntactically identical assertions. However, they are logically _equivalent_, so if one triple is valid, then the other must certainly be as well. We can capture this observation with the following rule:
{P'} s {Q}
P <-> P'
------------------------- (hoare consequence pre equiv)
{P} s {Q}
Taking this line of thought a bit further, we can see that strengthening the precondition or weakening the postcondition of a valid triple always produces another valid triple. This observation is captured by two _Rules of Consequence_.
{P'} s {Q}
P -> P'
----------------------------- (hoare consequence pre)
{P} s {Q}
{P} s {Q'}
Q' -> Q
----------------------------- (hoare consequence post)
{P} s {Q}
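In Dafny terms (a sketch with a made-up method name), the consequence rules correspond to a method whose declared precondition is stronger than what its body needs and whose declared postcondition is weaker than what its body establishes:

Dafny
method ConsequenceExample(x: int) returns (y: int)
  requires x == 10   // implies the weaker fact 0 <= x that the body needs
  ensures 0 <= y     // implied by the stronger fact y == 10 that the body gives
{
  y := x;
}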
5.3.5 Conditionals
What sort of rule do we want for reasoning about conditional statements?
Certainly, if the same assertion Q holds after executing either of the branches, then it holds after the whole conditional. So we might be tempted to write:
{P} s1 {Q} {P} s2 {Q}
---------------------------------
{P} if b { s1 } else { s2 } {Q}
However, this rule by itself is too weak: it tells us nothing about the guard inside the branches, so it cannot prove a valid triple such as

{ true }
if x <= 0 {
  y := 2
} else {
  y := x + 1
}
{ x <= y }
Fortunately, we can say something more precise. In the "then" branch, we know that the boolean expression b evaluates to true, and in the "else" branch, we know it evaluates to false. Making this information available in the premises of the rule gives us more information to work with when reasoning about the behavior of s1 and s2 (i.e., the reasons why they establish the postcondition Q).
{P && b} s1 {Q}    {P && !b} s2 {Q}
---------------------------------------- (hoare if)
{P} if b { s1 } else { s2 } {Q}
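With the guard information available in each branch, the earlier example goes through; here is a sketch of it as a Dafny method (the method name is made up):

Dafny
method IfExample(x: int) returns (y: int)
  ensures x <= y
{
  if x <= 0 {
    assert x <= 0;     // true && b
    y := 2;
  } else {
    assert !(x <= 0);  // true && !b
    y := x + 1;
  }
}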
What would a Hoare rule for one-sided conditionals look like? Fill in the following rule.
{P && b} s1 {Q} P && !b -> Q
---------------------------------- (hoare if1)
{P} if b { s1 } {Q}
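A sketch of an instance (with a made-up method name): when the guard fails, the desired postcondition must already hold, which is exactly the side condition P && !b -> Q.

Dafny
method If1Example(x: int) returns (y: int)
  requires 0 <= x        // P
  ensures 0 <= y <= 100  // Q
{
  y := x;
  if y > 100 {
    y := 100;
  }
  // When the guard fails, 0 <= y <= 100 already holds.
}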
5.3.6 Annotated Programs
The beauty of Hoare Logic is that it is _structure-guided_: the structure of proofs exactly follows the structure of programs.
We can record the essential ideas of a Hoare-logic proof – omitting low-level calculational details – by annotating a program with appropriate assertions on each of its commands.
Such an annotated program carries within itself an argument for its own correctness.
For example, consider the following program, which swaps the values of two variables using addition and subtraction (instead of by assigning to a temporary variable).
x := x + y;
y := x - y;
x := x - y
(1) { x == m && y == n } ->
(2) { (x + y) - ((x + y) - y) == n && (x + y) - y == m }
      x := x + y;
(3) { x - (x - y) == n && x - y == m }
      y := x - y;
(4) { x - y == n && y == m }
      x := x - y
(5) { x == n && y == m }
We begin with the unannotated program (the unnumbered lines).
We add the specification – i.e., the outer precondition (1) and postcondition (5). In the precondition, we use parameters m and n to remember the initial values of variables x and y so that we can refer to them in the postcondition (5).
We work backwards, mechanically, starting from (5) and proceeding until we get to (2). At each step, we obtain the precondition of the assignment from its postcondition by substituting the assigned variable with the right-hand-side of the assignment. For instance, we obtain (4) by substituting x with x - y in (5), and we obtain (3) by substituting y with x - y in (4).
Finally, we verify that (1) logically implies (2) – i.e., that the step from (1) to (2) is a valid use of the law of consequence – by doing a bit of high-school algebra.
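The same argument can be checked by Dafny directly (a sketch, with a made-up method name; each assert restates one of the numbered assertions (2)-(4)):

Dafny
method SwapArith(m: int, n: int) returns (x: int, y: int)
  ensures x == n && y == m
{
  x, y := m, n;                                              // (1)
  assert (x + y) - ((x + y) - y) == n && (x + y) - y == m;   // (2)
  x := x + y;
  assert x - (x - y) == n && x - y == m;                     // (3)
  y := x - y;
  assert x - y == n && y == m;                               // (4)
  x := x - y;                                                // (5)
}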
Here is a simple program using conditionals, assuming x, y, and z are natural numbers (as in: non-negative integers):
method simpleIf(x:nat, y:nat) returns (z:nat)
ensures z + x == y || z + y == x
{
  if x <= y {
    z := y - x;
  } else {
    z := x - y;
  }
}
(1) { true }
    if x <= y {
(2)   { true && x <= y } ->
(3)   { (y - x) + x == y || (y - x) + y == x }
        z := y - x
(4)   { z + x == y || z + y == x }
    } else {
(5)   { true && !(x <= y) } ->
(6)   { (x - y) + x == y || (x - y) + y == x }
        z := x - y
(7)   { z + x == y || z + y == x }
    }
(8) { z + x == y || z + y == x }
We start with the outer precondition (1) and postcondition (8).
Following the format dictated by the (hoare if) rule, we copy the postcondition (8) to (4) and (7). We conjoin the precondition (1) with the guard of the conditional to obtain (2). We conjoin (1) with the negated guard of the conditional to obtain (5).
In order to use the assignment rule and obtain (3), we substitute z by y - x in (4). To obtain (6) we substitute z by x - y in (7).
Finally, we verify that (2) implies (3) and (5) implies (6). Both of these implications crucially depend on the ordering of x and y obtained from the guard. For instance, knowing that x <= y ensures that subtracting x from y and then adding back x produces y, as required by the first disjunct of (3). Similarly, knowing that !(x <= y) ensures that subtracting y from x and then adding back y produces x, as needed by the second disjunct of (6). Note that n - m + m == n does _not_ hold for arbitrary natural numbers n and m (Dafny, in particular, won’t even let you perform a subtraction that would get you into the negatives).
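A small sketch of that last point (the method name is hypothetical): assigning a subtraction result to a nat obliges Dafny to prove it is non-negative, which is exactly the ordering fact supplied by the guard.

Dafny
method NatSub(n: nat, m: nat) returns (d: nat)
  requires m <= n   // without this, 'n - m' could be negative
  ensures d + m == n
{
  d := n - m;       // Dafny checks here that n - m is a valid nat
}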