17 Induction in Rocq
Borrowed from https://www.cs.cornell.edu/courses/cs3110/2018sp/l/22-coq-induction/notes.html
We’ll need the list library for these notes.
Require Import List.
Import ListNotations.
17.1 Recursive functions
The List library defines the list append operator, which in Rocq is written as infix operator ++, or as prefix function app. In OCaml, the same operator is written @. In OCaml’s standard library, you’ll recall that append is defined as follows:
OCaml REPL
let rec append lst1 lst2 =
match lst1 with
| [] -> lst2
| h::t -> h :: (append t lst2);;
# val append : 'a list -> 'a list -> 'a list = <fun>
Rocq
Fixpoint append {A : Type} (lst1 : list A) (lst2 : list A) :=
match lst1 with
| nil => lst2
| h::t => h :: (append t lst2)
end.
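As a quick sanity check that it behaves like OCaml's @ (a small example, relying on the ListNotations we imported above):
Compute append [1;2] [3;4]. (* = [1; 2; 3; 4] : list nat *)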
The curly braces around A make it an implicit argument, which Rocq infers at each use. We could instead make the type argument explicit, in which case it must be passed at every application, including the recursive call:
Fixpoint append' (A : Type) (lst1 : list A) (lst2 : list A) :=
match lst1 with
| nil => lst2
| h::t => h :: (append' A t lst2)
end.
Yet another way to write the function is as a Definition whose body is an anonymous recursive function, built with the fix keyword:
Definition app (A : Type) : list A -> list A -> list A :=
fix app' lst1 lst2 :=
match lst1 with
| nil => lst2
| h :: t => h :: app' t lst2
end.
The Rocq fix keyword is similar to an OCaml let rec expression whose body is just the function itself: Rocq fix f x1 .. xn := e is like OCaml let rec f x1 .. xn = e in f. So in OCaml we could rephrase the above definition as follows:
let app : 'a list -> 'a list -> 'a list =
let rec app' lst1 lst2 =
match lst1 with
| [] -> lst2
| h :: t -> h :: app' t lst2
in
app'
Now that we know how Rocq defines ++, let’s prove a theorem about it.
Rocq
Theorem nil_app : forall (A:Type) (lst : list A),
[] ++ lst = lst.
(** Intuition: appending the empty list to lst immediately returns lst, by the definition of ++. *)
Proof.
intros A lst. simpl. trivial.
Qed.
Rocq
Theorem app_nil : forall (A:Type) (lst : list A),
lst ++ [] = lst.
(* Intuition (incomplete): by case analysis on [lst].
- if lst is [], then trivially [] ++ [] is [].
- if lst is h::t, then ...? *)
Proof.
intros A lst. destruct lst as [ | h t].
- trivial.
- simpl. (* can't proceed *)
Abort.
What’s going wrong here is that case analysis is not a sufficiently powerful proof technique for this theorem. We need to be able to recursively apply the theorem we’re trying to prove to smaller lists. That’s where induction comes into play.
17.2 Induction on lists
Induction is a proof technique that you will have encountered in math classes before.
One classic metaphor for induction is falling dominos: if you arrange some dominos such that each domino, when it falls, will knock over the next domino, and if you knock over the first domino, then all the dominos will fall. Another classic metaphor for induction is a ladder: if you can reach the first rung, and if for any given rung the next rung can be reached, then you can reach any rung you wish. (As long as you’re not afraid of heights.)
Both metaphors share two ingredients:
a base case, in which something is done first. For the dominos, it’s knocking over the first domino; for the ladder, it’s climbing the first rung. And,
an inductive case, in which a step is taken from one thing to the next. For the dominos, it’s one domino knocking over the next; for the ladder, it’s literally taking the step from one rung to the next. In both cases, it must actually be possible for the action to occur: if the dominos or the rungs were spaced too far apart, then progress would stop.
A proof by induction likewise has a base case and an inductive case. Here’s the structure of a proof by induction on a list:
Theorem. For all lists lst, P(lst).
Proof. By induction on lst.
Case: lst = nil
Show: P(nil)
Case: lst = h::t
IH: P(t)
Show: P(h::t)
QED.
The base case of a proof by induction on lists is for when the list is empty. The inductive case is when the list is non-empty, hence is the cons of a head to a tail. In the inductive case, we get to assume an inductive hypothesis, which is that the property P holds of the tail.
In the metaphors above, the inductive hypothesis is the assumption that we’ve already reached some particular domino or rung. From there, the metaphorical work we do in the inductive case of the proof is to show that from that domino or rung, we can reach the next one.
An inductive hypothesis is exactly the kind of assumption we needed to get our proof about appending nil to go through.
Here’s how that proof could be written:
Theorem: for all lists lst, lst ++ nil = lst.
Proof: by induction on lst.
P(lst) = lst ++ nil = lst.
Case: lst = nil
Show:
P(nil)
= nil ++ nil = nil
= nil = nil
Case: lst = h::t
IH: P(t) = (t ++ nil = t)
Show
P(h::t)
= (h::t) ++ nil = h::t
= h::(t ++ nil) = h::t // by definition of ++
= h::t = h::t // by IH
QED
In Rocq, we could prove that theorem as follows:
Theorem app_nil : forall (A:Type) (lst : list A),
lst ++ nil = lst.
Proof.
intros A lst. induction lst as [ | h t IH].
- simpl. trivial.
- simpl. rewrite -> IH. trivial.
Qed.
The induction tactic is new to us. It initiates a proof by induction on its argument, in this case lst, and provides names for the variables to be used in the cases. There aren’t any new variables in the base case, but the inductive case has variables for the head of the list, the tail, and the inductive hypothesis. You could leave out those variables and simply write [induction lst.], but that leads to a less human-readable proof.
In the inductive case, we use the rewrite -> tactic to rewrite t ++ nil to t. The IH says those terms are equal. That tactic replaces the left-hand side of the equality with the right-hand side, wherever the left-hand side appears in the subgoal. It’s also possible to rewrite from right to left with the rewrite <- tactic. If you leave out the arrow, Rocq assumes that you mean ->.
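Here is a tiny, self-contained illustration of rewriting with a hypothesis (a sketch; rewrite_demo is a name we choose here):
Theorem rewrite_demo : forall (A : Type) (x y : list A),
x = y -> x ++ y = y ++ x.
Proof.
intros A x y eq.
rewrite -> eq. (* left to right: every x becomes y, leaving y ++ y = y ++ y *)
(* rewrite <- eq would instead turn each y into an x *)
trivial.
Qed.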
Here’s another theorem we can prove in exactly the same manner as app_nil. This theorem shows that append is associative.
Theorem: forall lists l1 l2 l3, l1 ++ (l2 ++ l3) = (l1 ++ l2) ++ l3.
Proof: by induction on l1.
P(l1) = l1 ++ (l2 ++ l3) = (l1 ++ l2) ++ l3
Case: l1 = nil
Show:
P(nil)
= nil ++ (l2 ++ l3) = (nil ++ l2) ++ l3
= l2 ++ l3 = l2 ++ l3 // simplifying ++, twice
Case: l1 = h::t
IH: P(t) = t ++ (l2 ++ l3) = (t ++ l2) ++ l3
Show:
P(h::t)
= h::t ++ (l2 ++ l3) = (h::t ++ l2) ++ l3
= h::(t ++ (l2 ++ l3)) = h::((t ++ l2) ++ l3) // simplifying ++, thrice
= h::((t ++ l2) ++ l3) = h::((t ++ l2) ++ l3) // by IH
QED
In Rocq, that proof looks more or less identical to our previous Rocq proof about append and nil:
Theorem app_assoc : forall (A:Type) (l1 l2 l3 : list A),
l1 ++ (l2 ++ l3) = (l1 ++ l2) ++ l3.
(* Intuition: above *)
Proof.
intros A l1 l2 l3.
induction l1 as [ | h t IH].
- trivial.
- simpl. rewrite -> IH. trivial.
Qed.
17.3 Induction on natural numbers
One of the classic theorems proved by induction is that 0 + 1 + ... + n is equal to n * (n+1) / 2. It uses proof by induction on the natural numbers, which are the non-negative integers. The structure of a proof by induction on the naturals is as follows:
Theorem. For all natural numbers n, P(n).
Proof. By induction on n.
Case: n = 0
Show: P(0)
Case: n = k+1
IH: P(k)
Show: P(k+1)
QED.
The base case is for zero, the smallest natural number. The inductive case assumes that P holds of k, then shows that P holds of k+1.
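Here is what that skeleton looks like in Rocq for a small fact about addition. (This is a sketch using only tactics we’ve already seen; add_0_r is a name we choose here.)
Theorem add_0_r : forall n : nat, n + 0 = n.
Proof.
intros n. induction n as [ | k IH].
- trivial. (* base case: 0 + 0 = 0 *)
- simpl. rewrite -> IH. trivial. (* inductive case, with IH : k + 0 = k *)
Qed.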
The induction tactic in Rocq works for inductive types, which is why it works on lists. It also works on natural numbers, because they too can be defined as an inductive type. In OCaml, we could write:
type nat = O | S of nat
With that definition:
0 is O
1 is S O
2 is S (S O)
3 is S (S (S O))
etc.
This is a kind of unary representation of the naturals, in which we repeat the symbol S a total of n times to represent the natural number n.
The Rocq definition of nat is much the same:
Print nat.
Rocq responds:
Inductive nat : Set :=
| O : nat
| S : nat -> nat
That is, nat has two constructors, O and S, which are just like the OCaml constructors we examined above. And nat has type Set, meaning that nat is a specification for program computations. (Or, a little more loosely, that nat is a type representing program values.)
Anywhere we write something that looks like an integer literal in Rocq, Rocq actually understands that as its expansion in the Peano representation defined above. For example, 2 is understood by Rocq as just syntactic sugar for S (S O). We can even write computations using those constructors:
Compute S (S O).
Rocq responds, though, by reintroducing the syntactic sugar:
= 2 : nat
The Rocq standard library defines many functions over nat using those constructors and pattern matching, including addition, subtraction, and multiplication. For example, addition is defined essentially like this:
Fixpoint my_add (a b : nat) : nat :=
match a with
| 0 => b
| S c => S (my_add c b)
end.
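We can sanity-check that definition with Compute (a small example, assuming my_add as defined above):
Compute my_add 2 3. (* = 5 : nat *)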
Next, consider a function sum_to that sums the natural numbers from 0 through n. In OCaml, it’s easy to write:
OCaml REPL
let rec sum_to n = if n=0 then 0 else n + sum_to (n-1);;
# val sum_to : int -> int = <fun>
In Rocq, it will turn out to be surprisingly tricky...
17.4 Recursive functions, revisited
Here’s a first attempt at defining sum_to, which is just a direct translation of the OCaml code into Rocq. The Fail keyword before it tells Rocq to expect the definition to fail to compile.
Fail Fixpoint sum_to (n:nat) : nat :=
if n = 0 then 0 else n + sum_to (n-1).
The command has indeed failed with message:
The term "n = 0" has type "Prop"
which is not a (co-)inductive type.
To fix this problem, we need to use an equality operator that returns a bool, rather than a Prop, when applied to two nats. Such an operator is defined in the Arith library for us:
Require Import Arith.
Locate "=?".
Check Nat.eqb.
Nat.eqb : nat -> nat -> bool
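A quick Compute confirms that =? produces a bool (a small check, assuming Arith has been imported as above):
Compute (2 =? 2). (* = true : bool *)
Compute (2 =? 3). (* = false : bool *)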
We can now try to use that operator. Unfortunately, we discover a new problem:
Fail Fixpoint sum_to (n:nat) : nat :=
if n =? 0 then 0 else n + sum_to (n-1).
Recursive definition of sum_to is ill-formed.
...
Recursive call to sum_to has principal argument equal to
"n - 1" instead of a subterm of "n".
...
Why does Rocq insist that the recursive call be on a subterm of n? Before we can answer that question, let’s look at a different recursive function, one that obviously never terminates:
Fail Fixpoint inf (x:nat) : nat := inf x.
Rocq responds very similarly to how it did with sum_to:
Recursive definition of inf is ill-formed.
...
Recursive call to inf has principal argument equal to
"x" instead of a subterm of "x".OCaml REPL
let rec inf x = inf x;; # val inf : 'a -> 'b = <fun>
type void = {nope : 'a . 'a};;
let rec inf x = inf x;;
let ff : void = inf ();;In OCaml we don’t mind that phenomenon, because OCaml’s purpose is not to be a
proof assistant. But in Rocq it would be deadly: we should never allow the
proof assistant to prove false propositions. Rocq therefore wants to prohibit
all infinite loops. But that’s easier said than done! Recall from CMSC250 that
the halting problem is undecidable: we can’t write a program that precisely
determines whether another program will terminate. Well, the Rocq compiler is a
program, and it wants to detect which programs terminate and which programs do
not, which in general is impossible.
So instead of trying to do something impossible, Rocq settles for doing something possible but imprecise, specifically, something that prohibits all non-terminating programs as well as prohibiting some terminating programs. Rocq enforces a syntactic restriction on recursive functions: there must always be an argument that is syntactically smaller at every recursive function application. An expression e1 is syntactically smaller than e2 if e1 is a subexpression of e2. For example, n is syntactically smaller than n - 1, but n - 1 is not syntactically smaller than n. It turns out this restriction is sufficient to guarantee that programs terminate: if every recursive call is on something smaller, you must eventually reach something so small that no further recursive call can be made on it, hence evaluation must terminate. A synonym for "syntactically smaller" is structurally decreasing.
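For example, Rocq accepts the following accumulator-style summation (a sketch; sum_acc is a name we introduce here), because the principal argument of the recursive call, k, is a subterm of n:
Fixpoint sum_acc (n acc : nat) : nat :=
match n with
| 0 => acc
| S k => sum_acc k (n + acc) (* k is syntactically smaller than n *)
end.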
But that does rule out some programs that we as humans know will terminate yet do not meet the syntactic restriction. And sum_to is one of them. Here is the definition we previously tried:
Fail Fixpoint sum_to (n:nat) : nat :=
if n =? 0 then 0 else n + sum_to (n-1).
To finally succeed in defining sum_to, we can make use of what we know about how nat is defined: since it’s an inductive type, we can pattern match on it:
Fixpoint sum_to (n:nat) : nat :=
match n with
| 0 => 0
| S k => n + sum_to k
end.
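A quick check that the definition behaves as expected:
Compute sum_to 4. (* = 10 : nat *)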
17.5 Inductive proof of the summation formula
Now that we’ve finally succeeded in defining sum_to, we can prove the classic theorem about summation.
Here’s how we would write the proof mathematically:
Theorem: for all natural numbers n, sum_to n = n * (n+1) / 2.
Proof: by induction on n.
P(n) = sum_to n = n * (n+1) / 2
Case: n=0
Show:
P(0)
= sum_to 0 = 0 * (0+1) / 2
= 0 = 0 * (0+1) / 2 // simplifying sum_to 0
= 0 = 0 // 0 * x = 0
Case: n=k+1
IH: P(k) = sum_to k = k * (k+1) / 2
Show:
P(k+1)
= sum_to (k+1) = (k+1) * (k+1+1) / 2
= k + 1 + sum_to k = (k+1) * (k+1+1) / 2 // simplifying sum_to (k+1)
= k + 1 + k * (k+1) / 2 = (k+1) * (k+1+1) / 2 // using IH
= 2 + 3k + k*k = 2 + 3k + k*k // multiplying both sides by 2 and simplifying
QED
Rocq
Theorem sum_sq : forall n : nat, sum_to n = n * (n+1) / 2.
Proof.
intros n. induction n as [ | k IH].
- trivial.
- simpl. rewrite -> IH.
Abort.
The proof is working fine so far, but now we have a complicated algebraic equation we need to prove:
S (k + k * (k + 1) / 2) = fst (Nat.divmod (k + 1 + k * S (k + 1)) 1 0 0)
Although we could try to prove that manually using the definitions of all the operators, it would be much nicer to get Rocq to find the proof for us. It turns out that Rocq does have great support for finding proofs that involve rings: algebraic structures that support addition and multiplication operations. (We’ll discuss rings in detail after we finish the current proof.) But we can’t use that automation here, because the equation we want to prove also involves division, and rings do not support division operations.
To avoid having to reason about division, we could restate the theorem we want to prove: if we multiply both sides by 2, the division goes away:
Theorem sum_sq_no_div : forall n : nat,
2 * sum_to n = n * (n+1).
Proof.
intros n.
induction n as [ | k IH].
- trivial.
- simpl.
Abort.
Once again, simpl turns the goal into a form where the inductive hypothesis is hard to apply. Instead of simplifying right away, let’s first prove a helper lemma describing how 2 * sum_to unfolds on a successor:
Lemma sum_helper : forall n : nat,
2 * sum_to (S n) = 2 * (S n) + 2 * sum_to n.
Proof.
intros n. simpl. ring.
Qed.
Now that we have our helper lemma, we can use it to prove the theorem:
Theorem sum_sq_no_div : forall n : nat,
2 * sum_to n = n * (n+1).
Proof.
intros n.
induction n as [ | k IH].
- trivial.
- rewrite -> sum_helper.
rewrite -> IH.
ring.
Qed.
Finally, we can use sum_sq_no_div to prove the original theorem involving division. To do that, we need to first prove another lemma that shows we can transform a multiplication into a division:
Lemma div_helper : forall a b c : nat,
c <> 0 -> c * a = b -> a = b / c.
Proof.
intros a b c neq eq.
rewrite <- eq.
rewrite Nat.mul_comm.
rewrite Nat.div_mul.
trivial.
assumption.
Qed.
That lemma involves two library theorems, Nat.mul_comm and Nat.div_mul. How did we know to use these? Rocq can help us search for useful theorems. Right after we rewrite <- eq in the above proof, our subgoal is a = c * a / c. It looks like we ought to be able to cancel the c term on the right-hand side. So we can search for a theorem that would help us do that. The Search command takes wildcards and reports all theorems that match the pattern we supply, for example:
Search (_ * _ / _).
This reveals a useful theorem:
Nat.div_mul: forall a b : nat, b <> 0 -> a * b / b = a
That would let us cancel a term from the numerator and denominator, but it requires the left-hand side of the equality to be of the form a * b / b, whereas we have c * a / c. The problem is that the two sides of the multiplication are reversed. No worries; multiplication is commutative, and there is a library theorem that proves it. Again, we could find that theorem:
Search (_ * _ = _ * _).
One of the results is:
Nat.mul_comm: forall n m : nat, n * m = m * n
Putting those two library theorems to use, we’re able to prove the lemma as above.
Finally, we can use that lemma to prove our classic theorem about summation.
Theorem sum_sq : forall n : nat,
sum_to n = n * (n+1) / 2.
Proof.
intros n.
apply div_helper.
- discriminate.
- rewrite sum_sq_no_div. trivial.
Qed.
Summary: wow, that was a lot of work to prove that seemingly simple classic theorem! We had to figure out how to code sum_to, and we had to deal with a lot of complications involving algebra. This situation is not uncommon: the theorems that we think of as easy with pencil-and-paper (like arithmetic) turn out to be hard to convince Rocq of, whereas the theorems that we think of as challenging with pencil-and-paper (like induction) turn out to be easy.
17.6 Rings and fields
In the proof we just did above about summation, we used a tactic called ring that we said searches for proofs about algebraic equations involving multiplication and addition. Let’s look more closely at that tactic.
When we studied OCaml modules, we looked at rings as an example of a mathematical abstraction of addition, multiplication, and negation. Here is an OCaml signature for a ring:
module type Ring = sig
type t
val zero : t
val one : t
val add : t -> t -> t
val mult : t -> t -> t
val neg : t -> t
end
We could implement that signature with a representation type t that is int, or float, or even bool.
The names given in Ring are suggestive of the operations they represent, but to really specify how those operations should behave, we need to write some equations that relate them. Below are the equations that (it turns out) fully specify zero, one, add, and mult. Rather than use those identifiers, we use the more familiar notation of 0, 1, +, and *.
0 + x = x
x + y = y + x
x + (y + z) = (x + y) + z
0 * x = 0
1 * x = x
x * y = y * x
x * (y * z) = (x * y) * z
(x + y) * z = (x * z) + (y * z)
Technically these equations specify what is known as a commutative semi-ring. It’s a semi-ring because we don’t have equations specifying negation yet. It’s a commutative semi-ring because the * operation commutes. (The + operation commutes too, but that’s always required of a semi-ring.)
The first group of equations specifies how + behaves on its own. The second group specifies how * behaves on its own. The final equation specifies how + and * interact.
If we extend the equations above with the following two, we get a specification for a ring:
x - y = x + (-y)
x + (-x) = 0
It’s a remarkable fact from the study of abstract algebra that those equations completely specify a ring. Any theorem you want to prove about addition, multiplication, and negation follows from those equations. We call the equations the axioms that specify a ring.
Rings don’t have a division operation. Let’s introduce a new operator called inv (short for "inverse"), and let’s write 1/x as syntactic sugar for inv x. If we take all the ring axioms and add the following axiom for [inv], we get what is called a field:
x * 1/x = 1   if x <> 0
A field is an abstraction of addition, multiplication, negation, and division. Note that OCaml ints do not satisfy the inv axiom above. For example, [2 * (1/2)] equals 0 in OCaml, not 1. OCaml floats mostly do satisfy the field axioms, up to the limits of floating-point arithmetic. And in mathematics, the rational numbers and the real numbers are fields.
The ring tactic knows the (semi-)ring axioms and can chain them together to prove equations; since nat with + and * forms a commutative semi-ring, it can prove goals like these:
Rocq
Theorem plus_comm : forall a b, a + b = b + a.
Proof.
intros a b. ring.
Qed.
Theorem foil : forall a b c d, (a + b) * (c + d) = a*c + b*c + a*d + b*d.
Proof.
intros a b c d. ring.
Qed.
The proofs that the ring tactic finds can be quite complicated. For example, try looking at the output of the following command. It’s so long that we won’t put that output in this file!
Print foil.
Of course, ring won’t find proofs of equations that don’t actually hold. For example, if we had a typo in our statement of foil, then ring would fail.
Theorem broken_foil: forall a b c d,
(a + b) * (c + d) = a*c + b*c + c*d + b*d.
Proof.
intros a b c d. try ring.
Abort.
The ring tactic also can’t help with subtraction on nat: natural-number subtraction is truncated at zero, so it isn’t a ring operation, and the theorem below is in fact false when a = 0:
Theorem sub_add_1 : forall a : nat, a - 1 + 1 = a.
Proof.
intros a.
try ring.
Abort.
Compute 0-1. (* 0 : nat *)
If we want to reason about the integers instead of the natural numbers, we can use a library called ZArith for that. The name comes from the fact that Z is used in mathematics to denote the integers.
Require Import ZArith.
Open Scope Z_scope.
Compute 0-1. (* -1 : Z *)
Now we can prove the theorem from before.
Theorem sub_add_1 : forall a : Z, a - 1 + 1 = a.
Proof.
intros a. ring.
Qed.
Before going on, let’s close the Z scope so that the operators go back to working on nat, as usual.
Close Scope Z_scope.
Compute 0-1. (* 0 : nat *)
To reason about equations that involve division, Rocq provides a field tactic, which comes from the Field library:
Require Import Field.
The rational numbers are provided in a couple different Rocq libraries; the
one we’ll use here is Qcanon. In mathematics, Q denotes the rational
numbers, and canon indicates that the numbers are stored in a canonical
form, so that each rational is represented by a unique, fully reduced fraction.
Require Import Qcanon.
Open Scope Qc_scope.
Theorem frac_qc: forall x y z : Qc, z <> 0 -> (x + y) / z = x / z + y /z.
Proof.
intros x y z z_not_0.
field. assumption.
Qed.
Close Scope Qc_scope.
The real numbers are also available, in the Reals library. We put that example in a module of its own:
Module RealExample.
Require Import Reals.
Open Scope R_scope.
Theorem frac_r : forall x y z, z <> 0 -> (x + y) / z = x / z + y /z.
Proof.
intros x y z z_not_0.
field. assumption.
Qed.
If we leave out the hypothesis that z is nonzero, field produces a side condition we cannot prove, and we have to abort:
Theorem frac_r_broken : forall x y z, (x + y) / z = x / z + y /z.
Proof.
intros x y z.
field.
Abort.
Close Scope R_scope.
End RealExample.
17.7 Induction principles
When we studied the Curry-Howard correspondence, we learned that proofs
correspond to programs. That correspondence applies to inductive proofs
as well, and as it turns out, inductive proofs correspond to recursive
programs. Intuitively, that’s because an inductive proof involves
an inductive hypothesis, and using the inductive hypothesis on a smaller value corresponds to making a recursive call on that smaller value.
To get a more concrete idea of what this means, let’s look at the proof value (i.e., program) that Rocq produces for our original inductive proof in these notes:
Print app_nil.
Rocq responds:
app_nil =
fun (A : Type) (lst : list A) =>
list_ind (fun lst0 : list A => lst0 ++ nil = lst0) eq_refl
(fun (h : A) (t : list A) (IH : t ++ nil = t) =>
eq_ind_r (fun l : list A => h :: l = h :: t) eq_refl IH) lst
: forall (A : Type) (lst : list A), lst ++ nil = lst
That’s dense, but let’s start picking it apart. First, we see that app_nil is a function that takes in two arguments: A and lst. Then it immediately applies another function named list_ind. That function was defined for us in the standard library, and it’s what "implements" induction on lists. Let’s check it out:
Check list_ind.
Rocq responds:
list_ind
: forall (A : Type) (P : list A -> Prop),
P nil ->
(forall (a : A) (l : list A), P l -> P (a :: l)) ->
forall l : list A, P l
That type tells us that list_ind takes the following arguments:
A, which is the type of the list elements.
P, which is the property to be proved by induction. For example, the property being proved in app_nil is fun (lst: list A) => lst ++ nil = lst.
P nil, which is a proof that P holds of the empty list. In other words, a proof of the base case.
A final argument of type forall (a : A) (l : list A), P l -> P (a :: l). This is the proof of the inductive case. It takes an argument a, which is the head of a list, l, which is the tail of a list, and a proof P l that P holds of l. So, P l is the inductive hypothesis. The output is of type P (a :: l), which is a proof that P holds of a::l.
Finally, list_ind returns a value of type forall l : list A, P l, which is a proof that P holds of all lists.
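In fact, we can apply list_ind ourselves instead of letting the induction tactic do it for us. Here is a sketch that re-proves our earlier theorem that way (app_nil' is just a fresh name):
Theorem app_nil' : forall (A:Type) (lst : list A), lst ++ nil = lst.
Proof.
intros A.
apply list_ind. (* Rocq infers P as fun lst => lst ++ nil = lst *)
- trivial.
- intros h t IH. simpl. rewrite -> IH. trivial.
Qed.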
Ok, so that’s the type of list_ind: a proposition asserting that if you have a proof of the base case, and a proof of the inductive case, you can assemble those to prove that a property holds of a list. Next, what’s the value of list_ind? In other words, what’s the proof that list_ind itself is actually a true proposition?
Print list_ind.
Rocq responds:
list_ind =
fun (A : Type) (P : list A -> Prop) => list_rect P
...
So list_ind takes in A and P and just immediately applies another function, list_rect, to P. (The name rect is not especially helpful to understand, but alludes to recursion over a type.) Before we look at list_rect’s actual implementation, let’s look at our own equivalent implementation that is easier to read:
Fixpoint my_list_rect
(A : Type)
(P : list A -> Prop)
(baseCase : P nil)
(inductiveCase : forall (h : A) (t : list A), P t -> P (h::t))
(lst : list A)
: P lst
:=
match lst with
| nil => baseCase
| h::t => inductiveCase h t
(my_list_rect A P baseCase inductiveCase t)
end.
If lst is empty, then my_list_rect returns the proof of the base case.
If lst is h::t, then my_list_rect returns the proof of the inductive case. To construct that proof, it applies inductiveCase to h and t as the head and tail. But inductiveCase also requires a final argument, which is the proof that P holds of t. To construct that proof, my_list_rect calls itself recursively on t.
That recursive call is exactly why we said that inductive proofs are recursive programs. The inductive proof needs evidence that the inductive hypothesis holds of the smaller list, and recursing on that smaller list produces the evidence.
It’s not immediately obvious, but my_list_rect is almost just fold_right. Here’s how we could implement fold_right in Rocq, with a slightly different argument order than the same function in OCaml:
Fixpoint my_fold_right
{A B : Type}
(init : B)
(f : A -> B -> B)
(lst : list A)
:=
match lst with
| nil => init
| h::t => f h (my_fold_right init f t)
end.
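As a quick check of my_fold_right (a small example; it sums a list using the standard library’s Nat.add):
Compute my_fold_right 0 Nat.add [1;2;3]. (* = 6 : nat *)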
Now compare the two functions' bodies side by side.
my_fold_right's body:
match lst with
| nil => init
| h::t => f h (my_fold_right init f t)
end.
my_list_rect's body:
match lst with
| nil => baseCase
| h::t => inductiveCase h t (my_list_rect A P baseCase inductiveCase t)
end.
Both match against lst. If lst is empty, both return an initial/base-case value. If lst is non-empty, both recurse on the tail, then pass the result of the recursive call to a function (f or inductiveCase) that combines that result with the head. The only essential difference is that f does not take t directly as an input, whereas inductiveCase does.
So there you have it: induction over a list is really just folding over the list, eventually reaching the empty list and returning the proof of the base case for it, then working the way back up the call stack, assembling an ever-larger proof for each element of the list. An inductive proof is a recursive program.
Going back to the actual definition of list_rect, here it is:
Print list_rect.
Rocq responds:
list_rect =
fun (A : Type) (P : list A -> Type) (f : P nil)
(f0 : forall (a : A) (l : list A), P l -> P (a :: l)) =>
fix F (l : list A) : P l :=
match l as l0 return (P l0) with
| nil => f
| y :: l0 => f0 y l0 (F l0)
end
: forall (A : Type) (P : list A -> Type),
P nil ->
(forall (a : A) (l : list A), P l -> P (a :: l)) ->
forall l : list A, P l
That uses different syntax, but it ends up defining the same function as my_list_rect.
Whenever you define an inductive type, Rocq automatically generates the induction principle and recursive function that implements it for you. For example, we could define our own lists:
Inductive mylist (A:Type) : Type :=
| mynil : mylist A
| mycons : A -> mylist A -> mylist A.
Print mylist_ind.
Print mylist_rect.
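As a quick demonstration that the generated principle really works, here is a sketch that redefines append for mylist and re-proves our first inductive theorem for it (my_app and my_app_mynil are names we introduce here):
Fixpoint my_app (A : Type) (l1 l2 : mylist A) : mylist A :=
match l1 with
| mynil _ => l2
| mycons _ h t => mycons A h (my_app A t l2)
end.
Theorem my_app_mynil : forall (A : Type) (l : mylist A),
my_app A l (mynil A) = l.
Proof.
intros A l.
induction l as [ | h t IH]. (* uses the generated mylist_ind behind the scenes *)
- trivial.
- simpl. rewrite -> IH. trivial.
Qed.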
17.8 Summary
Rocq excels as a proof assistant when it comes to proof by induction. Whenever we define an inductive type, Rocq generates an induction principle for us automatically. That principle is really a recursive program that knows how to assemble evidence for a proposition, given the constructors of the inductive type. The induction tactic manages the proof for us, automatically figuring out what the base case and the inductive case are, and automatically generating the inductive hypothesis.
17.8.1 Terms and concepts
append
base case
field
fix
Fixpoint
induction
induction principle
inductive case
inductive hypothesis
lemma
Peano natural numbers
Prop vs bool
ring
searching for library theorems
semi-ring
syntactically smaller restriction on recursive calls
17.8.2 Tactics
field
induction
rewrite
ring
tacticals: try