\documentclass[11pt,a4paper]{book}
\usepackage[T1]{fontenc}
\usepackage[latin1]{inputenc}
\input{../common/version.tex}
\input{../common/macros.tex}
\input{../common/title.tex}

\coverpage{A Tutorial}{Gérard Huet, Gilles Kahn and Christine Paulin-Mohring}{}

\chapter*{Getting started}
\Coq\ is a Proof Assistant for a Logical Framework known as the Calculus
of Inductive Constructions. It allows the interactive construction of
formal proofs, and also the manipulation of functional programs
consistently with their specifications. It runs as a computer program
on many architectures.
%, and mainly on Unix machines.
It is available with a variety of user interfaces. The present
document does not attempt to present a comprehensive view of all the
possibilities of \Coq, but rather to present in the most elementary
manner a tutorial on the basic specification language, called Gallina,
in which formal axiomatisations may be developed, and on the main
proof tools. For more advanced information, the reader may refer to
the \Coq{} Reference Manual or to \textit{Coq'Art}, a new book by Y.
Bertot and P. Castéran on practical uses of the \Coq{} system.

Coq can be used from a standard teletype-like shell window, but
preferably through the graphical user
interface CoqIde.\footnote{Alternative graphical interfaces exist, such
as Proof General.}

Instructions on installation procedures, as well as more comprehensive
documentation, may be found in the standard distribution of \Coq,
which may be obtained from the \Coq{} web site \texttt{http://coq.inria.fr}.
In the following, we assume that \Coq~ is called from a standard
teletype-like shell window. All examples preceded by the prompting
sequence \verb:Coq < : represent user input, terminated by a full stop.
The following lines usually show \Coq's answer as it appears on the
user's screen. When used from a graphical user interface such as
CoqIde, the prompt is not displayed: user input is given in one window
and \Coq's answers are displayed in a different window.

The sequence of such examples is a valid \Coq~
session, unless otherwise specified. This version of the tutorial has
been prepared on a PC workstation running Linux. The standard
invocation of \Coq\ delivers a message such as:

Welcome to Coq 8.0 (Mar 2004)

The first line gives a banner stating the precise version of \Coq~
used. You should always include this banner when you report an
anomaly to our hot-line \verb:coq-bugs@pauillac.inria.fr: or on our
bug-tracking system \verb:http://coq.inria.fr/bin/coq-bugs:.
\chapter{Basic Predicate Calculus}

\section{An overview of the specification language Gallina}

A formal development in Gallina consists of a sequence of {\sl declarations}
and {\sl definitions}. You may also send \Coq~ {\sl commands} which are
not really part of the formal development, but correspond to information
requests, or service routine invocations. For instance, the command
\verb:Quit: terminates the current session.

\subsection{Declarations}

A declaration associates a {\sl name} with
a {\sl specification}.
A name corresponds roughly to an identifier in a programming
language, i.e. to a string of letters, digits, and a few ASCII symbols like
underscore (\verb"_") and prime (\verb"'"), starting with a letter.
We use case distinction, so that the names \verb"A" and \verb"a" are distinct.
Certain strings are reserved as key-words of \Coq, and thus are forbidden
as user identifiers.

A specification is a formal expression which classifies the notion which is
being declared. There are basically three kinds of specifications:
{\sl logical propositions}, {\sl mathematical collections}, and
{\sl abstract types}. They are classified by the three basic sorts
of the system, called respectively \verb:Prop:, \verb:Set:, and
\verb:Type:, which are themselves atomic abstract types.

Every valid expression $e$ in Gallina is associated with a specification,
itself a valid expression, called its {\sl type} $\tau(e)$. We write
$e:T$ for the judgment that $e$ is of type $T$.
You may request \Coq~ to return to you the type of a valid expression by using
the command \verb:Check::
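For instance, we may ask for the type of the predeclared constant \verb:O: (the exact form of \Coq's answer may vary slightly between versions):

\begin{verbatim}
Coq < Check O.
\end{verbatim}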
115
Thus we know that the identifier \verb:O: (the name `O', not to be
116
confused with the numeral `0' which is not a proper identifier!) is
117
known in the current context, and that its type is the specification
118
\verb:nat:. This specification is itself classified as a mathematical
119
collection, as we may readily check:
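A possible interaction:

\begin{verbatim}
Coq < Check nat.
nat : Set
\end{verbatim}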
The specification \verb:Set: is an abstract type, one of the basic
sorts of the Gallina language, whereas the notions $nat$ and $O$ are
notions which are defined in the arithmetic prelude,
automatically loaded when running the \Coq\ system.

We start by introducing a so-called section name. The role of sections
is to structure the modelling by limiting the scope of parameters,
hypotheses and definitions. It will also give a convenient way to
reset part of the development.
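For instance, we may open a section under a name of our choosing (the name \verb:Declaration: below is just an illustrative choice):

\begin{verbatim}
Coq < Section Declaration.
\end{verbatim}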
With what we already know, we may now enter in the system a declaration,
corresponding to the informal mathematics {\sl let n be a natural number}:
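This declaration may be entered as follows (the answer shown is the one \Coq~ typically gives):

\begin{verbatim}
Coq < Variable n : nat.
n is assumed
\end{verbatim}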
If we want to translate a more precise statement, such as
{\sl let n be a positive natural number},
we have to add another declaration, which will declare explicitly the
hypothesis \verb:Pos_n:, with specification the proper logical
proposition:

Hypothesis Pos_n : (gt n 0).

Indeed we may check that the relation \verb:gt: is known with the right type
in the current context:
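A possible interaction:

\begin{verbatim}
Coq < Check gt.
gt : nat -> nat -> Prop
\end{verbatim}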
which tells us that \verb:gt: is a function expecting two arguments of
type \verb:nat: in order to build a logical proposition.
What happens here is similar to what we are used to in a functional
programming language: we may compose the (specification) type \verb:nat:
with the (abstract) type \verb:Prop: of logical propositions through the
arrow function constructor, in order to get a functional type
\verb:nat->Prop:,
which may be composed again with \verb:nat: in order to obtain the
type \verb:nat->nat->Prop: of binary relations over natural numbers.
Actually \verb:nat->nat->Prop: is an abbreviation for
\verb:nat->(nat->Prop):.

Functional notions may be composed in the usual way. An expression $f$
of type $A\ra B$ may be applied to an expression $e$ of type $A$ in order
to form the expression $(f~e)$ of type $B$. Here we get that
the expression \verb:(gt n): is well-formed of type \verb:nat->Prop:,
and thus that the expression \verb:(gt n O):, which abbreviates
\verb:((gt n) O):, is a well-formed proposition.
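We may check this directly (assuming the variable \verb:n: declared above is still in scope):

\begin{verbatim}
Coq < Check (gt n O).
\end{verbatim}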
\subsection{Definitions}

The initial prelude contains a few arithmetic definitions:
\verb:nat: is defined as a mathematical collection (type \verb:Set:), constants
\verb:O:, \verb:S:, \verb:plus:, are defined as objects of types
respectively \verb:nat:, \verb:nat->nat:, and \verb:nat->nat->nat:.
You may introduce new definitions, which link a name to a well-typed value.
For instance, we may introduce the constant \verb:one: as being defined
to be equal to the successor of zero:

Definition one := (S O).

We may optionally indicate the required type:

Definition two : nat := S one.

Actually \Coq~ allows several possible syntaxes:

Definition three : nat := S two.

Here is a way to define the doubling function, which expects an
argument \verb:m: of type \verb:nat: in order to build its result as
\verb:plus m m::

Definition double (m:nat) := plus m m.

This definition introduces the constant \texttt{double} defined as the
expression \texttt{fun m:nat => plus m m}.
The abstraction introduced by \texttt{fun} is explained as follows. The expression
\verb+fun x:A => e+ is well formed of type \verb+A->B+ in a context
whenever the expression \verb+e+ is well-formed of type \verb+B+ in
the given context to which we add the declaration that \verb+x+
is of type \verb+A+. Here \verb+x+ is a bound, or dummy, variable in
the expression \verb+fun x:A => e+. For instance we could as well have
defined \verb:double: as \verb+fun n:nat => (plus n n)+.
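As a quick sanity check, we may ask \Coq~ to compute with \verb:double: (the answer is displayed as a numeral):

\begin{verbatim}
Coq < Eval compute in double two.
     = 4
     : nat
\end{verbatim}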
Bound (local) variables and free (global) variables may be mixed.
For instance, we may define the function which adds the constant \verb:n:
declared above to its argument:

Definition add_n (m:nat) := plus m n.

However, note that here we may not rename the formal argument $m$ into $n$
without capturing the free occurrence of $n$, and thus changing the meaning
of the defined notion.

Binding operations are well known for instance in logic, where they
are called quantifiers. Thus we may universally quantify a
proposition such as $m>0$ in order to get a universal proposition
$\forall m\cdot m>0$. Indeed this operator is available in \Coq, with
the following syntax: \verb+forall m:nat, gt m O+. Similarly to the
case of the functional abstraction binding, we are obliged to declare
explicitly the type of the quantified variable. We check:

Check (forall m:nat, gt m 0).

We may clean up the development by removing the contents of the
current section.
\section{Introduction to the proof engine: Minimal Logic}

In the following, we are going to consider various propositions, built
from atomic propositions $A, B, C$. This may be done easily, by
introducing these atoms as global variables declared of type \verb:Prop:.
It is easy to declare several names with the same specification:

Section Minimal_Logic.
Variables A B C : Prop.

We shall consider simple implications, such as $A\ra B$, read as
``$A$ implies $B$''. Remark that we overload the arrow symbol, which
has been used above as the functionality type constructor, and which
may be used as well as a propositional connective:

Let us now embark on a simple proof. We want to prove the easy tautology
$(A\ra (B\ra C))\ra (A\ra B)\ra (A\ra C)$.
We enter the proof engine by the command
\verb:Goal:, followed by the conjecture we want to verify:

Goal (A -> B -> C) -> (A -> B) -> A -> C.

The system displays the current goal below a double line, local hypotheses
(there are none initially) being displayed above the line. We call
the combination of local hypotheses with a goal a {\sl judgment}.
We are now in an inner
loop of the system, in proof mode.
New commands are available in this
mode, such as {\sl tactics}, which are proof combining primitives.
A tactic operates on the current goal by attempting to construct a proof
of the corresponding judgment, possibly from proofs of some
hypothetical judgments, which are then added to the current
list of conjectured judgments.
For instance, the \verb:intro: tactic is applicable to any judgment
whose goal is an implication, by moving the proposition to the left
of the implication to the list of local hypotheses:
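Concretely (the hypothesis name \verb:H: is our choice):

\begin{verbatim}
Coq < intro H.
\end{verbatim}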
Several introductions may be done in one step:
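For instance (the names \verb:H': and \verb:HA: are our choice):

\begin{verbatim}
Coq < intros H' HA.
\end{verbatim}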
We notice that $C$, the current goal, may be obtained from hypothesis
\verb:H:, provided the truth of $A$ and $B$ is established.
The tactic \verb:apply: implements this piece of reasoning:
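Concretely:

\begin{verbatim}
Coq < apply H.
\end{verbatim}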
We are now in the situation where we have two judgments as conjectures
that remain to be proved. Only the first is listed in full; for the
others, the system displays only the corresponding subgoal, without its
local hypotheses list. Remark that \verb:apply: has kept the local
hypotheses of its parent judgment, which are still available for
the judgments it generated.

In order to solve the current goal, we just have to notice that it is
exactly available as hypothesis $HA$:
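Concretely:

\begin{verbatim}
Coq < exact HA.
\end{verbatim}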
And we may now conclude the proof as before, with \verb:exact HA.:
Actually, we need not bother with the name \verb:HA:, and may just state that
the current goal is solvable from the current local assumptions:
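Concretely:

\begin{verbatim}
Coq < assumption.
Proof completed.
\end{verbatim}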
The proof is now finished. We may either discard it, by using the
command \verb:Abort:, which returns to the standard \Coq~ toplevel loop
without further ado, or else save it as a lemma in the current context,
under the name, say, \verb:trivial_lemma::
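That is:

\begin{verbatim}
Coq < Save trivial_lemma.
\end{verbatim}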
As a comment, the system shows the proof script listing all tactic
commands used in the proof.

Let us redo the same proof with a few variations. First of all, we may name
the initial goal as a conjectured lemma:

Lemma distr_impl : (A -> B -> C) -> (A -> B) -> A -> C.

Next, we may omit the names of local assumptions created by the introduction
tactics; they can be automatically created by the proof engine as new
non-clashing names.

The \verb:intros: tactic, with no arguments, effects as many individual
applications of \verb:intro: as is legal.

Then, we may compose several tactics together in sequence, or in parallel,
through {\sl tacticals}, that is, tactic combinators. The main constructions
are the following:
\begin{itemize}
\item $T_1 ; T_2$ (read $T_1$ then $T_2$) applies tactic $T_1$ to the current
goal, and then tactic $T_2$ to all the subgoals generated by $T_1$.
\item $T; [T_1 | T_2 | ... | T_n]$ applies tactic $T$ to the current
goal, and then tactic $T_1$ to the first newly generated subgoal,
..., $T_n$ to the $n$th.
\end{itemize}
We may thus complete the proof of \verb:distr_impl: with one composite tactic:

apply H; [ assumption | apply H0; assumption ].

Let us now save lemma \verb:distr_impl::

Here \verb:Save: needs no argument, since we gave the name \verb:distr_impl:
in advance;
it is however possible to override the given name by giving a different
argument to the command \verb:Save:.

Actually, such an easy combination of the tactics \verb:intro:, \verb:apply:
and \verb:assumption: may be found completely automatically by the
tactic \verb:auto:, without user guidance:

Lemma distr_imp : (A -> B -> C) -> (A -> B) -> A -> C.

This time, we do not save the proof, we just discard it with the \verb:Abort:
command.

At any point during a proof, we may use \verb:Abort: to exit the proof mode
and go back to Coq's main loop. We may also use \verb:Restart: to restart
from scratch the proof of the same lemma. We may also use \verb:Undo: to
backtrack one step, and more generally \verb:Undo n: to
backtrack $n$ steps.

We end this section by showing a useful command, \verb:Inspect n.:,
which inspects the global \Coq~ environment, showing the last \verb:n: declared
notions.

The declarations, whether global parameters or axioms, are shown preceded by
\verb:***:; definitions and lemmas are stated with their specification, but
their value (or proof-term) is omitted.
\section{Propositional Calculus}

\subsection{Conjunction}

We have seen how the \verb:intro: and \verb:apply: tactics could be combined
in order to prove implicational statements. More generally, \Coq~ favors a style
of reasoning, called {\sl Natural Deduction}, which decomposes reasoning into
so-called {\sl introduction rules}, which tell how to prove a goal whose main
operator is a given propositional connective, and {\sl elimination rules},
which tell how to use a hypothesis whose main operator is the propositional
connective. Let us show how to use these ideas for the propositional connectives
\verb:/\: and \verb:\/:.

Lemma and_commutative : A /\ B -> B /\ A.

We make use of the conjunctive hypothesis \verb:H: with the \verb:elim: tactic,
which breaks it into its components:

We now use the conjunction introduction tactic \verb:split:, which splits the
conjunctive goal into the two subgoals:

and the proof is now trivial. Indeed, the whole proof is obtainable as follows:

intro H; elim H; auto.
The tactic \verb:auto: succeeded here because it knows as a hint the
conjunction introduction operator \verb+conj+.

Actually, the tactic \verb+split+ is just an abbreviation for \verb+apply conj+.

What we have just seen is that the \verb:auto: tactic is more powerful than
just a simple application of local hypotheses; it tries to apply as well
lemmas which have been specified as hints. A
\verb:Hint Resolve: command registers a
lemma as a hint to be used from now on by the \verb:auto: tactic, whose power
may thus be incrementally augmented.

\subsection{Disjunction}

In a similar fashion, let us consider disjunction:

Lemma or_commutative : A \/ B -> B \/ A.

Let us prove the first subgoal in detail. We use \verb:intro: in order to
be left to prove \verb:B\/A: from \verb:A::
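Concretely (the name \verb:HA: is our choice):

\begin{verbatim}
Coq < intro HA.
\end{verbatim}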
Here the hypothesis \verb:H: is not needed anymore. We could choose to
actually erase it with the tactic \verb:clear:; in this simple proof it
does not really matter, but in bigger proof developments it is useful to
clear away unnecessary hypotheses which may clutter your screen.

The disjunction connective has two introduction rules, since \verb:P\/Q:
may be obtained from \verb:P: or from \verb:Q:; the two corresponding
proof constructors are called respectively \verb:or_introl: and
\verb:or_intror:; they are applied to the current goal by the tactics
\verb:left: and \verb:right: respectively. For instance:
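Since the current goal \verb:B\/A: follows from its right disjunct \verb:A:, a possible step is:

\begin{verbatim}
Coq < right; trivial.
\end{verbatim}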
The tactic \verb:trivial: works like \verb:auto: with the hints
database, but it only tries those tactics that can solve the goal in one
step.

As before, all these tedious elementary steps may be performed automatically,
as shown for the second symmetric case:

However, \verb:auto: alone does not succeed in proving the full lemma, because
it does not try any elimination step.
It is a bit disappointing that \verb:auto: is not able to prove automatically
such a simple tautology. The reason is that we want to keep
\verb:auto: efficient, so that it is always effective to use.

A complete tactic for propositional
tautologies is indeed available in \Coq~ as the \verb:tauto: tactic.

It is possible to inspect the actual proof tree constructed by \verb:tauto:,
using a standard command of the system, which prints the value of any notion
currently defined in the context:

Print or_commutative.
It is not easy to understand the notation for proof terms without a few
explanations. The \texttt{fun} prefix, such as \verb+fun H:A\/B =>+, corresponds
to \verb:intro H:, whereas a subterm such as
\verb:(or_intror: \verb:B H0):
corresponds to the sequence \verb:apply or_intror; exact H0:.
The generic combinator \verb:or_intror: needs to be instantiated by
the two properties \verb:B: and \verb:A:. Because \verb:A: can be
deduced from the type of \verb:H0:, only \verb:B: is printed.
The two instantiations are effected automatically by the tactic
\verb:apply: when pattern-matching a goal. The specialist will of course
recognize our proof term as a $\lambda$-term, used as notation for the
natural deduction proof term through the Curry-Howard isomorphism. The
naive user of \Coq~ may safely ignore these formal details.

Let us exercise the \verb:tauto: tactic on a more complex example:

Lemma distr_and : A -> B /\ C -> (A -> B) /\ (A -> C).
\subsection{Classical reasoning}

The tactic \verb:tauto: always comes back with an answer. Here is an example
where it fails:

Lemma Peirce : ((A -> B) -> A) -> A.

Note the use of the \verb:try: tactical, which does nothing if its tactic
argument fails.

This may come as a surprise to someone familiar with classical reasoning.
Peirce's lemma is true in Boolean logic, i.e. it evaluates to \verb:true: for
every truth-assignment to \verb:A: and \verb:B:. Indeed the double negation
of Peirce's law may be proved in \Coq~ using \verb:tauto::

Lemma NNPeirce : ~ ~ (((A -> B) -> A) -> A).

In classical logic, the double negation of a proposition is equivalent to this
proposition, but in the constructive logic of \Coq~ this is not so. If you
want to use classical logic in \Coq, you have to import explicitly the
\verb:Classical: module, which will declare the axiom \verb:classic:
of excluded middle, and classical tautologies such as de Morgan's laws.
The \verb:Require: command is used to import a module from \Coq's library:

Require Import Classical.
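We may then check the statement of the axiom thus imported:

\begin{verbatim}
Coq < Check classic.
classic : forall P : Prop, P \/ ~ P
\end{verbatim}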
and it is now easy (although admittedly not the most direct way) to prove
a classical law such as Peirce's:

Lemma Peirce : ((A -> B) -> A) -> A.

Here is one more example of propositional reasoning, in the shape of
a Scottish puzzle. A private club has the following rules:
\begin{itemize}
\item Every non-Scottish member wears red socks
\item Every member wears a kilt or doesn't wear red socks
\item The married members don't go out on Sunday
\item A member goes out on Sunday if and only if he is Scottish
\item Every member who wears a kilt is Scottish and married
\item Every Scottish member wears a kilt
\end{itemize}
Now, we show that these rules are so strict that no one can be accepted.
Variables Scottish RedSocks WearKilt Married GoOutSunday : Prop.
Hypothesis rule1 : ~ Scottish -> RedSocks.
Hypothesis rule2 : WearKilt \/ ~ RedSocks.
Hypothesis rule3 : Married -> ~ GoOutSunday.
Hypothesis rule4 : GoOutSunday <-> Scottish.
Hypothesis rule5 : WearKilt -> Scottish /\ Married.
Hypothesis rule6 : Scottish -> WearKilt.
Lemma NoMember : False.

At that point, \verb:NoMember: is a proof of the absurdity depending on
the hypotheses.
We may end the section; in that case, the variables and hypotheses
will be discharged, and the type of \verb:NoMember: will be
generalised accordingly.

\section{Predicate Calculus}
Let us now move into predicate logic, and first of all into first-order
predicate calculus. The essence of predicate calculus is to try to prove
theorems in the most abstract possible way, without using the definitions of
the mathematical notions, but by formal manipulations of uninterpreted
function and predicate symbols.

\subsection{Sections and signatures}

Usually one works in some domain of discourse, over which range the individual
variables and function symbols. In \Coq~ we speak in a language with a rich
variety of types, so we may mix several domains of discourse in our
multi-sorted language. For the moment, we just do a few exercises, over a
domain of discourse \verb:D: axiomatised as a \verb:Set:, and we consider two
predicate symbols \verb:P: and \verb:R: over \verb:D:, of arities
respectively 1 and 2. Such abstract entities may be entered in the context
as global variables. But we must be careful about the pollution of our
global environment by such declarations. For instance, we have already
polluted our \Coq~ session by declaring the variables
\verb:n:, \verb:Pos_n:, \verb:A:, \verb:B:, and \verb:C:. If we want to revert to the clean state of
our initial session, we may use the \Coq~ \verb:Reset: command, which returns
to the state just prior to the introduction of the given global notion, as we
did before to
remove a section, or we may return to the initial state using:
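That is:

\begin{verbatim}
Coq < Reset Initial.
\end{verbatim}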
We shall now declare a new \verb:Section:, which will allow us to define
notions local to a well-delimited scope. We start by assuming a domain of
discourse \verb:D:, and a binary relation \verb:R: over \verb:D::

Section Predicate_calculus.
Variable D : Set.
Variable R : D -> D -> Prop.

As a simple example of predicate calculus reasoning, let us assume
that relation \verb:R: is symmetric and transitive, and let us show that
\verb:R: is reflexive in any point \verb:x: which has an \verb:R: successor.
Since we do not want to make the assumptions about \verb:R: global axioms of
a theory, but rather local hypotheses to a theorem, we open a specific
section to this effect:

Section R_sym_trans.
Hypothesis R_symmetric : forall x y:D, R x y -> R y x.
Hypothesis R_transitive : forall x y z:D, R x y -> R y z -> R x z.
Remark the syntax \verb+forall x:D,+ which stands for universal quantification
over type \verb:D:.

\subsection{Existential quantification}

We now state our lemma, and enter proof mode.

Lemma refl_if : forall x:D, (exists y, R x y) -> R x x.

Remark that the hypotheses which are local to the currently opened sections
are listed as local hypotheses to the current goals.
The rationale is that these hypotheses are going to be discharged, as we
shall see, when we close the corresponding sections.

Note the functional syntax for existential quantification. The existential
quantifier is built from the operator \verb:ex:, which expects a
predicate as argument:

and the notation \verb+(exists x:D, P x)+ is just concrete syntax for
\verb+(ex D (fun x:D => P x))+.
Existential quantification is handled in \Coq~ in a similar
fashion to the connectives \verb:/\: and \verb:\/: : it is introduced by
the proof combinator \verb:ex_intro:, which is invoked by the specific
tactic \verb:exists:, and its elimination provides a witness \verb+a:D+ to
\verb:P:, together with an assumption \verb+h:(P a)+ that indeed \verb+a+
verifies \verb:P:. Let us see how this works on this simple example.
Remark that \verb:intros: treats universal quantification in the same way
as the premises of implications. Renaming of bound variables occurs
when it is needed; for instance, had we started with \verb:intro y:,
we would have obtained the goal:

Let us now use the existential hypothesis \verb:x_Rlinked: to
exhibit an $R$-successor $y$ of $x$. This is done in two steps, first with
\verb:elim:, then with \verb:intros::

Now we want to use \verb:R_transitive:. The \verb:apply: tactic will know
how to match \verb:x: with \verb:x:, and \verb:z: with \verb:x:, but needs
help on how to instantiate \verb:y:, which appears in the hypotheses of
\verb:R_transitive:, but not in its conclusion. We give the proper hint
to \verb:apply: in a \verb:with: clause, as follows:

apply R_transitive with y.

The rest of the proof is routine:

apply R_symmetric; assumption.

Let us now close the current section.
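This is done with the \verb:End: command, using the section name \verb:R_sym_trans::

\begin{verbatim}
Coq < End R_sym_trans.
\end{verbatim}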
Here \Coq's printout is a warning that all local hypotheses have been
discharged in the statement of \verb:refl_if:, which now becomes a general
theorem in the first-order language declared in section
\verb:Predicate_calculus:. In this particular example, the use of section
\verb:R_sym_trans: has not been really significant, since we could have
instead stated theorem \verb:refl_if: in its general form, and done
basically the same proof, obtaining \verb:R_symmetric: and
\verb:R_transitive: as local hypotheses by initial \verb:intros: rather
than as global hypotheses in the context. But if we had pursued the
theory by proving more theorems about relation \verb:R:,
we would have obtained all general statements at the closing of the section,
with minimal dependencies on the hypotheses of symmetry and transitivity.

\subsection{Paradoxes of classical predicate calculus}

Let us illustrate this feature by pursuing our \verb:Predicate_calculus:
section with an enrichment of our language: we declare a unary predicate
\verb:P: and a constant \verb:d::

Variable P : D -> Prop.
Variable d : D.
We shall now prove a well-known fact from first-order logic: a universal
predicate is non-empty, or in other terms existential quantification
follows from universal quantification.

Lemma weird : (forall x:D, P x) -> exists a, P a.

First of all, notice the pair of parentheses around
\verb+forall x:D, P x+ in
the statement of lemma \verb:weird:.
If we had omitted them, \Coq's parser would have interpreted the
statement as a truly trivial fact, since we would
postulate an \verb:x: verifying \verb:(P x):. Here the situation is indeed
more problematic. If we have some element in \verb:Set: \verb:D:, we may
apply \verb:UnivP: to it and conclude; otherwise we are stuck. Indeed
such an element \verb:d: exists, but this is just by virtue of our
new signature. This points out a subtle difference between standard
predicate calculus and \Coq. In standard first-order logic,
the equivalent of lemma \verb:weird: always holds,
because such a rule is wired in the inference rules for quantifiers, the
semantic justification being that the interpretation domain is assumed to
be non-empty. Whereas in \Coq, where types are not assumed to be
systematically inhabited, lemma \verb:weird: only holds in signatures
which allow the explicit construction of an element in the domain of
the predicate.

Let us conclude the proof, in order to show the use of the \verb:exists:
tactic:
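A possible completion, using the constant \verb:d: declared above:

\begin{verbatim}
Coq < exists d; trivial.
\end{verbatim}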
Another fact which illustrates the sometimes disconcerting rules of
predicate calculus is Smullyan's drinkers' paradox: ``In any non-empty
bar, there is a person such that if she drinks, then everyone drinks''.
We model the bar by the \verb:Set: \verb:D:, and drinking by the predicate \verb:P:.
We shall need classical reasoning. Instead of loading the \verb:Classical:
module as we did above, we just state the law of excluded middle as a
local hypothesis schema at this point:

Hypothesis EM : forall A:Prop, A \/ ~ A.
Lemma drinker : exists x:D, P x -> forall x:D, P x.

The proof goes by cases on whether or not
there is someone who does not drink. Such reasoning by cases proceeds
by invoking the excluded middle principle, via \verb:elim: of the
proper instance of \verb:EM::

elim (EM (exists x, ~ P x)).
We first look at the first case. Let Tom be the non-drinker:

intro Non_drinker; elim Non_drinker; intros Tom Tom_does_not_drink.

We conclude in that case by considering Tom, since his drinking leads to
a contradiction:

exists Tom; intro Tom_drinks.

There are several ways in which we may eliminate a contradictory case;
a simple one is to use the \verb:absurd: tactic as follows:

absurd (P Tom); trivial.

We now proceed with the second case, in which actually any person will do;
such a John Doe is given by the non-emptiness witness \verb:d::

intro No_nondrinker; exists d; intro d_drinks.

Now we consider any Dick in the bar, and reason by cases according to whether
or not he drinks:

intro Dick; elim (EM (P Dick)); trivial.

The only non-trivial case is again treated by contradiction:

intro Dick_does_not_drink; absurd (exists x, ~ P x); trivial.
exists Dick; trivial.

Now, let us close the main section and look at the complete statements
we proved:

End Predicate_calculus.
Remark how the three theorems are completely generic in the most general
fashion:
the domain \verb:D: is discharged in all of them, \verb:R: is discharged in
\verb:refl_if: only, \verb:P: is discharged only in \verb:weird: and
\verb:drinker:, along with the hypothesis that \verb:D: is inhabited.
Finally, the excluded middle hypothesis is discharged only in
\verb:drinker:.

Note that the name \verb:d: has vanished as well from
the statements of \verb:weird: and \verb:drinker:,
since \Coq's pretty-printer replaces
systematically a quantification such as \verb+forall d:D, E+, where \verb:d:
does not occur in \verb:E:, by the functional notation \verb:D->E:.
Similarly the name \verb:EM: does not appear in \verb:drinker:.

Actually, universal quantification, implication,
as well as function formation, are
all special cases of one general construct of type theory called
{\sl dependent product}. This is the mathematical construction
corresponding to an indexed family of functions. A function
$f\in \Pi x:D\cdot Cx$ maps an element $x$ of its domain $D$ to its
(indexed) codomain $Cx$. Thus a proof of $\forall x:D\cdot Px$ is
a function mapping an element $x$ of $D$ to a proof of proposition $Px$.
\subsection{Flexible use of local assumptions}

Very often during the course of a proof we want to retrieve a local
assumption and reintroduce it explicitly in the goal, for instance
in order to get a more general induction hypothesis. The tactic
\verb:generalize: is what is needed here:

Section Predicate_Calculus.
Variables P Q : nat -> Prop.
Variable R : nat -> nat -> Prop.

Lemma PQR :
 forall x y:nat, (R x x -> P x -> Q x) -> P x -> R x y -> Q x.

Sometimes it may be convenient to use a lemma, although we do not have
a direct way to appeal to such an already proven fact. The tactic \verb:cut:
makes it possible to use the lemma at this point, keeping the corresponding
proof obligation as a new subgoal:

cut (R x x); trivial.

We clean up the goal with an \verb:Abort: command.
943
\subsection{Equality}

The basic equality provided in \Coq~ is Leibniz equality, written infix
like \verb+x=y+, when \verb:x: and \verb:y: are two expressions of the
same type. The replacement of \verb:x: by \verb:y: in any term is
effected by a variety of tactics, such as \verb:rewrite:.

Let us give a few examples of equality replacement. Let us assume that
some arithmetic function \verb:f: is null in zero:
\begin{coq_example*}
Variable f : nat -> nat.
Hypothesis foo : f 0 = 0.
\end{coq_example*}

We want to prove the following conditional equality:
\begin{coq_example*}
Lemma L1 : forall k:nat, k = 0 -> f k = k.
\end{coq_example*}

As usual, we first get rid of local assumptions with \verb:intro::
\begin{coq_example*}
intros k E.
\end{coq_example*}

Let us now use equation \verb:E: as a left-to-right rewriting:
\begin{coq_example*}
rewrite E.
\end{coq_example*}

This replaced both occurrences of \verb:k: by \verb:O:.
Now \verb:apply foo: will finish the proof.

When one wants to rewrite an equality in a right-to-left fashion, one
should use \verb:rewrite <- E: rather than \verb:rewrite E:.
Let us now illustrate the tactic \verb:replace::
\begin{coq_example*}
Hypothesis f10 : f 1 = f 0.
Lemma L2 : f (f 1) = 0.
replace (f 1) with 0.
\end{coq_example*}

What happened here is that the replacement left the first subgoal to be
proved, but another proof obligation was generated by the
\verb:replace: tactic, as the second subgoal. The first subgoal is
solved immediately by applying lemma \verb:foo:; the second one
requires transitivity and then symmetry of equality, for instance with
the tactics \verb:transitivity: and \verb:symmetry::
\begin{coq_example*}
transitivity (f 0); symmetry; trivial.
\end{coq_example*}

In case the equality $t=u$ generated by \verb:replace: $u$ \verb:with:
$t$ is an assumption
(possibly modulo symmetry), it will be automatically proved and the
corresponding goal will not appear. For instance:
\begin{coq_example*}
replace (f 0) with 0.
rewrite f10; rewrite foo; trivial.
\end{coq_example*}
\section{Using definitions}

The development of mathematics does not simply proceed by logical
argumentation from first principles: definitions are used in an
essential way. A formal development proceeds by a dual process of
abstraction, where one proves abstract statements in predicate
calculus, and instantiation, where, on the contrary, one instantiates
general statements with particular notions, in order to use the
structure of mathematical values in the proof of more specialised
properties.
\subsection{Unfolding definitions}

Assume that we want to develop the theory of sets represented as
characteristic predicates over some universe \verb:U:. For instance:
\begin{coq_example*}
Variable U : Type.
Definition set := U -> Prop.
Definition element (x:U) (S:set) := S x.
Definition subset (A B:set) := forall x:U, element x A -> element x B.
\end{coq_example*}

Now, assume that we have loaded a module of general properties about
relations over some abstract type \verb:T:, such as transitivity:
\begin{coq_example*}
Definition transitive (T:Type) (R:T -> T -> Prop) :=
  forall x y z:T, R x y -> R y z -> R x z.
\end{coq_example*}

Now, assume that we want to prove that \verb:subset: is a
\verb:transitive: relation:
\begin{coq_example*}
Lemma subset_transitive : transitive set subset.
\end{coq_example*}

In order to make any progress, one needs to use the definition of
\verb:transitive:. The \verb:unfold: tactic, which replaces all
occurrences of a defined notion by its definition in the current goal,
may be used here:
\begin{coq_example*}
unfold transitive.
\end{coq_example*}

Now, we must unfold \verb:subset::
\begin{coq_example*}
unfold subset.
\end{coq_example*}

Now, unfolding \verb:element: would be a mistake, because indeed a
simple proof can be found by \verb:auto:, keeping \verb:element: an
abstract predicate.

Many variations on \verb:unfold: are provided in \Coq. For instance,
we may selectively unfold one designated occurrence with a clause such
as \verb:unfold subset at 2:. One may also unfold a definition in a
given local hypothesis, using the \verb:in: notation, as in
\verb:unfold subset in H:. Finally, the tactic \verb:red: performs
unfolding of the head occurrence of the current goal only.
\subsection{Principle of proof irrelevance}

Even though in principle the proof term associated with a verified
lemma corresponds to a defined value of the corresponding
specification, such definitions cannot be unfolded in \Coq: a lemma is
considered an {\sl opaque} definition. This conforms to the
mathematical tradition of {\sl proof irrelevance}: the proof of a
logical proposition does not matter, and the mathematical
justification of a logical development relies only on the {\sl
provability} of the lemmas used in the formal proof.

Conversely, ordinary mathematical definitions can be unfolded at will;
they are {\sl transparent}.
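The distinction is easy to observe. In the following small sketch (a
hypothetical example, not part of the original session), the
transparent definition \verb:double: is unfolded by evaluation:
\begin{coq_example*}
Definition double (n:nat) := n + n.
Eval compute in double 2.
\end{coq_example*}

Here evaluation unfolds \verb:double: and computes the result
\verb:4:; had the same equation been stated as a lemma and closed with
\verb:Qed:, no such unfolding would be possible.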
\section{Data Types as Inductively Defined Mathematical Collections}

All the notions which were studied until now pertain to traditional
mathematical logic. Specifications of objects were abstract properties
used in reasoning more or less constructively; we are now entering
the realm of inductive types, which specify the existence of concrete
mathematical constructions.
\subsection{Booleans}

Let us start with the collection of booleans, as they are specified
in \Coq's \verb:Prelude: module:
\begin{coq_example*}
Inductive bool : Set := true | false.
\end{coq_example*}

Such a declaration defines several objects at once. First, a new
\verb:Set: is declared, with name \verb:bool:. Then the {\sl
constructors} of this \verb:Set: are declared, called \verb:true: and
\verb:false:. Those are analogous to introduction rules of the new Set
\verb:bool:. Finally, a specific elimination rule for \verb:bool: is
now available, which permits reasoning by cases on \verb:bool: values.
Three instances are indeed defined as new combinators in the global
context: \verb:bool_ind:, a proof combinator corresponding to
reasoning by cases, \verb:bool_rec:, an if-then-else programming
construct, and \verb:bool_rect:, a similar combinator at the level of
types.
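For instance, \verb:bool_rec: may be used to program Boolean negation
directly, by instantiating the indexed family with the constant family
\verb:bool: (a hypothetical definition given for illustration; the
\verb:Prelude: defines negation by pattern-matching):
\begin{coq_example*}
Definition negb' : bool -> bool :=
  bool_rec (fun _:bool => bool) false true.
Eval compute in negb' true.
\end{coq_example*}

The first branch gives the value at \verb:true:, namely \verb:false:,
and the second the value at \verb:false:; evaluation of
\verb:negb' true: thus yields \verb:false:.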
Let us for instance prove that every Boolean is true or false:
\begin{coq_example*}
Lemma duality : forall b:bool, b = true \/ b = false.
\end{coq_example*}

We use the knowledge that \verb:b: is a \verb:bool: by calling the
tactic \verb:elim:, which in this case will appeal to the combinator
\verb:bool_ind: in order to split the proof according to the two
cases. It is then easy to conclude in each case with \verb:auto:.
Indeed, the whole proof can be done with the combination of the
\verb:simple induction: tactic, which combines \verb:intro: and
\verb:elim:, with good old \verb:auto::
\begin{coq_example*}
simple induction b; auto.
\end{coq_example*}
\subsection{Natural numbers}

Similarly to Booleans, natural numbers are defined in the
\verb:Prelude: module with constructors \verb:S: and \verb:O::
\begin{coq_example*}
Inductive nat : Set :=
  | O : nat
  | S : nat -> nat.
\end{coq_example*}

The elimination principles which are automatically generated are
Peano's induction principle, and a recursion operator:
\begin{coq_example*}
Check nat_ind.
Check nat_rec.
\end{coq_example*}
Let us start by showing how to program the standard primitive recursion
operator \verb:prim_rec: from the more general \verb:nat_rec::
\begin{coq_example*}
Definition prim_rec := nat_rec (fun i:nat => nat).
\end{coq_example*}

That is, instead of computing for natural \verb:i: an element of the
indexed \verb:Set: \verb:(P i):, \verb:prim_rec: computes uniformly an
element of \verb:nat:. Let us check the type of \verb:prim_rec::
\begin{coq_example*}
Check prim_rec.
\end{coq_example*}

Oops! Instead of the expected type \verb+nat->(nat->nat->nat)->nat->nat+
we get an apparently more complicated expression. Indeed the type of
\verb:prim_rec: is equivalent by rule $\beta$ to its expected type; this
may be checked in \Coq~ by the command \verb:Eval cbv beta:, which
$\beta$-reduces an expression to its {\sl normal form}:
\begin{coq_example*}
Eval cbv beta in
 ((fun _:nat => nat) O ->
  (forall y:nat, (fun _:nat => nat) y -> (fun _:nat => nat) (S y)) ->
  forall n:nat, (fun _:nat => nat) n).
\end{coq_example*}

Let us now show how to program addition with primitive recursion:
\begin{coq_example*}
Definition addition (n m:nat) := prim_rec m (fun p rec:nat => S rec) n.
\end{coq_example*}

That is, we specify that \verb+(addition n m)+ computes by cases on
\verb:n: according to its main constructor; when \verb:n = O:, we get
\verb:m:; when \verb:n = S p:, we get \verb:(S rec):, where \verb:rec:
is the result of the recursive computation \verb+(addition p m)+. Let
us verify it by asking \Coq~to compute for us say $2+3$:
\begin{coq_example*}
Eval compute in (addition (S (S O)) (S (S (S O)))).
\end{coq_example*}
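As an exercise in the same style (a hypothetical addition to the
session), multiplication may also be programmed with \verb:prim_rec::
when \verb:n = O: the result is \verb:O:, and when \verb:n = S p: we
add \verb:m: to the recursive result:
\begin{coq_example*}
Definition multiplication (n m:nat) :=
  prim_rec O (fun p rec:nat => addition m rec) n.
Eval compute in (multiplication (S (S O)) (S (S (S O)))).
\end{coq_example*}

The evaluation yields the representation of $6$, as expected.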
Actually, we do not have to do all this explicitly. {\Coq} provides a
special syntax {\tt Fixpoint/match} for generic primitive recursion,
and we could thus have defined addition directly as:
\begin{coq_example*}
Fixpoint plus (n m:nat) {struct n} : nat :=
  match n with
  | O => m
  | S p => S (plus p m)
  end.
\end{coq_example*}

For the rest of the session, we shall clean up what we did so far with
types \verb:bool: and \verb:nat:, in order to use the initial
definitions given in \Coq's \verb:Prelude: module, and not to get
confusing error messages due to our redefinitions. We thus revert to
the state before our definition of \verb:bool: with the \verb:Reset:
command:
\begin{coq_example*}
Reset bool.
\end{coq_example*}
\subsection{Simple proofs by induction}

Let us now show how to do proofs by structural induction. We start
with easy properties of the \verb:plus: function we just defined. Let
us first show that $n = n + 0$:
\begin{coq_example*}
Lemma plus_n_O : forall n:nat, n = n + 0.
intro n; elim n.
\end{coq_example*}

What happened was that \verb:elim n:, in order to construct a
\verb:Prop: (the initial goal) from a \verb:nat: (i.e. \verb:n:),
appealed to the corresponding induction principle \verb:nat_ind: which
we saw was indeed exactly Peano's induction scheme. Pattern-matching
instantiated the corresponding predicate \verb:P: to
\verb+fun n:nat => n = n + 0+, and we get as subgoals the
corresponding instantiations of the base case \verb:(P O):, and of the
inductive step \verb+forall y:nat, P y -> P (S y)+. In each case we
get an instance of the function \verb:plus: in which its first
argument starts with a constructor, and is thus amenable to
simplification by primitive recursion. The \Coq~tactic \verb:simpl:
can be used for this purpose:
\begin{coq_example*}
simpl; auto.
\end{coq_example*}
We proceed in the same way for the base step:
\begin{coq_example*}
simpl; auto.
\end{coq_example*}

Here \verb:auto: succeeded, because it used as a hint the lemma
\verb:eq_S:, which says that successor preserves equality:
\begin{coq_example*}
Check eq_S.
\end{coq_example*}

Actually, let us see how to declare our lemma \verb:plus_n_O: as a
hint to be used by \verb:auto::
\begin{coq_example*}
Hint Resolve plus_n_O.
\end{coq_example*}
We now proceed to the similar property concerning the other
constructor \verb:S::
\begin{coq_example*}
Lemma plus_n_S : forall n m:nat, S (n + m) = n + S m.
\end{coq_example*}

We now go faster, remembering that the tactic \verb:simple induction:
does the necessary \verb:intros: before applying \verb:elim:.
Factoring simplification and automation in both cases thanks to tactic
composition, we prove this lemma in one line:
\begin{coq_example*}
simple induction n; simpl; auto.
Hint Resolve plus_n_S.
\end{coq_example*}

Let us end this exercise with the commutativity of \verb:plus::
\begin{coq_example*}
Lemma plus_com : forall n m:nat, n + m = m + n.
\end{coq_example*}

Here we have a choice between doing an induction on \verb:n: or on
\verb:m:, the situation being symmetric. For instance:
\begin{coq_example*}
simple induction m; simpl; auto.
\end{coq_example*}

Here \verb:auto: succeeded on the base case, thanks to our hint
\verb:plus_n_O:, but the induction step requires rewriting, which
\verb:auto: does not handle:
\begin{coq_example*}
intros m' E; rewrite <- E; auto.
\end{coq_example*}
\subsection{Discriminate}

It is also possible to define new propositions by primitive recursion.
Let us for instance define the predicate which discriminates between
the constructors \verb:O: and \verb:S:: it computes to \verb:False:
when its argument is \verb:O:, and to \verb:True: when its argument is
of the form \verb:(S n)::
\begin{coq_example*}
Definition Is_S (n:nat) := match n with
                           | O => False
                           | S p => True
                           end.
\end{coq_example*}
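Before using it, we may check (a supplementary experiment, not in the
original session) how \verb:Is_S: computes on each constructor:
\begin{coq_example*}
Eval compute in Is_S 0.
Eval compute in Is_S 2.
\end{coq_example*}

The first evaluation yields \verb:False:, the second \verb:True:.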
Now we may use the computational power of \verb:Is_S: in order to
prove trivially that \verb:(Is_S (S n)): holds:
\begin{coq_example*}
Lemma S_Is_S : forall n:nat, Is_S (S n).
simpl; trivial.
\end{coq_example*}

But we may also use it to transform a \verb:False: goal into
\verb:(Is_S O):. Let us show a particularly important use of this
feature; we want to prove that \verb:O: and \verb:S: construct
different values, one of Peano's axioms:
\begin{coq_example*}
Lemma no_confusion : forall n:nat, 0 <> S n.
\end{coq_example*}

First of all, we replace negation by its definition, by reducing the
goal with the tactic \verb:red:; then we get a contradiction
hypothesis by successive \verb:intros::
\begin{coq_example*}
red; intros n H.
\end{coq_example*}

Now we use our trick:
\begin{coq_example*}
change (Is_S 0).
\end{coq_example*}

Now we use equality in order to get a subgoal which computes out to
\verb:True:, which finishes the proof:
\begin{coq_example*}
rewrite H; simpl; trivial.
\end{coq_example*}

Actually, a specific tactic \verb:discriminate: is provided to produce
such proofs mechanically, without the need for the user to define
explicitly the relevant discrimination predicates:
\begin{coq_example*}
intro n; discriminate.
\end{coq_example*}
\section{Logic programming}

In the same way as we defined standard data-types above, we may define
inductive families, and for instance inductive predicates. Here is the
definition of the predicate $\le$ over type \verb:nat:, as given in
\Coq's \verb:Prelude: module:
\begin{coq_example*}
Inductive le (n:nat) : nat -> Prop :=
  | le_n : le n n
  | le_S : forall m:nat, le n m -> le n (S m).
\end{coq_example*}

This definition introduces a new predicate \verb+le:nat->nat->Prop+,
and the two constructors \verb:le_n: and \verb:le_S:, which are the
defining clauses of \verb:le:. That is, we get not only the ``axioms''
\verb:le_n: and \verb:le_S:, but also the converse property, that
\verb:(le n m): holds if and only if this statement can be obtained as
a consequence of these defining clauses; that is, \verb:le: is the
minimal predicate verifying the clauses \verb:le_n: and \verb:le_S:.
This is ensured, as in the case of inductive data types, by an
elimination principle, which here amounts to an induction principle
\verb:le_ind:, stating this minimality property:
\begin{coq_example*}
Check le_ind.
\end{coq_example*}
Let us show how proofs may be conducted with this principle.
First we show that $n\le m \Rightarrow n+1\le m+1$:
\begin{coq_example*}
Lemma le_n_S : forall n m:nat, le n m -> le (S n) (S m).
\end{coq_example*}

What happens here is similar to the behaviour of \verb:elim: on
natural numbers: it appeals to the relevant induction principle, here
\verb:le_ind:, which generates the two subgoals, which may then be
solved easily with the help of the defining clauses of \verb:le::
\begin{coq_example*}
apply le_n; trivial.
intros; apply le_S; trivial.
\end{coq_example*}

Now we know that it is a good idea to give the defining clauses as
hints, so that the proof may proceed with a simple combination of
\verb:induction: and \verb:auto:.
\begin{coq_example*}
Hint Resolve le_n le_S.
\end{coq_example*}

We have a slight problem however. We want to say ``Do an induction on
hypothesis \verb:(le n m):'', but we have no explicit name for it.
What we do in this case is to say ``Do an induction on the first
unnamed hypothesis'', as follows:
\begin{coq_example*}
simple induction 1; auto.
\end{coq_example*}
Here is a trickier problem. Assume we want to show that
$n\le 0 \Rightarrow n=0$. This reasoning ought to follow simply from
the fact that only the first defining clause of \verb:le: applies.
\begin{coq_example*}
Lemma tricky : forall n:nat, le n 0 -> n = 0.
\end{coq_example*}

However, here trying something like \verb:induction 1: would lead
nowhere (try it and see what happens). An induction on \verb:n: would
not be convenient either. What we must do here is analyse the
definition of \verb"le" in order to match the hypothesis
\verb:(le n O): with the defining clauses, to find that only
\verb:le_n: applies, whence the result. This analysis may be performed
by the ``inversion'' tactic \verb:inversion_clear: as follows:
\begin{coq_example*}
intros n H; inversion_clear H.
\end{coq_example*}
\section{Opening library modules}

When you start \Coq~ without further requirements in the command line,
you get a bare system with few libraries loaded. As we saw, a standard
prelude module provides the standard logic connectives, and a few
arithmetic notions. If you want to load and open other modules from
the library, you have to use the \verb"Require" command, as we saw for
classical logic above. For instance, if you want more arithmetic
constructions, you should request:
\begin{coq_example*}
Require Import Arith.
\end{coq_example*}

Such a command looks for a (compiled) module file \verb:Arith.vo: in
the libraries registered by \Coq. Libraries inherit the structure of
the file system of the operating system and are registered with the
command \verb:Add LoadPath:. Physical directories are mapped to
logical directories. In particular, the standard library of \Coq~ is
pre-registered as a library of name \verb=Coq=. Modules have unique
absolute names denoting their place in \Coq~ libraries. An absolute
name is a sequence of single identifiers separated by dots. E.g. the
module \verb=Arith= has full name \verb=Coq.Arith.Arith=, and because
it resides in the eponymous subdirectory \verb=Arith= of the standard
library, it can just as well be required by the command:
\begin{coq_example*}
Require Import Coq.Arith.Arith.
\end{coq_example*}

This may be useful to avoid ambiguities if somewhere, in another
branch of the libraries known by \Coq, another module is also called
\verb=Arith=. Notice that by default, when a library is registered,
all its contents, and all the contents of its subdirectories
recursively, are visible and accessible by a short (relative) name
such as \verb=Arith=. Notice also that modules or definitions not
explicitly registered in a library are put in a default library called
\verb=Top=.

The loading of a compiled file is quick, because the corresponding
development is not type-checked again.
\section{Creating your own modules}

You may create your own modules by writing \Coq~ commands in a file,
say \verb:my_module.v:. Such a module may be simply loaded in the
current context with the command \verb:Load my_module:. It may also be
compiled, in ``batch'' mode, using the UNIX command \verb:coqc:.
Compiling the module \verb:my_module.v: creates a file
\verb:my_module.vo: that can be loaded with the command
\verb:Require Import my_module:.

If a required module depends on other modules, then the latter are
automatically required beforehand. However, their contents are not
automatically visible. If you want a module \verb=M= required in a
module \verb=N= to be automatically visible when \verb=N= is required,
you should use \verb:Require Export M: in your module \verb:N:.
\section{Managing the context}

It is often difficult to remember the names of all lemmas and
definitions available in the current context, especially if large
libraries have been loaded. A convenient \verb:SearchAbout: command is
available to look up all known facts concerning a given predicate. For
instance, if you want to know all the known lemmas about the
less-or-equal relation, just ask:
\begin{coq_example*}
SearchAbout le.
\end{coq_example*}

Another command, \verb:Search:, displays only lemmas where the
searched predicate appears at the head position in the conclusion:
\begin{coq_example*}
Search le.
\end{coq_example*}
A new and more convenient search tool is \textsf{SearchPattern},
developed by Yves Bertot. It allows finding the theorems with a
conclusion matching a given pattern, where \verb:_: can be used in
place of an arbitrary term. We remark in this example that \Coq{}
provides the usual infix notations for arithmetic operators:
\begin{coq_example*}
SearchPattern (_ + _ = _).
\end{coq_example*}
\section{Now you are on your own}

This tutorial is necessarily incomplete. If you wish to pursue serious
proving in \Coq, you should now get your hands on \Coq's Reference
Manual, which contains a complete description of all the tactics we
saw, and many more. You should also look in the library of developed
theories which is distributed with \Coq, in order to acquaint yourself
with various proof techniques.
% $Id: Tutorial.tex 8978 2006-06-23 10:15:57Z herbelin $