Mixing Ltac and Gallina for Fun and Profit

This write-up is the second part of a series on Ltac, the default tactic language of Coq. The first part explains how Coq users can define their own tactics.
One of the most misleading introductions to Coq is to say that “Gallina is for programs, while tactics are for proofs.” Indeed, in Coq we construct terms of given types, always. Terms encode both programs and proofs about these programs. Gallina is the preferred way to construct programs, and tactics are the preferred way to construct proofs.
The key word here is “preferred.” We do not always need to use tactics to construct a proof term. Conversely, there are occasions where constructing a program with tactics becomes handy. Furthermore, Coq allows for calling tactics from Gallina definitions. Compared to Gallina, Ltac provides some very interesting features.
It turns out these features are more than handy when it comes to metaprogramming (that is, the generation of programs by programs).
We proceed as follows. First, we try to demystify how Ltac actually works under the hood. Then, we detail which bridges exist between Gallina and Ltac. Finally, we quickly discuss the risk of using Ltac to generate terms that do not encode proofs.
  1. What Do Tactics Do?
  2. A Tale of Two Worlds, and Some Bridges
  3. Beware the Automation Elephant in the Room
Revisions

This revisions table has been automatically generated from the git history of this website repository, and the change descriptions may not always be as useful as they should.


2020-07-31 Make the two articles about Ltac refer to each other c2bb9b0
2020-07-31 “For Fun and Benefit” was not an idiomatic expression c0364ff
2020-07-26 Initial publication a254d35

What Do Tactics Do?

The Coq proof environment is basically a State monad, where said state is an incomplete term that is being gradually constructed. This term is hidden from the user, and Coq rather presents so-called “goals.”
From this perspective, a goal in Coq is basically a hole within the term we are constructing. It is presented to users as a sequent $C \vdash G$, where $C$ is the context of variables available to fill the hole, and $G$ is the type of the term to construct.
We illustrate what happens under the hood when Coq executes a simple proof script.

Definition identity : forall (α : Type), α -> α.

At this point, we have told Coq that we want to construct a term of type forall (α : Type), α -> α.

Proof.

Coq presents us one goal of the form
$\vdash \forall (\alpha : \mathrm{Type}),\ \alpha \rightarrow \alpha$
As for the term that we want to construct and that Coq is hiding from us, it is just a hole.
In a Coq course, you will learn that the intro tactic allows for turning the premise of an implication into a hypothesis within the context, that is
$C \vdash P \rightarrow Q$
becomes
$C,\ P \vdash Q$
What intro actually does is refine the “hidden” term by inserting an anonymous function. So, when we use

  intro α.

it refines the term
_1
into
fun (α : Type) => _2
Similarly,

  intros x.

refines _2 into
fun (x : α) => _3
which means the term constructed (and hidden) by Coq is now
fun (α : Type) (x : α) => _3
In the scope wherein _3 lives, two variables are available: α and x. Therefore, the goal presented by Coq is
$\alpha : \mathrm{Type},\ x : \alpha \vdash \alpha$
The assert tactic is used to “add a hypothesis to the context, assuming you can prove the hypothesis is true from the context.” What assert does under the hood is refine the term we are constructing using let _ in _.
More precisely, when we use

  assert (y : α).

It refines the hole _3 into
let y : α := _4 in _5
which means the term we are constructing becomes
fun (α : Type) (x : α) => let y : α := _4 in _5
On the one hand, there are two variables available inside the scope wherein _4 lives, so the related goal is
$\alpha : \mathrm{Type},\ x : \alpha \vdash \alpha$
On the other hand, there are three variables available for defining _5 and the goal presented to the user is now
$\alpha : \mathrm{Type},\ x : \alpha,\ y : \alpha \vdash \alpha$
To conclude this first example, we can discuss the exact tactic. This tactic takes a term as its argument, and fills the current hole (solves the current goal) with this term. We solve the two subgoals generated by assert using exact.
First, we fill _4 with x.

  + exact x.

Therefore, _4 becomes x, and the terms that we are gradually defining now is
fun (α : Type) (x : α) => let y : α := x in _5
Similarly,

  + exact y.

fills the last hole with y.
fun (α : Type) (x : α) => let y : α := x in y
The term that we are constructing does not have any hole we need to fill, and therefore Coq does not present us any more “goals to solve.”
This is not the end of the story, though. Tactics in Coq are not trusted. It is assumed they can be buggy, which would mean that they can in theory generate incorrect terms. As a consequence, we need to tell Coq to check the term (or verify the proof) using Qed or Defined.

Defined.

Coq accepts the Defined command without complaining. It means that the term generated by the tactics typechecks.
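Because we closed the definition with Defined (rather than Qed), the resulting term remains transparent, so we can ask Coq to reduce an application of it. For instance, the following query (a sanity check, not part of the original script) unfolds the let and returns the argument unchanged.

```coq
(* identity is transparent thanks to Defined, so Compute can unfold it. *)
Compute (identity nat 2).
(* => 2 : nat *)
```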
To some extent, the refine tactic is actually the most basic tool you may need. For instance, intro α is equivalent to refine (fun (α : Type) => _), and exact x is equivalent to refine x. And so on. As a consequence, we can rewrite the previous proof script using refine.

Definition identity' : forall (α : Type), α -> α.

Proof.
  refine (fun (α : Type) => _).
  refine (fun (x : α) => _).
  unshelve refine (let y : α := _ in _).
  + refine x.
  + refine y.
Defined.
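For reference, the very same term can of course be written directly in Gallina, without any tactics. The name identity'' below is ours, and we drop the intermediate let for brevity.

```coq
(* The term the proof scripts above construct, written by hand. *)
Definition identity'' : forall (α : Type), α -> α :=
  fun (α : Type) (x : α) => x.
```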

A Tale of Two Worlds, and Some Bridges

Constructing proof terms directly in Gallina often happens when one is writing dependently-typed definitions. For instance, we can write a type-safe from_option function (inspired by this very nice write-up) such that the option to unwrap shall be accompanied by a proof that said option contains something. This extra argument is used in the None case to derive a proof of False, from which we can derive anything.

Definition is_some {α} (x : option α) : bool :=
  match x with Some _ => true | None => false end.

Lemma is_some_None {α} (x : option α)
  : x = None -> is_some x <> true.
Proof. intros H. rewrite H. discriminate. Qed.

Definition from_option {α}
    (x : option α) (some : is_some x = true)
  : α :=
  match x as y return x = y -> α with
  | Some x => fun _ => x
  | None => fun equ => False_rect α (is_some_None x equ some)
  end eq_refl.

In from_option, we construct two proofs without using tactics: a proof of x = x (namely eq_refl, passed to the dependent match), and, in the None branch, a proof of False (namely is_some_None x equ some).
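As a quick sanity check (not in the original text), note that for a concrete option such as Some 2, is_some (Some 2) reduces to true, so eq_refl is accepted as the required proof.

```coq
(* is_some (Some 2) computes to true, so eq_refl : is_some (Some 2) = true. *)
Compute from_option (Some 2) eq_refl.
(* => 2 : nat *)
```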
We can use another approach. We can decide to implement from_option with a proof script.

Definition from_option' {α}
    (x : option α) (some : is_some x = true)
  : α.

Proof.
  case_eq x.
  + intros y _.
    exact y.
  + intros equ.
    rewrite equ in some.
    now apply is_some_None in some.
Defined.

There is a third approach we can consider: mixing Gallina terms and tactics. This is possible thanks to the ltac:() feature.

Definition from_option'' {α}
    (x : option α) (some : is_some x = true)
  : α :=
  match x as y return x = y -> α with
  | Some x => fun _ => x
  | None => fun equ => ltac:(rewrite equ in some;
                             now apply is_some_None in some)
  end eq_refl.

When Coq encounters ltac:(), it treats it as a hole. It sets up a corresponding goal, and tries to solve it with the supplied tactic.
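A minimal (and admittedly contrived) illustration of this mechanism: the whole body of a definition can be produced by a tactic. The name two below is ours.

```coq
(* The hole standing for the body is solved by the exact tactic. *)
Definition two : nat := ltac:(exact 2).
```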
Conversely, there exist ways to construct terms in Gallina when writing a proof script. Certain tactics take such terms as arguments. Besides, Ltac provides constr:() and uconstr:() which work similarly to ltac:(). The difference between constr:() and uconstr:() is that Coq will try to assign a type to the argument of constr:(), but will leave the argument of uconstr:() untyped.
For instance, consider the following tactic definition.

Tactic Notation "wrap_id" uconstr(x) :=
  let f := uconstr:(fun x => x) in
  exact (f x).

Neither x (the argument of wrap_id) nor f (the anonymous identity function) is typed. It is only when they are composed together as an argument of exact (which expects a typed argument, more precisely one of the type of the goal) that Coq actually tries to typecheck the result.
As a consequence, wrap_id generates a specialized identity function for each specific context.

Definition zero : nat := ltac:(wrap_id 0).

The generated anonymous identity function is fun x : nat => x.

Definition empty_list α : list α := ltac:(wrap_id nil).

The generated anonymous identity function is fun x : list α => x. Besides, we do not need to give more type information about nil. If wrap_id were to be expecting a typed term, we would have to replace nil by (@nil α).
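To make the last remark concrete, here is a sketch of a typed variant (wrap_id' is a hypothetical name) that uses constr instead of uconstr for its argument. Because constr:() elaborates the argument when the tactic is invoked, nil alone no longer suffices and we must supply the implicit type explicitly.

```coq
(* Same as wrap_id, but the argument is typechecked eagerly. *)
Tactic Notation "wrap_id'" constr(x) :=
  let f := uconstr:(fun x => x) in
  exact (f x).

(* nil must be annotated as (@nil α) for elaboration to succeed. *)
Definition empty_list' α : list α := ltac:(wrap_id' (@nil α)).
```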

Beware the Automation Elephant in the Room

Proofs and computational programs are encoded in Coq as terms, but there is a fundamental difference between them, and it is highlighted by one of the axioms provided by the Coq standard library: proof irrelevance.
Proof irrelevance states that two proofs of the same theorem (i.e., two proof terms which share the same type) are essentially equivalent, and can be substituted without threatening the trustworthiness of the system. From a formal methods point of view, it makes sense. Even if we value “beautiful proofs,” we mostly want correct proofs.
The same reasoning does not apply to computational programs. Two functions of type nat -> nat -> nat are unlikely to be equivalent. For instance, add, mul, and sub share the same type, but compute totally different results.
Using tactics such as auto to generate terms which do not live inside Prop is risky, to say the least. For instance,

Definition add (x y : nat) : nat := ltac:(auto).

This works, but it is certainly not what you would expect:
add = fun _ y : nat => y
     : nat -> nat -> nat
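Assuming auto indeed produced fun _ y : nat => y, as printed above, the defect is easy to exhibit: the following obviously wrong equation holds by mere computation. The lemma name is ours.

```coq
(* add ignores its first argument, so "1 + 1 = 1" holds by reflexivity. *)
Lemma add_is_wrong : add 1 1 = 1.
Proof. reflexivity. Qed.
```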
That being said, if we keep that in mind, and assert the correctness of the generated programs (either by providing a proof, or by extensively testing them), there is no particular reason not to use Ltac to generate terms when it makes sense.
Dependently-typed programming can help here. If we decorate the return type of a function with the expected properties of the result with respect to the function’s arguments, we can ensure the function is correct, and conversely prevent tactics such as auto from generating “incorrect” terms. Interested readers may refer to the dedicated series on this very website.
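A minimal sketch of this idea, using a subset type (the name add' is ours): by promising in the type that the result equals x + y, no tactic can get away with returning the wrong number.

```coq
(* The sig type { z : nat | z = x + y } forces the result to be correct. *)
Definition add' (x y : nat) : { z : nat | z = x + y }.
Proof.
  exists (x + y).
  reflexivity.
Defined.
```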