
First impressions of Coq and “Certified Programming with Dependent Types”

A simplistic view of the strongly typed functional programming ecosystem is that its two most influential languages are Haskell and ML. Haskell is my language of choice, so when it came to learning dependently-typed programming I stayed on the Haskell side of the spectrum and went with Agda and Idris. I chose those two over the ML-inspired Coq, most often advertised as a proof assistant rather than a programming language, planning to learn it “when I find some free time”. I probably wouldn’t have found that time if it weren’t for a friend of mine who recently picked up the book “Certified Programming with Dependent Types” by Adam Chlipala of MIT. Having a companion to discuss the ideas in the book was a perfect opportunity to pick it up – the truth is I found out about the book well over a year ago and since then it has stayed on my always-too-long must-read-this-some-day list. So far I have read around a third of the book and would like to share some of my first impressions, both about the book, which I will refer to as CPDT, and about Coq.

(Note: In what follows I will compare Coq to Agda and Idris, but be aware that despite the similarity of their features these languages don’t aim to be the same thing. Coq is a proof assistant with an extra feature of code extraction that allows you to turn your proofs into code – if you have ever heard about “programs as proofs”, this is it. Idris is a programming language with extra features that allow you to prove your code correct. I’m not really sure how to classify Agda. It is definitely on the programming-language end of the spectrum – it allows you to prove your code correct but does not provide any extra built-in proof support. At the same time, turning Agda code into working programs is non-trivial.)

Let me start off by saying that I don’t have any real-life Coq project on the horizon, so my learning is not motivated by a need to solve any practical problem. My main driving force for learning Coq is an interest in programming languages and seeing how Coq compares to Agda and Idris. A common thing with dependently-typed languages is that the types can get too complicated for the programmer to comprehend, so a language needs an interactive mode that provides the programmer with compiler feedback about the types. This is true for Agda, Idris and Coq. Agda offers great support for holes: the programmer can insert question marks into the program code, and once it is re-compiled in the editor (read: Emacs) the question marks become holes, i.e. places where the Agda compiler gives the user feedback about the expected type and the bindings available in the hole’s context, together with some nice inference features that can automatically fill in the contents of holes. So in Agda one proves by constructing terms of the appropriate types.

Coq is different, as it relies on a mechanism called “tactics”. Once the user writes down a type (a theorem), he is presented with a set of goals to prove. Applying a tactic transforms the current goal into a different goal (or several goals). Conducting consecutive steps of the proof (i.e. applying several tactics) should lead to some trivial goal that follows from a definition and ends the proof. To work with Coq I decided to use Proof General, an Emacs extension for working with proofs (many other proof assistants are supported besides Coq)1. It launches a Coq process in the background and essentially integrates writing code with proving. With Proof General I can easily step through my proofs to see how the goals are transformed by the use of tactics.

Idris falls somewhere between Agda and Coq. As stated earlier, it is mostly a programming language, but it also provides tactic-based proving. So for example when I write a definition that requires an explicit proof to typecheck, idris-mode launches an interactive REPL in which I can conduct the proof in a fashion similar to Proof General, and once I’m finished the proof is inserted into the source code. The result looks something like this:

par : (n : Nat) -> Parity n
par Z = even {n=Z}
par (S Z) = odd {n=Z}
par (S (S k)) with (par k)
  par (S (S (j + j)))     | even ?= even {n = S j}
  par (S (S (S (j + j)))) | odd  ?= odd {n = S j}
 
---------- Proofs ----------
 
Basics.par_lemma_2 = proof
  intros
  rewrite sym (plusSuccRightSucc j j)
  trivial
 
 
Basics.par_lemma_1 = proof
  intros
  rewrite sym (plusSuccRightSucc j j)
  trivial

The last time I checked, once a proof was completed and added to the source code it was not possible in Idris to step through it back and forth to see how the goals are transformed. (Things might have changed since I last checked.)
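
For contrast, here is roughly what the hole-driven style described above looks like in Agda – a minimal sketch for illustration: the ? marks are holes, and after loading the file agda-mode turns each of them into an interactive goal showing the expected type and the bindings in scope.

data Nat : Set where
  zero : Nat
  suc  : Nat → Nat

_+_ : Nat → Nat → Nat
zero  + n = ?   -- goal of type Nat, with n : Nat in scope
suc m + n = ?   -- goal of type Nat; refining it to suc (m + n) completes the definition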

So far I’ve been doing rather basic stuff with Coq, so I haven’t seen much that wouldn’t also be possible in Agda or Idris. The biggest difference is that Coq feels like a much more grown-up language than either of the other two. One totally new thing I have learned so far is co-induction, but I’m only starting with it and won’t go into details, leaving that for a separate post. (Agda also supports co-induction.)

As for the CPDT book, I have to say it is a challenging read: it’s very dense and focuses on more advanced Coq techniques without going into the details of the basics. As such it is a great demonstration of what can be done in Coq, but not a good explanation of how it can be done. Depending on what you are expecting, this book might or might not be what you want. As stated earlier, I don’t plan on applying Coq in any project but rather want to see a demo of Coq’s features and possibly pick up some interesting theoretical concepts. For that purpose CPDT works quite well, although I am not entirely happy about not being able to fully understand some of the demonstrated techniques. CPDT is definitely not a self-contained read, which I believe was a conscious decision on the author’s side. Discussions with people on the #coq IRC channel and various posts on the internet lead me to the conclusion that CPDT is a great book for people who have been using Coq for some time and want to take their skills to the next level. The main theme of the book is proof automation, that is, replacing sequences of manually applied tactics with automated decision procedures, adjusted to the problem at hand, that can construct proofs automatically. Indeed, tactic-based proofs are difficult to understand and maintain. Consider this proof of the simple property that n + 0 = n:

Theorem n_plus_O : forall (n : nat), plus n O = n.
  intros.          (* move n into the context; goal: plus n O = n *)
  induction n.     (* two goals: a base case for O and an inductive step for S n *)
  reflexivity.     (* base case: plus O O computes to O *)
  simpl.           (* inductive step: simplify the goal to S (plus n O) = S n *)
  rewrite IHn.     (* rewrite with the induction hypothesis plus n O = n *)
  reflexivity.     (* S n = S n *)
Qed.

To understand that proof one has to step through it to see how the goals are transformed, or have enough knowledge of Coq to know it without stepping through. Throughout the book Adam Chlipala demonstrates the power of proof automation using his tactic called crush, which feels like a magic wand since it usually ends the proof immediately (or does so after some minimal guidance). I admit this is a bit frustrating, as I don’t feel I learn anything from seeing crush applied to magically finish a proof. Like I said, a good demo of what can be done, but without an explanation. The worst thing is that crush does not seem to be explained anywhere in the book, so readers wanting to understand it are left on their own (well, almost on their own).

What about those of you who want to learn Coq starting from the basics? Software Foundations seems to be the introductory book about Coq and – given that the main author is Benjamin Pierce – it looks like you can’t go wrong with it. I am not yet sure whether I’ll dive into SF, but most likely not, as that would mean giving up on CPDT, and for me a general coverage of more advanced topics is more important than an in-depth treatment of the basics.

  1. Other interactive modes are available for Coq, for example CoqIDE, shipped by default with the Coq installation.

Data is evidence

Recently I’ve been reading the book “Types and Programming Languages” by Benjamin C. Pierce. It’s a great introduction to the theory behind the type systems of both functional and object-oriented languages. In the first chapter there’s this really brilliant sentence that captures what a type system does:

A type system can be regarded as calculating a kind of static approximation to the run-time behaviours of the terms in a program.

So if a type system is a static approximation of a program’s behaviour at runtime, a natural question to ask is: “how accurate can this approximation be?” It turns out it can be very accurate.

Let’s assume that we have the following definition of natural numbers1:

data Nat : Set where
  zero : Nat
  suc  : Nat → Nat

The first constructor – zero – says that zero is a natural number. The second – suc – says that the successor of any natural number is also a natural number. This representation allows us to encode 0 as zero, 1 as suc zero, 2 as suc (suc zero) and so on2. Let’s also define a type of booleans to represent logical true and false:

data Bool : Set where
  false : Bool
  true  : Bool

We can now define a ≥ operator that returns true if its arguments are in the greater-than-or-equal relation and false if they are not:

_≥_ : Nat → Nat → Bool
m     ≥ zero  = true
zero  ≥ suc n = false
suc m ≥ suc n = m ≥ n

This definition has three cases. The first says that any natural number is greater than or equal to zero. The second says that zero is not greater than or equal to any successor. The final case says that two non-zero natural numbers are in the ≥ relation if their predecessors are also in that relation. What if we replace false with true in our definition?

_≥_ : Nat → Nat → Bool
m     ≥ zero  = true
zero  ≥ suc n = true
suc m ≥ suc n = m ≥ n

Well… nothing. We get a function with nonsense semantics, but other than that it is well-typed. The type system won’t catch this mistake. The reason is that our function returns a result but it doesn’t say why that result is true. And since ≥ doesn’t give us any evidence that the result is correct, there is no way of statically checking whether the implementation is correct or not.
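
In fact, with the broken definition nothing stops us from writing the following throwaway check – it typechecks and evaluates to true, even though 0 is certainly not greater than or equal to 1:

nonsense : Bool
nonsense = zero ≥ suc zero   -- accepted, and computes to true with the broken definition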

But it turns out that we can do better using dependent types. We can write a comparison function that proves its result correct. Let’s forget our definition of ≥ and instead define a datatype called _≥_:

data _≥_ : Nat → Nat → Set where
  ge0 : {  y : Nat}         → y     ≥ zero
  geS : {x y : Nat} → x ≥ y → suc x ≥ suc y

This type has two Nat indices that parametrize it. For example, 5 ≥ 3 and 2 ≥ 0 are two distinct types. Notice that each constructor can only be used to construct values of a specific type: ge0 constructs a value that belongs to types like 0 ≥ 0, 1 ≥ 0, 3 ≥ 0 and so on. geS, given a value of type x ≥ y, constructs a value of type suc x ≥ suc y.

There are a few interesting properties of the _≥_ datatype. Notice that not only can ge0 construct a value of any type y ≥ 0, it is also the only possible value of such types. In other words, the only value of 0 ≥ 0, 1 ≥ 0 or 3 ≥ 0 is ge0. Types like 5 ≥ 3 also have only one value (in the case of 5 ≥ 3 it is geS (geS (geS ge0))). That’s why we call _≥_ a singleton type. Note also that there is no way to construct values of types like 0 ≥ 3 or 2 ≥ 5 – there are no constructors that we could use to get a value of such a type. We will thus say that the _≥_ datatype is a witness (or evidence): if we can construct a value for a given pair of indices, then this value is a witness that the relation represented by the datatype holds. For example geS (geS ge0) is a witness that the relations 2 ≥ 2 and 5 ≥ 2 hold, but there is no way to provide evidence that 0 ≥ 1 holds. Notice that the previous definition of the ≥ function had three cases: one base case for true, one base case for false and one inductive case. The _≥_ datatype has only two cases: one being the equivalent of true and one inductive. Because a value of _≥_ exists if and only if its two indices are in the ≥ relation, there is no need to represent false explicitly.
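
To make the witness idea concrete, here is the single inhabitant of 5 ≥ 3 written out as an Agda definition (with the numerals expanded into suc and zero, as per footnote 2):

five≥three : suc (suc (suc (suc (suc zero)))) ≥ suc (suc (suc zero))
five≥three = geS (geS (geS ge0))   -- ge0 witnesses 2 ≥ 0 here; each geS bumps both sides by one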

We now have a way to express a proof that one value is greater than or equal to another. Let’s construct a datatype that can tell us whether one value is greater than or equal to another and supply us with a proof of that fact:

data Order : Nat → Nat → Set where
  ge : {x : Nat} {y : Nat} → x ≥ y → Order x y
  le : {x : Nat} {y : Nat} → y ≥ x → Order x y

Order is indexed by two natural numbers. These numbers can be anything – there is no restriction on the indices in either constructor. We can construct values of Order using one of two constructors: ge and le. Constructing a value of Order using the ge constructor requires a value of type x ≥ y. In other words, it requires a proof that x is greater than or equal to y. Constructing a value of Order using the le constructor requires the opposite proof – that y ≥ x. The Order datatype is the equivalent of Bool, except that it is specialized to one particular relation instead of being a general statement of truth or falsehood. It also carries a proof of the fact that it states.

Now we can write a function that compares two natural numbers and returns a result saying whether the first number is greater than or equal to the second one3:

order : (x : Nat)(y : Nat) → Order x y
order x       zero    = ge ge0
order zero    (suc b) = le ge0
order (suc a) (suc b) with order a b
order (suc a) (suc b) | ge a≥b = ge (geS a≥b)
order (suc a) (suc b) | le b≥a = le (geS b≥a)

In this implementation ge plays the role of true and le plays the role of false. But if we try to replace le with ge, the way we previously replaced false with true, the result will not be well-typed:

order : (x : Nat)(y : Nat) → Order x y
order x       zero    = ge ge0
order zero    (suc b) = ge ge0 -- TYPE ERROR
order (suc a) (suc b) with order a b
order (suc a) (suc b) | ge a≥b = ge (geS a≥b)
order (suc a) (suc b) | le b≥a = le (geS b≥a)

Why? It is a direct result of the definitions we used. In the second equation of order, x is zero and y is suc b. To construct a value of Order x y using the ge constructor we must provide a proof that x ≥ y. In this case we would have to prove that zero ≥ suc b, but as discussed previously there is no constructor of _≥_ that could construct a value of this type. Thus the whole expression is ill-typed and the incorrectness of our definition is caught at compile time.

Summary

The idea that types can represent logical propositions and values can be viewed as proofs of those propositions is not new – it is known as the Curry-Howard correspondence (or isomorphism) and I bet many of you have heard that name. The example presented here is taken from “Why Dependent Types Matter”. See this recent post for a few more words about that paper.

  1. All code in this post is in Agda.
  2. For the sake of readability I will write Nats as numerals, not as applications of suc and zero. So remember that whenever I write 2 I mean suc (suc zero).
  3. Note that in Agda a≥b is a valid identifier, not an application of the _≥_ operator.

Idris – first impressions

During the last few weeks I have got a pretty good grip on the basics of dependent types and Agda. Programming in Agda is fun, but nevertheless I decided to experiment with other dependently-typed programming languages. Back in March I attempted to learn Idris from one of Edwin Brady’s presentations, but having no knowledge of dependent types I had to give up after about 30 minutes of the first video. Now that I know the basics of Agda I decided to give Idris another try. This time it was much simpler. Reading the official Idris tutorial and doing some experiments took me about 5 hours. Below are some of my first impressions (I’m underlining that phrase to make it clear that some of these opinions may change in the future).

  • The standard library in Idris feels friendlier than Agda’s. It is bundled with the compiler and doesn’t require additional installation (unlike Agda’s). The Prelude is imported into every module by default, so the programmer can use Nat, Bool, lists and so on out of the box. There are also some similarities with the Haskell Prelude. All in all, the standard library in Idris is much less daunting than Agda’s.
  • Idris is really a programming language, i.e. one can write programs that actually run. Agda feels more like a proof assistant. According to one of the tutorials I’ve read you can run programs written in Agda, but it is not as straightforward as in Idris. I personally haven’t run a single Agda program – I’m perfectly happy that they typecheck.
  • Compared to Agda, Idris has limited Unicode support. I never felt the need to use Unicode in my source code until I started programming in Agda – after just a few weeks it feels like an essential thing. I think Idris allows Unicode only in identifiers, not in operators, which means I have to use awkward operators like <!= instead of ≤. I recall seeing some discussions about Unicode on the #idris channel, so I wouldn’t be surprised if that changed soon.
  • One of the biggest differences between Agda and Idris is the approach to proofs. In Agda a proof is part of a function’s code. The programmer is assisted by agda-mode (in Emacs), which guides code writing according to the types (a common feature of dependently typed languages). Over the past few weeks I’ve come to appreciate the convenience offered by agda-mode: automatic generation of case analyses, refinement of holes, and autocompletion of code based on types, to name a few. Idris-mode for Emacs doesn’t support interactive development. One has to use the interactive proof mode provided in the Idris REPL – this means switching between terminal windows, which might be a bit inconvenient. Proofs in Idris can be separated from the code they are proving, which makes the code itself much clearer. In proof mode one can use tactics, which are methods of transforming proof terms in order to reach a certain goal. The generated proof can then be added to the source file. It is hard for me to decide which method I prefer. The final result is more readable in Idris, but using tactics is not always straightforward. I also like the interactive development offered by Agda. Tough choice.
  • Both languages are poorly documented. That said, Idris has much less documentation (mostly papers and presentations by Edwin Brady). I expect this to change, as the Idris community seems to be growing (slowly, but still).
  • One thing I didn’t like in Idris is the visibility qualifiers used to define how functions and datatypes are exported from a module. There are three: public (export the name and the implementation), private (don’t export anything) and abstract (export the type signature, but not the implementation). This is slightly different than in Haskell – I think the difference comes from properties of dependent types. What I didn’t like are the rules and syntax used to define export visibility. Visibility for a function or datatype can be defined by annotating it with one of the three keywords: public, private, abstract. If no definition in a module is annotated, everything is public. But if there is at least one annotation, everything without an annotation is private. Unless you changed the default visibility, in which case everything without an annotation can be abstract! In other words, if you see a definition without an annotation it means that: a) it can be public, but you have to check whether all other definitions are also unannotated; b) it can be private, if at least one other definition is annotated – again, you have to check the whole file; c) but it can be abstract as well – you need to check the file to see if the default export level was changed. The only way to be sure – except for nuking the entire site from orbit – is annotating every function with an export modifier, but that feels very verbose. I prefer Haskell’s syntax for defining what is exported and what is not, and I think it could easily be extended to support three levels of export visibility.
  • Unlike Agda, Idris has case expressions. They have some limitations, however. I’m not sure whether these limitations come from properties of dependently typed languages or whether they are just simplifications in the Idris implementation that could theoretically be avoided.
  • Idris has lots of other cool features. Idiom brackets are syntactic sugar for applicative style: you can write [| f a b c |] instead of pure f <*> a <*> b <*> c. Idris has syntax extensions designed to support development of EDSLs. Moreover, tuples are available out of the box, there’s do-notation for monadic expressions, there are list comprehensions and a Foreign Function Interface.
  • One feature that I’m a bit sceptical about is “implicit conversions”, which allow one to define implicit casts between arguments and write expressions like “Number ” ++ x, where x is an Int. I can imagine this could be a misfeature.
  • Idris has “using” notation that allows one to introduce definitions that are visible throughout a block of code. The most common use seems to be in definitions of data types. Agda does it better IMO by bringing a datatype’s parameters into scope in its constructors (see the short Agda sketch after this list).
  • Idris seems to be developed more actively. The repos are hosted on GitHub, so anyone can easily contribute. This is not the case with Agda, which has Darcs repos, and the whole process feels closed (in the sense of “not open to the community”). On the other hand, the mailing list for Idris is set up on Google lists, which is a blocker for me.
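
To illustrate the point about “using” notation, here is a minimal Agda sketch of the alternative: a datatype parameter such as A is automatically in scope in the types of all constructors, so no extra “using”-style block is needed.

data List (A : Set) : Set where
  []  : List A                   -- the parameter A is in scope here…
  _∷_ : A → List A → List A      -- …and here, with no additional declarations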

All in all, programming in Idris is also fun, although it is a slightly different kind of fun than in Agda. I must say that I miss two features from Agda: interactive development in Emacs and Unicode support. Given how actively Idris is developed, I imagine it could soon become more popular than Agda. Perhaps these “missing” features will also be added one day?

As an exercise I rewrote the code from the “Why dependent types matter” paper from Agda (see my previous post) to Idris. The code is available on GitHub.
