The DeepSpec Project is a research project led by several US East Coast universities (University of Pennsylvania, MIT, Yale University and Princeton University) that aims to *“push forward the state of the art in applying computer proof assistants to verify realistic software and hardware stacks at scale”*. It consists of several smaller projects, including formal verification efforts for a hypervisor (CertiKOS), LLVM (Vellvm), a compiler for Coq (CertiCoq) and GHC’s Core language (CoreSpec).

The goal of the DeepSpec Summer School was to introduce people to real-life formal verification using the Coq proof assistant. The school was divided into three parts. All the lectures can be found on a YouTube channel. Coq code for the courses is available on GitHub. The summer school’s web page also provides installation instructions as well as other supplementary material (click on a given lecture or select from the “Lectures” tab).

The first three days of the summer school were a very intensive introductory course on Coq led by Benjamin Pierce. This essentially covered the first volume of Software Foundations. (Aside: For those of you who don’t know yet, the original Software Foundations online book has been split into two volumes: Logical Foundations and Programming Language Foundations. Also, a third volume has been added to the series: Verified Functional Algorithms by Andrew Appel. All three volumes can be found here, although this link will likely break once these draft versions become an official release. There are also plans for two more volumes, one on Separation Logic and another on Systems Verification.)

The first full week of the school consisted of four courses centred around programming language verification:

- *Property-based random testing with QuickChick* by Benjamin Pierce. I assume many of you have heard of the Haskell library called QuickCh**e**ck. It offers property-based testing: the programmer writes properties that should hold for a given piece of code and QuickCheck tests whether they hold for randomly generated test data. QuickCh**i**ck is an implementation of the same idea in Coq. Now, you might wonder what the point of doing such a thing in Coq is. After all, Coq is about formally proving that a given property is always true, not randomly testing whether it holds. I was sceptical about this as well, but it actually turns out to be quite a good idea. The point is, specifications are difficult to write and often even more difficult to prove. They are especially difficult to prove when they are false ;-) And this is exactly where QuickChick can be beneficial: by trying to find a counter-example for which a stated property does not hold. This can indeed save the programmer from spending hours on trying to prove something that is false. If QuickChick doesn’t find a counter-example, we can start writing a formal proof. This course also gives a nice overview of type classes in Coq.
- *The structure of a verified compiler* by Xavier Leroy. This series of lectures was based on CompCert, a formally verified C compiler. The ideas behind formal verification of a compiler were presented on a compiler from Imp (a toy imperative language used in Software Foundations) to a simple virtual machine. The fourth, final lecture covered the CompCert project itself. To me this was the most interesting course of the summer school.
- *Language specification and variable binding* by Stephanie Weirich. Software Foundations is a great book, but it completely omits one topic that is very important in formalizing programming languages: dealing with variable bindings. In this course Stephanie presented the “locally nameless” representation of variable bindings. This is something I had planned to learn for a very long time but couldn’t find the time for.
- *Vellvm: Verifying the LLVM* by Steve Zdancewic. For a change, in this course Imp was compiled to a simplified variant of LLVM, the compilation process being verified, of course. It was also a nice introduction to LLVM itself.
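The property-based testing idea behind QuickCheck and QuickChick can be sketched in a few lines of Haskell. This is a hand-rolled toy, not the actual QuickCheck or QuickChick API: `prop_reverseKeepsHead` is a deliberately false property of my own invention, and `findCounterExample` stands in for the library’s random-input driver (here the inputs are simply enumerated):

```haskell
-- A deliberately false property: "reversing a list never changes its head".
-- It fails for any list whose first and last elements differ.
prop_reverseKeepsHead :: [Int] -> Bool
prop_reverseKeepsHead xs = null xs || head xs == head (reverse xs)

-- A tiny stand-in for QuickCheck's driver: test the property on a list of
-- inputs and report the first counter-example found, if any.
findCounterExample :: (a -> Bool) -> [a] -> Maybe a
findCounterExample prop =
  foldr (\x acc -> if prop x then acc else Just x) Nothing

main :: IO ()
main = print (findCounterExample prop_reverseKeepsHead
                                 [[], [1], [1,1], [1,2]])
-- [1,2] is a counter-example: its head is 1, but reversing gives head 2
```

Finding such a counter-example before attempting a proof is exactly the workflow described above: it tells you the specification is false before you sink hours into Coq.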

Courses during the second week put more focus on verifying computer systems. Again, there were four courses:

- *Certifying software with crashes* by Frans Kaashoek and Nickolai Zeldovich. The topic of this course was certification of hard-drive operating routines, including bad-sector remapping and a simple virtual RAID 1 implementation. Although still using toy examples, the specifications presented during this course were much more abstract, giving a good idea of how to scale to real-world system verification. I found this course very difficult to follow, although the lectures were really superb. Note: materials for this one course are available in a separate GitHub repo.
- *CertiKOS: Certified kit operating systems* by Zhong Shao. Ok, I admit I was completely unable to follow this series of lectures. Way too difficult. In fact, I skipped two out of four lectures because I figured it would make more sense to work on homework assignments for other lectures.
- *Program-specific proof automation* by Adam Chlipala. Unsurprisingly to those who know Adam’s *“Certified Programming with Dependent Types”* book, his course focused on proof automation using Ltac. One lecture was specifically dedicated to proofs by reflection.
- *Verified Functional Algorithms* by Andrew Appel. This course covered a majority of the third volume of the new Software Foundations.

First and foremost let me say this: DeepSpec Summer School was the best research meeting I have ever attended. The courses were really good and inspiring, but the most important thing that made this summer school so great was the fantastic people who attended it. Spending evening hours together working on homework assignments was especially enjoyable.

There might be a 2018 edition of the summer school so be on the lookout – this is a really great event for anyone interested in Coq and formal verification.

RWO was written by Yaron Minsky, Anil Madhavapeddy and Jason Hickey. It is one of a handful of books on OCaml. Other titles out there are “OCaml from the Very Beginning” and “More OCaml: Algorithms, Methods and Diversions” by John Whitington, and “Practical OCaml” by Joshua Smith. I decided to go with RWO because when I asked “*what is the best book on OCaml*” on the `#ocaml` IRC channel, RWO was the unanimous response from several users. The title itself is obviously piggybacking on the earlier “Real World Haskell” released in the same series by O’Reilly, which was in general a good book (though it had its flaws).

The first nine chapters comprise about 40% of the book (190 pages out of 470 total) and cover the basics of OCaml: various data types (lists, records, variants), error handling, imperative programming (e.g. mutable variables and data structures, I/O) and the basics of the module system. Chapters 10 through 12 present advanced features of the module system and introduce the object-oriented aspects of OCaml. The language ecosystem (i.e. tools and libraries) is discussed in chapters 13 through 18. The remaining chapters, 19 through 23, go into the details of the OCaml compiler, like the garbage collector and the Foreign Function Interface.

When I think back to reading “Real World Haskell” I recall that quite a lot of space was dedicated to explaining various basic functional programming concepts in detail. “Real World OCaml” is much more dense. It approaches teaching OCaml just as if it were any other programming language, without making a big deal of the functional programming model. I am much more experienced now than when I read RWH four years ago, and this is exactly what I wanted. I wonder, however, how this approach will work for people new to functional programming. It reminds me of my early days as a functional programmer. I began learning Scala having previously learned Scheme and Erlang (both unusual for functional languages in lacking a static type system). Neither Scala nor OCaml is a pure functional language: they allow free mixing of functional and imperative (side-effecting) code. They also support object-oriented programming. My plan in learning Scala was to learn functional programming, and I quickly realized that I was failing. Scala simply offered too many back-doors that allowed escaping into the imperative world. So instead of forcing me to learn a new way of thinking, it allowed me to do things the old way. OCaml seems to be exactly the same in this regard, and RWO offers beginners little guidance towards thinking functionally. Instead, it gives them a full arsenal of imperative features early on in the book. I am not entirely convinced that this approach will work well for people new to FP.

“Real World OCaml” was published less than three years ago, so it is a fairly recent book. Quite surprisingly, then, several sections have already gone out of date. The code does not work with the latest version of the OCaml compiler and requires non-obvious changes to get working. (You can of course solve the problem by working with an old version of the OCaml compiler.) I was told on IRC that the authors are already working on a second edition of the book to bring it up to date with today’s OCaml implementation.

Given all of the above, my verdict on “Real World OCaml” is that it is a really good book about OCaml itself (despite being slightly outdated), but not necessarily the best book on the basics of functional programming.

I believe Coq’Art was the first book published on Coq. There are two editions – a 2004 hardcover version and a 2010 paperback version – but as far as I know there are no differences between them. Too bad the 2010 edition was not updated for the newest versions of Coq – some of the code examples don’t work in the newest compiler. Coq’Art takes a theoretical approach, i.e. it teaches Coq largely by explaining how the rules of the Calculus of Constructions work. There are also practical elements like case studies and exercises, but they do not dominate the book.

Personally, I found Coq’Art to be a very hard read. Not because it dives too much into theory – it doesn’t – but because the presentation seems chaotic. For example, the description of a single tactic can be spread throughout several places in the book. In principle, I don’t object to extending an earlier presentation with new details once the reader gets hold of some new concepts, but I feel that Coq’Art definitely goes too far. Coq’Art also presents material in a very unusual order. Almost every introduction to Coq or any other functional language begins with defining data types. Coq’Art introduces them in chapter 6. On the other hand, sorts and universes – something I would consider an advanced concept for anyone who is not familiar with type-level programming – are presented in the second chapter. (Note that the first chapter is a very brief overview of the language.) By contrast, CPDT goes into a detailed discussion of universes in chapter 12 and SF does not seem to cover them at all.

Overall, Coq’Art is of limited usefulness to me. To tell the truth, this is not because of its focus on theory rather than practice, but because of the language style, which I find rather inaccessible. Many times I had problems understanding passages I was reading, forcing me to re-read them again and again, trying to figure out the message the authors were trying to convey. I did not have such problems with CPDT, SF, or any other book I have read in the past few years. At the moment I have given up on the idea of reading the book from cover to cover. Nevertheless, I find Coq’Art a good supplementary reading for SF, most importantly because of the sections that explain in detail the inner workings of various tactics.

As mentioned at the beginning, I already wrote a first impressions post about CPDT. Back then I said the book “is a great demonstration of what can be done in Coq but not a good explanation of how it can be done”. Having read all of it, I stand by my claim. CPDT does not provide a thorough and systematic coverage of the basics, but instead focuses on advanced topics. As such, it is not the best place to start for beginners, but it is a priceless resource for Coq practitioners. The main focus of the book is proof automation with Ltac, Coq’s language for writing automated proof procedures. The reader is exposed to Ltac early on in the book, but the detailed treatment of Ltac is delayed until chapter 14. This is quite surprising, given that it is hard to understand the earlier chapters without knowing Ltac. Luckily, the chapters are fairly independent of each other and can be read in any order the reader wishes. It is definitely worth diving into chapter 14 and fragments of chapter 13 as early as possible – it makes understanding the book a whole lot easier. So far I have already read chapter 14 three times. As I learn Coq more and more, I discover new bits of knowledge with each read. In fact, I expect to be going back to CPDT regularly.

Coq’Art and CPDT approach teaching Coq in totally different ways. It might then be surprising that Software Foundations uses yet another approach. Unlike Coq’Art it is focused on practice, and unlike CPDT it places a very strong emphasis on learning the basics. I feel that SF makes the Coq learning curve as flat as possible. The main focus of SF is applying Coq to formalizing programming language semantics, especially type systems. This should not come as a big surprise given that Benjamin Pierce, the author of SF, also authored *“Types and Programming Languages”* (TAPL), the best book on type systems and programming language semantics I have seen. It should also not be surprising that a huge chunk of material overlaps between TAPL and SF. I find this to be among the best things about SF. All the proofs that I read in TAPL make a lot more sense to me when I can convert them to a piece of code. This gives me a much deeper insight into the meaning of lemmas and theorems. Also, when I get stuck on an exercise, I can take a look at TAPL to see the general idea behind the proof I am implementing.

SF is packed with material and thus it is a very long read. Three months after beginning the book and spending about two days a week with it, I am halfway through. The main strength of SF is its plethora of exercises. (Coq’Art has some exercises, but not too many. CPDT has none.) They can take a lot of time – and I *really* mean a lot – but I think this is the only way to learn a programming language. Besides, the exercises are very rewarding. One downside of the exercises is that the book provides no solutions, which is bad for self-study. Moreover, the authors ask people not to publish the solutions on the internet, since “having solutions easily available makes [SF] much less useful for courses, which typically have graded homework assignments”. That being said, there are plenty of GitHub repositories that contain the solved exercises (I plead guilty!). Although it goes against the authors’ will, I consider this a really good thing for self-study: many times I have been stuck on exercises and was able to make progress only by peeking at someone else’s solution. This doesn’t mean I copied the solutions. I just used them to overcome difficulties, and in some cases ended up with proofs more elegant than the ones I found. As a side note, I’ll add that I do not share the belief that publishing solutions on the web makes SF less useful for courses. Students who want to cheat will get the solutions from other students anyway. At least that has been my experience as an academic teacher.

To sum up, each of these books presents a different approach. Coq’Art focuses on learning Coq by understanding its theoretical foundations. SF focuses on learning Coq through practice. CPDT focuses on advanced techniques for proof automation. Personally, I feel I’ve learned the most from SF, with CPDT a close second. YMMV.

The installation process went very smoothly on my Debian machine, but on openSUSE I ran into problems. After getting the latest compiler I wanted to install `ocamlfind`, a tool required by a project I wanted to play with. To my disappointment, the installation ended with an error:

```
[ERROR] The compilation of conf-ncurses failed at "pkg-config ncurses".
This package relies on external (system) dependencies that may be missing.
`opam depext conf-ncurses.1' may help you find the correct installation
for your system.
```

I verified that I had indeed installed the development files for the `ncurses` library as well as the `pkg-config` tool. Running the suggested `opam` command also didn’t find any missing dependencies, and the log files from the installation turned out to be completely empty, so I was left clueless. Googling revealed that I am not the first to encounter this problem, but offered no solution. I did some more reading on `pkg-config` and learned that: a) it is a tool that provides meta-information about installed libraries, and b) in order to recognize that a library is installed, it requires extra configuration files (aka `*.pc` files) provided by the library. Running `pkg-config --list-all` revealed that `ncurses` is not recognized as installed on my system, which suggested that the relevant `*.pc` files were missing. Some more googling revealed that the `ncurses` library can be configured and then compiled with the `--enable-pc-files` switch, which should build the files needed by `pkg-config`. I got the sources for the `ncurses` version installed on my system (5.7) only to learn that this build option is unsupported, which explains why the files are missing on my system. I got the sources for the latest version of `ncurses` (6.0), configured them with `--enable-pc-files` and compiled, only to learn that the `*.pc` files were not built. After several minutes of debugging I realized that for some unexplained reason the `configure`-generated script which should build the `*.pc` files (located at `misc/gen-pkgconfig`) did not receive the `+x` (executable) permission. After adding this permission manually, I ran the script and got five `*.pc` files for the `ncurses` 6.0 library. Then I had to edit the files to match the version of `ncurses` on my system – the relevant information can be obtained by running `ncurses5-config --version`. The only remaining thing was to place the five `*.pc` files where `pkg-config` can find them. On openSUSE this was `/usr/local/pkgconfig`, but this can differ between various Linux flavours.

After all these magical incantations, the installation of `ocamlfind` went through fine and I can enjoy a working OCaml installation on both of my machines. Now I’m waiting for the “Real World OCaml” book ordered from Amazon (orders shipped from UK Amazon to Poland tend to take around two weeks to arrive).

My patch allows you to do several interesting things. Firstly, it allows quoting of typed holes, i.e. expressions whose name starts with an underscore:

```haskell
[d| i :: a -> a
    i x = _ |]
```

This declaration quote will represent `_` using an `UnboundVarE` constructor. Secondly, you can now splice unbound variables:

```haskell
i :: a -> a
i x = $( return $ VarE (mkName "_") )

j :: a -> a
j x = $( return $ UnboundVarE (mkName "_") )
```

Notice that in a splice you can use either `VarE` or `UnboundVarE` to represent an unbound variable – they are treated the same.

A very important side-effect of my implementation is that you can actually quote unbound variables. This means that you can now use nested pattern splices, as demonstrated by one of the tests in the GHC testsuite:

```haskell
baz = [| \ $( return $ VarP $ mkName "x" ) -> x |]
```

Previously this code was rejected. The reason is that:

- a nested pattern splice is not compiled immediately, because it is possible that it refers to local variables defined outside of the bracket;
- the bracket is renamed immediately at the declaration site, and all the variables were required to be in scope at that time.

The combination of the above means that the pattern splice does not bring anything into scope (because it is not compiled until the outer bracket is spliced in), which led to `x` being out of scope. But now it is perfectly fine to have unbound variables in a bracket, so the above definition of `baz` is now accepted. When it is first renamed, `x` is treated as an unbound variable, which is now fine, and when the bracket is spliced in, the inner splice is compiled and correctly brings a binding for `x` into scope. Getting nested pattern splices to work was not my intention when I started implementing this patch, but it turned out we essentially got this feature for free.

One stumbling block during my work was typed Template Haskell. With normal, untyped TH I can place a splice at the top level of a file:

```haskell
$(return [ SigD (mkName "m")
                (ForallT [PlainTV (mkName "a")] []
                         (AppT (AppT ArrowT (VarT (mkName "a")))
                               (VarT (mkName "a"))))
         , FunD (mkName "m")
                [Clause [VarP (mkName "x")] (NormalB (VarE (mkName "x"))) []]
         ])
```

and this will build a definition that will be spliced into the source code. But converting this into a typed splice, by saying `$$(return ...`, resulted in a compiler panic. I reported this as #10945. The reason turned out to be quite tricky. When Template Haskell is enabled, top-level expressions are allowed. Each such expression is treated as an implicit splice. The problem with a typed TH splice is that it doesn’t really make sense at the top level and it should be treated as an implicit splice. Yet it was treated as an explicit splice, which resulted in a panic later in the compiler pipeline.

Another issue that came up with typed TH was that typed holes cannot be quoted, again leading to a panic. I reported this as #10946. This issue has not yet been solved.

The above work is now merged with HEAD and will be available in GHC 8.0.

My next plans include extending injective type families to fully match the expressiveness of functional dependencies, as outlined in Section 7.2 of the injectivity paper. I also hope to finally implement support for typed holes in Template Haskell. The patch was supposed to be trivial and I started working on it several months ago, but then I ran into several problems and abandoned it to focus on ITFs.

The idea behind injective type families is to infer the arguments of a type family from its result. For example, given the definition:

```haskell
type family F a = r | r -> a where
  F Char = Bool
  F Bool = Char
  F a    = a
```

if we know `(F a ~ Bool)`^1 then we want to infer `(a ~ Char)`. And if we know `(F a ~ Double)` then we want to infer `(a ~ Double)`. Going one step further, if we know `(F a ~ F b)` then – knowing that `F` is injective – we want to infer `(a ~ b)`.

Notice that in order to declare `F` as injective I used new syntax. Firstly, I used “`= r`” to introduce a name for the result returned by the type family. Secondly, I used syntax borrowed from functional dependencies to declare injectivity. For multi-argument type families this syntax allows declaring injectivity in only some of the arguments, e.g.:

```haskell
type family G a b c = r | r -> a c
```

Actually, you can even have kind injectivity, assuming that the type arguments have polymorphic kinds.
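To see what this inference buys us in practice, here is a small, self-contained sketch. It is my own illustrative example (the name `idF` is mine), and it assumes GHC 8.0 or later, where injectivity annotations are enabled by the `TypeFamilyDependencies` extension:

```haskell
{-# LANGUAGE TypeFamilies, TypeFamilyDependencies #-}

-- The family from the post, declared injective with "| r -> a".
type family F a = r | r -> a where
  F Char = Bool
  F Bool = Char
  F a    = a

-- Without the "| r -> a" annotation this type would be ambiguous: 'a'
-- occurs only under F. With injectivity, GHC can recover 'a' from the
-- result type at each call site.
idF :: F a -> F a
idF x = x

main :: IO ()
main = print (idF True)  -- from (F a ~ Bool), GHC infers (a ~ Char)
```

The call `idF True` gives rise to the constraint `(F a ~ Bool)`, and the injectivity annotation is exactly what licenses GHC to improve this to `(a ~ Char)` via the first equation.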

Obviously, to make use of the injectivity declared by the user, GHC needs to check that the injectivity annotation is true. And that’s the really tricky part that the paper focuses on. Here’s an example:

```haskell
type family T a = r | r -> a where
  T [a] = a
```

This type family returns the type of elements stored in a list. It certainly looks injective. Surprisingly, it is not. Say we have `(T [T Int])`. By the only equation of `T`, this gives us `(T [T Int] ~ T Int)`. And by injectivity we have `([T Int] ~ Int)`. We have just proved that lists and integers are equal, which is a disaster.

The above is only a short teaser. The paper covers much more: more corner cases, our algorithm for verifying the user’s injectivity annotations, details of exploiting knowledge of injectivity inside the compiler, and the relationship of injective type families to functional dependencies. The extended version of the paper also comes with proofs of soundness and completeness of our algorithm.

^1 `~` means unification. Think of “`~`” as “having a proof that two types are equal”.

> Find the type error in the following Haskell expression:
>
> `if null xs then tail xs else xs`
>
> You can’t, of course: this program is obviously nonsense unless you’re a typechecker. The trouble is that only certain computations make sense if the `null xs` test is `True`, whilst others make sense if it is `False`. However, as far as the type system is concerned, the type of the then branch is the type of the else branch is the type of the entire conditional. Statically, the test is irrelevant. Which is odd, because if the test really were irrelevant, we wouldn’t do it. Of course, `tail []` doesn’t go wrong – well-typed programs don’t go wrong – so we’d better pick a different word for the way they do go.

The above quote is the opening paragraph of Conor McBride’s “Epigram: Practical Programming with Dependent Types” paper. As always, Conor makes a good point – this test is completely irrelevant for the typechecker, although it is very relevant at run time. Clearly the type system fails to accurately approximate the runtime behaviour of our program. In this short post I will show how to fix this in Haskell using dependent types.

The problem is that the types used in this short program carry no information about the manipulated data. This is true both for the `Bool` returned by `null xs`, which carries no evidence of the result, and for lists, which store no information about their length. As some of you probably realize, the latter is easily fixed by using vectors, i.e. length-indexed lists:

```haskell
data N = Z | S N  -- natural numbers

data Vec a (n :: N) where
  Nil  :: Vec a Z
  Cons :: a -> Vec a n -> Vec a (S n)
```

The type of a vector encodes its length, which means that the type checker can now be aware of whether it is dealing with an empty vector. Now let’s write `null` and `tail` functions that work on vectors:

```haskell
vecNull :: Vec a n -> Bool
vecNull Nil        = True
vecNull (Cons _ _) = False

vecTail :: Vec a (S n) -> Vec a n
vecTail (Cons _ tl) = tl
```

`vecNull` is nothing surprising – it returns `True` for an empty vector and `False` for a non-empty one. But the tail function for vectors differs from its implementation for lists. `tail` from Haskell’s standard prelude is not defined for an empty list, so calling `tail []` results in an exception (that would be the case in Conor’s example). But the type signature of `vecTail` requires that the input vector is non-empty. As a result we can rule out the `Nil` case. That also means that Conor’s example will no longer typecheck^1. But how can we write a correct version of this example, one that removes the first element of a vector only when it is non-empty? Here’s an attempt:

```haskell
shorten :: Vec a n -> Vec a m
shorten xs = case vecNull xs of
  True  -> xs
  False -> vecTail xs
```

That however won’t compile: now that we have written a type-safe tail function, the typechecker requires a proof that the vector passed to it as an argument is non-empty. The weak link in this code is the `vecNull` function. It tests whether a vector is empty but delivers no type-level proof of the result. In other words we need:

```haskell
vecNull' :: Vec a n -> IsNull n
```

i.e. a function whose result type carries the information about the length of the list. This data type will have a runtime representation isomorphic to `Bool`, i.e. it will be an enumeration with two constructors, and the type index will correspond to the length of a vector:

```haskell
data IsNull (n :: N) where
  Null    :: IsNull Z
  NotNull :: IsNull (S n)
```

`Null` represents empty vectors, `NotNull` represents non-empty ones. We can now implement a version of `vecNull` that carries a proof of the result at the type level:

```haskell
vecNull' :: Vec a n -> IsNull n
vecNull' Nil        = Null
vecNull' (Cons _ _) = NotNull
```

The type signature of `vecNull'` says that the return type must have the same index as the input vector. Pattern matching on the `Nil` case provides the type checker with the information that the `n` index of `Vec` is `Z`. This means that the return value in this case must be `Null` – the `NotNull` constructor is indexed with `S`, which obviously does not match `Z`. Similarly, in the `Cons` case the return value must be `NotNull`. However, replacing `vecNull` in the definition of `shorten` with our new `vecNull'` will again result in a type error. The problem comes from the type signature of `shorten`:

```haskell
shorten :: Vec a n -> Vec a m
```

By indexing the input and output vectors with different length indices – `n` and `m` – we tell the typechecker that they are completely unrelated. But that is not true! Knowing the input length `n`, we know exactly what the result should be: if the input vector is empty, the result vector is also empty; if the input vector is not empty, it should be shortened by one. Since we need to express this at the type level, we will use a type family:

```haskell
type family Pred (n :: N) :: N where
  Pred Z     = Z
  Pred (S n) = n
```

(In a fully-fledged dependently-typed language we would write a normal function and then apply it at the type level.) Now we can finally write:

```haskell
shorten :: Vec a n -> Vec a (Pred n)
shorten xs = case vecNull' xs of
  Null    -> xs
  NotNull -> vecTail xs
```

This definition should not go wrong. Trying to swap the expressions in the branches will result in a type error.

- Assuming we don’t abuse Haskell’s unsoundness as a logic, e.g. by using `undefined`.
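Putting the pieces above together, a complete module might look as follows. The `toList` helper and `main` are my own additions for demonstration; the rest is the code developed in the post:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

data N = Z | S N  -- natural numbers

data Vec a (n :: N) where
  Nil  :: Vec a Z
  Cons :: a -> Vec a n -> Vec a (S n)

vecTail :: Vec a (S n) -> Vec a n
vecTail (Cons _ tl) = tl

data IsNull (n :: N) where
  Null    :: IsNull Z
  NotNull :: IsNull (S n)

vecNull' :: Vec a n -> IsNull n
vecNull' Nil        = Null
vecNull' (Cons _ _) = NotNull

type family Pred (n :: N) :: N where
  Pred Z     = Z
  Pred (S n) = n

shorten :: Vec a n -> Vec a (Pred n)
shorten xs = case vecNull' xs of
  Null    -> xs
  NotNull -> vecTail xs

-- Helper (my addition): forget the length index so we can print a vector.
toList :: Vec a n -> [a]
toList Nil         = []
toList (Cons x xs) = x : toList xs

main :: IO ()
main = do
  print (toList (shorten (Cons (1 :: Int) (Cons 2 (Cons 3 Nil)))))
  print (toList (shorten (Nil :: Vec Int Z)))
```

Note that both calls to `shorten` typecheck: the empty vector is returned unchanged (since `Pred Z = Z`), while the non-empty one loses its head.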

Let’s begin by looking at Haskell, because it is a good example of a language that does not formalize coinduction in any way. Two features of Haskell are of interest to us. The first one is laziness. Thanks to Haskell being lazy we can write definitions like these (in GHCi):

```haskell
ghci> let ones = 1 : ones
ghci> let fib = zipWith (+) (1:fib) (1:1:fib)
```

`ones` is – as the name implies – an infinite sequence (list) of ones. `fib` is a sequence of Fibonacci numbers. Both these definitions produce infinite lists, but we can use them safely because laziness allows us to force a finite number of elements in the sequence:

```haskell
ghci> take 5 ones
[1,1,1,1,1]
ghci> take 10 fib
[2,3,5,8,13,21,34,55,89,144]
```

Now consider this definition:

```haskell
ghci> let inf = 1 + inf
```

No matter how hard we try, there is no way to use the definition of `inf` in a safe way. It always causes an infinite loop:

```haskell
ghci> (0 /= inf)
*** Exception: <<loop>>
```

The difference between the definitions of `ones` or `fib` and the definition of `inf` is that the former use what is called *guarded recursion*. The term *guarded* comes from the fact that the recursive reference to self is hidden under a datatype constructor (or: guarded by a constructor). The way lazy evaluation is implemented guarantees that we can stop the recursion by not evaluating the recursive constructor argument. This kind of infinite recursion can also be called *productive recursion*, which means that although the recursion is infinite, each recursive call is guaranteed to produce something (in my examples either a 1 or the next Fibonacci number). By contrast, the recursion in the definition of `inf` is not guarded or productive in any way.

Haskell happily accepts the definition of `inf` even though it is completely useless. When we write Haskell programs we of course don’t want them to fall into silly infinite loops, but the only tool we have to prevent us from writing such code is our intelligence. The situation changes when it comes to…

These languages deeply care about termination. By “termination” I mean ensuring that a program written by the user is guaranteed to terminate for any input. I am aware of two reasons why these languages care about termination. The first reason is theoretical: without termination the resulting language is inconsistent as a logic. This happens because a non-terminating term can prove any proposition. Consider this non-terminating Coq definition:

```coq
Fixpoint evil (A : Prop) : A := evil A.
```

If that definition were accepted, we could use it to prove any proposition. Recall that when it comes to viewing types as proofs and programs as evidence, “proving a proposition” means constructing a term of a given type. `evil` would allow us to construct a term inhabiting any type `A`. (`Prop` is a *kind* of logical propositions, so `A` is a type.) Since dependently-typed languages aim to be consistent logics, they must reject non-terminating programs. The second reason for checking termination is practical: dependently-typed languages admit functions in type signatures. If we allowed non-terminating functions, then typechecking would also become non-terminating, and again this is something we don’t want. (Note that Haskell gives you `UndecidableInstances`, which can cause typechecking to fall into an infinite loop.)

Now, if you paid attention in your Theoretical Computer Science classes, all of this should ring a bell: the halting problem! The halting problem says that determining whether a given Turing machine (read: a given computer program) will ever terminate is undecidable. So how is it possible that languages like Agda, Coq or Idris can answer that question? That’s simple: they are not Turing-complete (or at least their terminating subsets are not Turing-complete). (**UPDATE:** but see Conor McBride’s comment below.) They prohibit the user from using some constructs, probably the most important one being *general recursion*. Think of general recursion as any kind of recursion imaginable. Dependently-typed languages require structural recursion on subterms of the arguments. That means that if a function receives an argument of an inductive data type (think: algebraic data type/generalized algebraic data type) then you can only make recursive calls on terms that are syntactic subcomponents of that argument. Consider this definition of `map` in Idris:

```idris
map : (a -> b) -> List a -> List b
map f []      = []
map f (x::xs) = f x :: map f xs
```

In the second equation we use pattern matching to deconstruct the list argument. The recursive call is made on `xs`, which is structurally smaller than the original argument. This guarantees that any call to `map` will terminate. There is a silent assumption here that the `List a` argument passed to `map` is finite, but with the rules given so far it is not possible to construct an infinite list.
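For contrast, here is a Haskell function (a hypothetical example of mine, not from the languages’ libraries) whose recursion is *not* structural: neither `n `div` 2` nor `3 * n + 1` is a syntactic subterm of `n`, so an Idris-style totality checker would reject it even though nobody has ever found an input on which it loops:

```haskell
-- Collatz step counter: believed to terminate for every positive
-- input, yet no termination checker can see why -- the recursive
-- arguments are not structurally smaller than n.
collatz :: Integer -> Integer
collatz 1 = 0
collatz n
  | even n    = 1 + collatz (n `div` 2)
  | otherwise = 1 + collatz (3 * n + 1)
```

For example, `collatz 6` counts the chain 6 → 3 → 10 → 5 → 16 → 8 → 4 → 2 → 1, giving 8 steps.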

So we just eliminated non-termination by limiting what can be done with recursion. This means that our Haskell definitions of `ones` and `fib` would not be accepted in a dependently-typed language, because they don’t recurse on an argument that gets smaller and as a result they construct an infinite data structure. Does that mean we are stuck with having only finite data structures? Luckily, no.

Coinduction provides a way of defining and operating on infinite data structures, as long as we can prove that our operations are safe, that is, guarded and productive. In what follows I will use Coq, because it seems to have better support for coinduction than Agda or Idris (and if I’m wrong here, please correct me).

Coq, Agda and Idris all require that a datatype that can contain infinite values has a special declaration. Coq uses the `CoInductive` keyword instead of the `Inductive` keyword used for standard inductive data types. In a similar fashion Idris uses `codata` instead of `data`, while Agda requires an `∞` annotation on a coinductive constructor argument.

Let’s define a type of infinite `nat` streams in Coq:

```coq
CoInductive stream : Set :=
| Cons : nat -> stream -> stream.
```

I could have defined a polymorphic stream, but for the purpose of this post a stream of nats will do. I could have also defined a `Nil` constructor to allow finite coinductive streams – declaring a datatype as coinductive means it *can* have infinite values, not that it *must* have infinite values.
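For comparison, the Haskell analogue of this stream type needs no special keyword — thanks to lazy evaluation, every Haskell datatype already admits infinite values (the names below are mine):

```haskell
-- Haskell counterpart of the Coq stream type; laziness makes it
-- "coinductive" by default, no CoInductive keyword required.
data Stream = Cons Integer Stream

-- Observe a finite prefix of a possibly infinite stream.
takeS :: Int -> Stream -> [Integer]
takeS n _ | n <= 0  = []
takeS n (Cons x xs) = x : takeS (n - 1) xs

-- The infinite stream of ones, mirroring the Coq example below.
onesS :: Stream
onesS = Cons 1 onesS
```

Note that without a `Nil` constructor this Haskell type, like the Coq one as declared, has no finite values.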

Now that we have infinite streams let’s revisit our examples from Haskell: `ones` and `fib`. `ones` is simple:

```coq
CoFixpoint ones : stream := Cons 1 ones.
```

We just had to use the `CoFixpoint` keyword to tell Coq that our definition is corecursive, and it is happily accepted, even though a similar recursive definition (i.e. using the `Fixpoint` keyword) would be rejected. Allow me to quote directly from CPDT:

> whereas recursive definitions were necessary to *use* values of recursive inductive types effectively, here we find that we need *co-recursive definitions* to *build* values of co-inductive types effectively.

That one sentence pins down an important difference between induction and coinduction.

Now let’s define `zipWith` and try our second example, `fib`:

```coq
CoFixpoint zipWith (f : nat -> nat -> nat)
                   (a : stream) (b : stream) : stream :=
  match a, b with
  | Cons x xs, Cons y ys => Cons (f x y) (zipWith f xs ys)
  end.

CoFixpoint fib : stream :=
  zipWith plus (Cons 1 fib) (Cons 1 (Cons 1 fib)).
```

Unfortunately this definition is rejected by Coq due to an “unguarded recursive call”. What exactly goes wrong? Coq requires that all corecursive calls in a corecursive definition are:

- direct arguments to a data constructor
- not inside function arguments
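As an aside, Haskell performs no such guardedness check: the very shape Coq rejects is accepted and runs productively, with laziness and sharing doing the work. A sketch mirroring the Coq code (helper names are mine):

```haskell
-- The corecursive calls hide inside arguments to zipWithS --
-- unguarded by Coq's rules, yet perfectly productive in Haskell.
zipWithS :: (Integer -> Integer -> Integer)
         -> [Integer] -> [Integer] -> [Integer]
zipWithS f (x:xs) (y:ys) = f x y : zipWithS f xs ys
zipWithS _ _      _      = []

fibU :: [Integer]
fibU = zipWithS (+) (1 : fibU) (1 : 1 : fibU)
```

Each element of `fibU` depends only on strictly earlier elements, so forcing a finite prefix never loops.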

Our definition of `fib` violates the second condition – both recursive calls to `fib` are hidden inside arguments to the `zipWith` function. Why does Coq enforce such a restriction? Consider this simple example:

```coq
Definition tl (s : stream) : stream :=
  match s with
  | Cons _ tl' => tl'
  end.

CoFixpoint bad : stream := tl (Cons 1 bad).
```

`tl` is a standard tail function that discards the first element of a stream and returns its tail. Just like our definition of `fib`, the definition of `bad` places the corecursive call inside a function argument. I hope it is easy to see that accepting the definition of `bad` would lead to non-termination – inlining the definition of `tl` and simplifying leads us to:

```coq
CoFixpoint bad : stream := bad.
```

and that is bad. You might be thinking that the definition of `bad` really has no chance of working, whereas our definition of `fib` could in fact be run safely without the risk of non-termination. So how do we persuade Coq that our corecursive definition of `fib` is in fact valid? Unfortunately there seems to be no simple answer. What was meant to be a simple exercise in coinduction turned out to be a real research problem. This past Monday I spent well over an hour with my friend staring at the code and trying to come up with a solution. We didn’t find one, but instead we found a really nice paper, “Using Structural Recursion for Corecursion” by Yves Bertot and Ekaterina Komendantskaya. The paper presents a way of converting definitions like `fib` to a guarded and productive form accepted by Coq. Unfortunately the converted definition loses the linear computational complexity of the original, so the conversion method is far from perfect. I encourage you to read the paper. It is not long and is written in a very accessible way. Another set of possible solutions is given in chapter 7 of CPDT, but I am very far from labelling them as “accessible”.
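For completeness: a folklore workaround (distinct from the paper’s general conversion method) is to restate `fib` with two accumulators so that the corecursive call sits directly under the constructor — guarded, productive, and still linear. Here it is in Haskell-list form; a Coq `CoFixpoint` with the same shape is accepted by the guardedness checker:

```haskell
-- Every corecursive call is immediately guarded by (:), which is
-- exactly the shape a guardedness checker accepts -- and the
-- definition stays linear in the number of elements produced.
fibFrom :: Integer -> Integer -> [Integer]
fibFrom a b = a : fibFrom b (a + b)

fib' :: [Integer]
fib' = fibFrom 1 1
```

The price is that the definition no longer expresses `fib` as a stream transformation via `zipWith`; the accumulators carry the state instead.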

I hope this post demonstrates that the basic ideas behind coinduction are actually quite simple. To me the whole subject of coinduction looks really fascinating and I plan to dive deeper into it. I already have my eyes set on several research papers about coinduction, so there’s a good chance that I’ll write more about it in future posts.
