I have some very exciting news! I wrote some actual code in the Dana repository. It is the IΞ Certificate module. That is, it’s an abstract data type Proof, such that only valid proofs in IΞ can be constructed.
The certificate module (IXi.Term and IXi.Proof together) is about 280 lines, which isn’t fantastically small, but isn’t too bad, especially considering that I don’t expect it to grow: it is perfect, modulo bugs.
Right now, Proof is just a proof checker function, but it’s designed so that I could swap it out with some serializable format, or even (crosses fingers) a dependent certified type.
One interesting development in this implementation is the new conversion certificate for lambda calculus in De Bruijn notation. That is, objects of type Conversion represent a valid βη conversion between terms. Previously, I hadn’t been satisfied with my solution: the certificate was implemented as a pair of convertible terms. This led to far too many equality comparisons of terms when combining and using certificates, which is both inefficient and, I suspect, hard to prove things about. It also required you to know too much about the goal you were proving, bloating the proof terms and tangling them with the theorems they were trying to prove.
The hardest part is β expansion, for example turning X into (\x.\y.x) X (\x.x). That was the reason for the pair representation: to do beta expansion, you just did a beta reduction and reversed it.
The new version instead represents conversions as partial functions: you give a conversion a source term, and it gives you back an equivalent term (or says it couldn’t convert). This means I had to model beta reduction and beta expansion separately, because you can’t easily reverse a conversion. However, the solution is quite clean. I chose a basis of expansion combinators, which can be composed to form any expansion. They are:
Identity : A → (\x. x) A
Constant : A → (\x. A) B   [x not free in A]
Apply    : (\x. A) C ((\x. B) C) → (\x. A B) C
Lambda   : \y. (\x. A) B → (\x. \y. A) B   [y not free in B]
These are in addition to the other combinators needed to make the basis complete: β reduction, η expansion/contraction, and ways to focus a conversion on a subexpression. The key is that each combinator is correct by inspection, so we can be confident that the conversion algebra is sound.
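To make this concrete, here is a sketch of what such a conversion algebra looks like in Haskell. This is an illustrative reconstruction, not the actual IXi.Term/IXi.Proof code; all the names here (Term, Conversion, identityE, and so on) are made up for the example:

```haskell
import Control.Monad ((>=>))

-- Untyped lambda terms in De Bruijn notation.
data Term = Var Int | Lam Term | App Term Term
  deriving (Eq, Show)

-- Shift free indices >= c by d.
shift :: Int -> Int -> Term -> Term
shift d c (Var k)   = Var (if k >= c then k + d else k)
shift d c (Lam a)   = Lam (shift d (c + 1) a)
shift d c (App a b) = App (shift d c a) (shift d c b)

-- A conversion is a partial function on terms.
newtype Conversion = Conversion { convert :: Term -> Maybe Term }

-- Sequential composition: apply the first conversion, then the second.
andThen :: Conversion -> Conversion -> Conversion
andThen (Conversion f) (Conversion g) = Conversion (f >=> g)

-- Identity : A --> (\x. x) A
identityE :: Conversion
identityE = Conversion $ \a -> Just (App (Lam (Var 0)) a)

-- Constant B : A --> (\x. A) B  (x is fresh, so A's free indices shift up)
constantE :: Term -> Conversion
constantE b = Conversion $ \a -> Just (App (Lam (shift 1 0 a)) b)

-- Apply : (\x. A) C ((\x. B) C) --> (\x. A B) C
applyE :: Conversion
applyE = Conversion $ \t -> case t of
  App (App (Lam a) c) (App (Lam b) c')
    | c == c' -> Just (App (Lam (App a b)) c)
  _ -> Nothing

-- Lambda : \y. (\x. A) B --> (\x. \y. A) B  (y not free in B)
lambdaE :: Conversion
lambdaE = Conversion $ \t -> case t of
  Lam (App (Lam a) b)
    | not (freeIn 0 b) -> Just (App (Lam (Lam (swap01 0 a))) (shift (-1) 0 b))
  _ -> Nothing

freeIn :: Int -> Term -> Bool
freeIn n (Var k)   = k == n
freeIn n (Lam a)   = freeIn (n + 1) a
freeIn n (App a b) = freeIn n a || freeIn n b

-- Exchange indices c and c+1 (x and y trade binding order).
swap01 :: Int -> Term -> Term
swap01 c (Var k) | k == c     = Var (c + 1)
                 | k == c + 1 = Var c
                 | otherwise  = Var k
swap01 c (Lam a)   = Lam (swap01 (c + 1) a)
swap01 c (App a b) = App (swap01 c a) (swap01 c b)

-- Focusing: run a conversion on a subterm.
inLeft, inRight, inLambda :: Conversion -> Conversion
inLeft c = Conversion $ \t -> case t of
  App a b -> (`App` b) <$> convert c a
  _       -> Nothing
inRight c = Conversion $ \t -> case t of
  App a b -> App a <$> convert c b
  _       -> Nothing
inLambda c = Conversion $ \t -> case t of
  Lam a -> Lam <$> convert c a
  _     -> Nothing

-- Ordinary beta reduction, for the contraction direction.
betaR :: Conversion
betaR = Conversion $ \t -> case t of
  App (Lam a) b -> Just (shift (-1) 0 (subst 0 (shift 1 0 b) a))
  _             -> Nothing
  where
    subst j s (Var k)   = if k == j then s else Var k
    subst j s (Lam a)   = Lam (subst (j + 1) (shift 1 0 s) a)
    subst j s (App a b) = App (subst j s a) (subst j s b)
```

Each combinator is a few lines and checkable by inspection; everything else is built by composing them with andThen and the focusing combinators.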
I chose these combinators by thinking about what would be needed to construct the inverse conversion while bringing a term to normal form. If you’re familiar with SKI factorization, the process is pretty similar. Whenever you reduce an application (\x. A) B, you look at the structure of A and “push” B in by one level, applying one of these combinators. For example:
    (\f. \y. f (f y)) (\x. x)                     Lambda
    \y. (\f. f (f y)) (\x. x)                     inLambda Apply
    \y. (\f. f) (\x. x) ((\f. f y) (\x. x))       inLambda (inLeft Identity)
    \y. (\x. x) ((\f. f y) (\x. x))               inLambda Identity
    \y. (\f. f y) (\x. x)                         inLambda Apply
    \y. (\f. f) (\x. x) ((\f. y) (\x. x))         inLambda (inLeft Identity)
    \y. (\x. x) ((\f. y) (\x. x))                 inLambda Identity
    \y. (\f. y) (\x. x)                           inLambda (Constant (\x. x))
The reverse composition of the conversions on the right will bring us from \y. y to (\f. \y. f (f y)) (\x. x).
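The factorization itself is mechanical. Here is a sketch of the push step (again illustrative, not the real code): a single step looks at the body of the leftmost-outermost redex, pushes the argument in one level, and records the name of the expansion combinator that reverses it:

```haskell
-- Illustrative sketch: leftmost-outermost reduction by "pushing" the
-- argument in one level, recording which expansion combinator undoes
-- each step.
data Term = Var Int | Lam Term | App Term Term
  deriving (Eq, Show)

shift :: Int -> Int -> Term -> Term
shift d c (Var k)   = Var (if k >= c then k + d else k)
shift d c (Lam a)   = Lam (shift d (c + 1) a)
shift d c (App a b) = App (shift d c a) (shift d c b)

freeIn :: Int -> Term -> Bool
freeIn n (Var k)   = k == n
freeIn n (Lam a)   = freeIn (n + 1) a
freeIn n (App a b) = freeIn n a || freeIn n b

-- Exchange indices c and c+1 (the two binders trade places).
swap01 :: Int -> Term -> Term
swap01 c (Var k) | k == c     = Var (c + 1)
                 | k == c + 1 = Var c
                 | otherwise  = Var k
swap01 c (Lam a)   = Lam (swap01 (c + 1) a)
swap01 c (App a b) = App (swap01 c a) (swap01 c b)

-- One push step; the String names the reversing expansion combinator.
step :: Term -> Maybe (Term, String)
step (App (Lam a) b) = Just $ case a of
  Var 0                -> (b, "Identity")
  _ | not (freeIn 0 a) -> (shift (-1) 0 a, "Constant")
  App p q              -> (App (App (Lam p) b) (App (Lam q) b), "Apply")
  Lam p                -> (Lam (App (Lam (swap01 0 p)) (shift 1 0 b)), "Lambda")
  _                    -> (b, "Identity")  -- unreachable: Var 0 is the
                                           -- only body with 0 free
step (App a b)
  | Just (a', n) <- step a = Just (App a' b, "inLeft " ++ paren n)
  | Just (b', n) <- step b = Just (App a b', "inRight " ++ paren n)
step (Lam a)
  | Just (a', n) <- step a = Just (Lam a', "inLambda " ++ paren n)
step _ = Nothing

paren :: String -> String
paren n | ' ' `elem` n = "(" ++ n ++ ")"
        | otherwise    = n

-- Iterate to normal form, collecting the reversing combinators in order.
reduceTrace :: Term -> (Term, [String])
reduceTrace t = case step t of
  Nothing      -> (t, [])
  Just (t', n) -> let (nf, ns) = reduceTrace t' in (nf, n : ns)
```

Running reduceTrace on the example term reproduces exactly the right-hand column of the table above (with "Constant" standing for Constant applied to the discarded argument), ending at \y. y.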
But isn’t it an awful pain to write all those combinators when proving things? Of course not! I make a computer do it for me. I have a little algorithm which takes two terms and computes a conversion between them, by bringing them both to normal form, and using the forward conversions one way and the inverse conversions the other way. Of course, if I give it terms which have no normal form it won’t halt, but the idea is that these terms are static: I use dummy terms to explain the conversion I want, and then apply the conversion I got back to the real terms (which may have subterms without normal forms).
So I say: get me from (\x y. A x) A I to (\x. x x) A, where “A” and “I” are just strings, and then I apply the conversion I got back to, say, (\x. \y. WW x) (WW) (\x. x), where WW has no normal form. The conversion still succeeds.
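The normal-form comparison at the heart of that algorithm can be sketched like this, with a Free constructor standing in for the dummy strings (illustrative only; the real algorithm records the conversion combinators as it reduces, rather than just comparing results):

```haskell
-- Illustrative sketch: dummy terms with free symbols, normalized and compared.
data Term = Var Int | Free String | Lam Term | App Term Term
  deriving (Eq, Show)

-- Shift free indices >= c by d; free symbols are untouched.
shift :: Int -> Int -> Term -> Term
shift d c (Var k)   = Var (if k >= c then k + d else k)
shift d c (Lam a)   = Lam (shift d (c + 1) a)
shift d c (App a b) = App (shift d c a) (shift d c b)
shift _ _ t         = t

subst :: Int -> Term -> Term -> Term
subst j s (Var k)   = if k == j then s else Var k
subst j s (Lam a)   = Lam (subst (j + 1) (shift 1 0 s) a)
subst j s (App a b) = App (subst j s a) (subst j s b)
subst _ _ t         = t

-- Leftmost-outermost beta step.
step :: Term -> Maybe Term
step (App (Lam a) b) = Just (shift (-1) 0 (subst 0 (shift 1 0 b) a))
step (App a b) = case step a of
  Just a' -> Just (App a' b)
  Nothing -> App a <$> step b
step (Lam a) = Lam <$> step a
step _ = Nothing

-- Normalize (diverges on terms with no normal form, hence the dummy terms).
normalize :: Term -> Term
normalize t = maybe t normalize (step t)
```

Both sides of the example above normalize to A A, so a conversion between them exists; the real algorithm additionally hands back the composite certificate, which I then apply to the real terms.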
The certificate pattern shines here: the constructors are easy to verify, and on top of them I have a fairly involved algorithm for constructing certificates that is easy to use and guaranteed correct (in at least one sense) by construction.
So that’s fun stuff.
Proofs are still pretty tedious, however. My next step is to make some smart “tactic” combinators (which of course generate the underlying certificates) to make proofs easier. It shouldn’t take too long to make it at least tolerable. Then I’ll build up a library of certified infrastructure necessary for typechecking
Haskell--, and finally write the compiler to complete the bootstrap. There are plenty of dragons to be slain along the way.