Over the past couple months, I have been attempting to find a language to use as a core calculus for Dana, as anyone who follows this blog knows. I have been slowly converging on IΞ for its elegance and simplicity. It turns out to be very powerful, capable of modeling many structures while avoiding many of the paradoxes typically involved with those structures. In this post, I give an exposition of IΞ and construct some common structures.

The main idea is to use lambda calculus for all of our symbol manipulation needs, adding a constant Ξ for quantification. “Types” are unary predicates, so membership is application. For example, “Nat 0” is a proposition saying that 0 has type Nat (where 0 and Nat are given previous definitions somewhere).

ΞXY means “forall x, if X x, then Y x”, or, if you think of predicates as sets, simply “X is a subset of Y”. So if we have types Nat and Bool, we can say that f has type Nat → Bool with “ΞNat(\x. Bool (f x))”, read “forall x in Nat, f x is in Bool”. Very direct, is it not?

### System I

Interpreting these meanings directly, we arrive at Curry’s *System I*, whose rules follow. A proposition (sequent) has the form “Γ |- X”, which has the interpretation “assuming Γ, X is provable”.

```
Γ |- X     ⇐  X ∈ Γ
Γ |- Y     ⇐  Γ |- X ; X is βη convertible with Y
Γ |- YZ    ⇐  Γ |- XZ ; Γ |- ΞXY
Γ |- ΞXY   ⇐  Γ,Xa |- Ya ; a does not occur free in X,Y,Γ
```

I have stated the rules as “to prove X, you need to prove Y”, because that’s kinda how my brain works. Take a moment to internalize them. They are obvious given the above intuitions, and shouldn’t be hard to read.

On top of this, we can build propositional calculus. Define “K = \x. \y. x”, and write “X ⇒ Y” as shorthand for “Ξ(KX)(KY)”. This system has the properties you would expect of a simple propositional calculus.

Sadly, this system is inconsistent. We can prove any proposition X:

```
Let Y = (\x. x x ⇒ X) (\x. x x ⇒ X)
Observe that Y = Y ⇒ X  (by β-reduction).

[1] Y |- Y          -- assumption
[2] Y |- Y ⇒ X      -- [1], conversion
[3] Y |- X          -- modus ponens on [1],[2]
[4]   |- Y ⇒ X      -- abstraction on [3]
[5]   |- Y          -- [4], conversion
[6]   |- X          -- modus ponens on [4],[5]
```
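The self-application driving this paradox is exactly the trick behind a fixed-point combinator. Here is a quick Haskell sketch (the names `Self` and `fix'` are mine) showing that once you can apply a term to itself, you get general recursion; under the propositions-as-types reading, "every type is inhabited" is this same inconsistency:

```haskell
-- A recursive newtype smuggles self-application past the type checker.
newtype Self a = Self (Self a -> a)

-- fix' f = f (fix' f), built from self-application alone, no named recursion.
fix' :: (a -> a) -> a
fix' f = w (Self w)
  where w s@(Self g) = f (g s)

-- General recursion falls out immediately:
fact :: Integer -> Integer
fact = fix' (\rec n -> if n == 0 then 1 else n * rec (n - 1))
```

Blocking the paradox in a logic therefore means blocking unrestricted self-application somewhere, which is what the H rules below do.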

Martin Bunder had a brilliant idea to block this paradox, and also many others, which brings us to the object of my infatuation:

### System IΞ

The crucial step above was [3]-[4], where we abstracted over the infinite proposition “((… ⇒ X) ⇒ X) ⇒ X”. The way we will block this is to only allow abstraction over finite propositions. Introduce a new symbol, **H**, such that **H**X means “X is a finite proposition” (or simply “X is a proposition”). We will derive finiteness from the finiteness of proofs: to prove HX, we first have to prove H of each component of X. Our system becomes (the new additions are marked with †):

```
Γ |- X        ⇐  X ∈ Γ
Γ |- Y        ⇐  Γ |- X ; X is βη convertible with Y
Γ |- YZ       ⇐  Γ |- XZ ; Γ |- ΞXY
Γ |- ΞXY      ⇐  Γ |- H(Xa)† ; Γ,Xa |- Ya ; a does not occur free in X,Y,Γ
† Γ |- H(ΞXY) ⇐  Γ |- H(Xa) ; Γ,Xa |- H(Ya) ; a does not occur free in X,Y,Γ
† Γ |- H(HX)
```

The final rule is an axiom, simply saying that “X is a proposition” is always a proposition.

### Constructing the Naturals

Now I will embark on constructing the type of naturals. Since types are predicates, I also need to come up with a representation for naturals. It turns out that it doesn’t matter what representation I use, as long as it has zero and a 1-1 successor function. For the sake of discussion, let’s use the Church encoding.

```
0 = \f. \x. x
S n = \f. \x. f (n f x)
```

So a natural is an iteration function. For example, the number 3 iterates a function 3 times on its argument: 3 f x = f (f (f x)).
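The Church encoding transcribes directly into Haskell, which makes the "a natural is an iteration function" reading concrete (the names `zero`, `suc`, `three` are mine):

```haskell
-- A Church numeral takes a function f and a start value x and iterates f.
zero :: (a -> a) -> a -> a
zero _ x = x

-- S n iterates one extra time.
suc :: ((a -> a) -> a -> a) -> (a -> a) -> a -> a
suc n f x = f (n f x)

three :: (a -> a) -> a -> a
three = suc (suc (suc zero))
```

For instance, `three (+1) 0` iterates `(+1)` three times starting from 0.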

Coming up with a way to classify all of these, but not any others (such as `infinity f x = f (f (f (f (f ...))))`), was quite a challenge. You might try to classify these functions on the property that they are iteration functions, but any sensible method of doing that ends up including infinity. I began thinking that IΞ was not strong enough, and looking for ways to enrich it by adding more axioms.

Fortunately, no new axioms are necessary! The encoding is obvious in retrospect. What is the first thing a mathematician thinks when you talk about the naturals: *induction!* Let’s define a natural as any object which you can do induction (with 0 and S) over.

To make this readable, we need to introduce a few more symbols:

```
f . g = \x. f (g x)    -- composition
A → B = \f. ΞA(B . f)  -- the type of functions from A to B
True = ΞHH             -- the true proposition
U = K True             -- the type of all objects (mnemonic: universe)
L = U → H              -- the type of predicates/types
```

Now, for the naturals:

Nat x = ΞL(\p. p 0 ⇒ ΞU(\n. p n ⇒ p (S n)) ⇒ p x)

Reading this in English: “x is a Natural if, for all predicates p, if p 0, and p n implies p (S n), then p x”. In other words, x is a natural if you can do induction up to it.

Exercise: prove |- Nat 0 and |- (Nat → Nat) S. Note that this can be done independent of our definitions of 0 and S.
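A Haskell approximation of this definition uses a rank-2 type (the names `zeroN`, `sucN`, `toInteger'` are mine). Haskell can't express "p n implies p (S n)" for a genuinely dependent p, so p degenerates to an ordinary type and we recover the Church encoding; but the analogue of the exercise then becomes two typechecking obligations:

```haskell
{-# LANGUAGE RankNTypes #-}

-- A natural is anything you can do induction over:
-- give a base case (for 0) and a step (for S), get a result.
newtype Nat = Nat (forall p. p -> (p -> p) -> p)

zeroN :: Nat                 -- the analogue of |- Nat 0
zeroN = Nat (\z _ -> z)

sucN :: Nat -> Nat           -- the analogue of |- (Nat → Nat) S
sucN (Nat n) = Nat (\z s -> s (n z s))

toInteger' :: Nat -> Integer
toInteger' (Nat n) = n 0 (+ 1)
```

Note that, as in the exercise, neither definition inspects the representation of 0 or S beyond the induction interface itself.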

### More Inductive Constructions

Using this scheme, we can construct all sorts of things. For example, equality:

Eq x y = ΞL(\p. p x ⇒ p y)
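This is Leibniz equality, and it too has a direct Haskell rendering (the names `Equal`, `refl`, `castWith` are mine): x equals y when every predicate true of x is true of y.

```haskell
{-# LANGUAGE RankNTypes #-}

-- A proof that a equals b: any predicate p of a can be rewritten to p of b.
newtype Equal a b = Equal (forall p. p a -> p b)

refl :: Equal a a
refl = Equal id

-- Substituting into the identity predicate turns a proof into a coercion.
newtype Id x = Id { unId :: x }

castWith :: Equal a b -> a -> b
castWith (Equal subst) = unId . subst . Id
```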

Lists:

```
nil = \n. \c. n
cons x xs = \n. \c. c x xs
List a x = ΞL(\p. p nil ⇒ ΞU(\ys. p ys ⇒ Ξa(\y. p (cons y ys))) ⇒ p x)
```
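Again this mirrors a standard Haskell construction, the Church-encoded list (names `List`, `toHaskellList` are mine): the List predicate classifies exactly the things you can fold.

```haskell
{-# LANGUAGE RankNTypes #-}

-- A list is its own fold: given a nil case and a cons case, produce a result.
newtype List a = List (forall r. r -> (a -> r -> r) -> r)

nil :: List a
nil = List (\n _ -> n)

cons :: a -> List a -> List a
cons x (List xs) = List (\n c -> c x (xs n c))

toHaskellList :: List a -> [a]
toHaskellList (List xs) = xs [] (:)
```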

There is a classic paradox involving the inductive type Set = Set → H, which is definable using this scheme:

Set x = ΞL(\p. Ξ(p → H)p ⇒ p x)

However, as I tried to prove the inconsistency, I was blocked by the H rules. This gives me hope.

### Coinductive Constructions

It is also possible to construct coinductive types. Here are the “conaturals”: the naturals with infinity. We can’t use the constructor-style definition anymore; coinductive types are all about projections, and the conaturals have exactly one, onto a disjunction. To eliminate a conatural n, you pass it what to do if n = 0, and what to do with n’ if n = S n’. For example, to check whether a conatural is zero, you can use n True (K False).

CoNat x = ΞL(\p. Ξp(\y. y True p) ⇒ p x)

In English: x is a conatural if, for all predicates p, whenever p y implies y True p, then p x. Here “y True p” can be read as “y = 0, or y = S n and p n”.
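In Haskell, the same "defined by one projection onto a disjunction" idea looks like this (the names `CoNat`, `out`, `zeroC`, `sucC`, `infinity`, `isZero` are mine); laziness is what lets infinity exist, unlike in the inductive Nat:

```haskell
-- A conatural is determined by its single projection:
-- Nothing plays the role of "n = 0", Just n' of "n = S n'".
newtype CoNat = CoNat { out :: Maybe CoNat }

zeroC :: CoNat
zeroC = CoNat Nothing

sucC :: CoNat -> CoNat
sucC = CoNat . Just

-- The extra point: its own successor, built by corecursion.
infinity :: CoNat
infinity = sucC infinity

-- The analogue of "n True (K False)".
isZero :: CoNat -> Bool
isZero = maybe True (const False) . out
```

Because `out infinity` only unfolds one layer at a time, `isZero infinity` terminates even though infinity never bottoms out.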

Isn’t IΞ cool? Simple, obvious, but powerful enough to model a wide class of data (and codata). The thing I like best about it is that it is untyped at its core. Functions are just pure lambda calculus functions that we are “later” proving properties about. Type erasure comes for free (well, sort of: encoding dependent types into this system will end up passing types as parameters at runtime, even though they are never used).

### Future Concerns

Bunder proved his version of IΞ consistent and complete. But the rules I gave, and that I have been using, are not his, and in fact are more powerful than his. This makes it possible that my rules are inconsistent. My system can prove LL, while his cannot. This worries me, because LL in the typical interpretation means “Type : Type”, which gives rise to the Burali-Forti paradox. However, even though I was able to encode the infrastructure for Russell’s paradox, I was unable to complete the proof because of the H rules. Maybe the same thing will happen?

I’m spending a bit of time trying to understand the Burali-Forti paradox that gives rise to the inconsistency in Girard’s System U, so that I can try to carry it out in IΞ. If IΞ does turn out to be inconsistent, I am not worried. A few universe restrictions here and there (i.e. leave U unspecified, rather than defining it as K True) should do the trick, at the expense of some convenience and beauty.

Also, since I intend to compile Haskell to IΞ, I need to talk about partiality somehow (e.g. prove that if a function terminates, it has this type). This has been giving me trouble, but in a good way. I found that I don’t really understand how bottoms behave, and how to talk about them without implying their semantics. I’m confident there is a way, though, maybe after adding a few axioms as a compromise. But I need to think hard and understand bottoms better, so I know what ought to be true, before I try to prove it or add it as an axiom.

Excellent post, Luke. It’s done the best job so far of explaining why you’ve become so attached to IE. (Yes, I know that’s not the right letter, but that’s what it looks like to me!)

Is this something you are working on in your spare time, or is it funded (either by a university or a formal grant)?

Yup, I agree with Ryan Ingram. Your abiding “addiction” has been very mysterious thus far to your fans.

Please continue to enlighten us with gentle expository posts. I like the way you added the exercise above, btw.

I first became aware of logics that utilized lambda calculus early this fall, but it was not until in December when someone handed me an article about the “HOL Light” system that I realized the extent to which lambda can serve as the basis for a logic. The system I’d looked at in the fall was merely first-order logic with lambda calculus *added in*, so it is a very different sort of thing… I was amazed to discover that first-order logic can be obtained “for free” from lambda-calculus-with-equality. Yet, the typed nature of the system made me feel like I didn’t quite know what was going on in the background (the material I read did not explain polymorphic typing, for example). So, I am glad to see an untyped alternative. Yet I still don’t feel like I understand how lambda-based systems fit into the grand scheme of things…

OK, I’m doing the first exercise (|- Nat 0), and I’m stuck trying to prove, without any details of S or 0:

La, K(a0)b, Uc, K(ac)d |- H(K(a(Sc))d)

which I need to prove

La, K(a0)b, Uc |- H(#(K(ac))(K(a(Sc))))

Did I go down the wrong path? (# = Xi)

Thanks for the encouragement guys. When you get really deep into something, it’s easy to forget that the rest of the world isn’t there with you.

I’m working on this in my spare time, which is presently all of my time. (Though that is quickly changing as I run out of money) I’m hoping eventually to be doing this as part of a dissertation.

As for the exercise: H(K(a(Sc))d) = H(a(Sc)), by expanding K. And you have La.

Doing things in excruciating formal detail is quite monotonous; I implicitly apply some metatheorems such as:

(Γ |- A ⇒ B) ⇐ (Γ |- HA ; Γ, A |- B)

LA ∈ Γ ⇒ Γ |- H(AB)

Keep in mind that it is legal to do βη conversion on the left of |- as well, since the only way you can use hypotheses is by bringing them into the consequent, at which point you can do the conversion. I like to keep the Ks and stuff out of my way…

Oh, duh. I short-cut this in an earlier part of the proof because I had Uc as an assumption so I didn’t have to prove (K True c), but that’s trivial to prove.

I agree with you that meta-theorems are useful to remove tedium, but when I first use a system I find it useful to go through the tedium to make sure I have a full understanding of how things really work. After that, I use metatheorems as part of a bigger “proof building” system which does transformations on proofs. Of course, you need to prove the transformations correct!

For reference, I put my solution (with all tedium!) at http://ryani.freeshell.org/nat_0.txt