# Inane Patterns

I was just screwing around in ghci, playing with the pattern:

```
> sum [1..1000]
500500
> sum [1..100]
5050
> sum [1..10]
55
> sum [1..1]
1
```

And it was unfortunate to see the pattern “break” at [1..1]. The pattern being: sum [1..10^n] is a 5 followed by (n-1) zeros, followed by a 5 followed by (n-1) zeros.

But check this out:

```
5050 = (5 * 10^1) * 10^2 + (5 * 10^1)
55   = (5 * 10^0) * 10^1 + (5 * 10^0)
```

Continuing the pattern…

```
(5 * 10^-1) * 10^0 + (5 * 10^-1) = 0.5 * 1 + 0.5 = 1
```

So 1 is a 5 followed by -1 zeros (which is itself zero digits: 1 for the 5 plus -1 for the zeros, which is why we multiply by 10^0), followed by a 5 followed by -1 zeros, and the pattern didn’t break. Woah…
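The identity behind the pattern is easy to check mechanically. Here’s a quick Haskell sanity check (the function name is mine):

```haskell
-- The pattern says: sum [1..10^n] is a 5 followed by (n-1) zeros, then a
-- 5 followed by (n-1) zeros; equivalently, 5 * 10^(n-1) * (10^n + 1).
-- (For n = 0 you'd need the fractional 5 * 10^(-1), as in the text, so
-- this Integer version only makes sense for n >= 1.)
sumPattern :: Integer -> (Integer, Integer)
sumPattern n = (sum [1 .. 10 ^ n], 5 * 10 ^ (n - 1) * (10 ^ n + 1))
```

Evaluating `sumPattern` at 1, 2, 3 gives the pairs (55, 55), (5050, 5050), (500500, 500500), matching the ghci session above.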

Yeah, not mathematically deep, just kinda funny.

# No Total Function from Infinite Sequences of Booleans onto the Integers

Yesterday I found out about a remarkable algorithm which can take any total predicate (a function returning true or false) on infinite sequences of bits and find an infinite sequence which satisfies it (and I implemented it). It’s a counterintuitive result: you’d think that you can’t search the (uncountably) infinite space of bit sequences in a finite amount of time, but you can.
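The algorithm is, I believe, the one from Martin Escardó’s “seemingly impossible functional programs”. Here’s a minimal Haskell sketch of it (the names `Cantor`, `cons`, `find`, `forsome` are my own):

```haskell
-- An infinite bit sequence, indexed from 0.
type Cantor = Integer -> Bool

-- Prepend one bit to a sequence.
cons :: Bool -> Cantor -> Cantor
cons b _  0 = b
cons _ xs n = xs (n - 1)

-- find p produces a sequence satisfying p whenever one exists;
-- forsome p says whether one exists.  The mutual recursion does the
-- "impossible" search: a total p only ever inspects finitely many
-- bits of its argument, so (with laziness) the recursion bottoms out.
find :: (Cantor -> Bool) -> Cantor
find p
  | forsome (p . cons True) = cons True  (find (p . cons True))
  | otherwise               = cons False (find (p . cons False))

forsome :: (Cantor -> Bool) -> Bool
forsome p = p (find p)
```

For example, with `p xs = xs 3 && not (xs 5)`, `find p` returns a sequence whose bit 3 is True and whose bit 5 is False.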

This implies some results about how much total functions can encode. For example, there must not be any total function from infinite sequences of bits onto the integers (such that any integer is reachable using a suitable infinite sequence). If there were, you could use this algorithm to decide any predicate on the integers, which would be a far-too-powerful mathematical tool.

But I couldn’t prove it without citing the algorithm. Why must there be no such function, apart from the fact that it would make the algorithm incredibly powerful? (So powerful that you could prove its impossibility that way, but that proof method was not good enough for me, because it could have been that the algorithm had an error.)

The proof was actually hiding in the proof on sigfpe’s blog, but I couldn’t see it at first. So I’ll give a more explicit version here.

Let f be a total function from sequences of bits onto the integers (f : (N → Bool) → Z; the sequences are indexed by the naturals). Let df(xs) denote the greatest index that f reads when given the argument xs. Now, for example, choose an xs such that f(xs) is 0. df(xs) is the greatest index that f reads, so f doesn’t depend on anything greater (for this argument), so every sequence with the prefix xs[0..df(xs)] has the value zero.

Now make a binary tree corresponding to the input sequence, where a node is a leaf iff the sequence leading to the node is the shortest prefix that completely determines f‘s output. Every possible extension of the sequence leading up to the leaf maps to the same value, so we informally say that the leaf itself maps to the value (even though the leaf represents a finite sequence and f takes infinite sequences).

This tree has infinitely many leaves, since there must be (at least) one leaf for each integer. And now, just as sigfpe did, invoke König’s lemma: “every finitely branching tree with infinitely many nodes has at least one infinite path”. That infinite path represents an input to f on which f will not halt. That is to say, f was not total after all!

# Reply on “Platonism”

Here’s a reply to a comment that was left about my recent post “Platonism”. The discussion is getting long and involved, so I thought I’d make a new post. And to anonymous (the poster), don’t worry about the comment length. I’m only posting anew to draw attention to the interesting discussion.

So, without further ado, the short version: For me, the n of your sentence “But n [declared via ‘there exists an n such that M halts in n steps’] is fantastic; it is not a concrete number; it cannot be written 1+1+1+1+1+1+… some finite amount of times.” needs to be a plain normal standard natural number (i.e. an element of N = { 0, S(0), S(S(0)), … }), as you’ve introduced n as giving the *number of steps*.

But in the context of that definition, we were still working in PA,
which is incapable of saying “n is a standard natural”
without giving some other kind of constraint (for example, being less
than a number written S(S(S(…S(0))))). I’ll go into more detail when
I discuss Goodstein’s Theorem below, which actually provides a nice
context for discussing nonstandard naturals.

As meta language, I’ll use a Haskell-like language which does not
have ordinals (this will become relevant in a second). As object
language, I’ll use the untyped lambda calculus. Without further ado:

```
-- I'll use Goodstein's Theorem as example.  Briefly, it states that
-- every Goodstein sequence terminates, i.e. that G(n) is finite for
-- all n in N = { 0, 1, ... }.  It is unprovable in PA, but can be
-- proved with cool ordinal arithmetic.
exampleExp :: Nat -> Exp
exampleExp n = ...lambda expression which evaluates to some value
               if G(n) is finite and bottoms otherwise...
-- eval (exampleExp n) terminates iff G(n) is finite.
```

Now my question: In my opinion, eval (exampleExp n) halts for every
n :: Nat, as we know that every Goodstein sequence is finite. But from
the meta language’s point of view, “eval (exampleExp n) does not halt
for some n :: Nat” would be consistent, as we cannot prove the
finiteness of all Goodstein sequences, as our meta language does not
have ordinals.

In short, I see the finiteness of all Goodstein sequences as an
absolute truth; I accept this truth even if we cannot prove it in some
formal systems.

First things first, the problems I was talking about are
fundamentally different from this question in that the halting problem
is not solvable by any formal system (assuming Church’s thesis).
But there’s still a lot of interesting stuff to be reaped from your
question.

Just because your meta language doesn’t have ordinals doesn’t mean
Goodstein’s theorem is not provable (well, unless Goodstein’s theorem
gives rise to ordinals :-). But what we can say is that if Goodstein’s
theorem is unprovable in your meta language, then it is consistent with
nonstandard naturals; i.e. Nat might denote more than just the set of
things in { 0, 1, 2, … }.

We can see this by looking at the negation of Goodstein’s theorem,
which looks like “exists n. forall k. not Goodstein(n,k)” (where
Goodstein(n,k) means the Goodstein sequence starting at n reaches 0 in k
steps). Let’s call this sentence NG, and your meta language S. If S
does not prove Goodstein’s theorem, then S+NG is consistent (if not,
that would be a proof). By the completeness theorem, then, S+NG has a
model in ZFC. And what the heck is n from “exists n. …” in that
model? It had better not be one of { 0, 1, 2, … }. So the model of
Nat in S+NG can’t be isomorphic to the set N from set theory; i.e. S+NG
actually asserts the existence of nonstandard naturals.

It is useless to think of the nonstandard naturals as actually
existing, though. n is merely a figment of the finite reasoning
available to the proofs in S. If proofs could be infinitely long, then
NG would be inconsistent with S (as well as a myriad of other things,
such as PA being complete (and thus inconsistent? I don’t know if the
incompleteness theorem applies if proofs are allowed to be
infinite)).

So now the difference between the halting problem and Goodstein’s
theorem: PA does not “know” that Goodstein’s theorem applies to it, but
ZFC knows. That is, assuming ZFC is consistent, then Goodstein’s
theorem about PA is true. But ZFC does not know everything about PA.
I seem to recall a game-theoretic theorem in PA which employed a large cardinal
axiom. But it’s pretty easy to show that there are statements in PA
that ZFC cannot prove true or false about its model of PA¹. ZFC also has its own bundle of undecidable issues, the continuum hypothesis to name one. But we can use a bigger system that models ZFC and possibly prove the continuum hypothesis one way or the other.

And now I just had an epiphany… I was going to say “you can keep
expanding the system up and making more and more abstract axioms, but
there is a single Turing machine which evades them all”. That is not
true though. Obviously, you can add axioms “such-and-such Turing
machine does not ever halt”, and if it actually doesn’t (Platonically
speaking), then of course that is consistent. Then it seems possible
that we may find more and more “obvious truths” about mathematics, and
those will start proving that more and more Turing machines don’t halt.
For any given Turing machine, we may eventually find a “natural” axiom
which proves it doesn’t halt. If that were the case, it would mean that
the set of axioms about the universe is not only infinite, but
non-recursive. Knowing the universe, that seems reasonable :-).

If not, though, then there is a Turing machine which evades all of
our reasoning. This also seems reasonable, since there are machines as
small as 5 states and 2 symbols whose halting problems are yet unknown,
for example:

```
State    Read 0    Read 1
-------------------------
a        1 L b     1 R b
b        1 R c     0 L e
c        0 R d     0 L a
d        1 L a     0 R d
e        -halt-    0 L c
```

I hope that cleared some of the blocks making you believe that this area is in any way clear :-).

To answer another question, nonstandard naturals have very little to do with the ordinals from
set theory. All countable nonstandard models of arithmetic have the same order type, so we know that the
nonstandards we got from negating Goodstein’s theorem are ordered just like the ones we get by a standard application
of the compactness theorem. There’s
no least nonstandard natural (as ω is to the ordinals); rather, the nonstandard part looks like the lexicographic
ordering of Q × Z: for each rational number there is a copy of the integers (both positive
and negative). Of course they are all greater than the standard naturals. They satisfy
every first-order sentence provable by PA. A classic example: in a model of all the first-order truths of
arithmetic, the twin prime conjecture
is true if and only if the model contains a nonstandard twin prime, because the twin prime conjecture is
stated as “forall a. exists b > a. ‘b and (b+2) are prime’”, which,
if true, must hold for nonstandard a’s as well.

Funny coincidence, a program computing the Goodstein sequence of a number was one of the
first Haskell programs I ever wrote. :-)
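For the curious, here’s roughly what such a program looks like (a fresh sketch, not my original; the names `digits`, `bump`, `goodstein` are mine):

```haskell
-- digits b n: the base-b digits of n as (exponent, coefficient) pairs.
digits :: Integer -> Integer -> [(Integer, Integer)]
digits b n = [ (e, c) | (e, c) <- zip [0 ..] (toBase n), c /= 0 ]
  where toBase 0 = []
        toBase m = m `mod` b : toBase (m `div` b)

-- Rewrite n from hereditary base b to hereditary base b' (exponents
-- are rewritten recursively, which is what "hereditary" means).
bump :: Integer -> Integer -> Integer -> Integer
bump _ _  0 = 0
bump b b' n = sum [ c * b' ^ bump b b' e | (e, c) <- digits b n ]

-- The Goodstein sequence starting at n: bump the base, subtract one.
goodstein :: Integer -> [Integer]
goodstein = go 2
  where go _ 0 = [0]
        go b m = m : go (b + 1) (bump b (b + 1) m - 1)
```

`goodstein 3` comes out as [3,3,3,2,1,0]. `goodstein 4` terminates too, but only after an astronomically long time, so don’t evaluate it.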

¹ Since PA is Turing complete, it can embed the axioms of
ZFC as Gödel numbers. You can thus construct a statement of PA
that says “ZFC is consistent”. And of course, because of the
incompleteness theorem, ZFC cannot prove this statement true or
false.

# Platonism

Studying all this decidability theory is fundamentally changing my view of the universe. In particular, I am losing my Platonism.

It all begins with the halting problem, which isn’t so much a problem anymore as a result. It states that there is no computer program which can tell whether another computer program will ever finish. The proof is so remarkably simple that I will paraphrase here:

Suppose there were such a program. Call it infinitely_loops(prog, input). It uses a magical complicated procedure to analyze prog and determine whether, when input is fed as the input to prog, prog will get into an infinite loop or not. Now let’s write a new program called gah, whose input, input, is the source code for a program:

```
// include source code for infinitely_loops here so we can use it
if (infinitely_loops(input, input)) {
    exit();            // (a) finish immediately
} else {
    while (true) { }   // (b) infinite loop
}
```

Just to be clear: gah takes a program as input and then asks whether that program, when fed its own source code as input, gets into an infinite loop. If it does, then gah exits immediately. If it doesn’t, then gah itself gets into an infinite loop.

Finally, we ask the question: what is the value of infinitely_loops(gah, gah)? That is, does gah get into an infinite loop when you feed it its own source code? Well, there are two possibilities:

1. It does get into an infinite loop. But then look at the if statement above. We gave gah itself as input, so input = gah. Substituting, the if statement reads if (infinitely_loops(gah, gah)), which is what we’re asking about. The condition would be true, in which case our program would take branch (a) and exit immediately. But that’s no infinite loop, that’s definitely finishing. So there’s a contradiction, which leaves only the remaining alternative:
2. It does not get into an infinite loop. But in that case, in the if statement above, our program would have taken branch (b), so it would have entered the infinite loop we wrote there. Another contradiction!

So, since gah can’t get into an infinite loop, and can’t avoid getting into one either, we have a contradiction, so our only assumption, that the program infinitely_loops exists, must have been false. There is no such program!

Okay, that was a little longer than I remember it. Still, the logic is simple and there is nothing magical about it.

I understood the halting problem even when I was in high school. I understood it, but I did not grok it. I thought it was totally neat that, after being introduced to programming and feeling like I could do anything, there were computer programs which were impossible to write. Well, of course there are such programs: you can’t write a program which will tell you what will happen tomorrow. But this was a program which you could ask a real question to, a question which had an absolute, definite yes or no answer, and the program would not be able to answer it. That’s cool, isn’t it? It kinda gives meaning to being human, a kind of irreplaceability, right?

Well, that’s what I thought at the time. I continued in college to study set theory and decidability theory, where I learned about Gödel’s incompleteness theorem. Unfortunately, that proof is not simple enough to repeat here :-). His theorem says that no matter how many assumptions you make, there will always be a statement in math which you cannot prove true or false. I’m not going to go into detail, but essentially he wrote down a logical statement which meant “such-and-such computer program eventually finishes”.

Keep in mind that it’s very easy to write a computer program which will eventually write down everything that is provable (careful with the precedence there: it runs forever, writing down statements; if you give me any provable statement, it will eventually write it down in its sequence). A proof is just a sequence of letters, and a computer is capable of checking whether a proof is valid. So just enumerate proofs one by one (in practice this is completely infeasible, but we’re theorists!), check whether each is a valid proof, and if so, write down what it proved.

So if you could prove Gödel’s statement for any machine, then you could easily write a program which decided whether other programs infinite loop or not. But we just showed that that’s impossible, so it must also be impossible to prove Gödel’s statement.
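That theorem-enumerator is a few lines of Haskell. Here `checkProof` is a stand-in; any computable validity checker will do, and for demonstration I use a toy one in which “proofs” are palindromes, just to show the shortest-first enumeration:

```haskell
import Control.Monad (replicateM)

-- Every string over an alphabet, shortest first: "", "a", ..., "aa", ...
allStrings :: [Char] -> [String]
allStrings alphabet = concatMap (\n -> replicateM n alphabet) [0 ..]

-- Enumerate everything provable: try each candidate proof in turn, and
-- whenever the checker says it proves a statement, write that down.
theorems :: [Char] -> (String -> Maybe String) -> [String]
theorems alphabet checkProof =
  [ thm | candidate <- allStrings alphabet
        , Just thm <- [checkProof candidate] ]
```

With the toy checker `\s -> if s == reverse s && not (null s) then Just s else Nothing`, `take 4 (theorems "abc" check)` gives `["a","b","c","aa"]`.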

This is when my mind started collapsing. The halting problem is not about what computers can do. It talks about all of mathematics. It means that determining whether computer programs infinite loop is a mathematically impossible problem, for both humans and computers. There are computer programs which the greatest mathematicians and programmers in the world could look at for years and never be able to determine whether or not it ends, without just running it (and if you run it, then you can wait until it ends and say “yes! it ends”; but if it doesn’t, you can wait forever until you die, never knowing whether it’s going to end or not).

Well, maybe it’s a problem in our reasoning skills. Maybe our logic is consistent, but it’s missing something that would conclude these things. Well, it turns out no. Gödel, the genius he was, also proved that if in every logically consistent world (“world” defined loosely, “mathematical system” is more accurate) a statement is true, then our proof system will be able to prove it. So then the halting problem becomes even more mind-boggling: there is some phantom statement we could add to logic that would allow us to prove that the machine does stop, and another phantom statement we could add that would allow us to prove that it does not. We hope (though, due to Gödel again, cannot prove) that our logic is consistent and represents the structure of information in the world. So how can whether this machine stops or not depend on which logic we use? It’s a fact! It either stops or it doesn’t!

I originally wrote about twice as much as you see here, but I realized that I wasn’t saying much, just going in circles. Essentially the whole idea of facts is vanishing from my world model. There are things that are true, and there are things that are false, and then there is fantasy. I don’t think our logic has holes, things that are true but it can’t prove given the proper assumption. It’s more like we’re asking questions about whether something will happen in the future, and in the case that it will, we will know only when it does. Notice that I didn’t consider the case that it won’t… because that case doesn’t even make sense :-).

# Turing Complete = Incomplete

I’ve talked about this a little bit before, but I’ll start over, since I have a new understanding of the problem now.

Gödel’s famous incompleteness theorem says that “any consistent, recursive set of axioms which can emulate arithmetic cannot be complete”. He showed this by essentially building a Turing machine (well, something equivalent to one) inside the logical proof system, and showing that there was a halting problem in that system.

I’ve been wondering this: You can show a system is incomplete by embedding arithmetic in it, so what other types of incomplete systems are there? Can you have an incomplete system which does not have arithmetic? Is it weaker than an arithmetic system (in which case, is there a minimal embedding that will get you to an incomplete system?), or just different?

Now I should take a moment to define my terms, because I’ve been using incomplete rather loosely. Incomplete means that there is a statement φ for which neither φ nor ¬φ is provable. The empty set of axioms is incomplete. That is of course not what I meant.

Definition: A consistent set of axioms A is called strongly incomplete if there is no recursive set of axioms R such that A ∪ R is complete.

Definition: A consistent set of axioms A is called recursively completable or weakly incomplete if it is not strongly incomplete, i.e. if there is a recursive set of axioms R such that A ∪ R is complete.

So above when I said “incomplete”, I meant “strongly incomplete”. I have a conjecture:

Conjecture: Every strongly incomplete set of axioms is capable of embedding arithmetic.

That would mean, in essence, that the only way you can be strongly incomplete is by being Turing complete. Intuitively I believe this conjecture, if only because of the appearance of the word “recursive” in the definition of strongly incomplete.

I have started to approach a proof, though I admit it’s somewhat out of my league. Still, it’s fun to explore. The best way to go about it I’ve found so far is to find a Turing complete system such as arithmetic, lambda calculus, group algebra, or diophantine equations and show that if a system is incapable of emulating that system, then it must be recursively completable.

You start with a set of axioms A and say, for example, for all wffs N(x), Z(x), S(x,y), the sentence ¬“N(x) means x is a number, Z(x) means x = 0, and S(x,y) means y = x + 1” is consistent with A. The bit in quotes actually expands to a conjunction of four reasonably complex formulas, so its negation turns into a disjunction. Disjunctions are not very nice to work with from a proof standpoint. But it’s still a very powerful assumption, since this is for all wffs (with the desired number of free variables), so we can pick and choose ones that are convenient for our goal. We can also assume that the negation is actually provable in A, because if it’s just consistent, why not add it to the set R we are using to complete A? We can actually add all of these negations to R (for all appropriate wffs) right off the bat, because the set is recursive.

As for how to show the rest of the completion, I’m stumped so far. I need some time to explore this system a bit; I’ve gone so far as to write down the expression of the axioms and negate them, playing around with different representations, but I haven’t really tried to get new consequences from them yet.

# Typesetting Fractals

You don’t need any fancy graphics plotter to draw fractals. You can just use Mathematica’s (fancy-schmancy) typesetter. I entered the following into Mathematica:

And I got this.

# Ladder Scoring

I was thinking about game ladder scoring recently. I recall being frustrated at Supreme Commander for putting me lower on the ladder than someone I’ve beaten. I want a ladder algorithm that won’t allow that.

In particular, I want a ladder function l: (P × P → Z) × P → R which assigns to each player a number and satisfies the following properties [P is the set of players, Z is the integers, R is the reals; the first argument to l is the set of games played: given two players p and q, it returns how many times p has defeated q]:

1. If g(p,q) > g(q,p) then l(g,p) > l(g,q) (if I’ve beaten you more times than you’ve beaten me, then I have a higher ladder score than you).
2. If for all players q, g(p,q) = h(p,q) and g(q,p) = h(q,p), then l(g,p) = l(h,p) (your score doesn’t change if you haven’t played any games).

Of course, property #1 already causes problems. Let’s say p has defeated q once, q has defeated r once, and r has defeated p once; the vice-versas are zero. Then l(g,p) > l(g,q) > l(g,r) > l(g,p), which is impossible. I really had no right to be mad at Supreme Commander for ranking someone I had defeated above me; it’s impossible to avoid.

That doesn’t solve the problem though. Saying it’s unsolvable isn’t a solution… we’ve got to find the best solvable solution.

I could relax the constraint in rule #1 to be ≥ rather than >, but then the problem becomes too solvable. A plethora of trivial solutions emerges: l(g,p) = 0 for all p, l(g,p) = 0 for all p but the one person who has beaten every player more times than he has been beaten by them, etc. We need to add another property to get rid of these trivial solutions.

The ≥ constraint forces every player in a cycle to be assigned the same number. We can’t get around that. So maybe we can try to say that if you’re not in a cycle with a player, then your score is distinct from his.

I knew I put that second property in there for a reason. That, too, is impossible. Picture two cycles of size, say, 50. Pick one element p from the first cycle, and q from the second. Can you change a bunch of arrows, as long as you don’t change any arrows to or from p or q, to make the two cycles one big cycle? Of course you can. By our new axiom, p’s and q’s scores were distinct, but now they’re in a cycle together, so one of them must have changed. But neither p nor q needed to play any games for that to happen. This violates our second property.

So basically you can’t even get close to the two properties I stated above. The fact is that player rankings are nothing like real numbers, so don’t try to force them in.

You can satisfy the second property without the first by summing over all players q the function f(p,q) where f is defined by:

f(p,q) = 1 if g(p,q) > g(q,p), -1 if g(p,q) < g(q,p), and 0 otherwise.

Which has the nice property that you can have many rematches without taking advantage of the scoring. But, of course, it doesn’t have the other nice properties that I want…
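In Haskell, that f-summing score is a one-liner (g is the game-count function from above; the other names are mine):

```haskell
-- f(p,q) from above: +1 if p has beaten q more often than vice versa,
-- -1 if less often, 0 if tied.  A player's ladder score sums f(p,q)
-- over all opponents q.
score :: Eq p => (p -> p -> Int) -> [p] -> p -> Int
score g players p = sum [ signum (g p q - g q p) | q <- players, q /= p ]
```

In the three-player cycle from earlier (p beats q, q beats r, r beats p, once each) every player’s score is 0, as the ≥ constraint forces.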

# Incompleteness Razor

I’ve been wondering about “levels of incompleteness”. There might be some cool decidability theory hiding in here. I started the exploration with a few definitions (and haven’t gotten very far):

A recursive set of sentences A is called weakly incomplete if A is incomplete and there is a recursive set of sentences S ⊇ A which is complete.

A recursive set of sentences is called strongly incomplete if it is incomplete and not weakly incomplete.

We know that number theory (+,*,exp) is strongly incomplete by these definitions; i.e. it is incomplete, and if you try to complete it, its incompleteness will just pop up somewhere else. This is Gödel’s incompleteness theorem. The question is the converse: is every strongly incomplete set of sentences strong enough to contain number theory (Turing complete)?

Here is an easy theorem:

Theorem: all models of a strongly incomplete set of sentences are infinite.

Proof: Let A be a strongly incomplete set. Let n be the number of finite models of A; note that n is finite. Let φ be a sentence independent of A. There is a model of A in which φ holds and a model in which ¬φ holds. Pick one according to your favorite football team. Add φ or ¬φ to A, and note that n has just decreased by at least one. Repeat until one of these holds:

1. There is exactly one finite model of A. By the completeness theorem, A is now complete. However, the above was a finite process, so A is still recursive, so it was actually weakly incomplete.
2. There are no finite models of A. The last φ or ¬φ we chose must not have been independent of A; in fact we just proved its negation. So this case is impossible. If there were no finite models to start with, then we’re done because we were trying to prove that.

EDIT This theorem is invalid. When I say “note that n is finite”, I have made an error. There needn’t be finitely many finite models. I conjecture that the theorem is still true, but I have to find a different proof.

I could have sworn that when I went on a walk today to think about this stuff, I had a proof that if every strongly incomplete set embeds number theory then there is a decision procedure for determining whether a sentence is independent of a weakly incomplete set. I can’t remember it now!

# The Real Infinity, For Reals

This is the third article in a series called Before I Forget, an exposition for the semi-layman of freaky high-level mathematics. Last time we talked about cardinal numbers: numbers which are used for counting the elements in very large sets. This time I’ll use the compactness theorem to introduce an infinity that you can use in ordinary arithmetic to get ordinary results. Remember limits from calculus? Turns out you don’t need ‘em to do calculus. We’ll see how this time.

This is the first article that will dive into the details of first-order logic. I’m not going to spend the time here to describe first-order logic: it is a deep subject with a rich vocabulary and lots of little nooks and crannies where faulty reasoning can hide. However, I highly recommend that everyone learn its principles and how to use it. It will change the way you think; it will endow you with the ability to approach certain situations with extreme precision, and to clear away bullshit that many authors throw at you. See the wikipedia page, including the external links section, if you’d like to know more. It’s not technically required to understand this article, but it is required to grok this article.

### Sexy mathematical models

I will begin by presenting the difference between a model and a set of sentences. Okay, so we have the language of first-order logic endowed with one extra symbol: S (successor). We will use this to represent the natural numbers. The existence of the number 0 is expressed by the following sentence: ∃x ∀y S(y) ≠ x (there is an x which is not the successor of any number) (remember we’re in the natural numbers, there is no -1). So when we say 0, we really mean to introduce a new variable and that sentence to identify our 0. The important thing is that we can express it, though. And now we may represent any natural number using successor and 0. 1 = S(0), 2 = S(S(0)), 3 = S(S(S(0))), etc.
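The numerals 0, S(0), S(S(0)), … are exactly an inductive data type; here’s a throwaway Haskell rendering (the names are mine):

```haskell
-- The standard naturals, built from zero and successor, just like the
-- terms 0, S(0), S(S(0)), ... in the logic.
data Nat = Zero | S Nat deriving (Eq, Show)

-- Interpret a numeral in the intended model.
toInteger' :: Nat -> Integer
toInteger' Zero  = 0
toInteger' (S n) = 1 + toInteger' n
```

For example, `toInteger' (S (S (S Zero)))` is 3, just as S(S(S(0))) denotes 3.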

# The Lesser of Infinitely Many Evils

This is the second post in a series called Before I Forget, which is an exposition for the semi-layman of some freaky high-level mathematics (mostly set theory). This episode is still about infinity, but covers the Cardinal numbers.

### Last Time

Last time I introduced the ordinal numbers, specifically the ordinal ω—the least number greater than all the integers—and things derived from it (like ω × 2, and ω^ω^ω^… an infinite number of times… (ω times, actually :-)). Today we’ll be exploring a parallel concept to the ordinal numbers called the cardinal numbers. Ordinals are used to index positions: the 1st position, 2nd position, ωth position; cardinals are used to measure sizes.

### Well-ordering

Okay, let’s step out of the context we’ve been working in for just a little while. Numbers are now the numbers you are familiar with: not just the nonnegative integers, but also negatives, fractions, and irrationals. Here is an exercise: come up with a set of numbers which has no least member. This should be pretty easy, but it should be obvious that the set must be infinite.

Got one yet? I want you to come up with one before you continue reading.

One of the most obvious examples is the set of all numbers greater than (but not equal to) 0. How do you know it doesn’t have a least member? Well, suppose it did, and call it x. We know that x > 0, because it was in our set. Well, if x > 0, then x/2 > 0 also, so x/2 is in our set. But of course x/2 < x, so it turns out that x wasn’t the least member after all (but we chose x specifically to be the least member, so this is a contradiction). Our supposition that our set had a least member must have been flawed, and we have thus proved that our set has no least member.

Another example is the set of all integers (positive and negative). It extends infinitely in the negative direction, so it can’t have a least member. (Exercise for the reader: prove formally that the set of all integers has no least member, like we did for the real numbers greater than 0 in the last paragraph.) There are tons of sets with no least member, and yours could have been quite different from my two examples.

Here’s another challenge: come up with a (nonempty) set of positive integers that has no least member. Got one? No, come on, come up with one. I dare you. Don’t continue reading until you have one.

Yo mama’s fat.