When we’re talking about game design, Namaste usually comes from the perspective that you can make a computer do anything. I’m usually of the perspective that there’s a lot of stuff that isn’t practical or isn’t possible to make a computer do (I’m usually too liberal in handing out this “impossibility” trait, though). There is a pragmatic reason for this difference of perspective: when I set out to solve a problem, I want the problem *solved*; that is, my algorithm will work in all cases and will always finish. But in game design, a lot of the time you don’t need to completely solve a problem for it to work. For example, in Minesweeper Infinity, when we were discussing how to approach the solver, Namaste kept coming up with the idea for a pattern recognition engine. Like, when you see 1,2,1 along a wall, you always know that there are mines under the two 1s, and not under the 2. The idea was that you could solve most boards by building up a collection of such patterns. But an algorithm that uses that approach has two output values: either “it’s solvable”, or “it might not be solvable”; but I was looking for an algorithm that would say either “it’s solvable” or “it’s not solvable”. I did eventually come up with my algorithm, but it degenerated into the “it might not be solvable” case again when we put a “bailout” parameter on it in order to make it run faster.
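The pattern-engine idea can be sketched in a few lines. This is a toy illustration with invented names, not the actual Minesweeper Infinity code: it scans a row of revealed clues along a wall for 1,2,1 and reports what that proves. Note the one-sidedness: it can only ever say "these cells are known", never "this board is unsolvable".

```python
# Toy sketch of a pattern-recognition rule (hypothetical helper, not
# real game code): find the 1-2-1 pattern in a row of revealed clue
# numbers along a wall. The covered cells just behind the wall over
# the two 1s must be mines, and the cell over the 2 must be safe.
def match_121(row):
    """Given clue numbers along a wall, return deductions as
    (index, 'mine'|'safe') for the covered cells behind the wall."""
    deductions = []
    for i in range(len(row) - 2):
        if row[i:i + 3] == [1, 2, 1]:
            deductions += [(i, 'mine'), (i + 1, 'safe'), (i + 2, 'mine')]
    return deductions

print(match_121([1, 2, 1, 0, 0]))  # -> [(0, 'mine'), (1, 'safe'), (2, 'mine')]
```

A whole library of such rules gives you exactly the two-valued answer described above: either the rules fire ("it's solvable") or they don't ("it might not be solvable").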

But I digress. What I really want to share with you today is the concept of the busy beaver function. This is a function which is perfectly well-defined as far as its semantics, but is impossible to write as a computer program. We begin with the concept of a Turing machine, a concept with which most computer scientists are familiar. A quick refresher: a two-symbol Turing machine has an infinitely long tape of bits on which it works. It has a set of states, and for each state it has an action for each symbol. That is, if it’s in state 4 and it is currently looking at a 1, it has instructions about what to do next. These instructions are simply which symbol to write on top of the 1 (could be a 0, or it could be a 1 again), then which direction to move the head (one cell to the left or one to the right), then which state to go to next. If a Turing machine ever goes into, say, state 0, then it stops running and says “I’m done”. The Church–Turing thesis states that this small amount of machinery is enough to compute anything that is possible to compute, and it has yet to be disproven (all modern computers are equivalent to a Turing machine with a finite tape).
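That refresher fits in a few lines of code. Here is a minimal sketch (my own naming conventions, not any standard library): a machine is a table mapping (state, symbol) to (symbol to write, direction to move, next state), with state 0 meaning “I’m done”. The example table is the known 2-state champion, which halts after 6 steps leaving 4 ones.

```python
# Minimal two-symbol Turing machine simulator. A machine is a table
# mapping (state, symbol) -> (write, move, next state); state 0 halts.
from collections import defaultdict

def simulate(table, start_state=1, max_steps=10_000):
    """Run a machine on an all-zero tape; return (#1s, steps) at halt."""
    tape = defaultdict(int)            # infinitely long tape of bits
    pos, state, steps = 0, start_state, 0
    while state != 0 and steps < max_steps:
        write, move, nxt = table[(state, tape[pos])]
        tape[pos] = write              # write a symbol on top of the old one
        pos += move                    # move one cell left (-1) or right (+1)
        state = nxt                    # go to the next state
        steps += 1
    return sum(tape.values()), steps

# The 2-state "champion": halts in 6 steps with 4 ones on the tape.
champion = {
    (1, 0): (1, +1, 2), (1, 1): (1, -1, 2),
    (2, 0): (1, -1, 1), (2, 1): (1, +1, 0),   # next state 0 = halt
}
print(simulate(champion))  # -> (4, 6)
```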

The busy beaver function Σ(n) looks through all Turing machines with n states (there are finitely many of them), runs each of them on an all-zero tape, and, of the ones that eventually stop running, picks the one that leaves the largest number of 1s written on its tape. Σ(n) is that number of 1s. That’s pretty easy, right? I mean, once you’ve played with a few Turing machines, it seems like you would be able to “compute” the value of this function. In fact, we know some values of this function, namely Σ(2) = 4, Σ(3) = 6, Σ(4) = 13.
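For n = 2 you can even carry out this search by brute force. The sketch below enumerates every 2-state, 2-symbol machine and runs each with a step cap. In general a step cap only yields a lower bound (you can't tell slow machines from non-halting ones), but for n = 2 the longest halting run is known to be 6 steps, so a cap of 100 is safe and the search really does find Σ(2) = 4.

```python
# Brute-force Sigma(2): enumerate all 2-state, 2-symbol Turing machines,
# run each on an all-zero tape with a step cap, and keep the largest
# number of 1s left by any machine that halts.
from itertools import product
from collections import defaultdict

N = 2                  # number of non-halt states
STEP_CAP = 100         # generous: the 2-state champion halts in 6 steps

def run(table, cap):
    """Return the number of 1s on the tape if the machine halts, else None."""
    tape = defaultdict(int)
    pos, state = 0, 1                      # start in state 1; state 0 = halt
    for _ in range(cap):
        if state == 0:
            return sum(tape.values())
        write, move, nxt = table[(state, tape[pos])]
        tape[pos] = write
        pos += move
        state = nxt
    return sum(tape.values()) if state == 0 else None

# Each table entry is (write in {0,1}, move in {-1,+1}, next state in 0..N).
entries = list(product((0, 1), (-1, 1), range(N + 1)))
keys = [(s, b) for s in range(1, N + 1) for b in (0, 1)]

best = 0
for choice in product(entries, repeat=len(keys)):   # 12^4 = 20,736 machines
    ones = run(dict(zip(keys, choice)), STEP_CAP)
    if ones is not None:
        best = max(best, ones)

print(best)  # -> 4, i.e. Sigma(2) = 4
```

Already at n = 4 this kind of search needed serious cleverness, and at n = 5 it becomes a research problem.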

But you will never write a computer program that says “What value of Σ would you like to compute?”, then you enter a number, then it runs for some time (could be billions and billions of years) and says “here is the value you were looking for”. Even with unbounded time and memory, you cannot write such a program. (A complex proof is given on the Wikipedia page, followed by a very simple one in the last paragraph of the proof section; see the very simple one. :-)

Even more interestingly, this function grows more quickly than any function you can compute. So not only can you not write a program which computes the function, you can’t even write one that computes an upper bound on it. For example, there is some n for which Σ(n) > n^A(g64, g64), where A is the Ackermann function and g64 is Graham’s number (do read some Wikipedia articles and try to grok the size of that exponent!).

Oh, I forgot to mention: nobody knows Σ(5) yet, but we know it is at least 4,098. Also, nobody knows Σ(6), but it is at least 10^865 (now you see what I mean about it growing quickly?).

Yay, a post I understand! That’s really cool. Although my brain wants to explode in attempting to comprehend n^A(g64, g64).

So, okay, impossible to calculate the upper bounds. What are the methods for computing the lower bounds (such as for Σ(5) and Σ(6))? Is it just a sort of guess-and-check thing?

Well, you could easily write a computer program to find lower bounds. Just start the contest (i.e. run all the n-state turing machines in parallel), and each time one finishes, write down the number of 1s it recorded. This lower bound will get bigger and bigger. Then eventually, when you hit the value of the function, it will stop getting bigger. But you have no way to know that it has stopped, so you just keep checking, never making any progress.
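That contest can actually be sketched for n = 2, where we happen to know it settles down (every halting 2-state machine stops within 6 steps). This is a toy illustration with invented names; the arbitrary step budget is only there so the loop ends, since the real procedure has no way to know when to stop.

```python
# Dovetail all 2-state, 2-symbol Turing machines one step at a time,
# reporting the running lower bound on Sigma(2) whenever a machine
# halts and beats the current record.
from itertools import product
from collections import defaultdict

entries = list(product((0, 1), (-1, 1), range(3)))      # (write, move, next)
keys = [(s, b) for s in (1, 2) for b in (0, 1)]
machines = [{'table': dict(zip(keys, c)), 'tape': defaultdict(int),
             'pos': 0, 'state': 1} for c in product(entries, repeat=4)]

lower_bound = 0
for step in range(50):                                  # arbitrary budget
    for m in machines:
        if m['state'] == 0:
            continue                                    # already halted
        write, move, nxt = m['table'][(m['state'], m['tape'][m['pos']])]
        m['tape'][m['pos']] = write
        m['pos'] += move
        m['state'] = nxt
        if m['state'] == 0:                             # just halted
            ones = sum(m['tape'].values())
            if ones > lower_bound:
                lower_bound = ones
                print(f"after step {step}: lower bound is now {ones}")
```

The bound climbs to 4 early on and then never moves again, but nothing in the loop can tell you it is finished.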

In reality, mathematicians are the folks who do these things. No computer anywhere has 10^865 cycles to burn (actually the number of state changes for the 10^865 machine was something like 10^1500; yowch!), so this method was certainly not used for that. Um, clever tricks, perhaps? Probably something to do with the Ackermann function. (That function grows faster than any primitive recursive function, so it can serve as an upper bound on the amount of time a primitive recursive function will take to execute. It tends to be a pretty bad estimator, though: e.g. “the binary addition function will finish in no more than 2^65536 − 3 cycles”.)
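For reference, here is the Ackermann function in its common two-argument (Ackermann–Péter) form. It grows absurdly fast: A(4, 2) = 2^65536 − 3, the very number quoted above, already has 19,729 digits, so the demo sticks to tiny arguments.

```python
# The two-argument Ackermann-Peter function. Memoized, since the naive
# recursion revisits the same small arguments many times.
from functools import lru_cache

@lru_cache(maxsize=None)
def A(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

print(A(2, 3))  # -> 9   (A(2, n) = 2n + 3)
print(A(3, 3))  # -> 61  (A(3, n) = 2^(n+3) - 3)
```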