Monthly Archives: September 2007

Bad Habits

Going off to live in a new place often causes you to redefine yourself as a person: to break down old bad habits and replace them with new, more productive patterns. Then when you come back, the old bad habits kick right back in again and all that work you put in to redefine yourself is put aside.

Well, I’ve never lived anywhere but here in Boulder (I lived for a year in Maryland when I was 5, but that doesn’t really count). But I figuratively went off to a new place when I started CU, and then again when I went to work at NetDevil. You could see traces of a hard-working, responsible person coming out. In school, I went to class all the time and worked hard; during work, I went every day and did my job well (but didn’t want to work any more than I already was, which is why I left).

Now I’ve come back; I am not enrolled in school and I’m unemployed. I have loads of spare time on my hands again. And just as expected, I’ve rebuilt my old bad habits…

… For some definition of “bad”. Now I have a chance to look back on the last few years. What have I done that I’m proud of?

  1. Got good grades in school.
  2. Made two or three computer games.
  3. Wrote two songs.
  4. Started a band.

#4 I did after I took my scholastic hiatus but before I was employed. But I don’t want to say “it doesn’t count”, because it still could have been the school environment that influenced me to do that. Regardless, this all pales in comparison to my old, bad habits:

  1. Got bad grades in school.
  2. Made one computer game per month, approximately.
  3. Wrote two songs per month.
  4. Learned a new challenging piano piece every month.
  5. Became well-known as a contributor in the Perl community.

Looking back, my bad habit of learning is much faster than my new, responsible habit of learning, because the bad habit is to learn by myself at my own level and speed, while the new habit was to learn at the rate others were teaching me.

In summary, I like my bad habits much better. I enjoy myself more, I’m more satisfied with my creative output, and I’m always well-rested. While a student or employed, I’m usually fatigued from working as well as being ill-rested (as a byproduct of trying to be creative), and my creative output is pitifully small. Over the past week after my return from my vacation (which I took promptly after quitting NetDevil), I saw my old rate of creativity completely revitalized, more than it has been in years.

At some point I learned not to waste time: to use idle time and energy to learn and to create. And I love that about myself. That’s what allows me to become excellent at whatever I do; “hard work” is not what does it. The mind works best when it is free to experiment—without deadlines and without stress.

But I have a bit of an internal conflict—conflicting fears even. I fear that if I stay with my bad habits then I will never “get anywhere”. My goal was to get a PhD and become a professor. But I also fear that as a PhD student and as a professor—or whatever I end up doing—I would keep my responsible habits and be dissatisfied with my creative work.

I think I can disarm the first one, though. What is “getting somewhere”? There are a lot of professors who make major contributions to their fields, but there are a lot of professors who don’t. Someone in the math department once commented that for the majority of math dissertations, the number of readers is roughly proportional to the number of authors. The other part of “getting somewhere” is making money, but my skills (which, mind, I have developed thus far on my bad habits) can definitely do that for me when I need it.

As far as I can tell, the only thing I am missing by not taking the school route is colleagues, who can indeed help me learn faster and bring me to more advanced levels. But it’s not that I can’t find such people elsewhere; I certainly had no trouble in the Perl community. I think I will still contribute to mathematics and computer science even if I stop formal education right now.

It looks like I just argued to myself that there’s no need for school anymore. That’s a frightening prospect because it’s been on my radar to go to graduate school for so long. In particular, Karlin’s parents have been highly influential in my life and I respect them very much, and they are quite keen on my finishing a bachelor’s and pursuing a graduate degree. As far as I have convinced myself, though, I would actually cripple myself by going back to school and not taking things at my own pace. This may be the time to devise a nonstandard lifestyle on which I can support myself and have the freedom I need. I have been considering starting a rehearsal studio business…

Dilemmas aside, I’m very happy that my bad habits have returned to me.

Reply to Platonism

Here’s a reply to a comment
that was left about my recent post “Platonism”. The discussion is
getting long and involved, so I thought I’d make a new post. And to
anonymous (the poster), don’t worry about the comment length. I’m only
posting anew to draw attention to the interesting discussion.

So, without further ado, the short version: For me, the n of your
sentence “But n [declared via ‘there exists an n such that M halts in n
steps’] is fantastic; it is not a concrete number; it cannot be written
1+1+1+1+1+1+… some finite amount of times.” needs to be a plain
normal standard natural number (i.e. an element of N = { 0, S(0), S(S(0)),
… }), as you’ve introduced n as giving the /number of steps/.

But in the context of that definition, we were still working in PA,
which is incapable of saying “n is a standard natural”
without giving some other kind of constraint (for example, being less
than a number written S(S(S(…S(0))))). I’ll go into more detail when
I discuss Goodstein’s Theorem below, which actually provides a nice
context for discussing nonstandard naturals.

As meta language, I’ll use a Haskell-like language which does not
have ordinals (this will become relevant in a second). As object
language, I’ll use the untyped lambda calculus. Without further
ado:

-- I'll use Goodstein's Theorem as an example.  Briefly, it states
-- that every Goodstein sequence terminates, i.e. that G(n) is finite
-- for all n in N = { 0, 1, ... }.  It is unprovable in PA, but can be
-- proved with cool ordinal arithmetic.
exampleExp :: Nat -> Exp
exampleExp n = ...lambda expression which evaluates to some value if
  G(n) is finite and bottoms otherwise...
-- eval (exampleExp n) terminates iff G(n) is finite.

Now my question: In my opinion, eval (exampleExp n) halts for every
n::Nat, as we know that every Goodstein sequence is finite. But from
what I understand of your reply, adding the axiom “eval (exampleExp n)
does not halt for some n :: Nat” would be consistent, as we cannot prove
the finiteness of all Goodstein sequences, as our meta language does not
have ordinals.

In short, I see the finiteness of all Goodstein sequences as an
absolute truth; I accept this truth even if we cannot prove it in some
formal systems.

First things first, the problems I was talking about are
fundamentally different from this question in that the halting problem
is not solvable by any formal system (assuming Church’s thesis).
But there’s still a lot of interesting stuff to be reaped from your
question.

Just because your meta language doesn’t have ordinals doesn’t mean
Goodstein’s theorem is not provable (well, unless Goodstein’s theorem
gives rise to ordinals :-). But what we can say is that if Goodstein’s
theorem is unprovable in your meta language, then it is consistent with
nonstandard naturals; i.e. Nat might denote more than just the set of
things in { 0, 1, 2, … }.

We can see this by looking at the negation of Goodstein’s theorem,
which looks like “exists n. forall k. not Goodstein(n,k)” (where
Goodstein(n,k) means the Goodstein sequence starting at n reaches 0 in k
steps). Let’s call this sentence NG, and your meta language S. If S
does not prove Goodstein’s theorem, then S+NG is consistent (if not,
that would be a proof). By the completeness theorem, then, S+NG has a
model in ZFC. And what the heck is n from “exists n. …” in that
model? It had better not be one of { 0, 1, 2, … }. So the model of
Nat in S+NG can’t be isomorphic to the set N from set theory; i.e. S+NG
actually asserts the existence of nonstandard naturals.

It is useless to think of the nonstandard naturals as actually
existing, though. n is merely a figment of the finite reasoning
available to the proofs in S. If proofs could be infinitely long, then
NG would be inconsistent with S (as well as a myriad of other things,
such as PA being complete (and thus inconsistent? I don’t know if the
incompleteness theorem applies if proofs are allowed to be
infinite)).

So now the difference between the halting problem and Goodstein’s
theorem: PA does not “know” that Goodstein’s theorem applies to it, but
ZFC knows. That is, assuming ZFC is consistent, then Goodstein’s
theorem about PA is true. But ZFC does not know everything about PA.
I seem to recall a game-theoretic theorem in PA which employed a large cardinal
axiom. But it’s pretty easy to show that there are statements in PA
that ZFC cannot prove true or false about its model of PA¹. ZFC also has its own bundle of undecidable issues, the continuum hypothesis to name one. But we can use a bigger system that models ZFC and possibly prove the continuum hypothesis one way or the other.

And now I just had an epiphany… I was going to say “you can keep
expanding the system up and making more and more abstract axioms, but
there is a single Turing machine which evades them all”. That is not
true though. Obviously, you can add axioms “such-and-such Turing
machine does not ever halt”, and if it actually doesn’t (Platonically
speaking), then of course that is consistent. Then it seems possible
that we may find more and more “obvious truths” about mathematics, and
those will start proving that more and more Turing machines don’t halt.
For any given Turing machine, we may eventually find a “natural” axiom
which proves it doesn’t halt. If that were the case, it would mean that
the set of axioms about the universe is not only infinite, but
non-recursive. Knowing the universe, that seems reasonable :-).

If not, though, then there is a Turing machine which evades all of
our reasoning. This also seems reasonable, since there are machines as
small as 5 states and 2 symbols whose halting problems are yet unknown,
for example:

    State    Read 0    Read 1
    -------------------------
    a        1 L b     1 R b
    b        1 R c     0 L e
    c        0 R d     0 L a
    d        1 L a     0 R d
    e        -halt-    0 L c

(Marxen & Buntrock)
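To make the table concrete, here is a minimal simulator for it. The tape representation and helper names are my own invention, a sketch rather than anything from Marxen & Buntrock:

```haskell
-- Tape: cells left of the head (nearest first), the scanned cell,
-- and cells to the right.  Unvisited cells read as 0.
type Tape = ([Int], Int, [Int])

-- The transition table above; Nothing means halt.
delta :: Char -> Int -> Maybe (Int, Char, Char)  -- (write, move, next)
delta 'a' 0 = Just (1, 'L', 'b')
delta 'a' 1 = Just (1, 'R', 'b')
delta 'b' 0 = Just (1, 'R', 'c')
delta 'b' 1 = Just (0, 'L', 'e')
delta 'c' 0 = Just (0, 'R', 'd')
delta 'c' 1 = Just (0, 'L', 'a')
delta 'd' 0 = Just (1, 'L', 'a')
delta 'd' 1 = Just (0, 'R', 'd')
delta 'e' 0 = Nothing
delta _   _ = Just (0, 'L', 'c')   -- state e reading 1

-- One step of the machine, or Nothing if it halts.
step :: (Char, Tape) -> Maybe (Char, Tape)
step (q, (ls, s, rs)) = fmap move (delta q s)
  where
    move (w, 'L', q') = case ls of
      []     -> (q', ([], 0, w : rs))
      x : xs -> (q', (xs, x, w : rs))
    move (w, _, q') = case rs of
      []     -> (q', (w : ls, 0, []))
      x : xs -> (q', (w : ls, x, xs))

-- Run at most n steps from a blank tape; Left k means it halted at step k.
run :: Int -> Either Int (Char, Tape)
run n = go 0 ('a', ([], 0, []))
  where
    go k cfg
      | k >= n    = Right cfg
      | otherwise = maybe (Left k) (go (k + 1)) (step cfg)
```

run 1000 shows the machine still churning, but of course no finite number of steps settles the question, which is the point.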

I hope that cleared some of the blocks making you believe that this area is in any way clear :-).

To answer another question, nonstandard naturals have very little to do with the ordinals from
set theory. All countable nonstandard models of arithmetic have the same order type, so the
nonstandards we got from negating Goodstein’s theorem look just like the ones we get by a standard
application of the compactness theorem. There’s no least nonstandard natural (as omega is to the
ordinals); rather, the nonstandard part looks like the lexicographic ordering of Q × Z; for each
rational number there is a copy of the integers (both positive and negative). Of course they are
all greater than the standard naturals. They satisfy
every first-order sentence provable by PA. A classic example is that the twin prime conjecture
is true if and only if there is a nonstandard twin prime, because the twin prime conjecture is
stated as “forall a. exists b > a. ‘b and (b+2) are prime’”, which,
if true, must hold for nonstandard a’s as well.
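For reference, the order type just described is usually written as follows (a standard result about models of arithmetic, stated here only as a summary of the paragraph above):

```latex
% Order type of every countable nonstandard model of PA:
% the standard part \omega, followed by densely many copies of Z.
\omega + \zeta \cdot \eta,
\qquad \zeta = \operatorname{ot}(\mathbb{Z}), \quad \eta = \operatorname{ot}(\mathbb{Q})
```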

Funny coincidence, a program computing the Goodstein sequence of a number was one of the
first Haskell programs I ever wrote. :-)
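In that spirit, here is one way such a program might look (a sketch of my own; bump rewrites a number from hereditary base b notation, reinterpreting the base as c):

```haskell
-- Write n in hereditary base b, then replace every occurrence of the
-- base b by c.  E.g. bump 2 3 5 rewrites 2^2 + 1 as 3^3 + 1 = 28.
bump :: Integer -> Integer -> Integer -> Integer
bump _ _ 0 = 0
bump b c n = d * c ^ bump b c e + bump b c r
  where
    e      = last (takeWhile (\k -> b ^ k <= n) [0 ..])  -- top exponent
    (d, r) = n `divMod` (b ^ e)                          -- top digit, rest

-- The Goodstein sequence starting at n: bump the base, subtract one,
-- repeat until 0.
goodstein :: Integer -> [Integer]
goodstein = go 2
  where
    go _ 0 = [0]
    go b m = m : go (b + 1) (bump b (b + 1) m - 1)
```

goodstein 3 gives [3,3,3,2,1,0]; goodstein 4 already takes an astronomical number of steps to reach 0, which is some intuition for why PA struggles with the theorem.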

¹ Since PA is Turing-complete, it can embed the axioms of ZFC as
Gödel numbers. You can thus construct a statement of PA that says
“ZFC is consistent”. And of course, because of the incompleteness
theorem, ZFC cannot prove this statement true or false.

Music Minus One

A few days before Nolan’s birthday party, Nolan came down and we jammed together. It was a good session… except that we forgot to enable recording for Nolan’s track! Thus the session is very groove oriented, lacking in melody. Pretty cool though, anyway.

It’s been a little while since I listened to these, but I seem to recall 1-6 all being pretty good, with #3 being especially unique.

Quantum Entanglement in Haskell

Back when I first learned some Haskell, I found out about monads. After understanding “enough to be dangerous”, I figured I could port the totally cool Quantum::Entanglement Perl module to a Haskell monad. Unfortunately, such a task was not possible. I ran into trouble when the monadic bind needed to use an Eq instance, which of course it cannot, because monadic bind has to work for all types.
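To see the obstruction concretely, here is a hypothetical Q type of my own (not the representation Quantum.hs actually uses) with the kind of bind a quantum structure needs:

```haskell
-- A superposition as value/weight pairs.
newtype Q a = Q { runQ :: [(a, Double)] }

-- A quantum-style bind must merge duplicate outcomes so that their
-- weights can combine, and merging needs Eq b.  Monad's (>>=) has type
--   m a -> (a -> m b) -> m b
-- for *all* b, with no way to demand an Eq b constraint, so Q cannot
-- be made a Monad with this behavior.
bindQ :: Eq b => Q a -> (a -> Q b) -> Q b
bindQ (Q xs) f = Q (merge [ (y, p * q) | (x, p) <- xs, (y, q) <- runQ (f x) ])
  where
    merge []              = []
    merge ((y, p) : rest) =
      (y, p + sum [ q | (y', q) <- rest, y' == y ])
        : merge [ e | e@(y', _) <- rest, y' /= y ]
```

For example, runQ (bindQ (Q [(1, 0.5), (3, 0.5)]) (\x -> Q [(x `mod` 2, 1)])) merges the two equal outcomes into [(1, 1.0)].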

But there is a deeper reason that I couldn’t do it. It’s because I needed to do something special with the input to quantum combinators, thus the control structure was in fact an arrow and not a monad. Well, probably an arrow. I just learned how to use them today. Anyway, the results are pretty promising. Check it out, there’s proper entanglement:

proc () -> do
    -- this binds (x,y) to either the tuple ("hello","world") with probability
    -- amplitude 1, or ("foo", "bar") with probability amplitude 1.
    (x,y) <- entangle -< [(("hello", "world"), 1 :+ 0)
                         ,(("foo",   "bar"),   1 :+ 0)
                         ]
    qPutStr -< x
    qPutStr -< y
    qPutStr -< show (x,y)

This will output helloworld("hello","world") half the time and foobar("foo","bar") half the time, but never any other combinations.

“Well,” you say, “entangle might just choose a random alternative right when you bind it, there’s nothing special about that!”. Well, wait until you see the interference features:

makeStates = proc () -> do
    -- return a variable in the superposition |4> - |-4> + i|3>
    entangle -< [( 4,  1 :+ 0)
                ,(-4, -1 :+ 0)
                ,( 3,  0 :+ 1)
                ]

demo1 = proc () -> do
    x <- makeStates -< ()
    qPutStr -< show x

demo2 = proc () -> do
    x <- makeStates -< ()
    qPutStr -< show (x^2)

demo1 prints out 4, -4, and 3 with equal probability, as expected. However, demo2 has some interesting behavior. Since 4 and -4 both have the same square, 16, and their probability amplitudes are 1 and -1, respectively, they cancel each other out. So demo2 will only ever print 9.
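The cancellation can be modeled in a few lines of plain, non-arrow Haskell. This is only my sketch of what is going on, not the module’s actual implementation:

```haskell
import Data.Complex (Complex (..), magnitude)
import qualified Data.Map as Map

-- A superposition as value/amplitude pairs.
type Super a = [(a, Complex Double)]

-- Apply a function to a superposition; amplitudes of equal results are
-- summed, which is exactly where interference comes from.
apply :: Ord b => (a -> b) -> Super a -> Super b
apply f s = Map.toList (Map.fromListWith (+) [ (f x, amp) | (x, amp) <- s ])

-- Outcomes with nonzero amplitude, i.e. what a measurement could yield.
observable :: Super b -> [b]
observable s = [ x | (x, amp) <- s, magnitude amp > 1e-12 ]

-- The superposition |4> - |-4> + i|3> from makeStates above.
states :: Super Int
states = [(4, 1 :+ 0), (-4, (-1) :+ 0), (3, 0 :+ 1)]
```

observable (apply (^ 2) states) is [9]: the amplitudes of 16 (coming from both 4 and -4) sum to zero, just as in demo2.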

It keeps all states around until you do some I/O (currently qPutStr is the only I/O primitive supported). It then collapses the system to one of the eigenstates specified by the Eq implementation of qPutStr’s argument, choosing randomly according to the given probability amplitudes. I do wonder if I can lift the IO monad into this arrow without compromising its safety (i.e. can you observe states without collapsing them?). I’ll have to learn more about arrows first, but my first tests indicate that it is in fact safe to do that.

Writing this module was a very strange experience. I’m not really sure how it works. I sort of followed how the list monad works, since it does something similar, adding features here and there and writing functions based on what I thought their types should be. It has only typechecked twice: the first time, there was a bug. I decided that it must be in the only part of the implementation that made me uneasy: the fact that it sometimes discarded its universally quantified dummy argument in the collapse method. I fixed more type errors, and the second time it typechecked, it worked… perfectly. And if someone asked me to explain how, I really couldn’t tell them.

This all of course gives me a very high opinion of Haskell. This is a module which does something which is not a “normal” thing to do, has some complex logic with quantum interference and entanglement, and I can write it correctly without understanding it.

You can download the source code, Quantum.hs, and its accompanying demo, example.hs, here. Check it out, maybe you can understand it :-).

Platonism

Studying all this decidability theory is fundamentally changing my view of the universe. In particular, I am losing my Platonism.

It all begins with the halting problem, which isn’t so much a problem anymore as a result. It states that there is no computer program which can tell whether another computer program will ever finish. The proof is so remarkably simple that I will paraphrase here:

Suppose there were such a program. Call it infinitely_loops(prog, input). It uses a magical complicated procedure to analyze prog and determine whether, when input is fed as the input to prog, prog will get into an infinite loop or not. Now let’s write a new program, called gah, which takes as input input, which is the source code for a program:

    // include source code for infinitely_loops here so we can use it
    if (infinitely_loops(input, input)) {
        exit();            // (a) finish immediately
    }
    else {
        while (true) { }   // (b) infinite loop
    }

Just to be clear: gah takes a program as input and then asks whether, when you feed that program its own source code as input, it gets into an infinite loop. If it does, then gah exits immediately. If it doesn’t, then gah itself gets into an infinite loop.

Finally, we ask the question: what is the value of infinitely_loops(gah, gah)? That is, does gah get into an infinite loop when you feed it its own source code? Well, there are two possibilities;

  1. It does get into an infinite loop. But then look at the if statement above. We gave gah itself as input, so input = gah. Substituting, the if statement reads if (infinitely_loops(gah, gah)), which is what we’re asking about. The condition would be true, in which case our program would take branch (a) and exit immediately. But that’s no infinite loop, that’s definitely finishing. So there’s a contradiction, which leaves only the remaining alternative:
  2. It does not get into an infinite loop. But in that case, in the if statement above, our program would have taken branch (b), so it would have entered the infinite loop we wrote there. Another contradiction!

So, since it can’t infinite loop, and it can’t not infinite loop, we have a contradiction, so our only assumption that the program infinitely_loops exists must have been false. There is no such program!

Okay, that was a little longer than I remember it. Still, the logic is simple and there is nothing magical about it.

I understood the halting problem even when I was in high school. I understood it, but I did not grok it. I thought it was totally neat that, after being introduced to programming and feeling like I could do anything, that there were computer programs which were impossible to write. Well, of course there are such programs: you can’t write a program which will tell you what will happen tomorrow. But this was a program which you could ask a real question to, a question which had an absolute, definite yes or no answer, and the program would not be able to answer it. That’s cool, isn’t it? It kinda gives meaning to being human, a kind of irreplaceability, right?

Well, that’s what I thought at the time. I continued in college to study set theory and decidability theory, where I learned about Gödel’s incompleteness theorem. Unfortunately, that proof is not simple enough to repeat here :-). His theorem says that no matter how many assumptions you make, there will always be a statement in math which you cannot prove true or false. I’m not going to go into detail, but essentially he wrote down a logical statement which meant “such-and-such computer program eventually finishes”. Keep in mind that it’s very easy to write a computer program which will eventually write down everything that is provable (careful with the precedence there: it runs forever, writing down statements; it means that if you give me any provable statement, it will eventually write it down in its sequence). A proof is just a sequence of letters, and a computer is capable of checking whether a proof is valid. So just enumerate proofs, one-by-one (in practice this is completely infeasible, but we’re theorists!), check if it’s a valid proof, if so, write down what it proved. So if you could prove Gödel’s statement for any machine, then you could easily write a program which decided whether other programs infinite loop or not. But we just showed that that’s impossible, so it must also be impossible to prove Gödel’s statement.
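The enumerate-and-check pattern itself is easy to demonstrate on a toy system. Below, a “proof” is just a unary addition fact like i+i=ii and the checker counts tallies; the system is deliberately silly, since the enumeration pattern, not the system, is the point:

```haskell
import Control.Monad (replicateM)

-- Every finite string over the alphabet, shortest first.
candidates :: [String]
candidates = concatMap (\n -> replicateM n "i+=") [1 ..]

-- A "proof checker" for statements of the form  a+b=c  in unary:
-- both sides nonempty tallies, and the tallies must add up.
valid :: String -> Bool
valid s = case break (== '+') s of
  (a, '+' : rest) -> case break (== '=') rest of
    (b, '=' : c) ->
      not (null a) && not (null b)
        && all (== 'i') (a ++ b ++ c)
        && length a + length b == length c
    _ -> False
  _ -> False

-- The machine that "eventually writes down everything provable".
theorems :: [String]
theorems = filter valid candidates
```

head theorems is "i+i=ii", and every true unary addition fact shows up in the list eventually; swap in a real proof checker and you have the enumerator described above, just astronomically slower.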

This is when my mind started collapsing. The halting problem is not about what computers can do. It talks about all of mathematics. It means that determining whether computer programs infinite loop is a mathematically impossible problem, for both humans and computers. There are computer programs which the greatest mathematicians and programmers in the world could look at for years and never be able to determine whether or not they end, without just running them (and if a program does end, then you can wait until it ends and say “yes! it ends”; but if it doesn’t, you can wait forever until you die, never knowing whether it’s going to end or not).

Well, maybe it’s a problem in our reasoning skills. Maybe our logic is consistent, but it’s missing something that would conclude these things. Well, it turns out no. Gödel, the genius he was, also proved that if in every logically consistent world (“world” defined loosely, “mathematical system” is more accurate) a statement is true, then our proof system will be able to prove it. So then the halting problem becomes even more mind-boggling: there is some phantom statement we could add to logic that would allow us to prove that the machine does stop, and another phantom statement we could add that would allow us to prove that it does not. We hope (though, due to Gödel again, cannot prove) that our logic is consistent and represents the structure of information in the world. So how can whether this machine stops or not depend on which logic we use? It’s a fact! It either stops or it doesn’t!

I originally wrote about twice as much as you see here, but I realized that I wasn’t saying much, just going in circles. Essentially the whole idea of facts is vanishing from my world model. There are things that are true, and there are things that are false, and then there is fantasy. I don’t think our logic has holes, things that are true but it can’t prove given the proper assumption. It’s more like we’re asking questions about whether something will happen in the future, and in the case that it will, we will know only when it does. Notice that I didn’t consider the case that it won’t… because that case doesn’t even make sense :-).

Turing Complete = Incomplete

I’ve talked about this a little bit before, but I’ll start over, since I have a new understanding of the problem now.

Gödel’s famous incompleteness theorem says that “any consistent, recursive set of axioms which can emulate arithmetic cannot be complete”. He showed this by essentially building a Turing machine (well, something equivalent to one) inside the logical proof system, and showing that there was a halting problem in that system.

I’ve been wondering this: You can show a system is incomplete by embedding arithmetic in it, so what other types of incomplete systems are there? Can you have an incomplete system which does not have arithmetic? Is it weaker than an arithmetic system (in which case, is there a minimal embedding that will get you to an incomplete system?), or just different?

Now I should take a moment to define my terms, because I’ve been using incomplete rather loosely. Incomplete means that there is a statement φ for which neither φ nor ¬φ is provable. The empty set of axioms is incomplete. That is of course not what I meant.

Definition: A consistent set of axioms A is called strongly incomplete if there is no recursive set of axioms R such that A ∪ R is consistent and complete.

Definition: A consistent set of axioms A is called recursively completable or weakly incomplete if it is not strongly incomplete, i.e. if there is a recursive set of axioms R such that A ∪ R is consistent and complete. (The consistency requirement matters: without it, any A could be “completed” by throwing in a contradiction, which proves everything.)

So above when I said “incomplete”, I meant “strongly incomplete”. I have a conjecture:

Conjecture: Every strongly incomplete set of axioms is capable of embedding arithmetic.

That would mean, in essence, that the only way you can be strongly incomplete is by being Turing complete. Intuitively I believe this conjecture, if only because of the appearance of the word “recursive” in the definition of strongly incomplete.

I have started to approach a proof, though I admit it’s somewhat out of my league. Still, it’s fun to explore. The best way to go about it I’ve found so far is to find a Turing complete system such as arithmetic, lambda calculus, group algebra, or diophantine equations and show that if a system is incapable of emulating that system, then it must be recursively completable.

You start with a set of axioms A and say, for example, for all wffs N(x), Z(x), S(x,y), the sentence ¬“N(x) means x is a number, Z(x) means x = 0, and S(x,y) means y = x + 1” is consistent with A. The bit in quotes actually expands to a conjunction of four reasonably complex formulas, so its negation turns into a disjunction. Disjunctions are not very nice to work with from a proof standpoint. But it’s still a very powerful assumption, since this is for all wffs (with the desired number of free variables), so we can pick and choose ones that are convenient for our goal. We can also assume that the negation is actually provable in A, because if it’s just consistent, why not just add it to the set R we are using to complete A? We can actually add all of these negations to R (for all appropriate wffs) right off the bat, because the set is recursive.

As for how to show the rest of the completion, I’m stumped so far. I need some time to explore this system a bit; I’ve gone so far as to write down the expression of the axioms and negate them, playing around with different representations, but I haven’t really tried to get new consequences from them yet.