So many philosophical pseudo-debates focus on the existence or non-existence of this or that “thing”. Pop-skepticism is at odds with most sects of Christianity about the existence of a God; many skeptics, somehow oblivious to the hypocrisy in which they engage, argue simultaneously that claims must be supported with evidence and that there must be no God. I engaged fleetingly with the university’s skeptics society in a debate about the existence of the electron, in which I argued that the electron was a mathematical tool that enjoyed the same level of existence as a number, and not so much existence as…

Fortunately for me, I did not have to complete the above thought, as the debate was on Facebook, so I was permitted not to respond once the level of abstraction exceeded me. Rather than meeting the inevitable fate of a face-to-face debate on the subject — in which I would make a fool of myself for failing to possess a well-collected, self-consistent argument, my opponents permitting me to exit the arena having failed to disrupt their conceptual status quo — the debate fizzled out, and they will probably not remember of their own volition that they had even engaged in it. It is all for the better that my medium has changed, since after some time spent meditating on the question, I have come across something I have not been able to distill into a snappy epigram.

To a “standard model” logical mind, and even to the working mathematician who has not studied logic, existence is a straightforward concept. One can ask whether a mathematical object exists with some property, and assume without argument that one is asking a reasonable question with a yes-or-no answer. However, in the world of mathematical logic — the only logical world whose paradoxes I can comfortably resolve — the notion of existence is rather more slippery. There are the standard objects which one can prove to exist from the axioms, and there are — or perhaps I should say, there are not — objects whose existence is contradictory. But there is a neglected middle class. These objects _____ whether or not you choose to exclude the middle.

The Twin Prime Conjecture (TPC), a famous question still open today in 2011, conjectures that there are infinitely many numbers *p* such that both *p* and *p+2* are prime. Such a pair is called a “twin prime”: for example, 5 and 7, or 179 and 181. There are many who believe TPC is true, some who believe TPC is false, but among logicians (who crave this sort of result), many believe TPC is “independent of the axioms.” Let us explore the consequences of this latter belief. To be concrete (insofar as such a word can mean anything in such matters), let us suppose that TPC is independent of “ZFC”, the Zermelo-Fraenkel axioms with the Axiom of Choice, the axioms of choice (no pun intended) for popular set theory.
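For concreteness, the first few twin primes are easy to enumerate; here is a quick sketch using a naive trial-division primality test:

```python
def is_prime(n):
    """Naive trial-division primality test."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

def twin_primes(limit):
    """Yield twin prime pairs (p, p+2) with p < limit."""
    for p in range(2, limit):
        if is_prime(p) and is_prime(p + 2):
            yield (p, p + 2)

print(list(twin_primes(20)))  # → [(3, 5), (5, 7), (11, 13), (17, 19)]
```

TPC asserts that this generator, given ever larger limits, never runs dry.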

It would be helpful to be reminded of what exactly ZFC *is*. Aside from the deep fantastic worlds of intuition inhabiting many mathematicians’ minds, it is merely a set of 9 statements about the world of sets. For example, “if two sets have the same members, then they are the same set”, and “given any set, you may form the subset of elements satisfying a particular property”. These are stated in rigorous, precise logical language, so by formal manipulation we can exclude the subtleties of meaning that would abound in any English presentation of these axioms. Logicians like to say that a proof is nothing more than a chain of formal logical sentences arranged according to some simple rules; this view has spread since the advent of programming languages and computerized mathematical assistants.
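The two examples above can indeed be put in that rigorous language; one standard rendering of Extensionality and of (the schema of) Separation is:

```latex
% Extensionality: sets with the same members are equal
\forall x \,\forall y \,\bigl(\forall z\,(z \in x \leftrightarrow z \in y) \rightarrow x = y\bigr)

% Separation (a schema: one axiom per formula \varphi)
\forall x \,\exists s \,\forall z \,\bigl(z \in s \leftrightarrow (z \in x \wedge \varphi(z))\bigr)
```

Every subtlety of the English phrasing is traded away for formal manipulability.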

If TPC were true, then given any number, you could count up from that number and eventually reach a twin prime. If TPC were false, then there would be some number, call it *L*, above which it would not be possible to find any twin primes. However, since TPC is independent (as we have supposed), we know we cannot prove it either way. It may be true, or it may be false; whether there is a third option is too deep a philosophical question to explore here. We may be able to count up from any number and find a twin prime, but we will never be *sure* that we will not arrive at a point after which there are no more. Or there may in fact be an *L* above which there are no more, but we shall never be able to write *L* as a sequence of digits. Again, whether these two comprise all possibilities is not a matter capable of absolute resolution.
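The asymmetry here is worth making explicit: searching upward for a twin prime is a procedure we can always *run*; we just cannot prove whether it always *halts*. A sketch:

```python
def is_prime(n):
    """Naive trial-division primality test."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

def next_twin_prime(n):
    """Search upward from n for a twin prime pair (p, p+2).

    If TPC is true, this always halts. If TPC is false and n > L,
    it loops forever -- and by our independence assumption, no proof
    can tell us in advance which behavior we will get."""
    p = n
    while True:
        if is_prime(p) and is_prime(p + 2):
            return (p, p + 2)
        p += 1

print(next_twin_prime(100))  # → (101, 103)
```

The function is perfectly well-defined as a program; it is its *totality* that ZFC (by supposition) cannot settle.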

There can be no proof that *L* exists, so, like God to the skeptics, it must not exist. By their own standard, this conclusion is not justified, for, by our assumption, there is no evidence in favor of its *non*existence either. Indeed, we may safely believe in *L*; if a contradiction would arise from its use, then we could leverage that contradiction to provide a proof that there are infinitely many twin primes, thus TPC would have been provable. After centuries of cautiously hypothesizing about what would happen if *L* did exist, we may begin to treat *L* as any other number. As the ancient Greeks’ unease about the existence of irrational numbers has faded, so too would ours. The naturals would become: 1, 2, 3, 4, 5, … *L*, *L*+1, …. We will have answered questions about *L*, for example that it is greater than one million, because we have found twin primes greater than one million.

This all happens consistently with the proof that the set of natural numbers is made up of only the numbers 1, 2, 3, 4, 5, …, for that proof does not mean what we think it means. We cannot enumerate all the natural numbers in a theorem; that proof only states that the set of natural numbers is the smallest set made up of zero and successors of elements in that set. If we can actually find a twin prime above any number, but merely not know it, then we might claim *L* cannot be the successor of any element in this set. But this claim is false, because *L* is clearly the successor of *L*-1! *L*, whether or not or ___ it is one of the familiar numbers, manages to sneak its way into the smallest set containing zero and successors. It is not the set of numbers, but the *language about numbers* that can be extended by this independence of TPC, and *L* is not logically distinguishable from “regular” numbers. It is a symbolic phenomenon. But so, too, are the familiar numbers. The only difference is we have *chosen to say* that zero exists.
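The minimality claim in that proof can be written out precisely: the natural numbers are the intersection of all sets containing zero and closed under successor,

```latex
\mathbb{N} \;=\; \bigcap \Bigl\{\, S \;\Bigm|\; 0 \in S \ \wedge\ \forall n\,\bigl(n \in S \rightarrow n + 1 \in S\bigr) \Bigr\}
```

and *L* passes this membership test from the inside, since *L* = (*L*−1)+1, and so on down a chain the theory cannot see the bottom of.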

Non-standard models of arithmetic are, in my opinion, a very interesting topic.

Even if the twin primes conjecture is independent of ZFC, I think most mathematicians would still believe that there are infinitely many of them (just that we didn’t pick the right axioms for proving it to be the case.) It is certainly the case that first-order theories of arithmetic can’t bound the cardinalities of their models, but higher-order theories are sufficient, and so this mysterious number L is a direct product of the extra axiom you assumed.

Luke, I’m pretty sure you are going insane. Your writing style is turning opaque and slime integral mountain potato.

@Anonymous, Not to worry, I’m just stylistically experimenting. Also I disagree with your claim on the grounds that think cooked will differentiate awaken.

@ezyang, hmm, then I think I disagree with most mathematicians, that’s what this whole thing was about. There can just as easily be finitely many. I admit I don’t follow the connection to model cardinalities, do you mind elaborating a bit?

A few observations.

Firstly, doing model theory depends on having some background theory (usually set theory) with a reasonably well-defined ontology, in terms of which we can develop the various models of our theories (such as arithmetic). Model theory for set theory itself is correspondingly more tricky: by the second incompleteness theorem, we can’t prove in ZFC that a model of ZFC exists. The background assumptions for proving the independence of some statement from ZFC are thus even more demanding than the axioms themselves, since they involve constructing models in which that statement holds (and doesn’t hold), and hence imply Con(ZFC).

In terms of the reference-fixing problem for first-order Peano Arithmetic, obviously Dedekind showed that the second-order version of the theory is categorical. If that doesn’t satisfy you, one can argue via Tennenbaum’s theorem that since the only recursive model of PA is the standard model, and we have algorithms for computing addition and multiplication which hold for all numbers, when we do arithmetic we must be referring to the standard model (except, of course, when we engage in the study of models of PA…).

If you take a more hardline Putnam-style scepticism then I think neither of the above claims will convince you, but it’s important to see what an extreme position this is. Dedekind’s categoricity proof can be carried out in ZFC (I remember doing it in an introductory set theory course), but this is actually massive overkill: Stephen G. Simpson and Keita Yokoyama showed this year that it’s equivalent to WKL_0 (over RCA^*_0), which is hardly a strong system, all things considered. In other words, we can tell the difference between standard models and non-standard models as soon as we move to a system strong enough to express the difference.

The kind of formalism you appear to be advocating in response to the model-theoretic sceptic is familiar enough, but a realist will simply respond that you’re introducing far more problems than you’re solving.

Either way, I recommend Tim Bays’ article, ‘On Putnam and his Models’ [JPhil XCVIII (July 2001): 331–350].

Nitpicking, to save future readers of this article: ZFC is not (and cannot be, if one considers the Montague-Lévy reflection theorem and the second incompleteness theorem) finitely axiomatisable. It is of course recursively axiomatisable, but replacement and separation are both axiom schemes, not single axioms.

If you look at current research in descriptive set theory, independence from ZFC is hardly the point at which mathematicians give up: rather, they start saying things like “Assume a proper class of Woodin cardinals…” and carry on proving theorems about sets of real numbers. The Gödelian programme of assuming stronger axioms in order to ‘fix’ questions lower down in the cumulative hierarchy is still going strong, even if there is a lot of argument over exactly what the new axioms should be.

I don’t think even pop scepticism asserts that X doesn’t exist because there has been no evidence found for it. The point is to be realistic and treat it as it is, something with little credence. The rest of your post seems based on criticising said imaginary stance.

@Benedict, thanks for the in-depth comment. I hadn’t heard of Tennenbaum’s theorem, which is a really interesting result in this area. I read half of Bays’s article, which has good points but also some things I am skeptical about; I’ll have to finish it.

@Chris, I’m sorry that it came off that way, that’s just poor presentation on my part. The real inspiration for the post was the argument about the electron and the interpretation of the word “exists”, but the connection seems to have been lost. Anyway, I am planning a post in a different style that explores the same question, maybe that one will be more effective.

What this does for me is highlight an interesting conflict between Occam’s razor and the notion of “conservative extension” in math/logic. The attitude in math/logic is often that conservative extensions are safe, and therefore can be adopted. This attitude is not pervasive, but it’s interesting to imagine a world in which it is. The creation of imaginary numbers, infinitesimals, and so on can all be seen as an application of the conservative extension principle… if we can add more entities without trouble, we should. (If it were more universally accepted, then all set-theorists would accept the existence of non-well-founded sets, for example. We’d also probably allow division by zero.)

Occam’s razor, on the other hand, tells us the opposite: if we can do without entities without trouble, then we should.

Of course, the two are useful in different situations, but it would be nice to have a full account of this. Why do they appear to be in conflict? How is the conflict resolved?

A “conservative extension principle” doesn’t really tell us how to proceed in every situation, of course. If it did, it would contain too much information and be contradictory…

We could say that it’s the principle governing the choice of large cardinal axioms, which may rule out the inclusion of L in the natural numbers despite its independence from ZFC. The conflict might be resolved by separating L from the natural numbers, but still claiming its existence, similar to the way fractions are not natural numbers… (It becomes a flavour of infinity). However, it’s separated nonetheless.

Not believing something exists is not the same as believing something doesn’t exist. I think the best answer to the question of the existence of a god is: “I don’t know if it exists, but until evidence presents itself, you and I have no basis for actively believing it exists.”

@abramdemski, hmm, I’m not sure I know what you mean by separated. Also, could two conservative extensions placed on top of each other end up creating a contradiction? What would be the notion of “independence” for them?

@Josh, yes, I would call that the answer most consistent with a skeptical philosophy. But among the skeptic’s society here, I have seen many forms of atheism: “There is definitely no god”, “It can definitely not be shown whether there is a god”, “It has not been shown whether there is a god”, and “Shut up, I don’t care”. The first and last are most commonly associated with people from a religious background.

@Luke, BTW, I made a fundamental error in saying that L would become a flavour of infinity; I assumed that the hypothetical large cardinal would prove that there were an infinite number of twin primes. Obviously it could do the opposite.

By “separate” I mean “distinguish from via a predicate”; i.e., if the large cardinal proved that there were infinitely many twin primes, then there would be a predicate which applied to L but not to any (finite) number.

In other words: we want to separate (via a distinguishing predicate) the minimal structure satisfying a definition (the standard model) from supersets which also obey the definitions (the nonstandard models), but we’d like those supersets to exist.

From a particular perspective, though, I’m speaking nonsense there. The definition of the natural numbers IS that it should be the minimal structure (the least set) containing zero and its successors.

Anyway, one reason this is interesting is that we might totally reject this “conservative extension principle” and instead apply Occam’s Razor to mathematics. One argument for this would be that mathematics is learned like anything else, so we should apply standard learning theory to it.

I’m sure you’ve heard of the Goldbach Conjecture, the idea that every even integer greater than 2 can be expressed as the sum of two prime numbers. Some people think that it might be unprovable.

In the case of the Goldbach Conjecture though, if it is unprovable, that means that no counterexample exists. Therefore, if it’s unprovable, that would mean that it’s true.

I guess the difference lies in the fact that if we had an even integer E that wasn’t expressible as the sum of two primes, then we would have a bounded test, runnable in a finite amount of time, that would tell us the Goldbach Conjecture was false. So being unprovable would imply that E does not exist.

And with TPC, proving or disproving it, even with a candidate L, is unbounded.

@brain, you are right that the Platonic interpretation of Goldbach being unprovable is that it is true.

But from a different angle, the angle I took in the article, there is no difference. It is *consistent* that there is some even integer E which cannot be written as a sum of two primes. So you may “safely” allow E in your language.

The trouble comes when you say “if we *had* an even integer”: you mean “had its digits”. But in the same way as L, we can suppose that such a number exists without knowing concrete things about it: we know that E is greater than 10 because all the evens (save 2) below ten are expressible as a sum of two primes. But the first digit of E? That may be unknowable.

There are those who argue that a symbol about which only a limited amount can ever be known (such as L and E) cannot reasonably be called a “number”. However, you would then have to say that the Busy Beaver function is not a function. Is that reasonable in your philosophy?
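The “bounded test” brain describes is easy to write down: given the digits of an even number, checking whether it splits as a sum of two primes always terminates. A sketch, again with a naive primality test:

```python
def is_prime(n):
    """Naive trial-division primality test."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

def is_goldbach(e):
    """Bounded test: is the even number e > 2 a sum of two primes?

    Always terminates -- but only if we can write e's digits down,
    which is exactly what we cannot do for the hypothetical E."""
    return any(is_prime(p) and is_prime(e - p) for p in range(2, e // 2 + 1))

print(all(is_goldbach(e) for e in range(4, 1000, 2)))  # → True
```

The test is total; what is missing for E is not a procedure but an input.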

I suspect my comments have already been addressed in some of the above comments, but much of the discussion here is over my head. That probably means I should shut up, but I’ll speak anyway, and hopefully learn something in the process.

Luke, you seem to be reasoning with a “law of excluded tri-middle”… you think that because a proposition is independent, it can be neither true nor false. This kind of thinking is fine when you’re talking about the arithmetic models generated by some set of axioms, but it’s not fine when you limit yourself to one model… such as when you’re talking about the standard integers!

Even if TPC is independent of our current set of axioms, it is still either true or false. L either exists or doesn’t exist in the set of standard integers.

If TPC is independent, that means there are some models in which L exists and some models in which L does not exist… but the set of standard integers can only belong to one of those two sets of models!

Also, I lost track of what you were saying in your final paragraph, but why do you think that L – 1 is well defined?

@Joshua, this tri-middle was actually intended to simultaneously avoid and draw attention to the philosophical issue at play here. We, humans, cannot work in a model; we only have proofs. We may talk about the standard integers, but only from within another axiomatic system (say, ZF). ZF can prove “for all sets x and y, either x in y or x not in y”, but there are some (sentences describing) x and y such that ZF cannot prove either “x in y” or “x not in y”. So while the law of excluded middle claims something, namely that TPC is either true or false, we can neither prove it true nor prove it false. ZF, the axiomatic system (not the model), is lying: it claims to know something that it can’t give evidence for.

It is unclear to me that some 20-state Turing machine either eventually stops or runs forever.

L – 1 is well-defined because L is greater than zero. (something of a koan)