Tag Archives: relativism

Beliefs and Truth

I have now met the fourth person who has said that they don’t have beliefs.

Perhaps I am still stuck in a naive conception of truth that they have transcended. I still unconsciously treat beliefs as axioms, assumed truths upon which to base my inferences, and as such not having beliefs seems impossible to me. Perhaps they have already achieved what I merely strive for: just living, just being the little perceptrons they are, already embodying the consequences of truth as a linguistic construction and not a fact of the world. They know that whether an idea is true is irrelevant — that there is nothing more than successful ideas being successful — and as such to “believe in” any truth is only to be enslaved by a clever, self-reinforcing idea: that ideas can be true.

This transcendence must have been achieved after many years of thought and meditation — we are perhaps even born clinging to truth as though it were unitary and absolute. Wars have been fought over is and is not, as if, ignoring the evidence shining in their swords, the two could not coexist. We have a deep genetic drive toward this clinging: the uncertainty introduced by realizing the paradox of accessible truths is enough to delay a life-saving decision by a few milliseconds, and so it has been bred out of us. The possibility that there is a representational barrier between your perceptions and the world is not an option for the animal at the edge of survival. But perhaps there is a latent genetic drive toward the non-believer’s enlightened state after all — once you stop worrying about what is true, you can react faster, having closed the analytical gap between cause and effect. You are a wild animal, your thoughts having proregressed into instincts. Indeed, when time is of the essence, this idea could be more successful than the idea of truth — perhaps their meditation was to put themselves in life-threatening situations in which they needed to be lightning-fast to survive.

They see the intimate connection between the words “belief” and “truth”. An idea must be able to be true in order to be believed. But they do not reject these words, for an idea must be able to be false to be rejected. The collusion of “belief” and “truth” makes them very hard to break out of: each reinforces the other. When it comes time to communicate, the non-believers see that language is built around truth, and one cannot communicate without presupposing it. So for them to communicate that they are not where you think they are, they must use a sentence which by its very utterance contradicts itself: “I do not have beliefs.”


Computably Uncountable

We are all familiar with Cantor’s diagonal argument that proves there exist infinite sets which are “larger” than the set of natural numbers. In this post I will show that we can express this argument in the form of a program, thus showing that there are countable sets which are “computably uncountable”.

I begin with the program itself:

import Numeric.Natural (Natural)

type Nat = Natural  -- assuming Nat means the natural numbers; it is not a standard Haskell type

type Cantor = Nat -> Bool

diagonal :: (Nat -> Cantor) -> Cantor
diagonal cs n = not (cs n n)

Cantor is “the Cantor space”, the type of infinite sequences of booleans. We will call such an infinite sequence “a Cantor”. There are clearly infinitely many Cantors; e.g. take the range of this function, which gives False at every position except the one specified:

unit :: Nat -> Cantor
unit m n = m == n
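
For example, with Nat as above, unit 3 is True only at position 3; checking the first few positions:

map (unit 3) [0..5]  -- [False,False,False,True,False,False]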

diagonal is (Georg) Cantor’s diagonal argument written as a program — it takes an alleged sequence of all Cantors, and returns a Cantor which does not occur in the sequence, by construction. This function shows by contradiction that we cannot put Cantors in 1 to 1 correspondence with naturals, and thus that there are more Cantors than there are naturals.
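
As a quick sanity check, we can feed diagonal the enumeration unit from above; the Cantor it constructs is constantly False, and indeed no unit m is constantly False (each is True at position m):

missed :: Cantor
missed = diagonal unit  -- missed n = not (unit n n) = not True = False, for every n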

So how many Cantors are there? Since Nat -> Bool is a Haskell type — the type of computable functions from Nat to Bool — Cantors must be representable by programs. We can encode programs as numbers by treating their source code as base-128 numbers. Hence, there are no more Cantors than naturals, and so Cantors can be put into 1 to 1 correspondence with naturals.
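
As a minimal sketch of that encoding (assuming the source code is plain ASCII; any injective encoding would do):

encode :: String -> Integer
encode = foldl (\acc c -> acc * 128 + fromIntegral (fromEnum c)) 0
-- treat each character of the source as one base-128 digit of the resulting number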

Wait — what? There are more Cantors than Nats, but they both have the same size? Something is wrong. Indeed, in the process of this argument we have asserted both

  1. “We cannot put Cantors in 1 to 1 correspondence with naturals”
  2. “Cantors can be put into 1 to 1 correspondence with naturals”

We clearly can’t have both.

I

The erroneous statement is (2). It is undecidable whether a given program represents a Cantor. If the nth Cantor is ⊥ at n, then diagonal will fail: diagonal cs n = not (cs n n) = not ⊥ = ⊥. Because ⊥ is a fixed point of not, diagonal cannot return an element different from the one it was given. Thus for diagonal to work, we must require that Cantors be fully-defined — no infinite loops!
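
Concretely, consider a sequence whose nth element is ⊥ at position n (using undefined for ⊥):

partials :: Nat -> Cantor
partials m n = if m == n then undefined else False
-- diagonal partials n = not (partials n n) = not undefined = undefined,
-- i.e. exactly partials n n again: the diagonal fails to differ at position n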

With this requirement, we can no longer put Cantors in 1 to 1 correspondence with the naturals, because we would have to solve the halting problem. It is not enough that the term has type Cantor; it must also be fully defined for all inputs, and determining that from arbitrary source code is an undecidable problem.
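
A hedged sketch of that reduction, assuming a hypothetical step-bounded interpreter haltsWithin (stubbed with undefined so the fragment type-checks; it is not a real library function):

haltsWithin :: Nat -> String -> Bool
haltsWithin = undefined  -- hypothetical: does the given source code halt within n steps?

gadget :: String -> Cantor
gadget src n = if haltsWithin n src then undefined else True
-- gadget src is a fully defined Cantor exactly when src never halts, so a decider
-- for “is this term a fully defined Cantor?” would decide the halting problem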

II

The erroneous statement is (1). Cantors are computable functions, so as we have argued, they have the same cardinality as the naturals. There are no more programs than numbers, so by the definition of equal cardinality there is some function putting them in 1 to 1 correspondence.

The problem with (1) occurs because diagonal takes as its first argument not an arbitrary sequence of Cantors, but a computable sequence of Cantors. If cs is not computable, then neither is diagonal cs (for we no longer have cs’s source code with which to construct it), and Cantors are defined to be computable sequences. So diagonal fails to contradict our bijection.

III

The erroneous statement is (2). Section II claims to put Cantors and naturals in 1 to 1 correspondence, but it is lying. Suppose Section II is formulated with respect to some axiom system A. If it were “telling the truth”, we would expect there to be some term f in the language of A such that for every fully defined Cantor program c, there is some natural number n such that we have A \vdash f(\bar{n}) = \bar{c} (i.e. it is a theorem of A that f(1 + 1 + … + 1) = (source code of c)).

Let’s suppose we have written down the axioms of A into a Haskell program, and we have a (partial) function proofSearch :: Nat -> Cantor, which, given a number n, searches for theorems of the form f(\bar{n}) = \bar{c} and compiles and returns the first such c it finds. In the case that there is no such statement, it just runs forever; similarly for the case that c fails to compile. Although cumbersome, I’m sure we agree that this is possible to write. If section II is not lying, then we expect that for every natural n, proofSearch n does in fact return a valid Cantor.
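
Purely as a sketch of the shape such a search might take (theoremsOfA and matchTheorem are hypothetical helpers, stubbed with undefined so the fragment type-checks):

theoremsOfA :: [String]
theoremsOfA = undefined  -- hypothetical: enumerate the theorems of A, say in order of proof length

matchTheorem :: Nat -> String -> Maybe Cantor
matchTheorem = undefined  -- hypothetical: if the theorem reads f(n) = c, compile and return c

proofSearch :: Nat -> Cantor
proofSearch n = head [ c | thm <- theoremsOfA, Just c <- [matchTheorem n thm] ]
-- if no matching theorem is ever found (or c never compiles), this simply runs forever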

Now, let us return to familiar lands with a new program:

evidence :: Cantor
evidence = diagonal proofSearch

Oh my! If section II is the truth, then proofSearch is a total, computable function of type Nat -> Cantor, which we can pass to diagonal to find a Cantor that it missed! So it must have been lying, either (1) about its function f finding every possible Cantor or (2) about it actually possessing such a function (i.e. it “proved” that there is such a function, but it couldn’t actually represent it). In either case, it did not actually create a 1 to 1 correspondence between the naturals and Cantors.

IV

Left as an exercise for the reader.


Which one is it really?


Relativism and Language

It is hard for me to imagine that so many people are so wrong. Sure, core beliefs go unexamined. Yes, we often unconsciously repeat taglines we have heard from those we respect instead of attempting to translate our true views. But I must admit I think of all people as essentially wanting to figure it out. Life, the universe, their meaning, how to make the world a better place. Some, who see the world as a competitive, dog-eat-dog place, want to figure it out because it will help them survive. Others, like me, who see the modern (Western — that is all I have direct experience with) world as an essentially benign place, just want to figure it out because of an innate curiosity (no doubt a result of past generations with the former motivation).

So when someone says something which strikes me as wrong, when I have the knee-jerk impulse to correct them, this belief of mine kicks in and stops me. Oh my, it didn’t use to; I would happily correct the abundant wrongness in the world. After all, if people think the right way, they will do better for themselves and others. I can’t remember a time when I didn’t have this belief, but it has taken a while to trickle its way into my choice of actions.

All through my youth, I was told that I was smart (a pedagogically questionable practice). I didn’t buy it (I’ve always had a rebellious streak). What makes me so special? I wasn’t just born with smartness, I thought. At first this manifested as an individualistic self-righteousness: I must be smart because of the intelligent ways I chose to spend my youth (what? Trampoline, video games, and Power Rangers?). More recently it has manifested as a skepticism of the views of those who tell me I am smart: you only say that because I am articulating things you agree with, so the compliment is a way of affirming your own worldview. Those both seem naive to me now. I don’t know what I currently think about it; I will probably only be able to articulate that once I move to some other view.

I am still skeptical of any innate superiority (though not enough to avoid writing this post in a way that comes across as advice). So when I stop myself from correcting a wrongness, what do I do? This is the relativism I’ve been talking about.

Words don’t have meaning; in a conversation, the speaker translates meaning into words, and then the listener translates the words into meaning. We have a soft social agreement about how words are used, and that gives rise to trends in our patterns of thought. But the possibility remains — and I use the word possibility only out of timidity; I really think of it more as a high probability — that the meanings that I have assigned to the words when I hear them are different from the meanings that were used to form them. Indeed, it is unclear what is even meant by two people having the same thought. My brain is not likely to have the ability to represent the thought that produced the words, especially if I disagree with them.

The exercise, then, is this: try to represent those thoughts anyway. How can I think of these words so that the sentence becomes true? Not just slightly less false, but really true. I might have to temporarily reorient my value system; I might have to imagine I grew up in a religious family; I might have to picture the scary possible worlds that might result if the statement were false (that is, out of proportion to the consequences I actually predict, since I already think the statement is false). When I remember to do this, I am brought to a calm, understanding level, with few fiery arguments in sight. My contributions to these conversations are transformed into questions instead of assertions — not Socratic “let me lead you to the right answer” questions, but genuine “I want to understand you” questions.

And that is the essence of relativism to me. What you mean by your words is not what I mean by your words. Sentences are uttered with the concept of their truth in mind, and before blasting forth a correction, I first have to understand how they are true. And more often than not, my planned correction is dismantled and replaced by a connected friendship.

Perspectives on Truth and Realism

Lately I have been considering myself a relativist. To cast away the knee-jerk reactions: I don’t consider all belief systems equally valid (with caveats[1]). Wikipedia sums it up nicely:

… that truth is always relative to some particular frame of reference, such as a language or a culture.

I have noticed an increase in my opposition to what I am currently calling “scientific realism” — the belief that discoveries made by science are true, and other things are false (basically just an incarnation of absolutism). Yesterday I had an impassioned argument (still in good fun, though) with my roommate about our differences in perception. I noticed my emotions firing up around this subject, a symptom begging me to analyze its cause. Humans get very emotional when their thoughts approach a shattering of a core belief, so I am curious if one is near.

This time, instead of a philosophical persuasive essay, I’m just going to write down some of my observations.

In the conversation with my roommate Monty (whom I consider quite intelligent), mostly a battle over semantics, I found the following ensemble of his ideas to leave an impression on me:

  1. Newtonian gravity is false, and General Relativity is true.
  2. If he had lived 200 years ago, Newtonian physics would have been true.
  3. One thing cannot be more true than another (except in the trivial case of one thing being true and the other false, of course).
  4. General Relativity and The Standard Model, which are mathematically incompatible, can both be true at the same time.
  5. He hasn’t yet seen any evidence that would suggest there are things that can’t eventually be explained by our current scientific ideas.

Taken together, these ideas are fascinating to me. They indicate a different definition of truth than the one I use, and I’m fascinated because I don’t have a concept that I could substitute for it. On surface interpretation, these statements seem inconsistent to me, so I am really curious about the concept from which they arise. (I am pretty sure (5) is just a fallacy though: what would such evidence look like?)

I have met others who claim that they do not have beliefs. I find this to be common among scientific realists. I wonder what definition of “belief” they use to be able to consider themselves devoid of it; so far when I have pried I am just evaded. There are two reasons I evade inquiries: (1) I am not taking the conversation seriously, which may be because it is threatening my beliefs, or other reasons; and (2) the inquiries are using words in ways that don’t have meaning to me, so I answer in riddles that bring out the dissonance[2]. I usually assume they are doing it because their beliefs are being threatened[3]; what makes me curious is the possibility that they are evading because of (2)[4]. Perhaps I am using “belief” incorrectly when asking that question.

Among Skeptics, there is another possible reason to avoid the word “belief”: because it is very close to “faith”, the buzzword of the enemy. Maybe they use the word “truth” to mean what I call “belief”… but then the idea that someone’s beliefs can be false would be nonsense.

I think most of my anti-realism comes from a desire to (at least give due diligence to) respect the belief systems of others. I think I may start considering “true” to be a value judgement (which, as an experiment, I am trying to avoid). I had a debate with a young earth creationist, whose belief system I typically have a hard time respecting. After a long time, I think I heard an essential difference, when he said (paraphrasing): “I believe there is a God because I don’t want to live in a world without a God.” That indicated to me a different relationship to truth — that truth and desirability are related concepts — and opened to me the possibility of respecting his belief system a little more.

Dan Piponi made a brilliant comment on twitter during a conversation about realism: “I don’t think ‘reality’ means much. It’s just a placeholder to make the sentence grammatical.”


Notes

[1] What exactly does it mean for a belief system to be “valid”?

[2] This will happen, for example, if you ask me whether I believe extra-terrestrial life exists, because I get hung up on the definition of “life”. People seem to acknowledge the subtlety of that word, but then keep using the word anyway as if the inability to define it is no big thing: “you know what I mean.” No, I actually don’t.

[3] Probably because it confirms my superiority.

[4] Possibly because it threatens my superiority.