Follow Your Nose Proofs

We just had the first Categories for the Boulderite meetup, in which a bunch of people who don’t know category theory tried to teach it to each other. Some of the people there had not had very much experience with proofs, so getting “a proof” was hard even though the concepts weren’t very deep. I got the impression that those who had trouble did so mainly because they did not yet know the “follow your nose” proof tactic, which I learned in my first upper-division math class in college. That tactic is so often used that most proofs completely omit it (i.e. assume that the reader is doing it) and skip to when it gets interesting. Having it spelled out for me in that class was very helpful. So here I shall repeat it, mostly for my fellow Categories members.

Decide what to do based on a top-down analysis of the sentence you are trying to prove:

Each entry gives the shape of the sentence, then the shape of its proof:

  • If P, then Q (aka P implies Q): Suppose P. <proof of Q>
  • P if and only if Q: (→) <proof of P implies Q>. (←) <proof of Q implies P>
  • For all x such that C(x), Q: Given x. Suppose C(x). <proof of Q>
  • There exists x such that Q: Let x = <something> (requires imagination). <proof of Q>
  • P or Q: Either <proof of P> or <proof of Q> (or sometimes something tricksier, like: assume not P, <proof of Q>)
  • P and Q: (1) <proof of P>. (2) <proof of Q>.
  • not P: Assume P. <find contradiction> (requires imagination)
  • X = Y: Reduce X and Y by known equalities one step at a time (whichever side is easier first). Or sometimes there are definitions / lemmas that reduce equality to something else.
  • Something really obvious (like X = X, or 0 ≤ n where n is a natural, etc.): Say “obvious” or “trivial” and you’re done.
  • Something else: Find a definition or lemma, substitute it in, continue.

Along the way, you will find that you need to use the things you have supposed. So there is another table for how you can use assumptions.

Each entry gives the shape of an assumption, then its standard usage:

  • If P, then Q (aka P implies Q): Prove P. Then you get to use Q.
  • P if and only if Q: P and Q are equivalent. Prove one, you get the other.
  • For all x such that C(x), P(x): Prove C(y) for some y that you have, then you get to use P(y).
  • There exists x such that C(x): Use x and the fact that C(x) somehow (helpful, right? ;-).
  • P and Q: Therefore P / Therefore Q.
  • P or Q: If P then <goal>. If Q then <same goal>. (Or sometimes prove not P, then you know Q)
  • not P: Prove P. Then you’re done! (You have inconsistent assumptions, from which anything follows)
  • X = Y: If you are stuck and have an X somewhere in your goal, try substituting Y. And vice versa.
  • Something obvious from your other assumptions: Throw it away, it doesn’t help you.
  • Something else: Find definition, substitute it in, continue.

Let’s try some examples. First some definitions/lemmas to work with:

Definition (extensionality): If X and Y are sets, then X = Y if and only if for all x, x \in X if and only if x \in Y.
Definition: X \subseteq Y if and only if for every a, a \in X implies a \in Y.

Theorem: X = Y if and only if X \subseteq Y and Y \subseteq X.

Follow your nose proof.

  • (→) Show X = Y implies X \subseteq Y and Y \subseteq X.
    • Assume X = Y. Show X \subseteq Y and Y \subseteq X.
    • Substitute: Show X \subseteq X and X \subseteq X.
    • We’re done, since X \subseteq X is obvious.
  • (←) Show X \subseteq Y and Y \subseteq X implies X = Y.
    • Assume X \subseteq Y and Y \subseteq X. Show X = Y.
    • (expand definition of = by extensionality)
    • Show forall x, x \in X if and only if x \in Y.
    • Given x.
    • (→) Show x \in X implies x \in Y.
      • Follows from the definition of our assumption X \subseteq Y.
    • (←) Show x \in Y implies x \in X.
      • Follows from the definition of our assumption Y \subseteq X.

See how we are mechanically disassembling the statement we have to prove? Most proofs like this don’t take any deep insight; you just execute this algorithm. Such a process is assumed when reading and writing proofs, so in the real world you will see something more like the following proof:

Proof. (→) trivial. (←) By extensionality, x \in X implies x \in Y since X \subseteq Y, and x \in Y implies x \in X since Y \subseteq X.

We have left out stating the suppositions that you would naturally make by following your nose. We have also left out the unfolding of definitions, except perhaps naming the definition being used. But when you are just getting started proving things, it’s good to write out these steps in detail, because then you can see what you have to work with and where you are going. Then begin leaving out obvious steps as you become comfortable.

We have also just justified a typical way to show that two sets are equal: show that they are subsets of each other.
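
If you like having a machine check this kind of reasoning, the same skeleton goes through nearly verbatim in a proof assistant. Here is a sketch in Lean 4 using Mathlib (Mathlib supplies Set, the ⊆ notation, and the ext tactic); the two bullets below mirror the (→) and (←) bullets above:

    import Mathlib.Data.Set.Basic

    example {α : Type} (X Y : Set α) : X = Y ↔ X ⊆ Y ∧ Y ⊆ X := by
      constructor
      · -- (→) Assume X = Y; both inclusions become X ⊆ X, which is obvious.
        rintro rfl
        exact ⟨fun _ h => h, fun _ h => h⟩
      · -- (←) Assume both inclusions; by extensionality, check membership pointwise.
        rintro ⟨hXY, hYX⟩
        ext x
        exact ⟨fun hx => hXY hx, fun hy => hYX hy⟩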

Let’s see one more example:

Definition: Given sets A and B, a function f : A → B is a surjection if for every y \in B, there exists an x \in A such that f(x) = y.

Definition: Two functions f,g : A → B are equal if and only if for all x \in A, f(x) = g(x).

Definition: (g \circ f)(x) = g(f(x)).

Definition: For any set A, the identity \mathit{Id}_A is defined by \mathit{Id}_A(x) = x.

Theorem: Given f : A → B. If there exists f^{-1} : B → A such that f \circ f^{-1} = \mathit{Id}_B, then f is a surjection.

Follow your nose proof.

  • Given f : A → B.
  • Suppose there exists f^{-1} : B → A and f \circ f^{-1} = \mathit{Id}_B. Show f is a surjection.
  • By definition, show that for all y \in B, there exists x \in A such that f(x) = y.
  • Given y \in B. Show there exists x \in A such that f(x) = y.
  • Now we have to find an x in A. Well, we have y \in B and a function from B to A, let’s try that:
  • Let x = f^{-1}(y). Show f(x) = y.
  • Substitute: Show f(f^{-1}(y)) = y.
  • We know f \circ f^{-1} = \mathit{Id}_B, so by the definition of two functions being equal, we know f(f^{-1}(y)) = \mathit{Id}_B(y) = y, and we’re done.

Again, notice how we are breaking up the task based on the structure of what we are trying to prove. The only non-mechanical things we did were to find x and apply the assumption that f \circ f^{-1} = \mathit{Id}_B. In fact, usually the interesting parts of a proof are giving values to “there exists” statements and using assumptions (in particular, saying what values you use “for all” assumptions with). Since those are the interesting parts, those are the only parts that an idiomatic proof would say:

Proof. Given y \in B. Let x = f^{-1}(y). f(x) = f(f^{-1}(y)) = y since f \circ f^{-1} = \mathit{Id}_B.
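
For the record, this proof also goes through in Lean 4 with Mathlib essentially as a one-liner (Function.Surjective is Mathlib’s name for the surjection property; g plays the role of f^{-1}):

    import Mathlib.Logic.Function.Basic

    example {A B : Type} (f : A → B) (g : B → A)
        (h : ∀ y, f (g y) = y) : Function.Surjective f :=
      fun y => ⟨g y, h y⟩   -- the witness is g y, and h y proves f (g y) = y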

Remember to take it step-by-step; at each step, write down what you learned and what you are trying to prove, and try to make a little progress. These proofs are easy if you follow your nose.

Constructions on Typeclasses, Part 1: F-Algebras

This post is rendered from literate Haskell. I recommend doing the exercises inline, so use the source.

> {-# LANGUAGE DeriveFunctor
>            , DeriveFoldable
>            , DeriveTraversable
>            , TypeOperators #-}
>
> import Control.Applicative
> import Data.Foldable
> import Data.Traversable

Certain kinds of typeclasses have some very regular instances. For example, it is obvious how to implement (Num a, Num b) => Num (a,b) and (Monoid a, Monoid b) => Monoid (a,b), and similarly if F is some applicative functor, (Num a) => Num (F a) and (Monoid a) => Monoid (F a) are obvious. Furthermore, these instances (and many others) seem to be obvious in the same way. In particular, here is how the operations lift through an applicative functor:

(+) a b     = (+)     <$> a <*> b
mappend a b = mappend <$> a <*> b

fromInteger n = pure (fromInteger n)
mempty        = pure mempty

And take them on pairs:

(x,x')     +     (y,y')  = (x     +     y, x'     +     y')
(x,x') `mappend` (y,y')  = (x `mappend` y, x' `mappend` y')

fromInteger n = (fromInteger n, fromInteger n)
mempty        = (mempty       , mempty)

It would be straightforward for these cases to derive the necessary implementations from the type signature. However, it would be nice if there were a more abstract perspective, such that we didn’t have to inspect the type signature to find the operations – that they could arise from some other standard construction. Further, it is not quite as obvious from the type signature how to automatically instantiate methods such as

mconcat :: (Monoid m) => [m] -> m

without making a special case for [], whereas hopefully a more abstract perspective would inform us what kinds of type constructors would be supported.

In this post, we will see such an abstract perspective. It comes from (surprise!) category theory. I disclaim that I’m still a novice with category theory (but in the past few weeks I have gained competence by studying). So we will not get very deep into the theory, just enough to steal the useful concept and leave the rest behind. I welcome relevant insights from the more categorically educated in the comments.

F-Algebras

The unifying concept we will steal is the F-algebra. An F-algebra is a Functor f and a type a together with a function f a -> a. We can make this precise in Haskell:

> type Algebra f a = f a -> a

I claim that Num and Monoid instances are F-algebras over suitable functors. Look at the methods of Monoid:

mempty :: m
mappend :: m -> m -> m

We need to find a functor f such that we can recover these two methods from a function of type f m -> m. With some squinting, we arrive at:

> data MonoidF m
>     = MEmpty
>     | MAppend m m
>
> memptyF :: Algebra MonoidF m -> m
> memptyF alg = alg MEmpty
>
> mappendF :: Algebra MonoidF m -> (m -> m -> m)
> mappendF alg x y = alg (MAppend x y)
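
For instance (an illustrative example, not part of the exercises below), lists form a MonoidF-algebra in the obvious way, and the two accessors above recover the usual list operations from it:

listAlg :: Algebra MonoidF [a]
listAlg MEmpty          = []
listAlg (MAppend xs ys) = xs ++ ys

-- memptyF  listAlg          ==  []
-- mappendF listAlg [1] [2]  ==  [1,2]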

Exercise 1: work out the functor NumF over which Num instances are F-algebras, and write the methods of Num in terms of it.

Exercise 2: for each of the standard classes Eq, Read, Show, Bounded, and Integral, work out whether they are expressible as F-algebras. If so, give the functor; if not, explain or prove why not.

Exercise 3: write a function toMonoidAlg which finds the MonoidF-algebra for a given instance m of the Monoid class.

Combining Instances

Motivated by the examples in the introduction, we can find the “instance” for pairs given instances for each of the components.

> pairAlg :: (Functor t) => Algebra t a -> Algebra t b -> Algebra t (a,b)
> pairAlg alga algb tab = (alga (fmap fst tab), algb (fmap snd tab))
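
For example, pairing the list algebra sketched above with itself recovers the monoid structure on pairs of lists (illustrative only: it assumes MonoidF is additionally given deriving (Functor), which the pragmas at the top already allow):

pairListAlg :: Algebra MonoidF ([a], [b])
pairListAlg = pairAlg listAlg listAlg

-- mappendF pairListAlg ([1], "x") ([2], "y")  ==  ([1,2], "xy")
-- memptyF  pairListAlg                        ==  ([], "")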

Also, we hope we can find the instance for an applicative functor given an instance for its argument

applicativeAlg :: (Functor t, Applicative f)
               => Algebra t a -> Algebra t (f a)

but there turns out to be trouble:

applicativeAlg alg tfa = ...

We need to get our hands on a t a somehow, and all we have is a t (f a). This hints at something from the standard library:

sequenceA :: (Traversable t, Applicative f) => t (f a) -> f (t a)

which indicates that our functor needs more structure to implement applicativeAlg.

> applicativeAlg :: (Traversable t, Applicative f)
>                => Algebra t a -> Algebra t (f a)
> applicativeAlg alg tfa = fmap alg (sequenceA tfa)
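
For example, the list monoid now lifts through Maybe (again illustrative: it assumes MonoidF is additionally given deriving (Functor, Foldable, Traversable), and it reuses listAlg from above):

maybeListAlg :: Algebra MonoidF (Maybe [a])
maybeListAlg = applicativeAlg listAlg

-- memptyF  maybeListAlg                        ==  Just []
-- mappendF maybeListAlg (Just [1]) (Just [2])  ==  Just [1,2]
-- mappendF maybeListAlg (Just [1]) Nothing     ==  Nothing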

Now we should be able to answer the query from the beginning:

Exercise 4: For what kinds of type constructors c is it possible to automatically derive instances for (a) pairs and (b) Applicatives for a typeclass with a method of type c a -> a (e.g. mconcat :: [a] -> a)? Demonstrate this with an implementation.

Combining Classes

Intuitively, joining the methods of two classes which are both expressible as F-algebras should give us another class expressible as an F-algebra. This is demonstrated by the following construction:

> data (f :+: g) a = InL (f a) | InR (g a)
>     deriving (Functor, Foldable, Traversable)
>
> coproductAlg :: (Functor f, Functor g)
>              => Algebra f a -> Algebra g a -> Algebra (f :+: g) a
> coproductAlg falg _ (InL fa) = falg fa
> coproductAlg _ galg (InR ga) = galg ga

So now we can model a subclass of both Num and Monoid by type NumMonoidF = NumF :+: MonoidF.

Exercise 5: We hope to be able to recover Algebra NumF a from Algebra NumMonoidF a, demonstrating that the latter is in fact a subclass. Implement the necessary function(s).

Exercise 6: Given the functor product definition

> data (f :*: g) a = Pair (f a) (g a)
>     deriving (Functor, Foldable, Traversable)

find a suitable combinator for forming algebras over a product functor. It may not have the same form as coproduct’s combinator! What would a typeclass formed by a product of two typeclasses interpreted as F-algebras look like?

Free Constructions

One of the neat things we can do with typeclasses expressed as F-algebras is form free monads over them – i.e. form the data type of a “syntax tree” over the methods of a class (with a given set of free variables). Begin with the free monad over a functor:

> data Free f a
>     = Pure a
>     | Effect (f (Free f a))
>     deriving (Functor, Foldable, Traversable)
>
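> -- Since GHC 7.10, Monad requires an Applicative superclass instance; the
> -- obvious one for Free is the standard free-monad Applicative.
> instance (Functor f) => Applicative (Free f) where
>     pure = Pure
>     Pure g   <*> x = fmap g x
>     Effect g <*> x = Effect (fmap (<*> x) g)
>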
> instance (Functor f) => Monad (Free f) where
>     return = Pure
>     Pure a >>= t = t a
>     Effect f >>= t = Effect (fmap (>>= t) f)

(Church-encoding this gives better performance, but I’m using this version for expository purposes)

Free f a can be interpreted as a syntax tree over the typeclass formed by f with free variables in a. This is also called an “initial algebra”, a term you may have heard thrown around in the Haskell community from time to time. We demonstrate that a free construction over a functor is a valid F-algebra for that functor:

> initialAlgebra :: (Functor f) => Algebra f (Free f a)
> initialAlgebra = Effect

And that it is possible to “interpret” an initial algebra using any other F-algebra over that functor.

> initiality :: (Functor f) => Algebra f a -> Free f a -> a
> initiality alg (Pure a) = a
> initiality alg (Effect f) = alg (fmap (initiality alg) f)
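
For example (illustrative, under the same assumptions as before: a derived Functor instance for MonoidF, plus the listAlg algebra from earlier), a small syntax tree over MonoidF evaluates to a plain list:

exampleTree :: Free MonoidF String
exampleTree = Effect (MAppend (Pure "foo")
                              (Effect (MAppend (Effect MEmpty) (Pure "bar"))))

-- initiality listAlg exampleTree  ==  "foobar"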

Exercise 7: Give a monoid isomorphism (a bijection that preserves the monoid operations) between Free MonoidF and lists [], ignoring that Haskell allows infinitely large terms. Then, using an infinite term, show how this isomorphism fails.

Next time: F-Coalgebras

How GADTs inhibit abstraction

Today I want to talk about this snippet:

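The program in question looks roughly like this (a reconstructed sketch using the names discussed below; on the GHC of this era the deriving clause was accepted, whereas modern GHC rejects it outright via role checking, and the empty case needs EmptyCase on 7.8+):

    {-# LANGUAGE GADTs, GeneralizedNewtypeDeriving, EmptyCase #-}

    data A = A

    -- A GADT whose only constructor pins its index to exactly A.
    data Switch a where
        SwitchA :: Switch A

    class Switchable a where
        switch :: Switch a

    instance Switchable A where
        switch = SwitchA

    -- Generalized newtype deriving clones A's instance, handing us a Switch B ...
    newtype B = B A deriving (Switchable)

    -- ... even though, going by the GADT, no Switch B can exist, so the empty
    -- match below "should" be exhaustive.
    unSwitchB :: Switch B -> ()
    unSwitchB sw = case sw of {}

    main :: IO ()
    main = print (unSwitchB switch)
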
This program ought to be well-behaved — it has no recursion (or recursion-encoding tricks), no undefined or error, no incomplete pattern matches, so we should expect our types to be theorems. And yet we can get inconsistent. What is going on here?

Exercise: Identify the culprit before continuing.

The problem lies in the interaction between GADTs and generalized newtype deriving. Generalized newtype deriving seems to be broken here — we created a type B which claims to be just like A including instances, but one of A’s instances relied on it being exactly equal to A. And so we get a program which claims to have non-exhaustive patterns (in unSwitchB), even though the pattern we omitted should have been impossible. And this is not the worst that generalized newtype deriving can do. When combined with type families, it is possible to write unsafeCoerce. This has been known since GHC 6.7.

In this post I intend to explore generalized newtype deriving and GADTs more deeply, from a more philosophical perspective, as opposed to just trying to plug this inconsistency. There are a few different forces at play, and by looking at them closely we will see some fundamental ideas about the meaning of types and type constructors.

Generalized newtype deriving seems reasonable to us by appealing to an intuition: if I have a type with some structure, I can clone that structure into a new type — basically making a type synonym that is a bit stricter about the boundaries of the abstraction. But the trouble is that you can clone parts of the structure without other parts; e.g. if X is an applicative and a monad, and I declare newtype Y a = Y (X a) deriving (Monad), then go on to define a different Applicative instance, I have done something wrong. Monad and applicative are related, so you can’t just change them willy nilly as though they were independent variables. But at the very least it seems reasonable that you should be able to copy all the structure, essentially defining a type synonym but giving it a more rigorous abstraction boundary. But in Haskell, this is not possible, and that is because, with extensions such as GADTs and type families, not all of a type’s structure is clonable.

I’m going to be talking a lot about abstraction. Although the kind of abstraction I mean here is simple, it is one of the fundamental things we do when designing software. To abstract a type is to take away some of its structure. We can abstract Integer to Nat by taking away the ability to make negatives — we still represent it as Integer, but because the new type has strictly fewer operations (it must be fewer — after all we had to implement the operations somehow!) we know more about its elements, and finely tuning that knowledge is where good software engineering comes from.

When implementing an abstraction, we must define its operations. An operation takes some stuff in terms of that abstraction and gives back some stuff in terms of that abstraction. Its implementation must usually use some of the structure of the underlying representation — we define addition on Nat by addition on Integer. We may take it for granted that we can do this; for example, we do not have trouble defining:

    sum :: [Nat] -> Nat

even though we are not given any Nats directly, but instead under some type constructor ([]).

One of the properties of type constructors that causes us to take this ability to abstract for granted is that if A and B are isomorphic (in a sense that will become clear in a moment), then F A and F B should also be isomorphic. Since we, the implementers of the abstraction, are in possession of a bijection between Nats and the Integers that represent them, we can use this property to implement whatever operations we need — if they could be implemented on Integer, they can be implemented on Nat.

This isomorphism property looks like a weak version of saying that F is a Functor. Indeed, F is properly a functor from a category of isomorphisms in which A and B are objects. Every type constructor F is a functor from some category; which category specifically depends on the structure of F. F's flexibility to work with abstractions in its argument is determined by that category, so the more you can do to that category, the more you can do with F. Positive and negative data types have all of Hask as their source category, so any abstractions you make will continue to work nicely under them. Invariant functors like Endo require bijections, but fortunately when we use newtype to create abstractions, we have a bijection. This is where generalized newtype deriving gets its motivation -- we can just use that bijection to substitute the abstraction for its representation anywhere we like.

But GADTs (and type families) are different. A functor like Switch b has an even smaller category as its domain: a discrete category. The only thing which is isomorphic to A in this category is A itself -- whether there is a bijection is irrelevant. This violates generalized newtype deriving's assumption that you can always use bijections to get from an abstraction to its representation and back. GADTs that rely on exact equality of types are completely inflexible in their argument, they do not permit abstractions. This, I claim, is bad -- you want to permit the user of your functor to make abstractions.

(Aside: if you keep a nice boundary around the constructors of the GADT so they cannot be observed directly, one way to permit abstraction anyway is to simply insert a constructor that endows the type with the necessary operation. E.g. if you want it to be a functor from Hask, just insert

    Fmap :: (a -> b) -> F a -> F b

If you want it to be a functor from Mon (category of monoids), insert:

    Fmap :: (Monoid n) => MonoidHom m n -> F m -> F n

(presumably F m already came with a `Monoid` dictionary). These, I believe, are free constructions -- giving your type the structure you want in the stupidest possible way, essentially saying "yeah it can do that" and leaving it to the consumers of the type to figure out how.)
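
To make the first suggestion concrete, here is a minimal sketch (GADTs enabled; F and FInt are made-up names) of a GADT that pins its index, made into a functor from Hask by adding such an Fmap constructor:

    data F a where
        FInt :: Int -> F Int
        -- the "free" structure: just remember the function you were asked to map
        Fmap :: (a -> b) -> F a -> F b

    instance Functor F where
        fmap g (Fmap h x) = Fmap (g . h) x
        fmap g fa         = Fmap g fa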

In any case, we are seeing something about GADTs specifically that simple data types do not have -- they can give a lot of different kinds of structure to their domain, and in particular they can distinguish specific types as fundamentally different from anything else, no matter how similarly they may behave. There is another way to see this: defining a GADT which mentions a particular type gives the mentioned type unclonable structure, such that generalized newtype deriving and other abstraction techniques which clone some of a type's structure no longer succeed.

DI Breakdown

I’m having a philosophical breakdown of the software engineering variety. I’m writing a register allocation library for my current project at work, referencing a not-too-complex algorithm which, however, has many degrees of freedom.  Throughout the paper they talk about making various modifications to achieve different effects — tying variables to specific registers, brazenly pretending that a node is colorable when it looks like it isn’t (because it might work out in its favor), heuristics for choosing which nodes to simplify first, categorizing all the move instructions in the program to select from a smart, small set when the time comes to try to eliminate them.  I’m trying to characterize the algorithm so that those different selections can be made easily, and it is a wonderful puzzle.

I also feel aesthetically stuck.   I am feeling too many choices in Haskell — do I take this option as a parameter, or do I stuff it in a reader monad?  Similarly, do I characterize this computation as living in the Cont monad, or do I simply take a continuation as a parameter?  When expressing a piece of a computation, do I return the “simplest” type which provides the necessary data, do I return a functor which informs how the piece is to be used, or do I just go ahead and traverse the final data structure right there?  What if the simplest type that gives the necessary information is vacuous, and all the information is in how it is used?

You might be thinking to yourself, “yes, Luke, you are just designing software.”  But it feels more arbitrary than that — I have everything I want to say and I know how it fits together.  My physics professor always used to say “now we have to make a choice — which is bad, because we’re about to make the wrong one”.  He would manipulate the problem until every decision was forced.  I need a way to constrain my decisions, to find what might be seen as the unique most general type of each piece of this algorithm.  There are too many ways to say everything.

A Gambler In Heaven

A gambler has just lost all but his last $1 in Vegas and decides to go for a walk.  Unfortunately he gets hit by a bus but, having lived mostly a good life aside from the gambling, is shown God’s mercy and lands in heaven.  They only have one type of gambling in heaven: a simple, choice-free game with the following rules:

A coin is tossed.  If it comes up tails, you lose $1.  If it comes up heads, your entire bankroll is tripled.

The gambler only has the $1 he had on him when he died (turns out you keep your money when you go to heaven).  Here is a possible outcome of his playing this game:

  • $1 – H -> $3
  • $3 – T -> $2
  • $2 – H -> $6
  • $6 – T -> $5
  • $5 – T -> $4
  • $4 – T -> $3
  • $3 – T -> $2
  • $2 – T -> $1
  • $1 – T -> $0

And thus he is broke.

The question is this: starting with his $1, what is the probability he will live the rest of eternity broke in heaven? The alternative, presumably, is that he spends eternity doing what he loves most: gambling.  Do all paths eventually lead to bankruptcy a la Gambler’s ruin, or is there a nonzero probability of playing forever?
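
If you want to experiment before answering, here is a quick Monte Carlo sketch in Haskell (illustrative only; it uses the random package, and any game that survives maxFlips tosses is counted as not ruined, so it only estimates a lower bound on the ruin probability):

    import System.Random (randomIO)

    -- One game starting from the given bankroll; True means he went broke.
    playOnce :: Int -> Integer -> IO Bool
    playOnce _ 0        = return True
    playOnce 0 _        = return False
    playOnce k bankroll = do
        heads <- randomIO
        playOnce (k - 1) (if heads then 3 * bankroll else bankroll - 1)

    -- Fraction of `trials` games, each started with $1, that end in ruin.
    estimateRuin :: Int -> Int -> IO Double
    estimateRuin trials maxFlips = do
        ruins <- mapM (\_ -> playOnce maxFlips 1) [1 .. trials]
        return (fromIntegral (length (filter id ruins)) / fromIntegral trials)

    -- e.g.  estimateRuin 100000 1000 >>= print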

You may leave your ideas in the comments, and I will post a solution in a few days.

What do you say when you have nothing to say? What do you do when your song is a nice accompaniment to a vocal line, and there are no words to accompany?

I could talk about my life. I could mention my new teaching job, the cosmic interference with my busking, the flood… those all seem so incidental.

Maybe silence is okay. Maybe I am saying something — I am writing a lot of music, after all. I’m feeling pressure from Amanda (my girlfriend and closest friend) — not in any way that she is instigating, just a side-effect of who she is — to say something meaningful, something important. I can’t. I don’t feel like my ideas are important in that way, in the way that they are ready to jump from my mind into another’s and have any benefit. I think only vague half-truths: a strong conclusion, a value to hold on to, feels miles away. I know personal truths, I am feeling confident in them, and it is a great feeling, but words always miss the mark. They always make me seem either more certain or more uncertain than I am, with them I don’t know how to walk the fine line where I really communicate. And if I could . . . would I put it in a song; would I write it here?

I don’t think I would be bothered if my music felt complete without words. But I have a couple of songs in the oven that are just begging for words, that’s musically obvious to me. The missing instrument is words. I see a symbol, a metaphor: my life for the song, the words for… what? But it does feel that way — my life has a great groove but is also missing something. Missing lyrics.

I would normally argue that my lyricless music is saying something — it does have a message — but, like my thoughts and my truths, words cannot communicate it. But I’m incredulous. That argument doesn’t have the ring it used to.

I –

Why dream of being awake?

To every action, give your whole self; I am wholly procrastinating, fully indecisive, completely half-listening. Mr. mindful, awake, clear-headed, be careful, pictures can be projected on the fog. We are all blind, stumbling pigeons, wholeheartedly. The most committed are those who believe they have conquered life — how would you say that a delusional maniac “doesn’t have his heart in it”? Then there are those of us who envy such commitment — to be stuck only wanting a delusion — is that a lesser or greater commitment?

There is a transcendence here (, man). I want to experience my whole self, so I can’t just give up being lost and absent. My self includes my guilt, my self-judgments, my unacceptance of those judgments — no spiritual or psychological change I can make will do justice to my self. Nor will stagnation realize my true potential (a concept that makes the very same error).

But we get trapped again. We can’t stop intending to change because it would not do justice to self; nor can we stop intending to stop intending to change. To lay a path to spiritual betterhood is to believe that you have, in some small way, failed to be a blind, stumbling pigeon. This is false but, as we have already covered, admirable.

There’s nothing new to accept. This line of questioning is wrong. Self-acceptance is a vacuous goal. When that sinks in — when you really believe that — something changes inside of you. It plants the seed of the real self-acceptance, not that fuzzy-wuzzy kind you wanted. You’ll know when you have Achieved real self-acceptance because nothing happens, except maybe you will think and/or feel differently than you would have in some situations (it is unclear what that mechanism is).

Then what? Well of course, self-acceptance is only one step along the path — after which there can be no more — the steps no longer look like steps, but flat step-like objects. But I have to ask again, now what? What do I do now? What is the next .. the next ..

These are the chirpings of an analytical mind with nothing to analyze.

Shhhhhhh…

Life as a Musician?

So, it turns out I’m not dead. How about that?

I have dropped out of school, and am busking for a living. It is tiring (especially when I forget to drink enough water), sometimes discouraging (when I play things to no response whatsoever or make $5 in an hour), but mostly great. My job is making music! And more importantly, my job is making my music, or music I am in love with — although certain pieces tend to attract more tippers than others, so it’s not truly free (what is?).

My grandmother contacted me telling me about a startup mixer so I could find a job. I don’t think she really understands my decision. I can understand that — she wants me to get a stable, well-paying job, have kids and a family, and go to church. The usual narrative. The other day I was idly contemplating being a father. Not now, of course. But I can see the draw; I can see that being a pretty special thing. The question is whether it is worth it to me. Sacrifice is part of love. But do I sacrifice for my child, or do I sacrifice a child (umm! — sacrifice having a child) for my other loves? That is not a question I am remotely prepared to answer.

I used to think — perhaps I still do — that big questions like those aren’t really worth answering, at least not rationally. I suppose this “used to” is fairly recent, as I had spent a long time on them prior, and they led me nowhere but in circles of unfulfilled dustkicking. My self-image can be so limited at times, and the rational mind is a slave to its images. What I can really do, what I’m really made of, I perhaps thought, won’t be small enough to be so easily decided — it must be eased into, made part of myself through exploration and long, gradual growth. But the liberation I feel from this new occupation of mine has shown me that perhaps at points along this process such a life decision is valuable, that it can be a beacon that reminds me that I chose this because it was important to me — more important than anything else at one time — and so gives me something to hold onto in times of uncertainty or suffering. It sounds very compelling, doesn’t it? But I am still in the honeymoon phase of my relationship with my life as a musician, so the only thing I can be sure of is that my thoughts about it are distorted.

And am I really good enough to make this a living? Maybe Boulder is the only place people appreciate public performances of amateur classical music. Maybe when I migrate for the winter I will be met with indifference or contempt, and I will be stuck in a new city with no job. Maybe when I improvise or play my originals people only tip me because I have brought the piano out, not because the music speaks to them in any deep way — I know that is not true, my second piano sonata is almost always met by applause, but it has been 10 years since I wrote that; do I still have it? A teenager passes by and plays most of the pieces I do — not as well, but not badly — and he will surpass me by my age. Will I ever have the guts to sing out there?

A thousand fears and doubts dance their rite around my dream — all I can do is to go out there every day and hope it goes well. I think it’s proof that I’m alive. I pose this question to myself: would I rather be wildly successful in a software company, or wildly successful as a musician? The latter, by any metric. “Wildly” need not even appear. Standing on a plank and singing to the jury, my heart beating a thousand times a minute, with the conviction of a soldier — this outshines any vision of a successful software idea.

I’m not leaving software. But my most exciting software ideas aren’t the kinds of things one can easily make a living on. I’m working on a browser-based programming environment which explores a new way of designing and organizing code. I don’t want to say too much about it because as I code the idea continues to develop in my mind, and I don’t want to nail it down yet (maybe ever). But anyway, to make money with that would sacrifice its beauty; this tool is not for productivity, at least not at first: it is exploring a way of thinking. It is easier to make a living making the music I love than the software I love. If my life is to overflow with love and happiness, music is the breadwinner.

Again — only a month in. But I think this is the way to do it, for me. I’m not setting myself up for a comfortable life, but comfort is a trap anyway. It is the contrast that feels so good, and without that contrast comfort is just normal. Without discomfort to prepare the contrast, comfort is dull and boring. Anyway, that’s how I see it. Funny coming from a hedonist like me. I guess I’m having a stint of long-term hedonism at the expense of short-term.

Maybe someday I won’t even feel the need to justify my choice anymore. That’s when I’ll really be in it.

Heart

It is part of growing up, I keep telling myself — doing what I know — for some definition of know — is right, despite the advice of my family and almost everyone (but my best friend who is my only beacon in this whole mess). I have a good family — supportive, have my best interest in mind, certainly not the image of the disapproving father so pervasive — and partially I haven’t been completely honest with them, because it’s scary. Nonetheless, I feel a lot of pressure from their attempting-to-be-neutral positions, and I know what I want — what I need to do, but when the time comes to say it I can’t, condemning myself to this purgatory.

I’m not going to finish college. I am very close, only a few credits away, but it is not going to happen at the end of this semester, and everyone is like “but it’s just one more and it’s important for the future” — not so different from my reasoning for returning to college in the first place — I have been at this decision point before, and did convince myself with the assistance of my family that it was the right thing to do. Maybe it was once, and although I did not achieve the goals I set for it, it isn’t right anymore.

Here’s the really hard part, and I have to speak this with less certainty than the other, because different parts of my mind and body are fighting over it. I don’t think I’m going to finish this semester. Try as I might (whatever that means) I cannot commit myself to something that I don’t truly believe is serving me, and right now that is school. I don’t have that kind of control over myself. My grades are really slipping; each moment here feels like trying to run in a dream, suspended in the air. I know, what’s another month? It really doesn’t matter either way. It would matter if I wanted to go to grad school, but after years of getting to know myself and being friends with grad students, I don’t think it is the place for me. I am too disorganized; my intellectual exploration is founded in too much curiosity and not enough desire to contribute. Suddenly a pursuit will become uninteresting and another will take me by surprise, but you can’t just switch like that in school.

But you can just switch like that in life. Why would I arbitrarily obligate myself to someone else when I am exploring what I love? Out there in the cruel, forgiving, free world, I can pursue whatever I like whenever and however I like to. Yes, I need to make money, but that’s not such a huge deal. I don’t really get why people make their way of earning money the centerpiece of their lives. Insert canonical white-picket-fence rant.

I don’t have a good phrase to describe who I want to be or what I’m going for. I think of such phrases as potentially guiding, locally, but ultimately limiting. To define myself with words is to forget every moment the words do not account for — when would someone include the Pepsi they had for breakfast in their self-definition? — but that bottle of salt and sugar is part of me, negative or however you want to judge it. Of course, not having such a phrase makes it difficult to assess the value of a difficult easy decision like this, and without a mechanism for assessing value I have no choice but to be human and follow my hearts — there’s nothing else that I can say with my vocabulary that doesn’t sound like a waste of my life.

I have long valued every moment of my time. A year of my life spent unhappy in order to support the remainder of my life never seemed worthwhile to me — I know that sounds irrational — but that seems to be the way I relate to time. A month spent in school, a month not making my living by sharing my musical heart, a month depressed and careless, a month of missed opportunity.

And yet, it is only a month. But why would I stay? I can’t articulate any convincing reason. It will make it less work should I ever decide to come back and finish — but that is actually false. One class is just as many as four, if not more.

I have had my struggles, but at important times I have always listened to the guidance of my family, I think I have always made what they saw as the best choice. This time, I think, their poor choice is the right one — if only symbolically, if only to remind myself whose life I am living.


List of Awesome Things

from the Lang.NEXT conference this week. I heard most of this stuff not from the talks, but from the awesome people who were attending the conference.

  • Alloy – A language for relational models. You enter a specification of your problem in a first-order relational language (quite powerful) and then it tries to find counterexamples within some small space. If no counterexamples are found, you can have a fairly high degree of confidence in your model (as they say, “most assertions are wrong; most flaws have small counterexamples”).
  • Combinatorial Sketching. “Here’s what I want my algorithm to do, and I know that it probably involves a for loop and addition and multiplication. Figure it out for me, kthxbye.” (Thanks to Peter Alvaro for the last two links; check out his research on Bloom, too: a very nice way to simplify and analyze distributed systems)
  • Cyc, a queryable database of “common sense”.
  • Labanotation – a standardized system for recording human motion. Traditionally used to write down choreography.
  • Structure and Interpretation of Classical Mechanics, in which Sussman (creator of Scheme) writes about classical mechanics in a functional style. I have read the first chapter, and I have to say, functional calculus notation makes a lot more sense to me than the traditional notation. Go figure.
  • Algorithmic information theory
  • Smooth interpretation — a way of smoothing out digital programs so that their parameter spaces can be searched by gradient descent.
  • Lytro, the coolest piece of hardware I have seen in years: it’s a camera which records the directions of incoming rays, so that you can focus the picture after the fact. For example.
