This time, I’m just going to share my idea without any self-obligation to make it.

I’m working on Chrome’s build system at Google. We are switching the build scripts to a new system which uses an ingenious testing system that I’ve never seen before (though it seems like the kind of thing that would be well-known). For each build script, we have a few test inputs to run it on. The tests run all of our scripts on all of their test inputs, but rather than running the commands, they simply record the commands that would have been run into “test expectation” files, which we then check into source control.

Checking in these auto-generated files is the ingenious part. Now, when we want to change or refactor anything about the system, we simply make the change, regenerate the expectations, and do a `git diff`. This will tell us what the effects of our change are. If it’s supposed to be a refactor, then there should be no expectation diffs. If it’s supposed to change something, we can look through the diffs and make sure that it changed exactly what it was supposed to. These expectation files are a form of specification, except they live at the opposite end of the development chain.
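The trick is small enough to sketch in a few lines of Haskell. This is a toy rendition of the idea, not Chrome's actual tooling — the names `compileScript` and `regenerate` are hypothetical: a script *returns* the commands it would run instead of running them, and regenerating just overwrites the checked-in file so that `git diff` shows the effects of a change.

```haskell
-- Toy sketch of expectation testing; all names are hypothetical.
type Command = String

-- A "build script", abstracted over execution: it returns the commands
-- it would run for a given input instead of running them.
compileScript :: FilePath -> [Command]
compileScript src = ["gcc -c " ++ src, "ld -o out " ++ src ++ ".o"]

-- Regenerate a checked-in expectation file from a list of test inputs.
-- Reviewing a change is then just `git diff` on this file.
regenerate :: FilePath -> [FilePath] -> IO ()
regenerate expectationFile inputs =
    writeFile expectationFile (unlines (concatMap compileScript inputs))
```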

This fits in nicely with a Haskell development flow that I often use. The way it usually goes: I write a function, get it to compile cleanly, then I go to ghci and try it on a few conspicuous inputs. Does it work with an empty list? What about an infinite list (trimming the output to sanity-check, if the output is also infinite)? I give it enough examples that I have pretty high confidence that it’s correct. Then I move on, assuming it’s correct, and do the same with the next function.

I really enjoy this way of working. It’s “hands on”.

What if my environment recorded my playground session, so that whenever I changed a function, I could see its impact? It would mark the specific cases that changed, so that I could make sure I’m not regressing. It’s almost the same as unit tests, but with a friendlier workflow and less overhead (reading rather than writing). Maybe a little less intentional and therefore a little more error-prone, but it would be less error-prone than the regression testing strategy I currently use for my small projects (read: none).

It’s bothersome to me that this is hard to make. It seems like such a simple idea. Maybe it is easy and I’m just missing something.

I have been thinking about this distinction in the realm of programming. As a Haskell programmer, most of my focus is in an algebraic-feeling programming. I like to perfect my modules, making them beautiful and eternal, built up from definitions that are compact and each obviously correct. I take care with my modules when I first write them, and then rarely touch them again (except to update them with dependency patches that the community helpfully provides). This is in harmony with the current practice of denotative programming, which strives to give mathematical meaning to programs and thus make them easy to reason about. This meaning has, so far, always been of an algebraic nature.

What a jolt I felt when I began work at Google. The programming that happens here feels quite different — much more like the analytic feeling (I presume — I mostly studied algebraic areas of math in school, so I have less experience). Here the codebase and its dependencies are constantly in motion, gaining requirements, changing direction. “Good enough” is good enough; we don’t need beautiful, eternal results. It’s messy, it’s malleable. We use automated tests to keep things within appropriate error bounds — proofs and obviously-correct code would be intractable. We don’t need perfect abstraction boundaries — we can go dig into a dependency and change its assumptions to fit our needs.

Much of the ideological disagreement within the Haskell community and between nearby communities happens across this line. Unit tests are not good enough for algebraists; proofs are crazy to an analyst. QuickCheck strikes a nice balance; it’s fuzzy unit tests for the algebraist. It gives compact, simple, meaningful specifications for the fuzzy business of testing. I wonder, can we find a dual middle-ground? I have never seen an analytic proof of software correctness. Can we say with mathematical certainty that our software is good enough, and what would such a proof look like?

**UPDATE:** Here’s a lovely related post via Lu Zeng. Algebra vs. Analysis predicts eating corn?

Now I have been programming for Clozure for almost six months. As far as jobs go, it’s great. I get to work in my favorite language, Haskell, and they give me enough freedom to experiment with designs and come up with solutions that not only work, but that I would even consider *good*. My fear of programming jobs was based on having jobs where I constantly have to compromise my values, either by working in crappy languages or on startup-style timelines where there is no time to lose. With this job, I feel reunited with my love of software, and my inspirations for developer support tools have been once again ignited.

And so I have amended the plan: after I have saved enough money to support myself for several years, I will not only attempt to bootstrap a career composing, but dedicate my current work week to making a reality the software ideas which have been floating around in my head for half a decade. This prospect *really* excites me — the reason I have not been able to make my ideas is mostly the time pressure: there was always something else I *should* be doing, and so I always felt guilty working on my pet projects. I wonder, what am I capable of if my pet projects are the main thing?

I want to revive CodeCatalog. Max and I lost steam on that project for a number of reasons.

1. Due to family pressure, I returned to school.
2. I fell in love with a girl and got my heart all broken. That can be kind of a downer.
3. The priorities of the project compromised my vision. We were attempting to use modern wisdom to make the project successful: first impressions and intuitive usability came first. Our focus was on making it pretty and satisfying to use (which took a long time since neither of us were experienced web front-end developers), and that required me to strip off the most interesting parts of the project because noobs wouldn’t immediately understand it.

So I want to re-orient (3) to make it more satisfying for me. I want to allow myself to make the large strides that I envisage rather than baby-stepping toward success — to encourage myself to use my own talents in design and abstraction rather than trying to be a front-end person, to emphasize the exciting parts (what Audrey Tang calls `-Ofun`). By funding myself, I will not feel the guilt that comes with working on a project at the same time as (1). I can do no more than hope that something like (2) doesn’t happen. (I have a wonderful, stable and supportive relationship right now, so if that continues, that’d cover it :-)

I have many ideas; the reason I want to return to CodeCatalog in particular is mainly because I have identified most of my ideas as aspects of this project. My specific fancies change frequently (usually to things I have thought about before but never implemented), and so by focusing on this project in a researchy rather than producty way, I can entertain them while still working toward a larger goal and eventually benefitting the community.

Here is a summary of some ideas that fit in the CodeCatalog umbrella (just because I’m excited and want to remember):

- Inter-project version control — I have always been frustrated by the inability of git and hg to merge two projects while still allowing interoperation with where they came from. The “project” quantum seems arbitrary, and I want to globalize it.
- Package adapters — evolving the interface of a package without breaking users of the old interface by rewriting the old package in terms of the new one. There is a great deal that can be done automatically in this area with sufficient knowledge about the meaning of changes. I talked with Michael Sloan about this some, and some of the resulting ideas are contained in this writeup.
- Informal checked documentation — documenting the assumptions of code in a machine-readable semi-formal language, to get the computer to pair-program with you (e.g. if you write a division `x/y` and you have no `y /= 0` assumption in scope, you’d get a “documentation obligation” to explain in English why `y` can’t be 0).
- Structural editing — coding by transforming valid syntax trees. Yes it’d be cool, but the main reason it’s compelling to me is in its synergy with other features. Once you have the notion of focusing on expressions, holes with contextual information (a la Agda), semi-automatic creation of package and data-type adapters, and smarter version control (e.g. a change might rename *all* references to an identifier, even the ones that weren’t there when the change was made) all come as natural extensions to the idea.

I think the challenge for me will be to focus on one of these for long enough to make it cool before getting distracted by another. My plan for that is to set short-term goals here on my blog and use it to keep myself in check. I am considering involving other people in my project as a way to keep myself focused (i.e. maybe I can make a little mini-kickstarter in which my devotees can pledge small amounts in exchange for me completing a specific goal on time).

This is all two years away or more, which feels like a long time, but in the grand scheme is not that long in exchange for what I see as the potential of this endeavor. I’m just excited and couldn’t help but to think about it and get pumped up. Thanks for reading!

Oh, despite the date, this is totally not an April Fools joke (as far as I know ;-).

```
{-# LANGUAGE DeriveFunctor
           , DeriveFoldable
           , DeriveTraversable
           , TypeOperators #-}

import Control.Applicative
import Data.Foldable
import Data.Traversable
```

Certain kinds of typeclasses have some very regular instances. For example, it is obvious how to implement `(Num a, Num b) => Num (a,b)` and `(Monoid a, Monoid b) => Monoid (a,b)`, and similarly if `F` is some applicative functor, `(Num a) => Num (F a)` and `(Monoid a) => Monoid (F a)` are obvious. Furthermore, these instances (and many others) seem to be obvious in the same way. Lift the methods pointwise through the applicative:

```
(+) a b = (+) <$> a <*> b
mappend a b = mappend <$> a <*> b
fromInteger n = pure (fromInteger n)
mempty = pure mempty
```

And take them on pairs:

```
(x,x') + (y,y') = (x + y, x' + y')
(x,x') `mappend` (y,y') = (x `mappend` y, x' `mappend` y')
fromInteger n = (fromInteger n, fromInteger n)
mempty = (mempty , mempty)
```

It would be straightforward for these cases to derive the necessary implementations from the type signature. However, it would be nice if there were a more abstract perspective, such that we didn’t have to inspect the type signature to find the operations – that they could arise from some other standard construction. Further, it is not quite as obvious from the type signature how to automatically instantiate methods such as `mconcat :: (Monoid m) => [m] -> m` without making a special case for `[]`, whereas hopefully a more abstract perspective would inform us what kinds of type constructors would be supported.

In this post, we will see such an abstract perspective. It comes from (surprise!) category theory. I disclaim that I’m still a novice with category theory (but in the past few weeks I have gained competence by studying). So we will not get very deep into the theory, just enough to steal the useful concept and leave the rest behind. I welcome relevant insights from the more categorically educated in the comments.

The unifying concept we will steal is the *F-algebra*. An F-algebra is a Functor `f` and a type `a` together with a function `f a -> a`. We can make this precise in Haskell:

```
type Algebra f a = f a -> a
```

I claim that `Num` and `Monoid` instances are F-algebras over suitable functors. Look at the methods of `Monoid`:

```
mempty :: m
mappend :: m -> m -> m
```

We need to find a functor `f` such that we can recover these two methods from a function of type `f m -> m`. With some squinting, we arrive at:

```
data MonoidF m
    = MEmpty
    | MAppend m m

memptyF :: Algebra MonoidF m -> m
memptyF alg = alg MEmpty

mappendF :: Algebra MonoidF m -> (m -> m -> m)
mappendF alg x y = alg (MAppend x y)
```

**Exercise 1:** work out the functor `NumF` over which `Num` instances are F-algebras, and write the methods of `Num` in terms of it.

**Exercise 2:** for each of the standard classes `Eq`, `Read`, `Show`, `Bounded`, and `Integral`, work out whether they are expressible as F-algebras. If so, give the functor; if not, explain or prove why not.

**Exercise 3:** write a function `toMonoidAlg` which finds the `MonoidF`-algebra for a given instance `m` of the `Monoid` class.

Motivated by the examples in the introduction, we can find the “instance” for pairs given instances for each of the components.

```
pairAlg :: (Functor t) => Algebra t a -> Algebra t b -> Algebra t (a,b)
pairAlg alga algb tab = (alga (fmap fst tab), algb (fmap snd tab))
```
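As a sanity check, here is `pairAlg` applied to a concrete algebra. This is my own self-contained example (the name `listAlg` is mine, not from the text); it treats the list monoid as a `MonoidF`-algebra, with the `Functor` instance derived:

```haskell
{-# LANGUAGE DeriveFunctor #-}

type Algebra f a = f a -> a

data MonoidF m = MEmpty | MAppend m m deriving (Functor)

pairAlg :: (Functor t) => Algebra t a -> Algebra t b -> Algebra t (a,b)
pairAlg alga algb tab = (alga (fmap fst tab), algb (fmap snd tab))

-- the list monoid, seen as a MonoidF-algebra
listAlg :: Algebra MonoidF [x]
listAlg MEmpty        = []
listAlg (MAppend a b) = a ++ b

-- pairAlg listAlg listAlg :: Algebra MonoidF ([x], [y])
-- e.g. pairAlg listAlg listAlg (MAppend ([1],"a") ([2],"b")) = ([1,2], "ab")
```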

Also, we hope we can find the instance for an applicative functor given an instance for its argument:

```
applicativeAlg :: (Functor t, Applicative f)
=> Algebra t a -> Algebra t (f a)
```

but there turns out to be trouble:

`applicativeAlg alg tfa = ...`

We need to get our hands on a `t a` somehow, and all we have is a `t (f a)`. This hints at something from the standard library:

`sequenceA :: (Traversable t, Applicative f) => t (f a) -> f (t a)`

which indicates that our functor needs more structure to implement `applicativeAlg`.

```
applicativeAlg :: (Traversable t, Applicative f)
               => Algebra t a -> Algebra t (f a)
applicativeAlg alg tfa = fmap alg (sequenceA tfa)
```
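And a quick check, instantiating `f` at `Maybe` (again a self-contained sketch using my own example algebra `listAlg`, with the `Traversable` instance for `MonoidF` derived):

```haskell
{-# LANGUAGE DeriveFunctor, DeriveFoldable, DeriveTraversable #-}

type Algebra f a = f a -> a

data MonoidF m = MEmpty | MAppend m m
    deriving (Functor, Foldable, Traversable)

applicativeAlg :: (Traversable t, Applicative f)
               => Algebra t a -> Algebra t (f a)
applicativeAlg alg tfa = fmap alg (sequenceA tfa)

listAlg :: Algebra MonoidF [x]
listAlg MEmpty        = []
listAlg (MAppend a b) = a ++ b

-- applicativeAlg listAlg (MAppend (Just "a") (Just "b")) = Just "ab"
-- applicativeAlg listAlg (MAppend (Just "a") Nothing)    = Nothing
```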

Now we should be able to answer the query from the beginning:

**Exercise 4:** For what kinds of type constructors `c` is it possible to automatically derive instances for *(a)* pairs and *(b)* `Applicative`s for a typeclass with a method of type `c a -> a` (e.g. `mconcat :: [a] -> a`)? Demonstrate this with an implementation.

Intuitively, joining the methods of two classes which are both expressible as F-algebras should give us another class expressible as an F-algebra. This is demonstrated by the following construction:

```
data (f :+: g) a = InL (f a) | InR (g a)
    deriving (Functor, Foldable, Traversable)

coproductAlg :: (Functor f, Functor g)
             => Algebra f a -> Algebra g a -> Algebra (f :+: g) a
coproductAlg falg _    (InL fa) = falg fa
coproductAlg _    galg (InR ga) = galg ga
```

So now we can model a subclass of both `Num` and `Monoid` by `type NumMonoidF = NumF :+: MonoidF`.

**Exercise 5:** We hope to be able to recover `Algebra NumF a` from `Algebra NumMonoidF a`, demonstrating that the latter is in fact a subclass. Implement the necessary function(s).

**Exercise 6:** Given the functor product definition

```
data (f :*: g) a = Pair (f a) (g a)
    deriving (Functor, Foldable, Traversable)
```

find a suitable combinator for forming algebras over a product functor. It may not have the same form as coproduct’s combinator! What would a typeclass formed by a product of two typeclasses interpreted as F-algebras look like?

One of the neat things we can do with typeclasses expressed as F-algebras is form free monads over them – i.e. form the data type of a “syntax tree” over the methods of a class (with a given set of free variables). Begin with the free monad over a functor:

```
data Free f a
    = Pure a
    | Effect (f (Free f a))
    deriving (Functor, Foldable, Traversable)

instance (Functor f) => Monad (Free f) where
    return = Pure
    Pure a   >>= t = t a
    Effect f >>= t = Effect (fmap (>>= t) f)
```

(Church-encoding this gives better performance, but I’m using this version for expository purposes)

`Free f a` can be interpreted as a syntax tree over the typeclass formed by `f` with free variables in `a`. This is also called an “initial algebra”, a term you may have heard thrown around in the Haskell community from time to time. We demonstrate that a free construction over a functor is a valid F-algebra for that functor:

```
initialAlgebra :: (Functor f) => Algebra f (Free f a)
initialAlgebra = Effect
```

And that it is possible to “interpret” an initial algebra using any other F-algebra over that functor.

```
initiality :: (Functor f) => Algebra f a -> Free f a -> a
initiality alg (Pure a)   = a
initiality alg (Effect f) = alg (fmap (initiality alg) f)
```
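To see `initiality` at work, here is a small syntax tree over `MonoidF` interpreted into the list monoid (a self-contained sketch; `listAlg` and `term` are my own example names):

```haskell
{-# LANGUAGE DeriveFunctor #-}

type Algebra f a = f a -> a

data MonoidF m = MEmpty | MAppend m m deriving (Functor)

data Free f a = Pure a | Effect (f (Free f a))

initiality :: (Functor f) => Algebra f a -> Free f a -> a
initiality alg (Pure a)   = a
initiality alg (Effect f) = alg (fmap (initiality alg) f)

listAlg :: Algebra MonoidF [x]
listAlg MEmpty        = []
listAlg (MAppend a b) = a ++ b

-- the syntax tree (x `mappend` mempty) `mappend` y, with free variables x and y
term :: Free MonoidF a -> Free MonoidF a -> Free MonoidF a
term x y = Effect (MAppend (Effect (MAppend x (Effect MEmpty))) y)

-- initiality listAlg (term (Pure "ab") (Pure "cd")) = "abcd"
```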

**Exercise 7:** Give a monoid isomorphism (a bijection that preserves the monoid operations) between `Free MonoidF` and lists `[]`, ignoring that Haskell allows infinitely large terms. Then, using an infinite term, show how this isomorphism fails.

*Next time: F-Coalgebras*

This program ought to be well-behaved — it has no recursion (or recursion-encoding tricks), no `undefined` or `error`, no incomplete pattern matches, so we should expect our types to be theorems. And yet we can get `inconsistent`. What is going on here?

*Exercise:* Identify the culprit before continuing.

The problem lies in the interaction between GADTs and generalized newtype deriving. Generalized newtype deriving seems to be broken here — we created a type `B` which claims to be just like `A` including instances, but one of `A`‘s instances relied on it being exactly equal to `A`. And so we get a program which claims to have non-exhaustive patterns (in `unSwitchB`), even though the pattern we omitted should have been impossible. And this is not the worst that generalized newtype deriving can do. When combined with type families, it is possible to write `unsafeCoerce`. This has been known since GHC 6.7.

In this post I intend to explore generalized newtype deriving and GADTs more deeply, from a more philosophical perspective, as opposed to just trying to plug this inconsistency. There are a few different forces at play, and by looking at them closely we will see some fundamental ideas about the meaning of types and type constructors.

Generalized newtype deriving seems reasonable to us by appealing to an intuition: if I have a type with some structure, I can clone that structure into a new type — basically making a type synonym that is a bit stricter about the boundaries of the abstraction. But the trouble is that you can clone parts of the structure without other parts; e.g. if X is an applicative and a monad, and I declare `newtype Y a = Y (X a) deriving (Monad)`, then go on to define a different `Applicative` instance, I have done something wrong. Monad and applicative are related, so you can’t just change them willy nilly as though they were independent variables. But at the very least it seems reasonable that you should be able to copy *all* the structure, essentially defining a type synonym but giving it a more rigorous abstraction boundary. But in Haskell, this is not possible, and that is because, with extensions such as GADTs and type families, not all of a type’s structure is clonable.

I’m going to be talking a lot about abstraction. Although the kind of abstraction I mean here is simple, it is one of the fundamental things we do when designing software. To abstract a type is to *take away* some of its structure. We can abstract `Integer` to `Nat` by taking away the ability to make negatives — we still represent as `Integer`, but because the new type has strictly fewer operations (it must be fewer — after all we had to implement the operations somehow!) we know more about its elements, and finely tuning that knowledge is where good software engineering comes from.

When implementing an abstraction, we must define its operations. An operation takes some stuff in terms of that abstraction and gives back some stuff in terms of that abstraction. Its implementation must usually use some of the structure of the underlying representation — we define addition on `Nat` by addition on `Integer`. We may take it for granted that we can do this; for example, we do not have trouble defining:

```
sum :: [Nat] -> Nat
```

even though we are not given any `Nat`s directly, but instead under some type constructor (`[]`).
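A minimal sketch of such a `Nat` module (the names and details here are my own, not from the text; the module’s export list, omitted for brevity, would hide the `Nat` constructor so that the invariant is protected):

```haskell
newtype Nat = Nat Integer   -- invariant: the wrapped Integer is >= 0
    deriving (Eq, Show)

nat :: Integer -> Maybe Nat -- the only way in; negatives are rejected
nat n | n >= 0    = Just (Nat n)
      | otherwise = Nothing

plus :: Nat -> Nat -> Nat   -- defined via addition on the representation
plus (Nat a) (Nat b) = Nat (a + b)

sumNat :: [Nat] -> Nat      -- works under the [] constructor, as in the text
sumNat = foldr plus (Nat 0)
```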

One of the properties of type constructors that causes us to take this ability to abstract for granted is that if A and B are isomorphic (in a sense that will become clear in a moment), then F A and F B should also be isomorphic. Since we, the implementers of the abstraction, are in possession of a bijection between `Nat`s and the `Integer`s that represent them, we can use this property to implement whatever operations we need — if they could be implemented on `Integer`, they can be implemented on `Nat`.

This isomorphism property looks like a weak version of saying that F is a `Functor`. Indeed, F is properly a functor from a category of isomorphisms in which A and B are objects. Every type constructor F is a functor from some category; which category specifically depends on the structure of F. F's flexibility to work with abstractions in its argument is determined by that category, so the more you can do to that category, the more you can do with F. Positive and negative data types have all of **Hask** as their source category, so any abstractions you make will continue to work nicely under them. Invariant functors like `Endo` require bijections, but fortunately when we use `newtype` to create abstractions, we have a bijection. This is where generalized newtype deriving gets its motivation -- we can just use that bijection to substitute the abstraction for its representation anywhere we like.

But GADTs (and type families) are different. A functor like `Switch b` has an even smaller category as its domain: a discrete category. The only thing which is isomorphic to `A` in this category is `A` itself -- whether there is a bijection is irrelevant. This violates generalized newtype deriving's assumption that you can always use bijections to get from an abstraction to its representation and back. GADTs that rely on exact equality of types are completely inflexible in their argument, they do not permit abstractions. This, I claim, is bad -- you want to permit the user of your functor to make abstractions.

(Aside: If you have a nice boundary around the constructors of the GADT so they cannot be observed directly, one way to do this when using GADTs is to simply insert a constructor that endows it with the necessary operation. E.g. if you want it to be a functor from **Hask**, just insert

```
Fmap :: (a -> b) -> F a -> F b
```

If you want it to be a functor from **Mon** (category of monoids), insert:

```
Fmap :: (Monoid n) => MonoidHom m n -> F m -> F n
```

(presumably `F m` already came with a `Monoid` dictionary). These, I believe, are free constructions -- giving your type the structure you want in the stupidest possible way, essentially saying "yeah it can do that" and leaving it to the consumers of the type to figure out how.)

In any case, we are seeing something about GADTs specifically that simple data types do not have -- they can give a lot of different kinds of structure to their domain, and in particular they can distinguish specific types as fundamentally different from anything else, no matter how similarly they may behave. There is another way to see this: defining a GADT which mentions a particular type gives the mentioned type unclonable structure, such that generalized newtype deriving and other abstraction techniques which clone some of a type's structure no longer succeed.

I also feel aesthetically stuck. I am feeling too many choices in Haskell — do I take this option as a parameter, or do I stuff it in a reader monad? Similarly, do I characterize this computation as living in the Cont monad, or do I simply take a continuation as a parameter? When expressing a piece of a computation, do I return the “simplest” type which provides the necessary data, do I return a functor which informs how the piece is to be used, or do I just go ahead and traverse the final data structure right there? What if the simplest type that gives the necessary information is vacuous, and *all* the information is in how it is used?

You might be thinking to yourself, “yes, Luke, you are just designing software.” But it feels more arbitrary than that — I have everything I want to say and I know how it fits together. My physics professor always used to say *“now we have to make a choice — which is bad, because we’re about to make the wrong one”*. He would manipulate the problem until every decision was forced. I need a way to constrain my decisions, to find what might be seen as the unique most general type of each piece of this algorithm. There are too many ways to say everything.

I begin with the program itself:

```
type Cantor = Nat -> Bool

diagonal :: (Nat -> Cantor) -> Cantor
diagonal cs n = not (cs n n)
```

`Cantor` is “the cantor space”, the type of infinite sequences of booleans. We will call such an infinite sequence “a `Cantor`“. There are clearly infinitely many `Cantor`s; e.g. take the range of this function which gives `False` at every position except the one specified:

```
unit :: Nat -> Cantor
unit m n = m == n
```

`diagonal` is (Georg) Cantor’s diagonal argument written as a program — it takes an alleged sequence of all `Cantor`s, and returns a `Cantor` which does not occur in the sequence, by construction. This function shows by contradiction that we cannot put `Cantor`s in 1 to 1 correspondence with naturals, and thus that there are *more* `Cantor`s than there are naturals.

So how many `Cantor`s are there? Since `Nat -> Bool` is a Haskell type — the type of *computable* functions from `Nat` to `Bool` — `Cantor`s must be representable by programs. We can encode programs as numbers by treating their source code as base-128 numbers. Hence, there are no more `Cantor`s than naturals, and so `Cantor`s can be put into 1 to 1 correspondence with naturals.
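For concreteness, here is a toy version of that encoding (my own illustration; it assumes the source text is ASCII, so each character fits in one base-128 digit):

```haskell
-- Treat each character of the source as one base-128 digit,
-- most significant first.
encode :: String -> Integer
encode = foldl (\n c -> n * 128 + toInteger (fromEnum c)) 0
```

Distinct source texts map to distinct numbers, which is all the counting argument needs.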

Wait — what? There are more `Cantor`s than `Nat`s, but they both have the same size? Something is wrong. Indeed, in the process of this argument we have asserted both

1. “We cannot put `Cantor`s in 1 to 1 correspondence with naturals.”
2. “`Cantor`s can be put into 1 to 1 correspondence with naturals.”

We clearly can’t have both.

**Section I.** The erroneous statement is (2). It is undecidable whether a given program represents a `Cantor`. If the `n`th `Cantor` is ⊥ at `n`, then `diagonal` will fail: `diagonal cs n = not (cs n n) = not ⊥ = ⊥`. Because ⊥ is a fixed point of `not`, `diagonal` cannot return an element different from the one it was given. Thus for `diagonal` to work, we must require that `Cantor`s be *fully-defined* — no infinite loops!

With this requirement, we can no longer put `Cantor`s in 1 to 1 correspondence with the naturals, because we would have to solve the halting problem. It is not enough that the type of the term is a `Cantor`, it now must be fully defined for all inputs, and determining that given arbitrary source code is an undecidable problem.

**Section II.** The erroneous statement is (1). `Cantor`s are computable functions, so as we have argued, they have the same cardinality as the naturals. There are no more programs than numbers, so *by the definition* of equal cardinality there is a function putting them in 1 to 1 correspondence.

The problem with (1) occurs because `diagonal` takes as its first argument not an arbitrary sequence of `Cantor`s, but a *computable* sequence of `Cantor`s. If `cs` is not computable, then neither is `diagonal cs` (for we no longer have `cs`‘s source code with which to construct it), and `Cantor`s are defined to be computable sequences. So `diagonal` fails to contradict our bijection.

**Section III.** The erroneous statement is (2). Section II claims to put `Cantor`s and naturals in 1 to 1 correspondence, but it is lying. Suppose Section II is formulated with respect to some axiom system *A*. If it were “telling the truth”, we would expect there to be some term *f* in the language of *A* such that for every fully defined `Cantor` program *c*, there is some natural number *n* for which *f(n) = c* is a theorem of *A* (i.e. it is a theorem of *A* that *f(1 + 1 + … + 1) = (source code of c)*, with *n* ones).

Let’s suppose we have written down the axioms of *A* into a Haskell program, and we have a (partial) function `proofSearch :: Nat -> Cantor`, which, given a number *n*, searches for theorems of the form *f(n) = c* and compiles and returns the first such *c* it finds. In the case that there is no such theorem, it just runs forever; similarly for the case that *c* fails to compile. Although cumbersome to write, I’m sure we agree that this is *possible* to write. If Section II is not lying, then we expect that for every natural *n*, `proofSearch n` *does* in fact return a valid `Cantor`.

Now, let us return to familiar lands with a new program:

```
evidence :: Cantor
evidence = diagonal proofSearch
```

Oh my! If section II is the truth, then `proofSearch` is a *total, computable* function of type `Nat -> Cantor`, which we can pass to `diagonal` to find a `Cantor` that it missed! So it must have been lying, either (1) about its function `f` finding every possible `Cantor` or (2) about it actually possessing such a function (i.e. it “proved” that there is such a function, but it couldn’t actually represent it). In either case, it did not actually create a 1 to 1 correspondence between the naturals and `Cantor`s.

*Left as an exercise for the reader.*

Which one is it *really*?

Each paradigm corresponds to a syntactic unit of language. The **imperative**, as the name suggests, corresponds to sentences in the imperative mood:

```
def bfs(predicate, root):
    queue = Queue()
    queue.add(root)                # First, add the root to the queue.
    while not queue.empty():       # While the queue is not empty, do the following:
        node = queue.dequeue()     # Take a node off the queue.
        if predicate(node):        # If it satisfies the predicate,
            return node            #     return it.
        for n in node.children():  # Otherwise,
            queue.add(n)           #     add each of its children to the queue.
```

The program is written as a recipe, telling the computer what to do as if it were a servant. Note that, after qualifying phrases, each sentence begins with an action word in the imperative form, just as this sentence begins with “note”. The object-oriented aspect adds a sort of directorial role to the program, wherein the program is read not as the user telling the computer what to do, but the program telling objects in the program what to do. Sentences are still written in the imperative mood, but they can now be directed: “queue, *give* me an element”, “handle, *write* down this line.”

But not every sentence in human discourse is imperative, for example this one. The **logical** captures sentences of relationship, such as:

```
captures(logical, relationships).
```

But perhaps we should see a more practical logical program as an example:

```
mirrors([], []).        % The empty list mirrors itself.
mirrors([X|XS], L) :-   % [X|XS] mirrors L if
    append(LS, [X], L), % LS and [X] compose L, and
    mirrors(XS, LS).    % XS mirrors LS.
```

The program is written as a set of assertions, building a model of its world up from nothing. To run this program, you ask it questions about the world:

```
?- mirrors(X, [1,2,3]). % What mirrors [1,2,3]?
X = [3,2,1]
```

A great deal of human discourse falls into this category: propositional sentences. Most of the sentences in this post fall into that category. However, those familiar with Prolog will know that this is a poor implementation of `mirrors` (it is quadratic time), and would write it this way instead:

```
mirror([], R, R).         % The mirror of [] prepended to R is R.
mirror([X|XS], Y, R) :-   % The mirror of [X|XS] prepended to Y is
    mirror(XS, [X|Y], R). %     the mirror of XS prepended to [X|Y].
```

In proper logical style, `mirror` expresses itself as propositional relationships, with the caveat that the only relationship is “is”. Code written this way, relating things by identity rather than propositionally, is actually characteristic of **functional** style:

```haskell
mirror [] r = r      -- The mirror of [] prepended to r is r.
mirror (x:xs) y =    -- The mirror of (x:xs) prepended to y is
    mirror xs (x:y)  --   the mirror of xs prepended to (x:y).
```

The `R` in the Prolog code only connects propositions together so that we can express this idea in a functional way. The original `mirrors` predicate is quite unlike this; expressing it in Haskell requires more than a mindless transliteration (though there are still hints of a structural similarity, if a bit elusive).
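One way to render that structural similarity (a sketch of my own, not a mechanical translation: the relational `append` call becomes an ordinary function applied “forwards”):

```haskell
-- A rough Haskell analogue of the original quadratic `mirrors` predicate.
-- Where the Prolog clause says "LS and [X] compose L", here we must
-- compute the result from the recursive call directly.
mirrors :: [a] -> [a]
mirrors []     = []
mirrors (x:xs) = mirrors xs ++ [x]  -- still quadratic, like the Prolog version
```

Like its Prolog counterpart, it does quadratic work by appending a singleton list at each step.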

But I claim that “is” is *not* the defining linguistic characteristic of functional programs. We could write a functional program with a single equals sign if we were so obfuscatedly-minded; the analogous claim is invalid for logical and imperative programs. The characteristic device of functional programming is the *noun*: functional programs do not give instructions or express relationships, they are interested in defining objects.

```haskell
bfs children xs =                -- the BF traversal of xs is
    xs ++                        --   xs itself appended to
    bfs (concatMap children xs)  --   the BF traversal of each of xs's children
```
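As a small usage sketch (my own example, not from the post): laziness lets us sample the breadth-first traversal of the infinite binary tree in which node `n` has children `2n` and `2n+1`.

```haskell
-- The bfs from above, with a type signature added.
bfs :: (a -> [a]) -> [a] -> [a]
bfs children xs = xs ++ bfs (concatMap children xs)

-- Take a finite prefix of the infinite traversal.
firstSeven :: [Int]
firstSeven = take 7 (bfs (\n -> [2 * n, 2 * n + 1]) [1])
-- firstSeven == [1,2,3,4,5,6,7]
```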

The monad is typically used to express instructions as nouns:

```haskell
main =                            -- the main program is the recipe which
    getLine >>= \name ->          --   gets a line, then
    putStrLn $ "Hello, " ++ name  --   puts "Hello, " prepended to that line
```

Haskell elites will object to me using IO as a prototypical example of a monad, but the claim is still valid; look at the words used to define monad actions: `tell`, `ask`, `get`, `put`, `call`. These are imperative words. The rule is not universal, however; for example, the constructors of Escardó’s search monad are nouns.

The following table summarizes the linguistic analogies:

| Paradigm | Mechanism | Example |
| --- | --- | --- |
| Imperative | Imperative mood | Put a piece of bread; put meat on top; put bread on top. |
| Logical | Propositional relationships | Bread sandwiches meat. |
| Functional | Noun phrases | Meat sandwiched by bread. |

Each of these paradigms is pretty good in its area. When we’re issuing commands in a functional language, we pretend we are using an imperative language; when we want to treat complex nouns (such as lists) in an imperative language, we fall back on functional concepts to operate on them. When we are documenting our code or talking about types, we use logical ideas (typeclasses, generics).

The thing that makes me hubristically curious is the realization that these three categories hardly cover the mechanisms available in language. What else can we learn from human languages when expressing ourselves to computers?

This post has focused on the “content code” of the various paradigms. There is another kind of code (mostly in statically typed languages) that is interspersed with this, namely **declarations**:

```
class Player {...}
data Player = ...
```

I have not yet been able to find a good analogy for declarations in language; perhaps they introduce something like a proper noun. I will save this topic for another post.

It has also been some time since I have been excited about programming. My readers are surely aware of this as the topics of this blog meander this way and that away from my expertise. I used to love seeking the elegant design, simmering 200 lines down into 40, hypothesizing about language features and module systems, imagining worlds in which all modern software was built upon my paradigm.

I think these two things are related. I am not sure which way the causation goes, or even if there is a causal relationship at all. But thinking back on my time as a programmer, the times when I was most productive were the times when I was learning: working in a new language or working in a new domain. I still think CodeCatalog is a great idea, in total a few hard weeks’ work, and yet I can’t convince myself to write another line of the damned thing. That’s because I know how it’s gonna go; I know what I have to do; there is no mystery to it.

What if, instead of twisting my arm and trying to force myself into “good behavior”, I embraced this aspect of myself? There has to be some beautiful experimental kernel to that project; there has to be some beautiful way to express it. And it is certainly possible, even likely, that the result won’t end up looking and feeling like StackOverflow or Google Whatever (beta).

So what?

Have I been so brainwashed by the business of software that I will abandon a project because I cannot inspire myself to bring it to business standards? I think that is what happened: we wanted to make money. It would be nice not to have to worry about paying rent, I admit, but that is not worth exchanging for the beauty of an inspired work of code.

Someone invent a beautiful web/database language. I mean beautiful and unique — not practical or productive or real-world or familiar or interoperable or scalable. I don’t care about those adjectives, those are for engineers.

@stevedekorte – Threads sharing state by default is like variables being global by default.

@luqui – state is like globals :-)

@stevedekorte – @luqui only shared state – which is why FP ultimately fails – it trades comprehensibility for shared state optimizations

@luqui – @stevedekorte, wow, sneaking “FP ultimately fails” as if an obvious truth in a reply to a Haskell programmer

@stevedekorte – @luqui, a bit like sneaking in “[all] state is like globals” to an OO programmer? :-)

@stevedekorte – @psnively @luqui my only issue with FP is the decision to trade expressivity and reusability for less state while calling it progress

The conversation goes on (and on) between many twitterites, having a fun but serious argument about this and that benefit of this and that style. Dynamic/static types come up, OO vs. functional, usefulness, mathematical foundation, learning curves; none of the standard artillery is spared. What irritates me about this style of argument is all the sweeping adjectives (1) used with no definition, thus impossible to support with evidence, and (2) synonymized with better.

In this post, I will draw attention to this irritating vocabulary, so that the next time you use it you can realize how little you are saying.

(Disclaimer: this post is *not* intended to criticize stevedekorte specifically, even though I identify two of the terms he used below. It was a long, typical programming zealot argument, and all parties involved were being equally dumb :-)

A person is expressive if he expresses himself — he has an idea and wants to write it down. So I would say a language is expressive if it allows or enables the programmer to be expressive. Languages that restrict expression are not expressive. So we have the following facts:

- Dynamically typed languages are more expressive than corresponding statically typed ones, because statically typed languages forbid you from expressing some ideas.
- Multiparadigmatic languages are more expressive than languages which are paradigmatically pure, because the former allow you to express yourself if you are not thinking within the framework.
- A language which *you* are fluent in is more expressive than a language you do not know very well.

By these observations, we might guess that Perl is the most expressive language and Haskell the least.

Do you notice yourself already equating expressive with good, and thus perhaps feeling offended? Everyone wants an expressive language, right? Here are some reasons *some* programmers might not want an expressive language:

- Most of my ideas are made of bullshit. Have you ever had a great idea, gone to write it in your blog, and realized it was nonsense because you were unable to write it? So writing is less expressive than thinking. Is thinking better than writing?
- Every programmer has a different way of structuring their thoughts. An expressive language will bring out the differences in thought structure between programmers, and introduce impedance mismatches between programmers on a shared project.

I’m not arguing that expressiveness is *bad*. I’m just arguing that it doesn’t mean good, it means expressive.

A language “is reusable” (to abuse language a bit) if code written in that language can be easily reused.

This “obvious” statement is hiding something very important; namely, reused how? For what? We are in an unfortunate situation in programming: code is designed to be reused in a particular way, and if you want to reuse it in a different way you are pretty much out of luck. An OO widget library is designed for the addition of new types of widgets, but if you want to reuse a *program* written in the library on a new platform you are in for a lot of work. A functional drawing library is designed so that you can transform and export your drawings in an open-ended variety of ways, composing new ways out of old ones; but if you need to draw a circle you have to build it out of lines, even if there is a much better way to draw a circle on your target. (This is essentially the expression problem).
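The trade-off can be sketched with a toy Haskell type (my own illustration, not taken from the libraries mentioned above): representing drawings as a closed data type makes new *interpretations* cheap and new *primitives* expensive.

```haskell
-- A hypothetical closed drawing type. Adding a new interpretation
-- (a bounding box, an SVG export, ...) is just another function over it;
-- adding a new primitive (say, Circle) means editing every such function.
data Drawing = Line (Double, Double) (Double, Double)
             | Over Drawing Drawing  -- one drawing atop another

-- One interpretation: count the primitive lines a drawing uses.
lineCount :: Drawing -> Int
lineCount (Line _ _) = 1
lineCount (Over a b) = lineCount a + lineCount b
```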

An abstraction will always expose some things and conceal others. Different languages enable abstraction in different ways, which makes exposing certain things easier and others harder. The zealot will reply, “but in my experience, *real-world* programs are naturally organized around <insert preferred paradigm>, and <insert uncomfortable paradigm> doesn’t support that style as easily.” I would suggest to this zealot to look deeper into the definition of “real-world” to discover its many facets of subjectivity. (What domain do you work in? Who architected the real-world software you have worked on, and what was their background? What kinds of programs do you consider not to exist in the real world, and what values are you using to minimize them?)

A language is easier to learn than another language if it takes less time to become competent/fluent programming in that language.

I don’t think this one is as automatically synonymized with “good”. Haskell programmers are aware how much relative effort was required to learn Haskell, and are usually grateful that they put in the effort. But *all other things being equal*, a language easier to learn ought to be better than one harder to learn.

The deadly omission in the above definition is that *people* are doing the learning. A language is easier or harder to learn *for a particular person*, and that is entangled with their background, their approach, and their ambitions. When arguing “X is easier to learn than Y”, I encourage you to add one of the following phrases to the end:

- for programmers who already know Z.
- for people with a mathematical background.
- for people with a non-mathematical background.
- for children.
- for me.

Or something similar. The following phrases do *not* count.

- for almost everyone.
- for people with a typical background.
- for people who want to do useful things.

I’ll close this section with this remark: Haskell is the easiest language to learn, because I already know it.

I know I am frequently irritated by many of these kinds of words, and I’ve only mentioned three here. But you see where I am going. *Respect* the *values* of your fellow engineers. If you are a polyglot and like a paradigm, it probably comes from a set of values you have — a set of things you consider important that *differ* from the values of others. Concentrate on that; communicate your values, and try to understand the values of others. If you have toyed with a paradigm and quickly lost interest because of some surface feature (I have — e.g. I will throw out a language without support for closures) or impression, consider the possibility that you like what you like simply because it is *familiar* (other equivalent notions: easy, natural). That is fine; some people like to tinker with their thinking patterns more than others. But remember that you have trained yourself to think in a particular way, so consider that your “objective” judgements about these ultimately *human languages* could be biased.

(For the record, that last phrase originally read: “all of your ‘objective’ judgements about one of these ultimately *human languages* are biased”, which is quite a bit stronger than what I had intended)