Monthly Archives: October 2008

I know what I’m going to get for christmas

I believe I have found a truly marvelous implementation of FRP which this blog post is too short to contain.

Fortunately this is the internet, so it all fits in these three words.

It’s a very, very rough sketch. A plan for an implementation, with many questionable data structures. However I believe it to be possible, and it will doubtless consume my life for the coming weeks.

FRP Rant

FRP is eating my life away!

Here’s the thing. I love FRP. I love the concept, so much so that I am having a very hard time writing anything without it. I want to make games, servers, and UI libraries with it. I want to make things! But I am stuck in research mode, where I am still looking for an FRP library in which to make these things.

Why am I stuck searching? Because I don’t like Reactive. It’s just difficult to use, and difficult to make easy to use. The semantics are unclean (not all of them, just some essential pieces like Event, integral, and stepper), it’s abstracted so high that it takes me forever just to figure out that it doesn’t quite make sense in this little corner, and it’s all coupled together so I can’t figure out how to fix it without rewriting the whole damn thing. And, according to Conal, it still has the dreaded spacetime leak (which can fortunately but rather annoyingly be circumvented as I described almost a year ago).

What am I searching for? Well, whatever ends up being beautiful and easy to use. I’ve got a bunch of leads varying in ease of use and possibility of implementation. The ones which are obviously implementable are basically Reactive. However, my dream would be a pull-implementation of Fregl (the source code now lives here), because Fregl was great! I actually made stuff with it! In particular, everything to do with Events just rolled off my fingertips. The waiter semantics and the monad instance were the power tools analogous to the axiom of choice, where previously careful, meticulous reasoning becomes tactless and obvious under the great sledgehammer. I want a sledgehammer; that’s why I program in a Turing-complete language rather than a total one.

Fregl is no beauty on the inside, however, and this impaired the efficiency of the programs implemented in it. The GHC optimizer is dumbstruck trying to optimize any code written using the library; because of the nature of the GHC garbage collector and weak references, far more work ended up being done than necessary. Basically it was a completely push implementation, and everything in Haskell is optimized for pull implementations, so performance was terrible.

What I want most for christmas is a Haskellian Fregl implementation.

data-memocombinators

Well, it’s not perfect, but it fills a hole that has been open for far too long: I just uploaded the data-memocombinators library to hackage. The idea is to abstract away the manual memoization everybody keeps doing and pack it away in tidy little combinators.

For those out of the loop of popular CS vernacular, “memoization” is a stupid name for “caching”. As a rather trivial example, take the naive definition of the Fibonacci function:


fib :: Integer -> Integer
fib n = if n <= 1 then n else fib (n-1) + fib (n-2)

This runs in exponential time (a rather beautiful O(φ^n), where φ is the golden ratio). If we want to make this run in linear time without using our brains, we just cache (er, memoize) the intermediate results.

import qualified Data.MemoCombinators as Memo
fib :: Integer -> Integer
fib = Memo.integral fib'
    where
    fib' n = if n <= 1 then n else fib (n-1) + fib (n-2)

Note how the recursive call in fib' is to fib, not fib': that is what routes each recursive call back through the memo table.
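
The same trick scales to functions of several arguments via the library’s memo2 combinator, which takes a table for each argument. Here is a sketch (choose is my own illustrative example, not something shipped with the library):

import qualified Data.MemoCombinators as Memo

-- Binomial coefficients, with both arguments memoized.
choose :: Integer -> Integer -> Integer
choose = Memo.memo2 Memo.integral Memo.integral choose'
    where
    -- Again, the recursive calls are to choose, not choose'.
    choose' n k
        | k < 0 || k > n   = 0
        | k == 0 || k == n = 1
        | otherwise        = choose (n-1) (k-1) + choose (n-1) k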

There is also a library called MemoTrie (heh, by Conal, who, contrary to what one might conclude from an analysis of interests, I am not :-), which does the same thing. However, I don’t agree with the typeclass approach to this problem. We are concerned not simply with whether you can memoize a particular type, but how. For example, for some problem you may know that you only want to memoize integers less than 1000, or that you want to use a fast array for those and a binary trie for the rest, or that you accept Eithers and want to memoize Lefts but not Rights. This is why, as the name suggests, data-memocombinators provides combinators for building nicely controlled memo tables.
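
To give the flavor, here is a sketch of such tables using the switch, arrayRange, and either combinators (names as on hackage; double-check the docs for the exact signatures):

{-# LANGUAGE RankNTypes #-}  -- to mention the Memo synonym in signatures
import qualified Data.MemoCombinators as Memo

-- Integers in [0,1000) go in a fast array; everything else falls back
-- to the binary trie behind Memo.integral.
smallFast :: Memo.Memo Integer
smallFast = Memo.switch (\n -> 0 <= n && n < 1000)
                        (Memo.arrayRange (0, 999))
                        Memo.integral

-- Memoize Lefts but not Rights.  Since Memo a is just
-- (a -> r) -> (a -> r), plain id serves as the "don't memoize" table.
leftsOnly :: Memo.Memo (Either Integer String)
leftsOnly = Memo.either Memo.integral id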

I would like this library to grow, so if anybody has any suggestions or implementations for memo tables they use commonly, do send them along!

Laziness and the monad laws

Recently on haskell-cafe, the rather trivial module OneTuple was announced. I looked over the source and saw something that bugged me.

data OneTuple a = OneTuple { only :: a }

instance Functor OneTuple where
  fmap f (OneTuple x) = OneTuple (f x)

I never write functor instances this way. This is too strict; I would have written it like this instead:

instance Functor OneTuple where
  fmap f = OneTuple . f . only

Because it’s lazier. As I was composing a message explaining to the author why his instance was incorrect and why mine was correct, I realized something: his instance is correct and mine is incorrect! To see why, observe the functor laws:

  fmap id = id
  fmap (f.g) = fmap f . fmap g

My implementation satisfies the second law, but violates the first: fmap id ⊥ = OneTuple ⊥, which is not ⊥!
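
To see the violation concretely, here is a little test (lazyFmap is just a standalone name for my overly lazy definition). seq forces its first argument to weak head normal form, which is enough to tell OneTuple ⊥ apart from ⊥:

lazyFmap :: (a -> b) -> OneTuple a -> OneTuple b
lazyFmap f = OneTuple . f . only

-- lazyFmap id ⊥ is OneTuple ⊥, which seq forces without complaint...
defined :: String
defined = lazyFmap id (undefined :: OneTuple Int) `seq` "fine"

-- ...whereas forcing ⊥ itself diverges.  So lazyFmap id /= id.
diverges :: String
diverges = (undefined :: OneTuple Int) `seq` "fine"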

The same thing happens with the monad laws; I would have written the monad instance this way:

  instance Monad OneTuple where
    return = OneTuple
    m >>= f = f (only m)

But this violates the right unit law:

  m >>= return = m

With this instance, ⊥ >>= return = return (only ⊥) = OneTuple ⊥ /= ⊥.
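
The same seq experiment exposes this violation, too (lazyBind standing in for the >>= above):

lazyBind :: OneTuple a -> (a -> OneTuple b) -> OneTuple b
lazyBind m f = f (only m)

-- ⊥ `lazyBind` OneTuple is OneTuple ⊥, which seq forces just fine,
-- where the right unit law demands it be ⊥.
brokenUnit :: String
brokenUnit = ((undefined :: OneTuple Int) `lazyBind` OneTuple) `seq` "fine"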

So, in summary: most Haskell hackers know not to make their functions too strict. But we have to be careful not to make our functions too lazy either, and not just from an efficiency standpoint!