Tag Archives: frp

Reactive spaces

My recent days have been spent staring at the ceiling, drawing abstract doodles on a whiteboard, or closing my eyes and watching higher-dimensional images fly through my consciousness. No, I haven’t been on drugs. I’m after a very specific piece of mathematics, to solve a specific problem. But I have no idea what that mathematics is.

I’m after the sameness between functions of time and functions that react to events. Here is my best attempt at putting my images into words:

I will call them (generalized) Reactives, to confuse classical FRP people :-). A Reactive is defined over an environment space, which is a set of environments (with some structure). Time is an environment space; it is uniform and boring, so all of its environments are identical or isomorphic. There is also an Event environment space, whose inhabitants are roughly Events from reactive (a set of separated occurrences).

A Reactive takes an environment and outputs values in terms of it somehow. Reactives have a notion of translation. Say you have a reactive over an event space which is “False until the next mouse click and then True”. Translating this switches which mouse click it is talking about, but not when the mouse clicks occur; so the transition point will always be exactly on an external mouse click. However, since time is a uniform space, translation of a reactive over time does correspond to simple translation, since there is no interesting structure to query.

I don’t know yet what exactly an environment is. I am trying to capture the fact that reactives over an event space can only switch on occurrences of events, whereas reactives over time correspond to continuous functions. If an event environment looks like an FRP Event, what does the time environment look like?

Relative time FRP

Conal seems to have discovered an essential change in the semantics of FRP. The change is to think in terms of relative time instead of absolute time (Einstein would approve). It really cleans up a lot of things in the semantics, and it makes the implementation local, which functional programming is very good at making efficient.

So, for example, instead of Behavior a = Time -> a, we have Behavior a = Time -> a :-). The difference is simply that on the left, “now” is whatever time you give it, whereas on the right, “now” is 0.
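To make the joke precise, here is a tiny sketch of the distinction (my own illustration; `AbsB`, `RelB`, and `later` are invented names, not Conal's):

```haskell
type Time = Double

-- Absolute time: "now" is whatever time you happen to pass in.
newtype AbsB a = AbsB (Time -> a)

-- Relative time: the argument is an offset from now, so "now" is 0.
newtype RelB a = RelB (Time -> a)

-- Re-base a relative behavior so that 0 refers to a new "now",
-- dt seconds later.
later :: Time -> RelB a -> RelB a
later dt (RelB f) = RelB (\t -> f (t + dt))
```

For example, `later 2 (RelB id)` is "the clock as seen two seconds from now": sampling it at offset 0 gives 2.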

However, this change has far-reaching consequences for how I think about FRP. The difficulties I have been trying to overcome in my implementations no longer make any sense. This is a good sign.

I have been gradually hacking on a “practical” implementation of FRP, i.e. putting aside the beauty of the implementation and doing whatever I can to make it efficient and easy to use. But man, it’s awful. I have no certainty whatsoever that the 200 lines I have so far written are anywhere near correct, other than that they seem to work when I play with them. I don’t like that feeling.

I have the urge to revisit the precise and rigorous implementations I had been working on, which I gave up on because they didn’t efficiently express what I wanted them to. In particular, this relative time thing with comonads seems very simple. So I want to write it together with my “correct blocking thunks”, which are thunks that are technically blocking, but never actually block, because you have to prove that they won’t before you evaluate them :-)

I’m pretty excited. I’m hopeful that this new FRP is expressive and powerful in addition to its beauty. I won’t know that until I rewire my brain a little.

Screw laziness (w.r.t. Fran semantics)

Reactive is getting more mature, but still does not support “feedback” behaviors (a bug), and still does not support true first-class Behaviors (an essential limitation); meaning you cannot efficiently put a Behavior in a data structure and save it for later use. Judging by the reactive mailing list, this limitation is proving unimportant; plenty of things are still possible. But I am a stubborn, hubristic person, and I simply insist on Fran semantics (Behavior & Future) with first-class Behaviors.

I have tried and I have tried, and I am quite convinced that the Fran semantics are not implementable lazily and efficiently in Haskell. I think I have come up with a garbage collector extension that would allow it, but I’m not going to pursue it. Knowing that such an extension exists is important to me, however, as I see hardware such as The Reduceron playing a big role in the future. The IO monad is not going to fly in a reduceronian world.

My impatience for a functional reactive game and UI library is growing. I could have made one in Fregl, but I became convinced that Fregl was too inefficient, and that I required weak-reference-free laziness to make it efficient. In retrospect, however, Fregl was inefficient for many more reasons than simply its eagerness. I am thus throwing aside my laziness requirement, and am now setting out to make a very efficient eager FRP library with Fran semantics.

Perl hackers know the three virtues: laziness, impatience, and hubris. I find it amusing that the one I’m having to throw away in a lazy language is laziness itself.

I know what I’m going to get for christmas

I believe I have found a truly marvelous implementation of FRP which this blog post is too short to contain.

Fortunately this is the internet, so it all fits in these three words.

It’s a very, very rough sketch. A plan for an implementation, with many questionable data structures. However I believe it to be possible, and it will doubtless consume my life for the coming weeks.

FRP Rant

FRP is eating my life away!

Here’s the thing. I love FRP. I love the concept, so much so that I am having a very hard time writing anything without it. I want to make games, servers, UI libraries, with it. I want to make things! But I am stuck in research mode, where I am still looking for an FRP library in which to make these things.

Why am I stuck searching? Because I don’t like Reactive. It’s just difficult to use, and difficult to make easy to use. The semantics are unclean (not all of them, just some essential pieces like Event, integral, and stepper), it’s abstracted so high that it takes me forever just to figure out that it doesn’t quite make sense in this little corner, and it’s all coupled together so I can’t figure out how to fix it without rewriting the whole damn thing. And, according to Conal, it still has the dreaded spacetime leak (which can fortunately but rather annoyingly be circumvented as I described almost a year ago).

What am I searching for? Well, whatever ends up being beautiful and easy to use. I’ve got a bunch of leads varying in ease of use and possibility of implementation. The ones which are obviously implementable are basically Reactive. However, my dream would be a pull-implementation of Fregl (the source code now lives here), because Fregl was great! I actually made stuff with it! In particular, everything to do with Events just rolled off my fingertips. The waiter semantics and the monad instance were the power tools analogous to the axiom of choice, where previously careful, meticulous reasoning becomes tactless and obvious under the great sledgehammer. I want a sledgehammer; that’s why I program in a Turing-complete language rather than a total one.

Fregl is no beauty on the inside, however, and this impaired the efficiency of the programs implemented in it. The GHC optimizer is dumbstruck trying to optimize any code written using the library, and because of the nature of the GHC garbage collector and weak references, far more work ended up being done than necessary. Basically it was a completely push-based implementation, and everything in Haskell is optimized for pull, so performance was terrible.

What I want most for christmas is a Haskellian Fregl implementation.

General Update

My brand new Acer laptop’s video card died a few days ago. The computer still worked, I just couldn’t see anything. So, while it’s being fixed, I’m back to using my old piece of junk which refuses AC power while turned on. It has an old battery, so I’m limited to about an hour of usage before it forces me to take a break. Needless to say, this is not enough time to get any significant coding done.

Coq is totally awesome! I proved Rice’s theorem for the SK calculus, which was a bit more satisfying than proving that every natural is either even or odd. My proof is pretty ugly, I can see a lot of ways I could clean it up, but I forgive myself; I’m a noob.

Coq is making me very excited for dependent types. For example, toward the end of the aforementioned proof, I started to define some combinators so I could construct the “impossible program”:

 Definition compose := S (K S) K.
 Lemma compose_really_does : forall f g x, compose f g x = f (g x).
  (* ... *)
 Qed.

I can envisage doing this kind of thing all the time in my regular programming practice. Write a little function, then prove that it does what I want it to. I’ve already begun exploring ways to use Coq for my every-day programming tasks, such as clean ways to hook into Haskell libraries with Extraction. Who knows how fruitful this endeavor will end up.

I have been making some interesting findings on the FRP front. Here’s an executive summary of each; I may or may not go into more detail in future posts.

(1) The semantics of behaviors are “functions of time”, but what the hell is time? In the real world, time has more structure than numbers. So I claim that in FRP what we’re actually constructing is causal transformations of time functions. A transformation R is causal if R(f)ₜ = R(fₜ)ₜ for all f and t, where the subscript t means “restricted to times before t”. As yet this has not led to any new implementation insights.

(2) FRP would be simple, easy to work with, and expressive even without its Event type, if only there were an efficient Monad instance for Behavior. For most of the cases where we would use Event, we substitute Behavior :. Future.

(3) Behavior has an efficient Monad instance, but it is not implementable in Haskell as it stands. It requires some support for linear types. Behavior itself is not linear per se, but each separate lexical instance is always accessed in a monotone pattern. So if multiple references were indicated by the compiler with split :: Behavior a -> (Behavior a, Behavior a), then you could destructively update each reference to the latest time it was accessed and avoid the expensive “fast-forwarding” necessary for the naive Monad instance. There are some other tricks, but that is the main idea.

(4) (What I’m currently exploring) It may be possible to get this efficiency without linear types by embedding the linearity in a monad, getting the “Behavior Monad Monad”. Ironically, the best candidate for doing this so far is the “Behavior :. Behavior Monad” (semantically). Whether or not it ends up working, I don’t think it will be very easy to work with.
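The causality condition in (1) can be made concrete with a small sketch (my own, not from the original findings): model a time function as a stream of samples, so that “restricted to times before t” becomes `take t`.

```haskell
-- A time function, modeled by its stream of samples.
type TimeFun a = [a]

-- R is causal at time t on input f if its output up to t depends
-- only on f up to t: R(f) restricted to t equals R(f restricted
-- to t) restricted to t.
causalAt :: Eq b => (TimeFun a -> TimeFun b) -> TimeFun a -> Int -> Bool
causalAt r f t = take t (r f) == take t (r (take t f))

-- A running sum looks only at the past, so it is causal.
runningSum :: TimeFun Int -> TimeFun Int
runningSum = scanl1 (+)
```

`causalAt runningSum [1..10] 5` holds, while `causalAt reverse [1..10] 5` fails, since reversing a stream needs the future.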

Slicing open the belly of the IO monad in an alternate universe

I’ve been looking for a way to do the pieces of I/O that are well-defined in the framework of FRP. For example, fileContents "someFile" is a perfectly good function of time, so why should we be forced to drop into the semantic fuzziness of the IO monad to get it?

Well, after a long talk with the Anygma folks about it, getting not very far in the practical world, I decided to do something else to quench my thirst for the time being since software needs to get written. I’m going to have the FRP program request actions by sending sinks to the top level, and then have the results of those requests come back via futures. So:

type Sink a = a -> IO ()
type Pipe = forall a. (Future a, Sink a)
data Stream a = a :> Stream a

liftFRP :: Sink a -> IO a -> IO ()
liftFRP sink a = a >>= sink

Okay, that’s nice, but how do we prevent uses of these things from becoming a tangled mess? Input always needs to communicate with output, and it seems like it would just be a pain to coordinate that. It turns out we can stick it in a monad:

newtype StreamReaderT v m a = SRT { runSRT :: StateT (Stream v) m a }
    deriving (Functor, Monad, MonadTrans)
readNext = SRT $ do
    (x :> xs) <- get
    put xs
    return x

type IO' = StreamReaderT Pipe (WriterT (IO ()) Future)
-- using the IO () monoid with mappend = (>>)

liftFRP :: IO a -> IO' a
liftFRP io = do
    (fut,sink) <- readNext
    tell (io >>= sink)
    (lift.lift) fut

unliftFRP :: IO' a -> IO a
unliftFRP m = do
    stream <- makePipeStream  -- a bit of magic here
    ((fut,_),action) <- runWriterT (runStateT (runSRT m) stream)
    action
    return $! futVal fut

It looks like IO’ has the very same (lack of) semantics as IO. liftFRP and unliftFRP form an isomorphism, so we really can treat them as pulling apart the IO monad into something more versatile, and then putting it back together.

Also we get a nice, fun parallel version of liftFRP. I can’t decide if this should be the common one or not.

liftParFRP :: IO a -> IO' a
liftParFRP io = do
    (fut,sink) <- readNext
    tell (void (forkIO (io >>= sink)))  -- void (from Control.Monad): forkIO returns a ThreadId we don't need
    (lift.lift) fut

So using liftFRP and unliftFRP, we are no longer “trapped in IO”, as some folks like to say. We can weave in and out of using IO as we’re used to and using requests and futures when convenient. For example, it’s trivial to have a short dialog with the user via the console, with the real-time program ticking away all the while. Fun stuff!

Composable Input for Fruit

The other day I had an idea for a game which required a traditionalish user interface (text boxes, a grid of checkboxes, …). But I’m addicted to Haskell at the moment, so I was not okay with doing it in C#, my usual GUI fallback. Upon scouring hackage for a GUI widget library I could use, I realized that they all suck—either they are too imperative or too inflexible.

So I set out to write yet another one, in hopes that it wouldn’t suck. The plan is to write it on top of graphics-drawingcombinators, my lightweight declarative OpenGL wrapper, and reactive, the latest (in progress) implementation of FRP. In researching the design, Conal pointed me to a paper on “Fruit”, which is a very simple design for GUIs in FRP. It’s nice (because it is little more than FRP itself), and it’s approximately what I’m going to do. But before I do, I had to address a big grotesque wart in the design:

data Mouse = Mouse { mpos :: Point,
                     lbDown :: Bool,
                     rbDown :: Bool }
data Kbd = Kbd { keyDown :: [Char] }
type GUIInput = (Maybe Kbd, Maybe Mouse)
type GUI a b = SF (GUIInput,a) (Picture,b)

So every GUI transformer takes a GUIInput as a parameter. The first thing that caught my eye was the Maybes in the type of GUIInput, which are meant to encode the idea of focus. This is an example of the inflexibility I noticed in the existing libraries: it is a very limited, not extensible, not customizable idea of focus. But there is something yet more pressing: this input type is not composable. The type of input is always the same, and there is no way to build complex input handling from simple input handling.

I took a walk, and came up with the following:

Scrap GUI. Our interface will be nothing more than pure FRP. But that doesn’t solve the input problem, it just gives it to the users to solve. So to solve that, we build up composable input types, and then access them using normal FRP methods.

We will start with Kbd and Mouse as above. The problem to solve is that when we pass input to a subwidget, its local coordinate system needs to be transformed. So the only capability input types need to have is that they need to be transformable.

-- A class for invertible transformations.  We restrict to affine transformations
-- because we have to work with OpenGL, which does not support arbitrary
-- transformations.
class Transformable a where
    translate :: Point -> a -> a
    rotate    :: Double -> a -> a
    scale     :: Double -> Double -> a -> a

instance Transformable Point where
    -- .. typical affine transformations on points

-- Keyboard input does not transform at all
instance Transformable Kbd where
    translate _ = id
    rotate _ = id
    scale _ _ = id

-- The mouse position transforms
instance Transformable Mouse where
    translate p m  = m { mpos = translate p (mpos m) }
    rotate theta m = m { mpos = rotate theta (mpos m) }
    scale sx sy m  = m { mpos = scale sx sy (mpos m) }

-- Behaviors transform pointwise.  In fact, this is the instance
-- for Transformable on any Functor, but we have no way of telling
-- Haskell that.
instance (Transformable a) => Transformable (Behavior a) where
    translate = fmap . translate
    rotate = fmap . rotate
    scale sx sy = fmap (scale sx sy)

Widgets that accept input will have types like: Behavior i -> Behavior o where both i and o are transformable (o is usually a Drawing, or a Drawing paired with some other output). So we can transform a whole widget at once by defining a transformable instance for functions.

instance (Transformable i, Transformable o) => Transformable (i -> o) where
    translate p f = translate p . f . translate (-p)
    rotate r f = rotate r . f . rotate (-r)
    scale sx sy f = scale sx sy . f . scale (recip sx) (recip sy)

The way we transform a function is to inversely transform the input, do the function, then transform the output. This is called the conjugate of the transformation.
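Here is a stripped-down, runnable version of the conjugation trick, with translation only (`widget` is a made-up stand-in for something like button: it reports its own origin, wherever that ends up):

```haskell
type Point = (Double, Double)

class Transformable a where
    translate :: Point -> a -> a

instance Transformable Point where
    translate (dx, dy) (x, y) = (x + dx, y + dy)

-- The conjugate: inversely transform the input, apply f, then
-- transform the output.
instance (Transformable i, Transformable o) => Transformable (i -> o) where
    translate (dx, dy) f =
        translate (dx, dy) . f . translate (-dx, -dy)

-- A made-up "widget": whatever input point it gets, it reports its
-- own origin.
widget :: Point -> Point
widget _ = (0, 0)
```

Now `translate (1, 0) widget (5, 5)` gives `(1.0, 0.0)`: the widget's origin has moved with it, no matter where we poke it.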

And that’s it for composable input: just a class for affine transformations. A typical GUI might look like:

-- Takes a mouse position, returns the "pressed" state and its picture.
button :: Behavior Mouse -> Behavior (Bool,Drawing)

And if we want two buttons:

twoButtons = (liftA2.liftA2) (second over) button (translate (1,0) button)

That is, just transform each subGUI as a whole (rather than separating input and output) and combine appropriately. That’s the theory, at least. For this to actually work correctly, we would need one of the following:

instance Transformable b => Transformable (a,b)
instance Transformable Bool    -- do nothing

Neither of these rubs me the right way. That seems like the wrong instance of transformable (a,b) to me (however, (a,) is a functor, so it’s consistent with what I said earlier). I don’t like having transformable instances for things that don’t actually transform. I’m thinking about maybe a type like this:

newtype WithDrawing a = WithDrawing (a,Drawing)
instance Transformable (WithDrawing a)

(Or the appropriate WithTransformable generalization)

Thoughts?

Fregl: an introduction to FRP

I’ve been mentioning quite a bit about my latest FRP implementation. First things first: You can find it here. The implementation is called “Fregl”, for “Functional Reactive Game Library”, roughly. But I thought I would take a moment to describe it and some of the directions I plan on taking it. The first section introduces FRP at a high level, and my intent is to write it so that someone unfamiliar with Haskell can at least get an idea of what it is about. The second section is much more technical, and goes into some of the implementation details. (Hmm, don’t have time right now to write the second section. It’s a post for another day.)

The Surface: Signals and Events

There are two major types of objects that make up the core of Fregl: Signals and Events. People familiar with FRP will feel right at home with these. A Signal is a value that varies over time, like any ordinary variable in a typical imperative language (they are defined and used very differently, though). An Event is just something that happens at some time, and then returns a value when it does happen. Both Signal and Event are type constructors, kind of like templates in C++; they carry with them an additional type. For example: Signal Int is a value of type Int that varies over time; Event Vec2 is an event that, when it happens, returns a 2D vector.

There are some core events, for example: waitClickPos, the position of the next mouse click; waitTimestep, occurs when a simulation timestep happens and returns the amount of time since the last one. There are a handful of others, but for simplicity we’ll only use mouseClick which we’ll define like this:

  mouseClick :: Event Vec2                        -- type signature (optional)
  mouseClick = waitClickPos ButtonLeft MouseDown  -- implementation

I should stress that Events are first-class objects just like any other (unlike events in C#!). There is nothing stopping a function from taking some arguments and returning an Event. In fact, waitClickPos does exactly that! It takes a button and a state and returns an Event representing when that mouse button enters that state.

It turns out that most of the logic in an FRP program comes in the form of custom events. Events are composable: you make big, complex events out of smaller ones. Here are a few examples. Non-Haskell programmers, I encourage you to try to understand this code too, it is pretty easy (and I’ve intentionally written it in a more imperative style).

  -- wait for two mouse clicks and return the position of the second one
  secondClick = mouseClick >> mouseClick

  -- wait for two mouse clicks and return the vector from the first to the second
  clickVector = do {
      pos1 <- mouseClick;
      pos2 <- mouseClick;
      return (pos2 ^-^ pos1);  -- ^-^ is vector subtraction
  }

  -- wait for a given amount of time
  waitTime time = do {
    if time <= 0 then
        return ()
    else do {
        dt <- waitTimestep;
        waitTime (time - dt);
    }
  }

  -- if there's a mouse click in the next 10 seconds, return its position,
  -- otherwise return (0,0)    (oh, how unhaskellish)
  clickInTenSeconds = eitherEvent waitTenSeconds mouseClick
      where
      waitTenSeconds = waitTime 10 >> return (0,0)

I hope by the last example you can start to see the power of such composability.

So far we have seen how to define your own Events, but Events don’t do anything. They just define what it means for something to happen, not what to do when they actually do happen. So we have to introduce an “event handler”, right?

Wrong. We need only two primitives, neither of which I would consider an event handler:

  -- return an event which happens instantly, returning the current value in the signal
  readSig :: Signal a -> Event a

  -- The returned signal takes on the value of the first signal until the event happens
  -- (and returns a signal), and takes on the value of the second signal from that point
  -- on.
  untilEvent :: Signal a -> Event (Signal a) -> Event (Signal a)

Examples of using these are coming soon. First, and this the essential difference from classical game programming, note that there is no “signal assignment”. To determine what the value of a signal at a certain time should be, you look at the signal’s definition, and that tells you everything. The only things which can affect its value are the things it depends on, not the things that depend on it. This is one of the sorts of “referential transparency” that Fregl obeys. There will be another–read on.

Okay, so how do you actually use these things? Here is one example to get the feel for it. Again, non-Haskellers should be able to understand this code given a little thought. Oh, I should mention the backquote syntax: If I write foo `baz` bar, that just turns the function baz into an infix operator; i.e. it is equivalent to calling baz with the two arguments: baz foo bar.

  -- Takes an 'initial value' and returns an event which occurs instantly, returning
  -- a signal which counts the amount of time since the event occurred plus the
  -- initial value.
  time :: Double -> Event (Signal Double)
  time init = pure init `untilEvent` do {  -- "pure" constructs a constant signal
      dt <- waitTimestep;
      time (init + dt);
  }

Let’s take a moment to study that code. It says that the returned signal takes on the constant value ‘init’ ["pure init"] until ["untilEvent"] the next timestep ["waitTimestep"], at which point it behaves like time (init + dt). It is a recursive definition. (Nit for the leet: the last line should be time $! init + dt lest we should make a huge addition thunk.)

It returns the signal inside an Event which occurs instantly. This is very common of signals. You can think of Events not just as things that happen, but things that have a perception of time. That is, our time function here needs to know what the “current time” is for its definition, which is why it must be returned inside an Event. Don’t worry if this part doesn’t make sense yet.

Now we get to step way outside the realm of conventional programming and define some really powerful stuff. It turns out that it is possible to define the following function:

  -- Returns an event which happens instantly, and returns a signal which represents
  -- the integral of the given signal over time, starting from the time the event
  -- was fired.
  integral :: Signal Vec2 -> Event (Signal Vec2)

(The actual implementation works on more general types than just Vec2)

Using this, we can define the following program (very similar to examples/follow):

  -- we use the following predefined signal:
  mousePos :: Signal Vec2

  -- defines the position of a ball that follows the mouse around
  follower = mdo {
      let acceleration = mousePos ^-^ position ;
      velocity <- integral acceleration ;
      position <- integral velocity ;
      return position ;
  }

If nothing in the article has given you chills yet, this should. In the definition of follower, first we define the acceleration to be the target position (the mouse) minus the current position. What do you mean the current position, where is that defined? Look two lines down, there it is, defined as the integral of velocity! This is a circular definition of position, essentially asking the library to solve the equation position = ∫∫ (mousePos – position).

Then, using this definition, we can easily turn this into a program which draws a ball at that position using the (very limited so far) drawing primitives:

  main = runGameSDL $ do {
    ballPos <- follower ;
    return (Draw.translate <$> ballPos <*> pure Draw.circle)
  }

And there’s the whole program.
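The circular definition of follower works because of lazy evaluation, and the same knot can be tied outside the library. A hedged sketch, replacing the mouse with a fixed target at 10 and using plain Euler steps over lazy lists (`integrate` is my own stand-in, not Fregl's `integral`):

```haskell
dt :: Double
dt = 0.1

-- Euler integration of a sampled signal.  scanl emits its seed before
-- demanding any input, which is what lets the recursion below work.
integrate :: Double -> [Double] -> [Double]
integrate v0 = scanl (\v x -> v + dt * x) v0

-- position = ∫∫ (target - position), written as circularly as follower
follower :: [Double]
follower = position
  where
    acceleration = map (10 -) position
    velocity     = integrate 0 acceleration
    position     = integrate 0 velocity
```

The samples start 0.0, 0.0, 0.1, … and climb toward the target, overshooting and oscillating like the spring the equation describes.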

This is just the beginning, the bare bones of FRP. It’s all embedded inside idiomatic Haskell, so we can use Haskell’s top-notch abstraction abilities to write our programs. To give you a flavor, here’s the signature of a neat function:

  -- Takes an event as an argument.  When this event fires, it should return a
  -- a signal (representing perhaps the state of an actor) and another event
  -- (representing the death condition for this actor).  So the returned signal
  -- will be in the output list between the time the "creation event" fires
  -- and the time the "destruction event" fires.
  actorList :: Event (Signal a, Event ()) -> Event (Signal [a])

This function implements the idea of a list of active actors, something I’ve written over and over in C++, whose implementation is always spread out across my program (iterate through it once in the draw() routine, once in the step() routine, once in the events() routine). A less general variation of it appears in examples/shmup, and it only takes about 9 lines to implement, once and for all.

That’s it for the layman’s introduction. One of my major programming-evangelistic goals is to convince people that OO is not the best way to solve every problem. I hope that this introduction has shown you that there are some non-OO techniques which can be immensely powerful that the game community has not even begun to explore.

Here’s a teaser for next time. It turns out, not by any conscious plan of my own, just by coincidence, that Fregl’s implementation of these semantics admits a very nice parallel implementation. In theory, it scales linearly with the number of processors automatically (i.e. the game programmer need not put in any extra work), which is a big deal if I do say so myself. I’ll be performing a practical test on one of CU’s 8-processor machines in a few days.

FRP: What’s the big deal?

I really have to get to bed, but I just did a little test and am so excited I need to say something:

AFRP sucks. The incredible amount of trouble I had to go through just to implement extremely simple physics with a dynamic number of balls within FRP (none of that cheating by factoring the physics elsewhere; while that may be a good thing to do, it’s a point for the power of FRP if you can do it directly in FRP) indicates that AFRP is absolutely not suitable for games. The short reason is that it chokes when faced with dynamic behavior.

On the other hand, classical comonadic FRP is great; it supports such dynamism without a sweat. That is, if it weren’t for the spacetime leak that results when you have value recursion like:

  e = integral 1 e

Which is horrible because that is the form of every differential simulation (read: games).

Because of that leak, I tried three different approaches with arrows, with varying flexibility trade-offs, but they were all bad for making games. I wanted to go back to comonadic FRP so badly, but the spacetime leak loomed in the darkness, condemning my programs to progressively slow down (and eat up more and more memory) as they ran.

As refresher, the data structure for comonadic FRP is:

  type DTime = Double
  data Signal a = Signal a (DTime -> Signal a)

And today something hit me, something so simple that it made me reconsider my sanity:

  -- 'runSignal step sig' produces the infinite list of
  -- values every 'step' seconds from 'sig'
  runSignal :: Double -> Signal a -> [a]
  runSignal = ...

  -- listSignal is the inverse of runSignal, constructing
  -- a signal from a list
  listSignal :: Double -> [a] -> Signal a
  listSignal = ...

  buffer :: Double -> Signal a -> Signal a
  buffer step = listSignal step . runSignal step

The buffer function just samples a signal at regular intervals, caching the results internally in a list. The implementation of these three functions combined is 13 simple lines; the concepts are easy too. Again, foreheadslap! Why? This is why:

  e = buffer 0.05 $ integral 1 e

No more spacetime leak! The nth timestep of e is found in linear time and constant space. To generalize the technique, just put a buffer between any value recursion to fix the leak. No, it’s not a perfect solution, but it’s a good compromise. You still get perfect granularity in your event handling (to me, an essential property), and you get to use comonadic FRP.
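For concreteness, here is a sketch of what those 13 lines might look like (my own reconstruction, not Fregl's actual code; listSignal assumes it is only ever advanced by exactly step, and integral is naive Euler with a lazy pattern so the value recursion can tie its knot):

```haskell
type DTime = Double
data Signal a = Signal a (DTime -> Signal a)

-- Sample a signal every 'step' seconds into an infinite list.
runSignal :: DTime -> Signal a -> [a]
runSignal step (Signal x f) = x : runSignal step (f step)

-- Inverse of runSignal.  It ignores dt, assuming the consumer always
-- advances by exactly 'step'.
listSignal :: DTime -> [a] -> Signal a
listSignal step (x:xs) = Signal x (\_ -> listSignal step xs)
listSignal _    []     = error "listSignal: finite list"

buffer :: DTime -> Signal a -> Signal a
buffer step = listSignal step . runSignal step

-- Naive Euler integral with initial value v0.  The lazy pattern (~)
-- is essential: it lets 'integral v0 e' produce a value before e is
-- demanded, so 'e = buffer 0.05 (integral 1 e)' does not diverge.
integral :: Double -> Signal Double -> Signal Double
integral v0 ~(Signal x f) = Signal v0 (\dt -> integral (v0 + dt * x) (f dt))
```

With e = buffer 0.05 $ integral 1 e, each buffered step multiplies the previous value by about 1.05, so runSignal 0.05 e begins roughly 1.0, 1.05, 1.1025, …: a linear-time, constant-space approximation of e^t.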

I’m so excited!