I’ve been able to sketch-implement a version of FRP with a strange distinction: `Env` vs. `Behavior`. I’ll give you the intuition, then the rest of the article will develop the semantics more fully. I thought “backwards” to discover this (the word “discover” used under the assumption that it has some merit: questionable), from implementation, to concept, to semantics.

The semantics of `Env a` are plain ol’ `Time -> a`, but with a caveat: there is no reactivity allowed. I.e., an `Env a` yields an `a` only by looking at stuff “now”. In the implementation, this is just a subtype of `IO`. The semantics of `Behavior` are even less defined. I guess its semantics are `Time -> Time -> a`, so it is given a “start time” and returns a time function. It’s sort of like a relative behavior, but shifting it around does not exactly preserve behavior. The implementation is much like `Reactive (Env a)`, where `Reactive` is from the popular reactive library. But through this exploration I am led to a very different implementation.

To formalize this silliness, first we need to add some composability and define these as *transformations*. Let’s just fix an `e` for the whole discussion, and then say the meaning of `Env a` is (a subtype of) `(Time -> e) -> (Time -> a)`. Now it’s easy to say what we mean by not allowing reactivity: an `Env a` is a function `f` of the form `f r = g . r` for some `g`. So to determine the value of `f` at time `t`, you look at its environment at time `t` and do something to the result. No peeking at other points in its environment.
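As a concrete Haskell sketch of this semantics (the names `Env`, `env`, and `runEnv` are illustrative, not the real implementation), the non-reactivity restriction says every `Env` arises from a pointwise function:

```haskell
type Time = Double

-- The meaning of Env: a transformation of time functions.
newtype Env e a = Env ((Time -> e) -> (Time -> a))

-- The non-reactivity restriction: every Env must have the form
-- f r = g . r for some g, so it can only sample its environment "now".
env :: (e -> a) -> Env e a
env g = Env (\r -> g . r)

-- Sampling at time t inspects the environment at t and nowhere else.
runEnv :: Env e a -> (Time -> e) -> Time -> a
runEnv (Env f) = f
```

For example, `runEnv (env (+1)) (\t -> t * 2) 3` looks at the environment only at time 3, and nothing forces it to remember or anticipate any other time.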

For `Behavior`, I want a scanl, but on continuous functions. You can think of this as a generalization of integration. For a continuous scan to be meaningful, I require its argument to be a *continuous* function `f` of type `Real -> a -> a`, such that `f 0 = id`. By continuity I mean Scott-continuity: treat Real as an infinite-precision computable real, and then any definable total function is continuous.

Then I can find the integral function `∫f` on the interval [0,1] by computing the limit:

f 0.5 . f 0.5
f 0.25 . f 0.25 . f 0.25 . f 0.25
...

There is no guarantee that this will converge, or uniformly converge, or any of that good stuff. But we will cover our ears and pretend that it always does.
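Here is a tiny executable sketch of that limit (the name `approxScan` is mine): compose n copies of `f (1/n)` and let n grow, covering our ears about convergence as promised.

```haskell
-- Approximate the continuous scan of f over [0,1] by composing
-- n copies of f (1/n). Meaningful only when f 0 = id.
approxScan :: Int -> (Double -> a -> a) -> a -> a
approxScan n f = foldr (.) id (replicate n (f (1 / fromIntegral n)))
```

With `f dt x = x + dt * x` (Euler steps for x' = x, which satisfies `f 0 = id`), `approxScan n f 1` approaches e as n grows, which is one quick sanity check that the limit is doing something integral-like.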

If `f :: Real -> Bool -> Bool`, once our precision goes past the modulus of uniform continuity of `f` (the precision of the information that `f` will never look at to determine what to do), then `∫f` will be constant. So there is some notion of continuity preserved.^{1}

Okay, now we can say what a `Behavior` is! The meaning of `Behavior a` is a transformation `f :: (Time -> e) -> Time -> Time -> a` of the form — hold your breath (I’ll explain) — `f r t0 = i . ∫ (j r) (k (r t0))`, for some functions `i`, `j`, `k`, where `j` is an Env function with the scan restriction `j 0 = id`.

Let’s dissect: First, `k (r t0)` is the initial value when the Behavior is started, which depends on the environment exactly when the Behavior is started and no other time. `j r` is the integrand, which takes into account the “previous” value and the current environment (only current because `j` must be an Env function). Finally, the `i .` out front is so we can compute things more complex than their final type; i.e. a `Behavior Bool` could have more state under the hood than just a boolean.
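A hypothetical Haskell rendering of this decomposition (my names, and a deliberately crude fixed-step sampler standing in for the true limit):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

type Time = Double

-- A Behavior as the (i, j, k) triple; the hidden state type s is
-- existential, so a Behavior a may carry more state than an a.
data Behavior e a = forall s. Behavior
  (s -> a)               -- i: read the answer out of the hidden state
  (Time -> e -> s -> s)  -- j: a step, looking only at the current env
  (e -> s)               -- k: initial state from the env at start time

-- Crude fixed-step sampler: approximates the continuous scan by
-- iterating j with a small dt. (The real semantics is the limit.)
runB :: Behavior e a -> (Time -> e) -> Time -> Time -> a
runB (Behavior i j k) r t0 t1 = i (go (k (r t0)) t0)
  where
    dt = 0.001
    go s t | t >= t1   = s
           | otherwise = go (j dt (r t) s) (t + dt)

-- Example: a clock measuring time since it was started.
clock :: Behavior e Time
clock = Behavior id (\dt _ s -> s + dt) (const 0)
```

`runB clock (const ()) 0 1` comes out near 1, and it would come out near 1 for any start time, which is the “relative behavior” flavor described earlier.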

This definition is very much like Haskell stream fusion; we take a recursive definition and “flatten it out” into a state and an iterator function. By imposing a simple constraint on the iterator function, we have retrieved continuity of the generated stream, and can also perform continuous stream fusion in the implementation!
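For comparison, the stream-fusion shape being alluded to (as in the stream-fusion library; `fromList` and `toList` are my helper names): a recursive list flattened into a state plus an iterator, with combinators that rewrite the iterator instead of rebuilding the structure.

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- The classic fusion representation: hidden state plus step function.
data Step s a = Done | Skip s | Yield a s
data Stream a = forall s. Stream (s -> Step s a) s

fromList :: [a] -> Stream a
fromList = Stream next
  where next []       = Done
        next (x : xs) = Yield x xs

toList :: Stream a -> [a]
toList (Stream next s0) = go s0
  where go s = case next s of
          Done       -> []
          Skip s'    -> go s'
          Yield a s' -> a : go s'

-- map fuses: it rewrites the iterator, never building a new list.
mapS :: (a -> b) -> Stream a -> Stream b
mapS f (Stream next s0) = Stream next' s0
  where next' s = case next s of
          Done       -> Done
          Skip s'    -> Skip s'
          Yield a s' -> Yield (f a) s'
```

The Behavior decomposition is the continuous analogue: `k` plays the role of the initial state, `j` the role of the iterator, and the `j 0 = id` restriction is what buys back continuity.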

I still haven’t figured out how to work events back into this model.

^{1} I’m committing a blatant foul here. `Bool -> Bool` is a discrete space, so any continuous function from `Real` to it has to be constant to begin with. So that paragraph was pretty much meaningless. I still liked it to get a computational handle on things.

Maybe I don’t get something. But is there any difference between your

(f r t0 = i . ∫ (j r) (k (r t0)))

and a simple continuation-based implementation of FRP

(data B env a = B (env -> (a, B a)))?

Both have an initial state and a transformation function. `(i .)` is simple lifting. Also, the environment is just a continuation argument.

I assume you mean:

data B env a = B (DTime -> env -> (a, B env a))
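To make the semantic worry in the following replies concrete, here is a sketch (my helper names) of a `B` whose value at a given time depends on the sampling pattern rather than on time alone:

```haskell
type DTime = Double

-- The continuation-based representation under discussion.
newtype B env a = B (DTime -> env -> (a, B env a))

-- A "time function" that merely counts how many steps were taken.
countSteps :: B env Int
countSteps = go 0
  where go n = B (\_dt _env -> (n + 1, go (n + 1)))

-- Drive a B up to time t with a fixed step dt.
runTo :: Double -> DTime -> B env a -> env -> a
runTo t dt (B f) env
  | t <= dt   = fst (f dt env)
  | otherwise = let (_, b') = f dt env in runTo (t - dt) dt b' env
```

Sampling `countSteps` up to time 1 with dt = 0.25 gives 4, while dt = 0.5 gives 2: same time, different values, so this `B` is not a function of time.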

Yes, there is, at least semantically (unless I am mistaken). How do you characterize the fact that this representation still forms a continuous function? With mine, we might not have convergence, but in B I don’t even know what convergence means. Continuity is the main reason I explored this formalization in the first place.

Although I should probably explore this convergence idea further. It’s obvious for metric spaces, but what does it mean for a `Behavior (Integer -> Bool)` to converge?

In my implementation I keep DTime in the environment. For me, DTime is just as much a part of the environment state as the list of currently pressed keyboard/mouse keys, the mouse position, and so on.

As for convergence, I think that there is a whole numerical analysis area which tries to solve such questions.

“B” doesn’t even touch this, as it is just an abstraction for a value that can change with time. It is the implementation of a B’s current-value calculation that must (or may) worry about convergence.

Maybe convergence of behaviors lives somewhere at the intersection of control theory, numerical methods, and mathematical analysis (I’m not sure). But I don’t think that FRP can be easily coupled with convergence. It is too large an area, and it is mostly mathematical, not programming. But, who knows, maybe you can get something from trying to get FRP behaviours which converge.

BTW, the Euler integration used in many FRP implementations works only for constants.

pos = integral (integral gravity)

is just plain incorrect for any delta time. So it can safely be used only for calculating linear functions of time. RK4 integration uses derivatives at several points, and I’m not sure it would be simple to embed it in the Behavior type. It is simpler to make it separate. But again, who knows, I didn’t think too much about this.
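A quick numeric sketch of that claim (my names; `integralE` is naive Euler): double-integrating constant gravity and comparing against the exact 0.5 * g * t^2 shows an error proportional to the timestep, so for any fixed dt it never goes away.

```haskell
g :: Double
g = 9.8

-- Naive Euler integral: samples of the running integral of ys
-- at times 0, dt, 2*dt, ...
integralE :: Double -> [Double] -> [Double]
integralE dt = scanl (\acc y -> acc + dt * y) 0

-- pos = integral (integral gravity), sampled n steps in with step dt.
posE :: Double -> Int -> Double
posE dt n = integralE dt (integralE dt (repeat g)) !! n
```

At t = 1 the exact position is 0.5 * 9.8 = 4.9; with dt = 0.1 Euler gives about 4.41, and shrinking dt tenfold only shrinks the error tenfold.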

I suspect we are thinking past each other. I’ll make clear right away, so we hopefully don’t continue, that I am approaching this as a mathematician rather than as a hacker. There are plenty of ways to implement programs which have vague ideas like “value that can change with time” and “event that occurs”. I am trying to take these ideas and formulate a rigorous semantic design; make sure that these vague ideas are backed by mathematical significance; make sure that the abstractions are not mere “design patterns” but are actually abstracting *something*.

Let us put aside the difference between B and Behavior, since I think they are in some sense isomorphic, though maybe in a non-obvious way.

You say that B is a “value that changes with time” but leave issues of convergence up to a particular value of B. But how can you say that B truly represents a “value that changes with time” if the value it takes at a given time depends on the pattern of samples used to find it (even as we approach a limit of size-zero samples)? What *is* its value at that time?

If you want to put aside the issue of convergence/continuity, you must be willing to accept that your “B” is something *more* than a value that changes with time; that it is a value that changes with a series of timesteps, which is a conceptually much more complex thing. E.g. you are allowing a “time function” which counts the number of time steps, even though a time step is an implementation detail.

Underneath all this rigor, I have a distant hope that I will arrive upon a semantics which will uncover a new implementation strategy. I am currently studying Abstract Stone Duality (computable topology), thus my obsession with continuity. If I find a meeting point between FRP and ASD, there is a (slim) chance it could reveal a new interface *and* implementation which would cure my dissatisfaction with existing FRP systems.

The important thing about FRP to me is that it is strongly theoretically motivated. I just don’t think we’ve found the right theory yet :-). But unless we talk about things like continuity and convergence in FRP, we won’t be able to *do math on* programs; it’ll just be another muddy IO monad, where the only thing we know about what a program means is what it does when we run it; in contrast to the pure Haskell world, which is firmly founded in domain theory, where all programs have meaning separate from their execution.

I don’t understand what “thinking past each other” means (a literal translation to Russian sounds strange to me; maybe it is an idiomatic phrase, or maybe I need to study English more ;).

But it seems that I understand the other things you wrote.

My understanding of B comes from its implementation, and yes, it is a value which changes with a series of timesteps. Not a beautiful mathematical meaning, but enough for practical purposes (I write video slot games using FRP for scene description).

And yes, for me FRP is just a small set of combinators which helps to write FSMs and describe dependencies. “Just another muddy IO monad”, as you say.

As for theoretical background, it would be very interesting to get one for FRP. Things become simpler and more understandable when they have it.

“talking past each other” means both of you are failing to address what the other is saying. Imagine two people giving monologues on completely different topics while looking at each other; that’s talking past each other. It’s supposed to sound strange.

Which convergence are you referring to? Termination, or the convergence of floating point integrals to the real value of the same integral? It seems to me that Luke is talking about the former while Vladimir is talking about the latter.

I take it that you’re trying to establish that the behavior functions all terminate by constructing them from a limited set of functions guaranteed to terminate and combinators which will always yield terminating functions given terminating inputs. I guess that these combinators will be constructed by separately operating on i, j, and k. Do I have that right? I didn’t think there were any termination problems in FRP that needed solving. Such an approach may well solve the space leak issue also, but that’s a separate issue.

I don’t like the idea of including integration as a primitive in FRP, precisely because of the numerical problems in it. It seems to me that a delta operation would be sufficient to emulate whatever integration algorithm you desire without causing any numerical convergence problems. You could restrict yourself to polynomials, but I wonder how long that would last, considering the first example in the first FRP paper was a sinusoidal wobble.

Hi Steven,

What I mean by convergence is closer to numerical convergence than to termination. And the issue arises even without considering integration.

The problem is continuity. Consider a function f :: Real -> Real, interpreted as a behavior. If Real is a computable real, then this function is required to be honest-to-god continuous, because in this case, Real continuity corresponds to Scott continuity. So first, I want to acknowledge that floating point is an approximation and pretend I am working on computable reals.

So to compute this function, you give it a Real number, and ask for, say, 64 bits of the result. It looks at some finite prefix of the bits you gave it to compute this. I call this “top-down” evaluation, because you’re starting with big subsets of reals (smaller prefixes) and refining downward.

Then the question is how to flip it on its side, and do “left-to-right” evaluation. However, it still represents the same top-down continuous function, so we have to make sure it actually corresponds to one. So as you make the left-to-right input step smaller and smaller, the step function you get out should start converging to a continuous one.

That is why we need convergence: so that we are sure we are working with proper functions of time, not some other weird thing. And thus my next post exploring what convergence means on arbitrary data types, and hopefully further refinements in this left-right transformation.
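A small sketch of what that convergence looks like in the easy, real-valued case (my names; this is the metric-space picture only): replace f by the step function that holds the most recent sample, and check that the worst-case error on [0,1] shrinks with the step size.

```haskell
-- Left-to-right approximation: hold the most recent sample of f.
stepApprox :: Double -> (Double -> Double) -> Double -> Double
stepApprox h f t = f (fromIntegral (floor (t / h) :: Integer) * h)

-- Sup-distance between f and its step approximation, probed on a
-- fine grid over [0,1].
supError :: Double -> (Double -> Double) -> Double
supError h f =
  maximum [ abs (f t - stepApprox h f t) | t <- [0, 0.001 .. 1] ]
```

For a continuous f such as sin, `supError h sin` goes to 0 with h, so the left-to-right step functions converge to a genuine function of time; a sample-dependent process has no such limit, which is exactly the degenerate case being ruled out.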

Did that make sense?

I see my error: I thought Scott-continuity was less general than it is. (how did we ever survive before Wikipedia?)

So simply by using the ordinary ordering on Reals, Scott-continuity becomes causality. That’s pretty neat. And you’re trying to figure out how to construct causal/Scott-continuous functions that converge to familiar Euclidean-continuous functions. I don’t know enough math yet to understand exactly in what sense a continuous function in one topological space converges to a continuous function in another topological space, but I have the intuitive concept. It all makes perfect sense to me now.