Monthly Archives: December 2008

Domain Convergence

Okay, how is this for a definition of convergence:

An infinite sequence xs :: [()] converges if there exists an n such that for every m > n, xs !! m = xs !! n. In other words, this is a bunch of programs which either halt or don’t, and after some point, either they all halt or they all don’t.

Then an infinite sequence xs :: [a] converges if map f xs converges for every f :: a -> ().
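To make that concrete, here is a quick sketch in Haskell (the example sequences are mine, with a looping definition standing in for ⊥):

```haskell
-- A non-halting program of type ().
bottom :: ()
bottom = bottom

-- Converges: from index 2 on, every element is () (they all halt).
converging :: [()]
converging = [(), bottom, ()] ++ repeat ()

-- Does not converge: halting and non-halting elements alternate
-- forever, so no tail is constant.
diverging :: [()]
diverging = cycle [(), bottom]
```

Of course you can only ever observe the halting elements; evaluating `diverging !! 1` loops.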

Sound good?

Continuous Stream Fusion

I’ve been able to sketch-implement a version of FRP with a strange distinction: Env vs. Behavior. I’ll give you the intuition, then the rest of the article will develop the semantics more fully. I thought “backwards” to discover this, from implementation, to concept, to semantics (“discover” used under the questionable assumption that it has some merit).

The semantics of Env a are plain ol’ Time -> a, but with a caveat: no reactivity is allowed. I.e., an Env a yields an a only by looking at stuff “now”. In the implementation, this is just a subtype of IO. The semantics of Behavior are even less defined. I guess its semantics are Time -> Time -> a: it is given a “start time” and returns a time function. It’s sort of like a relative behavior, except that shifting it around does not exactly preserve behavior. The implementation is much like Reactive (Env a), where Reactive is from the popular reactive library. But through this exploration I am led to a very different implementation.

To formalize this silliness, first we need to add some composability and define these as transformations. Let’s just fix an e for the whole discussion, and then say the meaning of Env a is (a subtype of) (Time -> e) -> (Time -> a). Now it’s easy to say what we mean by not allowing reactivity: an Env a is a function f of the form f r = g . r for some g. So to determine the value of f at time t, you look at its environment at time t and do something to the result. No peeking at other points in its environment.
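In code, the restriction is easy to enforce by construction: just store the g itself (a sketch with made-up names, not the actual implementation, which is a subtype of IO):

```haskell
type Time = Double

-- An Env is exactly a pointwise function g; its meaning as a
-- transformation is \r -> g . r, so it cannot peek at other times.
newtype Env e a = Env (e -> a)

runEnv :: Env e a -> (Time -> e) -> (Time -> a)
runEnv (Env g) r = g . r
```

The value at time t is determined by the environment at time t alone, by construction.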

For Behavior, I want a scanl, but on continuous functions. You can think of this as a generalization of integration. For a continuous scan to be meaningful, I require its argument to be a continuous function f of type Real -> a -> a, such that f 0 = id. By continuity I mean Scott-continuity: treat Real as an infinite-precision computable real, and then any definable total function is continuous.

Then I can find the integral function ∫f on the interval [0,1] by computing the limit:

  f 0.5  .  f 0.5
  f 0.25  .  f 0.25  .  f 0.25  .  f 0.25
  f 0.125  .  f 0.125  .  …  (8 copies)
  …

There is no guarantee that this will converge, or uniformly converge, or any of that good stuff. But we will cover our ears and pretend that it always does.
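The n-th approximant is easy to write down, though (a sketch; `integral` is my name for it, and as I said, convergence is just assumed):

```haskell
-- The n-th approximant of ∫f on [0,1]:
-- compose f (1/n) with itself n times.
integral :: Int -> (Double -> a -> a) -> a -> a
integral n f = foldr (.) id (replicate n (f (1 / fromIntegral n)))
```

With f dt x = x * (1 + dt), the approximants are (1 + 1/n)^n, marching toward e; with f dt x = x + dt, every approximant is exactly x + 1.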

If f :: Real -> Bool -> Bool, once our precision goes past the modulus of uniform continuity of f (the precision beyond which f never looks to determine what to do), then the approximants of ∫f will be constant. So there is some notion of continuity preserved.1

Okay, now we can say what a Behavior is! The meaning of Behavior a is a transformation f :: (Time -> e) -> Time -> Time -> a of the form — hold your breath (I’ll explain) — f r t0 = i . ∫ (j r) (k (r t0)), for functions i,j,k, where j is an Env function with the scan restriction j 0 = id.

Let’s dissect: First, k (r t0) is the initial value when the Behavior is started, which depends on the environment exactly when the Behavior is started and no other time. j r is the integrand, which takes into account the “previous” value and the current environment (only current because j must be an Env function). Finally, the i . out front is so we can compute things more complex than their final type; i.e. a Behavior Bool could have more state under the hood than just a boolean.

This definition is very much like Haskell stream fusion; we take a recursive definition and “flatten it out” into a state and an iterator function. By imposing a simple constraint on the iterator function, we have retrieved continuity of the generated stream, and can also perform continuous stream fusion in the implementation!
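Here is what that flattened form might look like as a datatype (a sketch; the names follow i, j, k from above, and runB, which runs against discrete environment samples, is my own crude approximation of the continuous semantics):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- i . ∫ (j r) (k (r t0)), flattened out: a hidden state s, an initial
-- state built from the environment at the start time, an Env-style
-- stepper, and a projection.
data Behavior e a = forall s. Behavior
    (e -> s)                 -- k: initial state, reads the env at t0 only
    (e -> Double -> s -> s)  -- j: the integrand; j e 0 must be id
    (s -> a)                 -- i: the state may be richer than the output

-- Run against discrete samples of the environment with step size dt.
runB :: Behavior e a -> Double -> [e] -> a
runB (Behavior k j i) dt (e0 : es) = i (foldl (\s e -> j e dt s) (k e0) es)
runB _ _ [] = error "runB: need at least the start sample"

-- Example: a clock that integrates 1 from wherever it starts.
clock :: Behavior e Double
clock = Behavior (const 0) (\_ dt s -> s + dt) id
```

The existential s is exactly the “more state under the hood” from the previous paragraph.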

I still haven’t figured out how to work events back into this model.

1 I’m committing a blatant foul here. Bool -> Bool is a discrete space, so any continuous function from Real to it has to be constant to begin with. So that paragraph was pretty much meaningless. I still liked it as a way to get a computational handle on things.

Reactive spaces

My recent days have been spent staring at the ceiling, drawing abstract doodles on a whiteboard, or closing my eyes and watching higher-dimensional images fly through my consciousness. No, I haven’t been on drugs. I’m after a very specific piece of mathematics, to solve a specific problem. But I have no idea what that mathematics is.

I’m after the sameness between functions of time and functions that react to events. Here is my best attempt at putting my images into words:

I will call them (generalized) Reactives, to confuse classical FRP people :-). A Reactive is defined over an environment space, which is a set of environments (with some structure). Time is an environment space; it is uniform and boring, so all of its environments are identical or isomorphic. There is also an Event environment space, whose inhabitants are roughly Events from reactive (a set of separated occurrences).

A Reactive takes an environment and outputs values in terms of it somehow. Reactives have a notion of translation. Say you have a reactive over an event space which is “False until the next mouse click and then True”. Translating this switches which mouse click it is talking about, but not when the mouse clicks occur; so the transition point will always be exactly on an external mouse click. However, since time is a uniform space, translation of a reactive over time does correspond to simple translation, since there is no interesting structure to query.

I don’t know yet what exactly an environment is. I am trying to capture the fact that reactives over an event space can only switch on occurrences of events, whereas reactives over time correspond to continuous functions. If an event environment looks like an FRP Event, what does the time environment look like?

Compact data types

Join me, fellow readers, as I learn some topology by pretending I know something about it.

I recently uploaded Martin Escardó’s infinite search monad to hackage, under the name infinite-search. It is a delightful little thing to play with, and has inspired some deeper thoughts about the connection between computation and topology.

Quick refresher on topology: a topological space is a set X together with a collection of subsets of X, called the open subsets. The open subsets must include the empty set and the entirety of X itself, and must be closed under arbitrary unions and finite intersections.

So for example, you can generate the standard topology on the real line by taking all the open intervals (a,b) and closing them under arbitrary unions. For example, the set \bigcup \{ (a-\frac{1}{4},a+\frac{1}{4}) \mid a \in \mathbb{Z} \}, a ball of radius 1/4 around every integer, is open. No solitary points are open; every point must come with an arbitrarily small cloud of nearby points.

As another example, there is a topology on the natural numbers called the discrete topology, in which you consider every singleton set to be an open set and take arbitrary unions over those. In this topology, every subset of the naturals is open.

A space is compact if “every open cover has a finite subcover”. That means for any collection S of open sets whose union is the entire space, there is a finite collection S' \subseteq S whose union is also the entire space. Obviously all finite topologies are compact, because there are no infinite collections to begin with.

So the discrete topology on the naturals I talked about is not compact. Consider this cover: \{ \{n\} | n \in \mathbb{N} \}; just all the singleton open sets. This obviously has no finite subcollection which also unions to the entire set of naturals (if you leave out even one of these sets it fails to be a cover).

Okay, on to computation. Roughly what Martin Escardó’s work says is that compact spaces can be exhaustively searched. That is, if S is a compact space, then given a total function p :: S -> Bool, we can either compute an element of S satisfying p or compute that there is no such element. If we could figure out a way to form topological spaces on our data types, then we can know whether they are exhaustively searchable (and construct algorithms to search them).

A domain is the set of all elements of a data type, including partial ones (with ⊥s in them). So the domain for Bool is { ⊥, True, False }, and the domain for Either () () is { ⊥, Left ⊥, Left (), Right ⊥, Right () }. To construct a topology (the Scott topology?) for the type, generate it from the sets of total elements compatible with each finite element of the domain. So the open sets for our Either () () would be { { Left (), Right () }, { Left () }, { Right () } }. We also toss in the empty set, since it is the union of zero of these.
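You can enumerate this little topology mechanically (purely an illustration; PE, approx and open are a made-up encoding of the partial elements):

```haskell
-- Partial elements of Either () (), with Bot standing for ⊥ and the
-- Bool recording whether the () payload is defined.
data PE = Bot | PLeft Bool | PRight Bool
  deriving (Eq, Show)

-- The total (fully defined) elements.
totals :: [PE]
totals = [PLeft True, PRight True]

-- x `approx` y: x is an approximation of y in the domain order.
approx :: PE -> PE -> Bool
Bot      `approx` _        = True
PLeft a  `approx` PLeft b  = b || not a
PRight a `approx` PRight b = b || not a
_        `approx` _        = False

-- The basic open set generated by a finite element: all total
-- elements compatible with it.
open :: PE -> [PE]
open e = [ t | t <- totals, e `approx` t ]
```

Here open Bot gives both totals, while open (PLeft False), i.e. the finite element Left ⊥, gives just the Left () singleton, matching the sets listed above.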

Earlier we saw that the naturals were not compact. This is a good sign, because if we could exhaustively search the naturals, then we could write an algorithm to, e.g., decide the twin-prime conjecture. However, I will now show that the lazy naturals are compact.

The lazy naturals are the data type:

data Nat = Zero | Succ Nat

What is the set of elements of this domain, and what is the topology? The set of elements is the regular naturals plus one special element: let infinity = Succ infinity. The topology has all the regular finite naturals as singleton sets, just like the discrete topology, but it also has the sets generated by the finite partial elements ⊥, Succ ⊥, Succ (Succ ⊥), …: for each k, the set of all naturals at least k, together with infinity. The key is that the singleton { infinity } itself is not open; infinity only occurs inside one of these infinite sets.

Now let’s say you have a cover of Nat. It covers infinity, so it must include a set of the form { n | n ≥ k } ∪ { infinity } for some k. So we pick that one, and then we pick however many sets we need to cover the naturals less than k; there will be at most k of them. So we have constructed a finite subcover.

So regular naturals are not compact, but Nat is. So regular naturals are not searchable, but Nat is. But we got Nat only by adding to the regular naturals; how did we make it searchable in the process?

There was a little snag in the definition of searchable above: we needed a total predicate. Many functions which we would not be able to search on the regular naturals would get into an infinite loop if we passed them infinity, and that would mean the predicate is not total. Intuitively, that means we can only search on predicates which are bounded in the size of the natural; which stop looking at some point.

I conjecture that every ADT in Haskell (without strict fields) is compact. It is pretty easy to write combinators which construct Data.Searchable Sets for any ADT; whether they are proper compact Sets (i.e. the search function is in fact total) is another issue.

So that’s it. We can make a compact set of Naturals and exhaustively search them. For fun, here’s a little Haskell implementation demonstrating this:

import Data.Searchable
data Nat = Zero | Succ Nat

nat :: Set Nat
nat = singleton Zero `union` fmap Succ nat

-- decidable equality on functions from Nat!
eqnatf :: (Eq a) => (Nat -> a) -> (Nat -> a) -> Bool
eqnatf f g = forevery nat $ \n -> f n == g n

Go ahead, write a Num instance for Nat and play around. Just remember that your predicates need to halt even when given infinity, otherwise they are not total. In particular, this means that you can’t use eqnatf on Nat -> Nat, because equality on Nat itself is not total (infinity == infinity loops).
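For the curious, the search itself is surprisingly small. Here is a self-contained miniature of the idea (my own sketch, not the actual Data.Searchable implementation), plus a predicate that is total even on infinity because it looks at no more than k constructors:

```haskell
data Nat = Zero | Succ Nat

-- A compact set, represented by its search function: given a total
-- predicate, find returns an element (a genuine witness, if one exists).
newtype Set' a = Set' { find :: (a -> Bool) -> a }

forsome', forevery' :: Set' a -> (a -> Bool) -> Bool
forsome' s p = p (find s p)
forevery' s p = not (forsome' s (not . p))

-- The lazy naturals: check Zero, otherwise search the rest under Succ.
-- If no finite natural satisfies p, this lazily produces infinity.
nat' :: Set' Nat
nat' = Set' $ \p -> if p Zero then Zero else Succ (find nat' (p . Succ))

-- Total even on infinity: inspects at most k constructors.
atLeast :: Int -> Nat -> Bool
atLeast k _ | k <= 0 = True
atLeast _ Zero       = False
atLeast k (Succ n)   = atLeast (k - 1) n
```

For example, forsome' nat' (atLeast 3) is True (the witness found is Succ (Succ (Succ Zero))), while forevery' nat' (atLeast 1) is False because Zero fails.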

Space and Time

On Thursday, just after we saw a lecture about Beethoven and Homer and had lunch together, I asked Karen (via email) if she would like to get dinner on Saturday. We have been loosely planning to go to spice china for a while. She replied, “busy then! going to a christmas concert. probably count me out for the next couple weeks… otherwise I’ll never get anything done.”

She leaves for Ithaca in two weeks. So I interpret this as saying “I can’t see you until after Christmas.” I mean, that is an exaggeration, as I can see her at Wednesday at Somewhere and ICH, but those hardly count.

What the fuck? Sure, I understand she’s busy, but she has time to have dinner with the Spanish students, to go to a Christmas concert (without inviting me), to go to ICH and W@S, to watch anime. Suddenly, as we declare that we are in a relationship, she has no time for me.

I told her that I understand and that I’d see her after her free time re-emerges. But it’s really not okay with me. I feel excluded and rejected, like she is trying to distance herself from me.

Now is the time when I would consider my options and make a decision about what to do next. Fuck that though. I don’t need a plan; I just want to live and to be happy. I have been trying to not worry about it and just be distant, but my present psyche seems incapable of that. So whatever. I’ll just be sad when I feel sad and chatty and desperate when I feel chatty and desperate and happy when I feel happy.

God damn relationships are a pain.