Monthly Archives: July 2008

The Deserted City

SNW had a session last Sunday, featuring Nolan McFadden on guitar. Here are some select tracks.

My favorite is no. 8, and it still would be even without the story. Nos. 2, 3, 4, and 5 are quite excellent as well. It was a good session.

Upon reflection though, I notice a lack of melodic content in these. These songs have many interesting, solid textures and good solos, but not really any primary melody. I guess that’s something to focus on for next time.

Semantic Design

Here’s a post I’ve been wanting to write for a while. This is an exposition of a unique, powerful, and (retrospectively) obvious software design approach. I credit the Haskell language as a whole for leading me toward this philosophy, but it is much broader in scope than Haskell. In particular I owe Conal Elliott thanks for being explicit about it with the Anygma team earlier this month: spelling out a reason I see beauty in Haskell and in FRP, and laying out a technique for designing systems with that sort of beauty.

This is not an alternative to eg. waterfall or agile. I see it as orthogonal to those, and more technical. I’d say it’s an alternative to Design Patterns. However, I ask readers to approach it as something new to be incorporated rather than a replacement or attack on Design Patterns.

I’ll just start with the big idea: make the types and objects in your program mean something. Usually there is some rhyme and reason for creating classes the way we do—eg. a list is not just a convenient grouping of procedures; it conceptually represents a list. Semantic Design takes that to the extreme. We first decide exactly what a list is, and then we make sure that every operation we expose is consistent with that idea.

To demonstrate, I’ll use the example that Conal used when he presented this to us (in seminar form): images. Here’s an exercise for you: define “image” precisely. What is the essence of an image? I’m talking about the concept itself, not the way you would represent it in memory (but if you actually think of the concept of an image as a 2D array of pixels, that’s valid).

Here is a small selection of the examples we came up with in the “seminar”:

  1. A rectangular array of colors (for a suitable definition of color);
  2. A function taking a pair of real numbers and returning a possibly transparent color;
  3. A list of colored geometric primitives (polygons, lines, circles), where objects later in the list are shown on top of objects earlier;
  4. Any object with a draw method (our formalization of this very software-oriented concept was a function from screens to screens).

Hopefully these examples hinted at the idea that these are conceptual or mathematical definitions, not software ones. The only reason I say “mathematical” is because I view math mostly as a tool for talking with precision; I don’t see defining your concepts in terms of classical real numbers, vector spaces, and functions as necessary, just convenient.

A bit of jargon: one of these concepts is called a semantic domain (of Image). The semantic domain is the conceptual analog to the software abstraction we’re creating.

With each of these semantic domains, we can start to imagine what kinds of operations we’d like to do with them. In considering these operations, another big idea enters the picture: The semantic domain should be composable. This is the axiom whose meaning I find hardest to explain. Roughly, it means we should be able to build more complex objects from simpler ones. This gives us an exponential increase in the expressiveness of our system, so we don’t have to implement much to get a lot of power.

A straightforward composition operation on images is to put one on top of another. Imagine what that would look like for each of the semantic domains:

  1. The resulting image at coordinate (x,y) is an appropriate blend of the bottom image at (x,y) and the top image at (x,y).
  2. Same as above, except x and y are real numbers.
  3. The resulting image is the bottom image concatenated with the top image.
  4. The resulting image’s draw() method is { bottom.draw(); top.draw(); }.
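
To make this concrete, here is a minimal Haskell sketch of definition no. 2 (my rendering; Color and blend are hypothetical stand-ins for “a suitable definition of color”):

type Point = (Double, Double)

-- Hypothetical color type: premultiplied RGBA, each channel in [0,1].
data Color = Color { red, green, blue, alpha :: Double }

-- Definition no. 2, literally: a function from a point in the plane to
-- a (possibly transparent) color.
type Image = Point -> Color

-- Composition: blend top over bottom, pointwise.
over :: Image -> Image -> Image
over top bottom = \p -> blend (top p) (bottom p)

-- Porter-Duff "over" for premultiplied alpha.
blend :: Color -> Color -> Color
blend (Color r g b a) (Color r' g' b' a') =
    Color (f r r') (f g g') (f b b') (f a a')
  where f x y = x + y * (1 - a)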

We would also like translation, because that together with composition gives us a fair amount of power (we can now composite images together in any layout). Here are definition sketches for translation in each of the semantic domains (where (tx,ty) is the amount we wish to translate):

  1. The pixel at (x,y) in the resulting image is a blend of the pixels around (x-tx,y-ty) (if we can translate by non-integral amounts) if it is in the array’s range. If not, it’s transparent.
  2. The resulting image at coordinate (x,y) is the original at coordinate (x-tx, y-ty).
  3. Translate each of the primitives in the list by (tx,ty).
  4. Unclear how to define translation in this domain!
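
Continuing the Haskell sketch, translation in domain no. 2 is nothing more than a change of sampling point:

-- Translation by (tx,ty): the result at (x,y) is the original at
-- (x-tx, y-ty), matching definition no. 2 above.
translate :: (Double, Double) -> Image -> Image
translate (tx, ty) im = \(x, y) -> im (x - tx, y - ty)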

Let’s stop analyzing operations for now. Usually I just do this quickly in my head, nothing formal (which is why I didn’t write it in formal mathematical language). But it seems like we have eliminated model no. 4 just by looking at the operations we want. Translation doesn’t mean anything when an image is defined by its draw method. This is an interesting development, since it would typically not be considered bad style to start writing an image library thusly:

abstract class Image {
    public abstract void draw();
}

(Or an interface—six of one, half dozen of the other)

To carry on with no. 4, we would have to expand our semantic domain. Maybe no. 4 would become “any object with a draw method and a translate method”. But this is obviously getting out of hand without getting us anywhere semantically. Why is it important to get somewhere semantically? This gets us to our last axiom: The semantic domain should be as simple as possible.

The motivation for this should be obvious. The simpler the semantic domain, the easier it is to reason about the implementation, both from the outside and from the inside. From the outside (the user of a library, for example), you can predict the behavior of everything you’re doing since the semantic domain is easy to understand, and we coded our library to correspond exactly to the domain. From the inside, it is easy to verify that our implementations match the semantic definitions, since they are simple. I felt like I went around in a circle in that paragraph, but I hope this is obvious.

This metric excludes no. 4 from our list of candidates, since the semantic domain gets more and more complicated with each new operation we want. Further, I argue it excludes no. 1. I used the informality “appropriate blend”, and while practitioners will know what I mean, the fact that it was too inconvenient to specify exactly is a red flag. On the other hand, nos. 2 and 3 both have a simple, exact specification.

Investigating operations further, it becomes clear that no. 2 is simpler than no. 3. Scaling a function by (sx,sy) is simple: (x,y) → (x/sx,y/sy), whereas scaling a primitive needs a switch on the primitive type, and consideration of its position and size. In most cases we could have intuited this just by looking at the domains themselves and not the operations, since the domain of no. 2 is simpler. No. 3 has to talk about primitives, and a precise definition would have to list the properties of each primitive, whereas no. 2 is just a function.
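
In the Haskell sketch, scaling in domain no. 2 is exactly that formula:

-- Scaling by (sx,sy): sample the original at the inverse-scaled point.
scale :: (Double, Double) -> Image -> Image
scale (sx, sy) im = \(x, y) -> im (x / sx, y / sy)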

I’m not saying no. 2 is the best; this is just the kind of reasoning we do in Semantic Design. There could be a better domain for images, but I think no. 2 is the best of these four. We must be creative in coming up with these, and we must take care that they are expressive enough for our requirements.

Once a semantic domain is settled on, we can implement. The implementation’s representation may be very similar to the semantic domain, or it may be very different. Sometimes we have to play all sorts of tricks to get an efficient implementation. But the point is that whatever representation we choose, and however we implement the operations on it, those operations must all have precise, meaningful definitions in the semantic domain. For example, if we choose no. 2, then it is meaningless to talk about an operation that returns the number of pixels in the image, because our domain has no idea what a pixel is. That doesn’t stop our representation from being a rectangular array (but other things, like supporting accurate arbitrary zooming, do).

But it is reasonable not to expose the full power of the semantic domain through the code interface. You can imagine that if this image library is for drawing on hardware using OpenGL, then supporting an arbitrary function from reals to colors may be a speed nightmare, so we just won’t expose that. The operations we expose simply need be a subset of the operations expressible on the domain.
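
To sketch that separation concretely (names hypothetical, and deliberately a different sketch from the function one above): suppose the representation is a back-to-front display list of primitives, because that is what the GPU wants. Each exposed operation is implemented on the representation, but is only admissible because it has an exact counterpart in domain no. 2.

data Prim = Circle (Double, Double) Double   -- center and radius, say
newtype ImageRep = ImageRep [Prim]           -- back-to-front display list

-- 'over' is implemented on the representation...
over :: ImageRep -> ImageRep -> ImageRep
over (ImageRep top) (ImageRep bottom) = ImageRep (bottom ++ top)

-- ...but it is justified by the domain: the meaning of (over top bottom)
-- must equal the pointwise blend of the meanings of top and bottom. An
-- operation like "number of pixels" has no meaning in the domain, so we
-- may not expose it, however easy it is to implement here.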

In summary, my interpretation is: Semantic Design is about creating abstractions that don’t leak. It gives me confidence that the approach can be taken to the very extreme, where you model your semantic domain in a proof checker and prove that your code matches it. In typical software design we cannot begin to prove code is correct, because we haven’t any idea what “correct” means! If the abstractions leak, then your implementation has a bug—a place where it does not correspond to the semantic domain. Thus, when done correctly, a semantically designed abstraction cannot leak.

There’s still plenty to say on the subject, but this post is already too long. This was an informal introduction, and most of this stuff can be formalized for the more mathematically-minded. In addition, there is another beautiful, closely related concept (also due to Conal) called “typeclass morphisms”, which at the moment has a very Haskell-centric interpretation. But I think I’m close to a more general meaning for those (it will probably end up being something from category theory).

Remember the principles: objects and types mean something, the semantic domain should be composable, and the semantic domain (and operations) should be as simple as possible.

Free Improvisation

The Anygma office had a MIDI keyboard lying around. Today I was hanging around at the office, and in my boredom I decided to look for the parts necessary to hook it up. A half hour later, I had a piano set up!

I haven’t played for two weeks, so this is a little rusty, but it’s okay. A 25-minute free improvisation. I also extracted the middle section, which I thought was the strongest, in case 25 minutes is too much. I’m actually quite fond of the latter.

Oh yeah, this was an interesting exercise, too, since this keyboard doesn’t have a sustain pedal.

The Curry-Howard isomorphism and the duality of → and ×

To people who are familiar with the Curry-Howard isomorphism, this post will probably be trivial and obvious. However, it just clicked for me, so I’m going to write about it. I had passively heard on #haskell that → and × (functions and tuples) were dual, so I played with types a bit, finding dual types, and it never really came together.

I’m not actually sure what I’m talking about here is exactly the CH isomorphism. I’ve been thinking about dependent types, and converting propositional formulae into that calculus. But if it’s not, it’s close.

Let’s use some sort of type-bounded quantification in our logic. I was thinking about the following formula:

\forall n \in \mathbb{N} \, \exists p \in \mathbb{N} \, (p > n \wedge \text{Prime}(p) \wedge \text{Prime}(p+2))

The corresponding type is:

(n : \mathbb{N}) \rightarrow [ (p : \mathbb{N}) \times (p > n) \times \text{Prime}\, p \times \text{Prime} (p+2) ]

In other words, the proposition “for every natural, there is a twin prime greater than it” corresponds to the type “a function which takes a natural and returns a twin prime greater than it”. × corresponds to existentials because the pair tells you which thing exists: the first element of the tuple is the witness, and the second element is the proof that it has the desired property.

It’s interesting that ∀ corresponds to →, since logical implication also corresponds to →. The difference is that the former is a dependent arrow, the latter an independent one. Beautifully, in just the same way, ∃ corresponds to ×, and so does ∧.
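
Here is the same statement in Lean 4 notation, as a sketch (Nat.Prime is assumed from Mathlib, and PLift lifts a proposition into a type): the dependent arrow and dependent pair play ∀ and ∃, and forgetting the dependency they degenerate into plain implication and conjunction.

def TwinPrimesProp : Prop :=
  ∀ n : Nat, ∃ p : Nat, p > n ∧ Nat.Prime p ∧ Nat.Prime (p + 2)

-- The same statement read as a type: a dependent function into a
-- dependent pair (compare the formula above).
def TwinPrimesType : Type :=
  (n : Nat) → (p : Nat) × PLift (p > n ∧ Nat.Prime p ∧ Nat.Prime (p + 2))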

Oh right, the duality. Knowing this, let’s take the above type and negate it to find its dual.

(n : \mathbb{N}) \times [ (p : \mathbb{N}) \rightarrow (\neg (p > n) + \neg\text{Prime}\,p + \neg\text{Prime}(p + 2)) ]

I didn’t bother expanding the encoding of ¬ on the inner levels, because it doesn’t really make anything clearer.

The dual is a pair: a number n and a function which takes any number and returns one of three things: a proof that it is no greater than n, a proof that it is not prime, or a proof that the number two greater than it is not prime. Intuitively, n is a number above which there are no twin primes. If this pair exists, the twin prime conjecture is indeed false.
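
In the same sketched Lean 4 notation, with ⊕ recording which of the three refutations the function gives us:

def NoTwinPrimesAbove : Type :=
  (n : Nat) × ((p : Nat) →
    PLift (¬(p > n)) ⊕ PLift (¬Nat.Prime p) ⊕ PLift (¬Nat.Prime (p + 2)))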

So yeah, that’s how → and × are dual. It’s pretty obvious now, actually, but it took a while to make sense.

The Luke Palmer, a new drink from Belgium

Natascha Heddendorp designed a special drink last Friday and named it after me! It will soon be on the menu of the bar at which we were drinking (so I’m told). Here is the recipe, recalled as best I can from memory, for the Luke Palmer:

  • Fresh Coffee
  • Bailey’s Irish Cream
  • Vodka
  • Amaretto
  • Milk
  • Served cold but without ice (which is a challenge considering the coffee)

I don’t know the proportions. Less than 1/2 coffee, but not too little. It has a strong kick at the beginning, and then calms down and has a smooth aftertaste and aftertexture.

After having two glasses of this, thus piss-ass drunk, I went home and slept. The following day, I got up and got lost in downtown Antwerp, and I missed my flight back home. Therefore I conclude it is bad luck to drink a drink named after yourself.

I’m okay; I’m coming home on Monday.

Required Optimization

Beelsebob and I were talking at lunch, and we had an interesting idea. Haskell is experimenting with optimizations that C programmers would never dream of. But I say “experimenting” with emphasis, since the optimizations are unpredictable. The Haskell optimizer can lower (and also raise) the space complexity of your program, changing it not from a program that runs slowly to one that runs quickly, but from one that doesn’t run to one that does (eg. allocating 20GB and dying vs. not allocating).

And thus the code becomes very brittle. Adding a trace, accidentally creating a thunk that references something early in a stream, or even doing a semantically nil refactor that happens to confuse the optimizer can cause your program to suddenly become a lot less efficient for no apparent reason. It’s not uncommon for a Haskell programmer to experience this in a real project.
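
A standard illustration (my example): the two-pass average below keeps the whole list alive, because xs is shared between sum and length; a fused single pass would run in constant space. Whether you get O(n) or O(1) residency can hinge on exactly the kinds of accidents described above.

-- Two traversals share xs, so the entire list is retained between them:
-- O(n) residency. A single fused pass would be O(1).
average :: [Double] -> Double
average xs = sum xs / fromIntegral (length xs)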

So how can we remedy this? The idea was: annotate which optimizations you expect, and get a compiler error/warning when they don’t happen and some indication why. As a simple example perhaps:

{-# OPTIMIZE ListStream addOne #-}
addOne = map (+1)

Which would annotate that addOne is a good candidate for list fusion; i.e. if composed with another good candidate, they will—not might, but will—be fused. This pragma could be used as a way to guide the compiler, but I’m thinking of it more as a simple check: verify that this is what is happening anyway.
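
For instance, extending the hypothetical pragma with a second annotated candidate:

{-# OPTIMIZE ListStream double #-}
double = map (*2)

-- Under the proposal, this is guaranteed (not merely hoped) to compile
-- to a single fused traversal with no intermediate list; otherwise the
-- OPTIMIZE annotations fail loudly at compile time.
pipeline = double . addOne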

I don’t know enough about the optimization process to know whether ListStream would actually be a valid thing to check, what kinds of such identifiers there could be, or which ones would be useful. But I always worry that the compiler will not do enough inlining to see the structure of my program and optimize intermediate structures away. In fact, I fixed a performance issue in Yampa recently that had exactly this problem: integral was not being inlined, which caused allocations instead of simple hardware loops and a 90% performance hit. I’m simply exploring a way to ensure that the optimizations you expect are actually happening, and if you break them, you get yelled at.


Fun with Flemish

I am having such a good time out here in Antwerp, I decided to stay another week. Actually this decision was not so much related to the city as to the project, but the city ain’t bad either.

Anygma moved me from the bed & breakfast I was staying at last week to an apartment they had rented for some of our art director’s students. The students haven’t arrived yet (I kind of want them to have, since it’s a bit lonely/boring). As I complained last time, I haven’t had a way to plug in my laptop, so my computer use has been constrained to work. Today I thought I would go into town and get a power converter.

Almost everybody here can speak English, but where’s the fun in that? I figure I will learn more Flemish if I don’t have something to fall back on. So as I was going into town I resolved not to talk to anybody in English. Navigating the tram system to get downtown was pretty easy—all the trams are numbered and color-coded.

Downtown was a huge, wide street with shops on either side and thousands of people walking around. It looked like a carnival. That illusion was amplified by waffle and ice cream stands on the street corners (I got a waffle covered with chocolate ice cream and fudge sauce—delicious!). However, other than the high density of people, it was all quite familiar. I found a large electronics store and grabbed the adapter, and went to stand in line to buy it. A woman who worked there babbled something in Flemish to me and pointed to another line. I knew she was saying that this one was about to close or something and I should wait in the other one.

It’s pretty interesting how much I can comprehend without understanding a word of Flemish. Communication is much more than words.

After I bought the converter, I got lost in town. I knew I was going to Groenplaats, so I asked a woman on the street, “Groenplaats?”, pointed my fingers back and forth, and she pointed me in the right direction. I walked a bit that way and didn’t find it, and that’s when my plan was foiled! I did the same thing to a guy tending a shop, and he gave me directions in Flemish. To the blank stare on my face he responded, “Do you speak English?” I shook my head confusedly. “Well, what do you speak normally?” Daft! He got me! In my embarrassment I responded, “I speak English, but I’m trying not to!” He gave me English directions… darn.

And now I am safely back at the apartment, using my computer (hooray!), able to call people with Skype (hooray!).

It’s still fun being out here. A bit tiring, but great fun. The Anygma project is amazingly cool, Conal Elliott has opened my eyes to a new approach to software design (well, maybe not totally new, but he pushed me into the pool that I was dipping my toes into). The rest of the team is smart, competent, and open-minded, and great fun to work with. The food is absolutely delicious, the strangers are kind and helpful, there are many beautiful women to look at. My adolescent reservations about moving out here are now shadowed by my excitement for this place.

Apparently I do not want to be in Antwerp

Because VISA is not here! It was a bit distressing coming here with only 15€ in my pocket and finding out that people don’t take credit cards the way everybody does in the USA. Anyway, there was a bank that understood my card and could give me cash.

This is my first time out of the USA (barring Canada), so I’m enjoying observing the differences. Although there are some interesting points, it’s more similar than I expected.

  • The outlets are those weird European ones, so I can’t plug in my computer (without ganking an adapter from a coworker). I was going to go to a store and buy one, but
  • Stores close at or around 17:00! How am I supposed to satisfy my electronics craving after dinner?
  • People write times in 24-hour format, there’s no AM/PM.
  • There is a different meme at pedestrian stoplights. In the USA, each person or group of people presses the button. Here, it seems that people only press the button if there’s nobody already waiting. That is, they trust people to have already pushed it… which makes sense.
  • I always like trying new weird food. This is made easier when one cannot read the menus.
  • “Met” means “with” in Dutch. Other words have other meanings.
  • A glass of water costs 1.50€ at restaurants. That could be related to the fact that tap water in this country is not what I would consider drinkable.

It’s actually not that hard to get around not being able to read anything. It’s fun being here.