Here’s a post I’ve been wanting to write for a while. This is an exposition of a unique, powerful, and (retrospectively) obvious software design approach. I owe the whole of the Haskell language for helping me toward this philosophy, but it is much broader in scope than Haskell. In particular I owe Conal Elliott for being explicit about it with the Anygma team earlier this month: spelling out a reason I see beauty in Haskell and in FRP, and laying out a technique for designing systems with that sort of beauty.

This is not an alternative to e.g. waterfall or agile. I see it as orthogonal to those, and more technical. I’d say it’s an alternative to Design Patterns. However, I ask readers to approach it as something new to be *incorporated*, rather than as a replacement for, or attack on, Design Patterns.

I’ll just start with the big idea: **make the types and objects in your program *mean something***. Usually there is some rhyme and reason to creating classes the way we do—e.g. `list` is not just a convenient grouping of procedures, it conceptually represents a list. Semantic Design takes that to the extreme: we first decide exactly what a list *is*, and then we make sure that every operation we expose is consistent with that idea.

To demonstrate, I’ll use the example that Conal used when he presented this to us (in seminar form): images. Here’s an exercise for you: define “image” precisely. What is the essence of an image? I’m talking about the concept itself, not the way you would represent it in memory (but if you actually think of the concept of an image as a 2D array of pixels, that’s valid).

Here is a small selection of the examples we came up with in the “seminar”:

- A rectangular array of colors (for a suitable definition of color);
- A function taking a pair of real numbers and returning a possibly transparent color;
- A list of colored geometric primitives (polygons, lines, circles), where objects later in the list are shown on top of objects earlier;
- Any object with a `draw` method (our formalization of this very software-oriented concept was a function from screens to screens).
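To make a couple of these candidates concrete, here is how nos. 2 and 3 might be transcribed as Haskell types. This is only a sketch; the names, and the particular set of primitives, are mine rather than anything from the seminar:

```haskell
-- Hypothetical types (my names, not from the seminar).
type Point = (Double, Double)

-- A possibly transparent color.
data Color = Color { red, green, blue, alpha :: Double }
  deriving (Eq, Show)

-- Domain no. 2: an image is a function from points to colors.
type Image = Point -> Color

-- Domain no. 3: a back-to-front list of colored primitives;
-- later elements are shown on top of earlier ones.
data Primitive = Polygon [Point] | Line Point Point | Circle Point Double
  deriving (Eq, Show)

type PrimImage = [(Color, Primitive)]
```

Note that neither type says anything about pixels or memory; they are transcriptions of the *concepts*.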

Hopefully these examples hinted that these are conceptual or mathematical definitions, not software ones. The only reason I say “mathematical” is that I view math mostly as a tool for talking with precision; I don’t see defining your concepts in terms of classical real numbers, vector spaces, and functions as necessary, just convenient.

A bit of jargon: one of these concepts is called a *semantic domain* (of Image). The semantic domain is the conceptual analog to the software abstraction we’re creating.

With each of these semantic domains, we can start to imagine what kinds of operations we’d like to perform on them. In considering these operations, another big idea enters the picture: **the semantic domain should be *composable***. This is the axiom whose meaning is hardest for me to explain. Roughly, it means we should be able to build more complex objects out of simpler ones. This gives us an exponential increase in the expressiveness of our system, so we don’t have to implement much to get a lot of power.

A straightforward composition operation on images is to put one on top of another. Imagine what that would look like for each of the semantic domains:

- The resulting image at coordinate (x,y) is an appropriate blend of the bottom image at (x,y) and the top image at (x,y).
- Same as above, except x and y are real numbers.
- The resulting image is the bottom image concatenated with the top image.
- The resulting image’s `draw()` method is `{ bottom.draw(); top.draw(); }`.
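In domain no. 2, this layering operation can be written down in a few lines. Here is a Haskell sketch, using the standard alpha “over” rule as one concrete choice of “appropriate blend” (the types and names are hypothetical ones of my own):

```haskell
type Point = (Double, Double)
data Color = Color { red, green, blue, alpha :: Double }
  deriving (Eq, Show)
type Image = Point -> Color

-- One choice of "appropriate blend": standard alpha "over"
-- compositing of a top color onto a bottom color.
blend :: Color -> Color -> Color
blend t b
  | a == 0    = Color 0 0 0 0
  | otherwise = Color (mix red) (mix green) (mix blue) a
  where
    a     = alpha t + alpha b * (1 - alpha t)
    mix c = (c t * alpha t + c b * alpha b * (1 - alpha t)) / a

-- Putting one image on top of another is pointwise blending.
over :: Image -> Image -> Image
over top bot = \p -> blend (top p) (bot p)
```

The pleasant thing is that `over` itself is one line; all the work is in saying what a blend of two *colors* means.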

We would also like translation, because that together with composition gives us a fair amount of power (we can now composite images together in any layout). Here are definition sketches for translation in each of the semantic domains (where (tx,ty) is the amount we wish to translate):

- The pixel at (x,y) in the resulting image is a blend of the pixels around (x-tx,y-ty) (since we may translate by non-integral amounts), provided that point is in the array’s range. If not, it’s transparent.
- The resulting image at coordinate (x,y) is the original at coordinate (x-tx,y-ty).
- Translate each of the primitives in the list by (tx,ty).
- *Unclear how to define translation in this domain!*
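For domain no. 2, the definition is short enough to write out directly. A sketch in Haskell, with hypothetical `Point`/`Color`/`Image` types standing in for whatever a real library would use:

```haskell
type Point = (Double, Double)
data Color = Color { red, green, blue, alpha :: Double }
  deriving (Eq, Show)
type Image = Point -> Color

-- Translation in domain no. 2: the result at (x, y) is the
-- original image sampled at (x - tx, y - ty).
translate :: (Double, Double) -> Image -> Image
translate (tx, ty) img = \(x, y) -> img (x - tx, y - ty)
```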

Let’s stop analyzing operations for now. Usually I just do this quickly in my head, nothing formal (which is why I didn’t write it in formal mathematical language). But it seems like we have eliminated model no. 4 just by looking at the operations we want. Translation doesn’t *mean* anything when an image is defined by its draw method. This is an interesting development, since it would typically not be considered bad style to start writing an image library thusly:

```java
abstract class Image {
    public abstract void draw();
}
```

(Or an `interface`—six of one, half dozen of the other)

To carry on with no. 4, we would have to expand our semantic domain. Maybe no. 4 would become “any object with a `draw` method and a `translate` method”. But this is obviously getting out of hand without getting us anywhere semantically. Why is it important to get somewhere semantically? This gets us to our last axiom: **The semantic domain should be as simple as possible**.

The motivation for this should be obvious. The simpler the semantic domain, the easier it is to reason about the implementation, both from the outside and from the inside. From the outside (the user of a library, for example), you can predict the behavior of everything you’re doing since the semantic domain is easy to understand, and we coded our library to correspond exactly to the domain. From the inside, it is easy to verify that our implementations match the semantic definitions, since they are simple. I felt like I went around in a circle in that paragraph, but I hope this is obvious.

This metric excludes no. 4 from our list of candidates, since its semantic domain gets more and more complicated with each new operation we want. Further, I argue it excludes no. 1. I used the informality “appropriate blend”, and while practitioners will know what I mean, the fact that it was too inconvenient to specify exactly is a red flag. On the other hand, nos. 2 and 3 both have a simple, exact specification.

Investigating operations further, it becomes clear that no. 2 is simpler than no. 3. Scaling a function image by (sx,sy) is simple—the result at (x,y) is the original at (x/sx,y/sy)—whereas scaling a primitive needs a switch on the primitive type, and consideration of its position and size. In most cases we could have intuited this *just* by looking at the domains themselves and not the operations, since the domain of no. 2 is simpler. No. 3 has to talk about primitives, and a precise definition would have to list the properties of each primitive, whereas no. 2 is just a function.
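As with translation, the domain-no.-2 definition of scaling fits on one line (again a sketch, with my own hypothetical type names):

```haskell
type Point = (Double, Double)
data Color = Color { red, green, blue, alpha :: Double }
  deriving (Eq, Show)
type Image = Point -> Color

-- Scaling in domain no. 2: the result at (x, y) is the
-- original image sampled at (x / sx, y / sy).
scale :: (Double, Double) -> Image -> Image
scale (sx, sy) img = \(x, y) -> img (x / sx, y / sy)
```

Note the division rather than multiplication: to make the image twice as big, we sample the original at half the coordinates.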

I’m not saying no. 2 is the best; this is just the kind of reasoning we do in Semantic Design. There could be a better domain for images, though I think no. 2 is the best of these four. We must be creative in coming up with these, and we must take care that they are expressive enough for our requirements.

Once a semantic domain is settled on, we can implement. The implementation’s representation may be very similar to the semantic domain, or it may be very different. Sometimes we have to play all sorts of tricks to get an efficient implementation. But the point is that whatever representation we choose, and however we implement the operations on it, those operations must all have precise, meaningful definitions in the semantic domain. For example, if we choose no. 2, then it is meaningless to talk about an operation that returns the number of pixels in the image, because our domain has no idea what a pixel is. That doesn’t stop our representation from being a rectangular array (but other things, like supporting accurate arbitrary zooming, do).
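One way to see this representation/domain split is a deep embedding: the representation records the operations we expose, and a function maps each representation into the semantic domain of no. 2. This is purely an illustrative sketch—the constructor names and the `meaning` function are mine, and a real implementation would pick a representation for efficiency, not this one:

```haskell
type Point = (Double, Double)
data Color = Color { red, green, blue, alpha :: Double }
  deriving (Eq, Show)
type Image = Point -> Color

-- Standard alpha "over" compositing, as before.
blend :: Color -> Color -> Color
blend t b
  | a == 0    = Color 0 0 0 0
  | otherwise = Color (mix red) (mix green) (mix blue) a
  where
    a     = alpha t + alpha b * (1 - alpha t)
    mix c = (c t * alpha t + c b * alpha b * (1 - alpha t)) / a

-- One possible representation: a tree of the exposed operations.
data ImageRep
  = Function Image                       -- an arbitrary color function
  | Translate (Double, Double) ImageRep  -- shift by (tx, ty)
  | Over ImageRep ImageRep               -- top `Over` bottom

-- The representation is meaningful precisely because every
-- constructor has a definition in the semantic domain.
meaning :: ImageRep -> Image
meaning (Function f)           = f
meaning (Translate (tx, ty) r) = \(x, y) -> meaning r (x - tx, y - ty)
meaning (Over top bot)         = \p -> blend (meaning top p) (meaning bot p)
```

Whatever representation we actually choose, `meaning` is the thing our implementation has to agree with; an operation with no definition there (like counting pixels) has no place in the interface.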

But it is reasonable not to expose the full power of the semantic domain through the code interface. You can imagine that if this image library is for drawing on hardware using OpenGL, then supporting an arbitrary function from reals to colors may be a speed nightmare, so we just won’t expose that. The operations we expose need only be a *subset* of the operations expressible on the domain.

In summary, my interpretation is: Semantic Design is about creating abstractions that don’t leak. It gives me confidence that the approach can be taken to the very extreme, where you model your semantic domain in a proof checker and prove that your code matches it. In typical software design we cannot begin to prove code is correct, because we haven’t any idea what “correct” means! If the abstractions leak, then your implementation has a bug—a place where it does not correspond to the semantic domain. Thus, when done correctly, a semantically designed abstraction cannot leak.

There’s still plenty to say on the subject, but this post is already too long. This was an informal introduction, and most of this stuff can be formalized for the more mathematically-minded. In addition, there is another beautiful closely related concept (also due to Conal) called “typeclass morphisms”, which at the moment have a very Haskell-centric interpretation. But I think I’m close to a more general meaning for those (probably will end up being something from category theory).

Remember the principles: objects and types *mean* something, the semantic domain should be *composable*, and the semantic domain (and operations) should be *as simple as possible*.