Putting aside any fears of seeming pretentious or arrogant, I’d like to write about my software design strategy. I consider myself a pretty good designer, and although I haven’t written that much, what I do write comes out clean, maintainable, and versatile. In the process, I’d like to talk about my new idea for an open source music composition program, and use it as an example.
The first thing you need in a project is your design goals. Most likely, somebody else has done the same thing you’re about to do, so your design goals tell you how your project is going to be different or better than those other poor fools’. For instance, Subversion set out to be CVS, but not broken (well, it was a bit more specific than that).
Let’s start with the design goals for my music project. There have been previous attempts at composition programs, but they never seemed to get anywhere. Some of the limitations were: too notation-driven (LilyPond, the only successful one), or too big a task (i.e. trying to do too much at once, and, since this is open source, getting nothing substantial done). I’d also like to incorporate my music-wiki idea. So let’s come up with some design goals:
- Composition focus, but not exclusively (addresses the first problem). That is, focus on the notes, not the presentation.
- Component-wise (addresses the second problem). That is, my program is actually a suite of related little programs—or little libraries. It means that I can prototype the big concepts of my software without going into huge detail.
- Software-unbiased, related to the previous goal. So I could let somebody else write a notation program to go with my composition program, and it could work smoothly.
- Text-friendly (addresses the music-wiki goal). I want changes to be mergeable, using the patch program.
So how do we proceed from here? Well, our design is component-wise, so let’s start with the first component: the composing program. Rather, a composing program, because we want this to be software-unbiased. What does such software operate on; what concrete thing can we pin down in the design? The file format.
That’s the first big task on the list: make a file format. We already know some of its features, due to the last design goal. If somebody adds a measure somewhere, it should not affect every line in the file (as it would if, say, you used measure numbers). Anyway, I have some deeper ideas here, but I’ll put them off until a later post.
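I’ll save the real format for that later post, but the mergeability point is easy to make concrete. Here’s a toy Python sketch (the measure names, the note syntax, everything here is hypothetical, not the actual format) showing why sequential measure numbers fight the patch program, while stable IDs plus an explicit ordering keep edits local:

```python
# Hypothetical illustration of why measure numbers hurt mergeability.
# With sequential numbering, inserting a measure renumbers everything
# after it, so a textual diff touches every following line.

numbered = ["m1: c4 d4 e4 f4", "m2: g4 a4 b4 c5", "m3: c5 b4 a4 g4"]

# Insert a measure after the first one: every later line changes.
renumbered = ["m1: c4 d4 e4 f4", "m2: e4 f4 g4 a4",
              "m3: g4 a4 b4 c5", "m4: c5 b4 a4 g4"]
changed = [line for line in renumbered if line not in numbered]
print(len(changed))  # 3 lines differ: the insert plus two renumberings

# With stable IDs plus an explicit ordering line, the same insert only
# adds one line (the new measure) and edits one (the order list).
by_id = {"intro": "c4 d4 e4 f4", "theme": "g4 a4 b4 c5", "coda": "c5 b4 a4 g4"}
order = ["intro", "theme", "coda"]

by_id["bridge"] = "e4 f4 g4 a4"   # one added line in the file
order.insert(1, "bridge")         # one edited line in the file
```

Two people editing different measures now produce patches that don’t collide, which is exactly what the music-wiki goal needs.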
As a composer, I want my composition program to be able to play back the music on request, and also play each note as I write it. So that brings up the next area to design: the synthesizer layer. It’s a layer since we want to be software-unbiased. It should take our file (or any measure in our file) and make something that you can hear out of it. I say that what it produces is its choice: a MIDI file, a wav file, um… I guess that’s about it.
There’s something else that I want, and it makes everything fit together very nicely: Finale’s “human playback” feature. It takes your score and analyzes it so it can make a not-so-machine-like interpretation. Again, software-unbiased, so we have a layer in-between the composition layer and the synthesis layer: the interpretation layer. And here’s where one of my principles of design comes up. Don’t tangle things up if you don’t have to; keep the design linear. So let’s say that the interpretation layer takes a composition file and turns it into a playable event stream (with whatever interpretations it has added), which it then passes to the synthesis layer to play. The synthesis layer never needs to know what a “composition file” is.
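The linearity is worth sketching. Here’s a rough Python picture of the layer boundaries; every name in it (`NoteEvent`, `interpret`, `synthesize`, the toy note syntax) is mine, invented for illustration, not a proposed API:

```python
from dataclasses import dataclass

# A hypothetical event record: the only vocabulary the synthesis
# layer ever speaks.
@dataclass
class NoteEvent:
    pitch: int       # MIDI note number
    start: float     # seconds
    duration: float  # seconds
    velocity: int    # 0-127

def interpret(composition_text: str) -> list[NoteEvent]:
    """Interpretation layer: composition file in, event stream out."""
    events, clock = [], 0.0
    for token in composition_text.split():  # toy format: MIDI numbers
        events.append(NoteEvent(int(token), clock, 0.5, 80))
        clock += 0.5
    return events

def synthesize(events: list[NoteEvent]) -> bytes:
    """Synthesis layer: event stream in, audio (or MIDI) out.
    It never sees a composition file -- the design stays linear."""
    return b"".join(e.pitch.to_bytes(1, "big") for e in events)

# The whole pipeline is one straight line:
audio = synthesize(interpret("60 62 64"))
```

Because the only thing crossing each boundary is the event stream, you can swap out either end without the other noticing.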
And you can have all sorts of interesting design inside each layer. For instance, in the interpretation layer you could have a “dumb interpretation” which just changes the notes into their exact timings with no inflection. Then you could have an “add dynamics interpretation” which takes an event stream (not a composition file) and adds dynamics to it. Then you can stack these two on top of each other. Also keep in mind that there’s nothing keeping this layer from being interactive. You write the notes, then you go into the interpretation program to customize how they are rendered. You now have a modular interpretation design. That’s important to keep in mind, but the main thing is that it’s hidden from our primary three layers. The interpretation layer takes a composition file and does whatever it needs to do to turn it into an event stream. It’s a black box.
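The stacking idea can be sketched directly: the dumb pass makes events from the file, and every later pass is event-stream-in, event-stream-out, so passes compose. As before, all the names and the note syntax are illustrative, not a real API:

```python
# Interpretations as stackable passes inside the black box.

def dumb_interpretation(composition_text):
    """Exact timings, no inflection: one event per note, fixed length."""
    return [{"pitch": int(tok), "start": i * 0.5, "velocity": 64}
            for i, tok in enumerate(composition_text.split())]

def add_dynamics(events):
    """Event stream in, event stream out: a crescendo across the phrase."""
    n = max(len(events) - 1, 1)
    return [{**e, "velocity": 40 + (60 * i) // n}
            for i, e in enumerate(events)]

# Stack them; from outside, the layer is still just file -> event stream.
stream = add_dynamics(dumb_interpretation("60 62 64 65"))
print([e["velocity"] for e in stream])  # rising dynamics: [40, 60, 80, 100]
```

An interactive interpretation program would just be another pass in this stack, one that happens to ask you questions.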
Finally, we want notation, because it’s hard to perform music without notation. I’d put this layer (you can see that I just assume that these big concepts are layers; usually that’s the case) next to the interpretation layer, serving a similar function. It takes a composition file and turns it into some visual document, after some user-defined processing. And this solves one of the biggest problems with commercial music programs: the fuzzy line between notation and music. They all get it wrong somewhere.
Oh, and put metadata capability on all of these formats. Metadata rocks.
And now we have our four-layer design. That’s the structure of the program, and you can start working on it one-at-a-time (keeping in mind whether each of the others needs to understand what you’re doing—that’s why you do the big stuff up-front).
So in general, my process is that I break it down into subproblems only far enough so that I can understand the relationship between all of them completely. That is, how much I do at each step depends on my mental capacity. If you see object designs that start with 20 different layers, you’re looking at either a genius, or someone who started with too much complexity. Ask him how layer (3) interacts with layer (16). If the answer is more than one sentence, he needs to abstract it out more. So after I understand the level that I’ve broken it into, I just focus on one of those layers, and break that down to where I can understand it (but no more). And so on (where so on is usually zero more steps; two levels is about as far as I go), but only until I have something I can start writing. I am usually not finished with my design of a particular component when I start writing; the necessary abstractions will show themselves as they come.
But the most important thing about these layers is that they’re each testable. I can write a really simple composition file that exercises the file format without needing a synthesizer or an interpreter; I just manually check that the file format is coming out right. Likewise with every other layer (so long as I can hand-write the simple files I need to test them). So just as I’ve broken the problem down in my design, I’ve broken it down in implementation. I need to know when I’m done with one part, and I need to see it doing something. This is much, much more important in community projects (the thing that I’m not so good at), so that people can see something happening and get excited. People are bewildered by pure code. Most of the time, even a single programmer is, too.
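A per-layer test really is that small. Here’s what testing the file-format layer alone might look like, with a toy parser standing in for whatever the real format becomes (the measure syntax is the same hypothetical one as before):

```python
# Testing one layer in isolation: a hand-written composition file and a
# plain assertion -- no synthesizer, no interpreter.

def parse_measures(text):
    """Toy stand-in parser: one measure per line, 'name: notes...'."""
    measures = {}
    for line in text.strip().splitlines():
        name, _, notes = line.partition(":")
        measures[name.strip()] = notes.split()
    return measures

sample = """
intro: c4 d4 e4
coda: e4 d4 c4
"""

m = parse_measures(sample)
assert m["intro"] == ["c4", "d4", "e4"]  # done when this passes --
assert len(m) == 2                       # and you can *see* it working
print("file-format layer OK")
```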
Anyway, that was the easy part. Where I really get good is in little abstractions that make big differences. Hopefully I’ll talk about those one of these days.