Monthly Archives: September 2004

One More Time

I can’t send any mail, since my mail server has been infected by a can of spam, so they’re firewalling off outgoing mail until they fix it. Meanwhile, I’m trying to switch to the University of Colorado’s mail server, because I disagree with the policy the current one is pursuing. Anyway, that means I get some time to take my mind off Perl 6 for a little while and exercise my mathematical fancy.

Here’s a very simple gambling game I came up with today. Not a game you could put in casinos; it’s too simple. But it’s an interesting mathematical case study having to do with probabilities. Here’s the game: I give you a uniformly distributed random number in [0,1). You can either “hit” or “stay”. If you stay, I give you an amount of money proportional to your number (we’ll just say it’s that number of dollars, so you can make up to one dollar, with rounding). If you hit, I give you another number, and you lose the previous one.

Here’s what makes it interesting: there is a fixed number of rounds, say, ten. If you hit ten times, you have to stay with the final number, no matter what it is. It could be 0.50 (the expectation), or 0.01, who knows? How do you devise a strategy for maximizing the amount of money you get out of this game?

At each step in the game, you look at the current number and at the expected value of the game if you hit. If the current number is bigger, you stay, otherwise you hit. How do you compute the expected value? Well, it depends on your strategy (this is where this problem differs from most probability problems). So you have to compute it recursively. Here’s how it goes:

Call the expected value of the game with m moves left ev(m). Clearly ev(0) is 1/2: with no moves left you have to stay with your number, which is uniform on [0,1).

Now, at each move in the game (with m moves left), you take the current number n and compare it against the expected value of the rest of the game, ev(m-1). To get ev(m), you do that for every possible number that could come up, weighted by the probability of it coming up:

    ev(m) = ∫[0,1] (if n > ev(m-1) then n else ev(m-1)) dn

Integrals with conditionals inside them are usually pretty nasty, but this one can be broken up easily:

    ev(m) = ∫[0, ev(m-1)] ev(m-1) dn + ∫[ev(m-1), 1] n dn

which, evaluating both pieces, turns into the recurrence relation: ev(m) = ev(m-1)² + (1 − ev(m-1)²)/2.

And there you go. The first few values of this look like:

n  ev(n)
0  0.500
1  0.625
2  0.695
3  0.742
4  0.775
5  0.800
6  0.820
This basically means that if you have n rounds left, then you should hit if your number is less than ev(n) and stay otherwise. We can see that ev(n) tends toward one as n goes to infinity, which is what we were expecting.
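To sanity-check the recurrence, here’s a quick sketch (Python is just my choice of illustration language here) that iterates ev(m) = ev(m-1)² + (1 − ev(m-1)²)/2 starting from ev(0) = 1/2:

```python
# Expected value of the hit-or-stay game with m moves left.
# ev(0) = 1/2; ev(m) = ev(m-1)^2 + (1 - ev(m-1)^2) / 2.
def ev(m):
    v = 0.5  # with no moves left, you keep a uniform [0,1) draw
    for _ in range(m):
        v = v * v + (1 - v * v) / 2
    return v

for n in range(7):
    print(n, round(ev(n), 3))
```

The printed values reproduce the table above, and the threshold strategy falls out directly: with n rounds left, hit whenever your number is below ev(n).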

My interest in this problem came up near my death in a poker tournament. I had about two big blinds left, and I was on the button. I wondered, mathematically, what criteria my hand needed to satisfy in order for me to “go with it”. There I observed that it mattered how far away I was from the blind. That problem is closely related to this one. If I can assign a single number to the quality of a hand (that’s actually impossible, if you want to keep hands probabilistically well-ordered), then I can find out when to go with it, and when to wait for the next one.


Kim Burchett posted a link to a fantastic paper on his results with software methodology. I haven’t read the whole thing yet (it’s not really that long), but the first half is gold. He notes that there are many, many software development methodologies, and any of them can work for particular tasks… and any of them can fail for particular tasks. He goes on to argue that how people on a project are encouraged to communicate, and how well people’s individual characteristics match the methodology, matter much more to the success or failure of the project.

This is important stuff for me. I’ve managed one project in my life: IPC. It was a semi-success. I certainly got our team to crank out something vastly more complex than any of them had ever done before. The problem was that… I did more than half of the work. I see my mistake now. I’m a good software designer. I make abstractions up the wazoo, make everything easier (if you understand how it works), extensible, etc. The trouble is that I don’t understand how to get others to understand my abstractions, or rather, how to make abstractions that others will understand. If you look at the IPC code, you’ll see things like:

    template <class Ty>
    class PValVal : public PVal<Ty> { /* ... */ };

These guys had just barely been introduced to inheritance, and I’m doing generalized inheritance with templates?! No wonder they couldn’t use it. I just figured they needed to learn more, but now I see that it was my responsibility to give them something they could work with, not their responsibility to work with my stuff.

Anyway, it’s been an enlightening couple of months. Maybe by the time I’m 30 I’ll be able to design like Dan Sugalski.

Glop Refactor: Floaters, Actors, Input, …

I took a long walk last night at 9:00 to ponder some of the design decisions of Glop. I knew something had to be done about the Input subsystem, and some other things needed thinking through as well. Here’s what I came up with. I came back and implemented half of it.

First, there’s a new concept called a “Floater”. It’s just like an Actor, except, well, except nothing. It’s an Actor. I renamed Actors to Floaters to give a new conceptual feel to them. They’re lightweight, and they’re more useful than for just things on the screen. A Floater is, generically, a fire-and-forget object associated with the current game state.

I then rethought the input system using floaters. Instead of having an “input manager” in the current state, with hooks into all the input subsystems, the current state just has a heap. This heap isn’t specialized; it’s just something you can put stuff in. Then I split the input into a “backend” and a “frontend”. There’s only one backend at the moment (it’s possible that there will never be another in the core): Glop::Input::Backend::SDL. The frontends connect to the backend, which stores itself in the heap (or not; that’s up to the implementation, though Backend::SDL does) and forwards events along. The frontends translate those events into user callbacks.

What this is essentially doing is decoupling the kernel and the input. Each implementation can do what it needs to do, without having external management. This was a good choice.
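The backend/frontend split is essentially an event-forwarding pattern. Here’s a minimal sketch in Python (Glop itself is Perl; the class names and mechanics below are my own illustration, not Glop’s actual API): the backend owns the raw event stream and forwards it, and each frontend translates events into user callbacks.

```python
# Illustrative sketch of the backend/frontend input split.
class BackendSDL:
    """Owns raw events and forwards them to any connected frontends."""
    def __init__(self):
        self.frontends = []

    def connect(self, frontend):
        self.frontends.append(frontend)

    def pump(self, raw_events):
        # In a real backend this would poll SDL; here we take a list.
        for ev in raw_events:
            for f in self.frontends:
                f.handle(ev)

class KeyFrontend:
    """Translates keydown events into user-registered callbacks."""
    def __init__(self):
        self.callbacks = {}

    def key(self, name, cb):
        self.callbacks[name] = cb

    def handle(self, ev):
        kind, key = ev
        if kind == "keydown" and key in self.callbacks:
            self.callbacks[key]()

backend = BackendSDL()
keys = KeyFrontend()
backend.connect(keys)
keys.key("space", lambda: print("jump"))
backend.pump([("keydown", "space"), ("keydown", "x")])
```

The point of the shape is exactly the decoupling described above: the kernel never manages input directly, and each frontend decides for itself what an event means.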

Okay, Actors are called Floaters, so an Actor is something different now. What is it? It’s a kind of Floater, but it’s smart. Some actors want a geometric body associated with them, some of them want a physical body, some of them want to be able to be clicked on. So Actors can compose roles which are each of these things. If an Actor composes Actor::Role::Body, then it automatically has a body, without any work on the part of the user. If an Actor composes Actor::Role::Geom, then the user has to say what kind of geom, and then it just works.
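Role composition like this can be sketched with mixins. Here’s a hypothetical Python rendering (the names Floater, Actor, and Body come from the post; the mechanics are mine, and Glop would express this with Perl roles instead): composing a body role gives the actor a body automatically, with no work on the user’s part.

```python
# Illustrative sketch of Floaters, Actors, and composed roles.
class Floater:
    """A fire-and-forget object associated with the current game state."""
    def step(self):
        pass

class BodyRole:
    """Composing this role gives an Actor a physical body for free."""
    def __init__(self):
        super().__init__()
        self.body = {"x": 0.0, "y": 0.0, "vx": 0.0, "vy": 0.0}

class Actor(Floater):
    """A smart Floater; concrete actors compose roles onto it."""
    pass

class Ball(BodyRole, Actor):
    def step(self):
        # The body exists automatically because the role was composed.
        self.body["x"] += self.body["vx"]

b = Ball()
b.body["vx"] = 1.0
b.step()
print(b.body["x"])
```

A Geom-style role would work the same way, except its constructor would demand a geom type from the user before things “just work”.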

MFOOP, Part Two

I decided just to start my paper instead of blogging about what I’m going to write about. Now I’m going to blog about some of the issues that the paper has brought up.

There’s a really neat thing that happened during the abstraction process. Remember those awful theoretical CS problems in which you’re trying to figure out whether Circle is derived from Ellipse or vice versa, and eventually you decide that they have to be siblings? That solution never seemed right to me. It’s one or the other, darn it!

The reason a Circle is not a kind of Ellipse is that you could treat it as an Ellipse and change its major axis, deeming it no longer a Circle. But this doesn’t happen when the Circle is constant, because you can’t change it, and it works in every way like a constant Ellipse.

So—what if every object in the universe were constant? No, that doesn’t mean we’re in a pure functional language. You could still change objects, except you wouldn’t be changing them, you’d be mapping them to new objects. If this is the case, then a subclass is isomorphic to a subset, which is isomorphic to a subtype. No more subtle distinctions; they’re all the same thing. The set of circles is a subset of the set of ellipses, therefore, Circle is derived from Ellipse.
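Here’s a small sketch of that view in Python (my own illustration; the class names are the obvious ones from the Circle/Ellipse problem). Nothing is ever mutated: “changing” an object means mapping it to a new one, so Circle can safely derive from Ellipse.

```python
# Immutable Circle/Ellipse: subclass = subset = subtype.
class Ellipse:
    def __init__(self, a, b):
        self._a, self._b = a, b  # semi-axes, never mutated

    @property
    def a(self):
        return self._a

    @property
    def b(self):
        return self._b

    def stretched(self, k):
        # Not mutation: a map from this ellipse to a new one.
        return Ellipse(self._a * k, self._b)

class Circle(Ellipse):
    """The set of circles is a subset of the set of ellipses."""
    def __init__(self, r):
        super().__init__(r, r)
```

Note what happens when you “change the axis” of a Circle: `Circle(2).stretched(3)` simply yields an `Ellipse(6, 2)`, while the original Circle is untouched. No invariant ever breaks, which is exactly why the classic design problem evaporates.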

This is just a change in our way of thinking, and doesn’t affect language semantics. But it turns out to be a very useful change. All of those evil object design problems just vanish when you start to think of things this way. Another example: in Perl, numbers and strings convert to each other. There was a big discussion on perl6-language about whether Num was derived from Str, or whether they were siblings, or what. With this mindset, it’s simple: Num is derived from Str. All numbers are strings. Some strings can be numbers. They don’t convert; they just are.

This is what the paper is about. It’s about some of the more subtle problems that arise when you start to think of things this way, what you need to do to fit it into what we call Perl, and what Perl needs to do to fit into this. I am confident that Perl will become a much more logically consistent language as a result, without losing its natural linguistic properties.

I still haven’t figured out what this means for multimethods. All I know is that we can’t define “distance” in that horrible way I wrote about last week, since there’s no concept of “existence”—everything that could exist does exist.

Mathematical Foundations of Object Oriented Programming

I haven’t blogged about this before, but that doesn’t mean I haven’t given some serious thought to it. It just needs more serious thought.

The problem is multimethods. Perl 6’s operator system is all based around multimethods. And the current multimethod thinking around the entire programming community gives me the heebie-jeebies, as Perl 6’s junctions did before I tore them to shreds for the greater good :-). I think I need to do precisely the same thing I did there: fall back to programming’s mathematical basis.

Whenever a programming feature strays far from mathematics, things start getting inconsistent and difficult to work with. Non-mathematical features tend to have major scaling problems. I don’t know why, I just know that it’s true. When the operator system of a large language is based on something, it had better be scalable.

I argue that the current conception of multimethods is far beyond “far from mathematics”. It is a travesty in its name; it waves math in math’s own face like a baby wielding a dagger. I’ll tell you the beginnings of my argument (the whole argument will appear in the paper I write describing this). I’d appreciate being pointed to any prior research in the area, as I’ve come across none.

First think of a class as the set of all (possible) instances of the class. Any method in a class C is a map with domain C. Now think of what inheritance means. If a class D inherits from a class B, then all objects in D are also objects in B. So D is a subset of B.

Now consider an inheritance hierarchy where B ⊂ A, C ⊂ A, and D ⊂ B. Take a moment to picture this. In the current conception of multimethods, they say that D is more derived than B. I can understand that, since D ⊂ B. But they also say that D is more derived than C. This is the flaw. To make matters worse, most methodologies (like Class::Multimethods) say that D is exactly twice as derived as C (from A). This is preposterous! Look at the definitions carefully. Could D have fifty elements and C have one? Sure. Could even C ⊂ D? Sure it could. All that we know is that C is a subset of A, and that D is a subset of A. There could be (and are) hundreds of classes between A and C. Just because we don’t name them doesn’t mean they don’t exist.

Here’s where it gets completely awful. Okay, so D is twice as derived as C from A. So let’s define the distance d(A, D) to be 2 and the distance d(A, C) to be 1. This is a reasonable thing to do if you’re going to say that first silly statement. Now, to resolve which map m in the set M to use, we add up each distance from parameter to argument positionally. So given a map J: A × B → X (X is not important) and a particular pair of arguments in the sets (C, D) for example, the distance from the map to the tuple is d(A,C) + d(B,D) = 1 + 1 = 2. We do this for every candidate map, and whichever one has the lowest sum distance is the one we call.

Do you see my point? We’ve taken the set-theoretical concept of classes, grafted on this idea of “order of derivation”, summed these together in an arbitrary fashion (tuple inner product with operators (d, +)), and used this as the measure of applicability. We’ve strayed far, far away from mathematics, only to come back to it at the end with a bunch of irrational definitions, and we claim that the result is “mathematical”. Highly dubious.

That’s the beginning. I’ll start my blog entry on the beginnings of my solution now. Stay tuned.

Implementing the Basic Unit of Meaning

I thought that I wrote about this project (the one I’m about to tell you about) before, but a search of my archives turned up nothing. So I’ll give you a basic introduction to the concept, and then talk about how the hell we’re supposed to implement it.

A friend from GameDev, a Russian Studies major, had an idea for a massively multiplayer game. He’s quite a dreamer, and he had many ideas for this game (some completely infeasible), but there was one that really stuck out. He was tired of the classic magic system of “click on a scroll, click on a thing, BOOM”. This isn’t magic, this is just a weapon shaped like a piece of paper. He thinks that each spell should be composed of basic words that aggregate into a final spell. These words would be written on scrolls and such, and the user would have to figure out how they go together.

This is a great idea. I think it would be even better if it were taken a little bit further. Magic is a complete language, the rules of which have been long lost (in the source code ;-). It has grammatical structure, inflections, irregularities, all the goodies that come with a natural language. Scrolls you find would have complete spells on them, and these scrolls would be abundant. Let’s say you find three scrolls and experiment with what they do:

  • nih’ou tou — Fireball
  • ciui tou — Fire shield
  • nih’ou kurou — Ice ball

(It’s fun making up words.) If you’re a simple-minded player, you’d have these three spells. But if you’re observant, you should generalize that ciui kurou means “Ice shield”. I have many more ideas about how this language is constructed grammatically and semantically (in particular, it won’t be nearly so straightforward) and I’ll discuss those another time. Now let’s talk about how the hell you implement that!

Implementation here is all about representation. Can we get a representation that’s easy to traverse to construct a spell? I think that by moving into the realm of context-free languages, we can. The parser for the language of the game is simply a translator, which translates the complex and interwoven language into a simple, abstract form that fits into a context-free tree. This tree wouldn’t represent text, but rather how abstract nodes fit together.

At the top of the tree would be the transit node, which specifies the method of transit (bolt, shield, enchantment, etc.) along with an elemental node, which is some combination of elemental effects, or something. What’s important is that we notice that it’s context-free:

    spell:  transit element
    transit: "bolt" | "shield" | ...
    element: basic-element | "dot" element element | "cross" element element | ...
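A grammar that simple parses with a few lines of recursive descent. Here’s a sketch in Python (my own illustration; the token names come from the grammar above, and “fire”/“ice” are stand-ins for whatever the basic elements turn out to be):

```python
# Tiny recursive-descent parser for the spell grammar:
#   spell:   transit element
#   transit: "bolt" | "shield" | ...
#   element: basic | "dot" element element | "cross" element element
BASIC = {"fire", "ice"}       # stand-in basic elements
TRANSITS = {"bolt", "shield"}

def parse_spell(tokens):
    it = iter(tokens)

    def next_tok():
        return next(it)

    def element():
        t = next_tok()
        if t in BASIC:
            return t
        if t in ("dot", "cross"):
            # Binary element combinators recurse on two sub-elements.
            return (t, element(), element())
        raise ValueError("unknown element word: " + t)

    transit = next_tok()
    if transit not in TRANSITS:
        raise ValueError("expected a transit word, got: " + transit)
    return ("spell", transit, element())

print(parse_spell(["bolt", "dot", "fire", "ice"]))
```

The output tree is exactly the abstract form described above: a transit node at the top, with an element subtree hanging off it, ready for the graphical-effect methods to walk.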

Maintaining a mathematical foundation should also help the abstraction a bit (computing the inner product of two elements is somewhat an abstract definition, but it would make a useful linguistic tool).

The graphical effects for the spell would be methods built on top of the tree nodes, instructing how to combine the various subcomponents into a modular graphic. With special-case overrides, of course. But it’s all very simple, and fits into an elegant, orthogonal architecture.

I can do this. And it will be very cool. I promise. The rest of the RPG I’ll leave up to my friend.

Eine Kleine Wasmusik?

Nichtmusik! (A 36) A dissonant, serious piece that I wrote over the last two days. If you’re looking for some interesting runs to put in jazz improvisations, this is where you’ll find them. But note that they’re intentionally unsettling.

Anyway, I’m quite proud of this one, even if nobody is going to like it. I’ve never succeeded before in making a song that makes you feel weird.

I like pizza, I like bagels, I like hotdogs and mustard and beer. I could eat eggplant, I could even eat a baby deer.

It’s been a long, tiring night of poker. I wonder whether we should go with one tourney with rebuys, larger buy-in, and multiple payouts.