Monthly Archives: April 2006

Last Night’s Jam Session

We had a rather large group last night:

  • Denise – bass guitar, vocals
  • Jude – acoustic guitar, percussion
  • Luke (me) – organ, piano, synthesizers
  • Namaste – drums, percussion
  • Rob (my father) – acoustic guitar, vocals
  • Sue (my mother) – maracas
  • Terry – electric guitar, acoustic guitar, vocals

Here are the recordings.

It’s kind of interesting. My favorites were Nectar of Life, Transcendental Number, and Namaste Lays Down a Beat. That is, all the completely improvised ones. I would say we were the most together on take 2 of Wish You Were Here (and we had Denise’s beautiful voice to add some class to this hodgepodge of musicians).

Kernel for the logic engine

In Haskell, the list monad provides a way to do non-deterministic, backtracking computations. For example, you can generate all sums of two numbers between 1 and 5 like so:

    sums = do
        x <- [1..5]    -- let x be a number between 1 and 5
        y <- [1..5]    -- let y be a number between 1 and 5
        return (x+y)  -- return their sum

It will proceed like so: let x be 1; let y be 1; output 1+1; let y be 2; output 1+2; let y be 3; output 1+3; …; let x be 2; let y be 1; output 2+1; …. If you don’t see how cool that is, that’s okay, but trust me that it’s damn cool. Especially so when you combine it with conditions (e.g. I could have generated only the prime sums) and arbitrary recursion.
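
For instance, adding a condition is just a matter of a guard from the standard Control.Monad library (isPrime here is a naive stand-in I’m defining just for the example):

    import Control.Monad (guard)

    -- A naive primality test, just for this example.
    isPrime :: Int -> Bool
    isPrime n = n > 1 && all (\d -> n `mod` d /= 0) [2 .. n-1]

    primeSums = do
        x <- [1..5]            -- let x be a number between 1 and 5
        y <- [1..5]            -- let y be a number between 1 and 5
        guard (isPrime (x+y))  -- keep only the branches where x+y is prime
        return (x+y)           -- return their sum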

However, when you enter the realm of the infinite, this is not quite ideal. For example, let’s say you’re generating ordered pairs like so:

    pairs = do
        x <- [1..]     -- let x be a number >= 1
        y <- [1..]     -- let y be a number >= 1
        return (x,y)   -- return the ordered pair

The enumeration proceeds as before: let x be 1; let y be 1; return (1,1); let y be 2; return (1,2); let y be 3; return (1,3); …. If you’re observant, you’ll notice that x never gets to be 2, or any higher number: every reachable pair has first element 1. If you search through the generated possibilities for the pair (3,4), your algorithm will go into an infinite loop. (Maybe you are starting to see the parallel to my last post.) Technically, the order type of the returned list is greater than ω, so some elements take an infinite amount of time to reach.

So what we would like is a list monad where the returned results always come in a list of order type ω; that is, every element can be reached in an (arbitrarily large but) finite amount of time. I’ve written one. The fundamental idea is the standard enumeration of the rationals. Consider this grid:

1,1  1,2  1,3  1,4  1,5  ...
2,1  2,2  2,3  2,4  2,5  ...
3,1  3,2  3,3  3,4  3,5  ...
...

You can put this into a list of order type ω by traversing the positive diagonals: 1,1; 2,1; 1,2; 3,1; 2,2; 1,3; 4,1; 3,2; 2,3; 1,4; …

So basically, whenever the algorithm joins two lists, it does so not by concatenating them together (as the standard list monad does), but by “diagonalizing” them together. This ensures that if you only feed it lists of order type ω, it will output a list of order type ω.
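
Here is a minimal sketch of the idea (these names are mine for this post; the real code has a bit more machinery). The key function merges a list of lists by walking its diagonals:

    import Control.Monad (ap)

    newtype Omega a = Omega { runOmega :: [a] }

    -- Merge a list of lists by walking its diagonals, so every element
    -- of every list appears at some finite position in the result.
    diagonal :: [[a]] -> [a]
    diagonal = concat . stripe
      where
        stripe []           = []
        stripe ([]:xss)     = stripe xss
        stripe ((x:xs):xss) = [x] : zipCons xs (stripe xss)

        zipCons (x:xs) (ys:yss) = (x:ys) : zipCons xs yss
        zipCons xs     []       = map (:[]) xs
        zipCons []     yss      = yss

    instance Functor Omega where
        fmap f (Omega xs) = Omega (map f xs)

    instance Applicative Omega where
        pure x = Omega [x]
        (<*>)  = ap

    instance Monad Omega where
        Omega xs >>= f = Omega (diagonal (map (runOmega . f) xs))

Now the pairs example from above reaches every pair in finite time:

    pairs = runOmega $ do
        x <- Omega [1..]
        y <- Omega [1..]
        return (x,y)
    -- take 6 pairs == [(1,1),(1,2),(2,1),(1,3),(2,2),(3,1)]

(The diagonals here run in the other direction from the listing above, but the order type is the same.)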

Incidentally, this provides a kernel for the logic engine I was talking about. This will enumerate all logical sentences in a list of order type ω, so that if you look through the list for sentences that are true, you will eventually get to a true statement if there is one.

Parallel logic

An idea that Namaste and I had for the linguistic magic system was to have the spells that are cast alter the laws of the universe. Clearly this is a hard problem. But I like hard problems (see, for example, Minesweeper Infinity and Flamethrowerflies :-p). My current plan is to lay out a grid where each space may have any combination of several properties. These properties interact with the players in predefined ways, but the way they interact with each other will be defined by the user during the game.

Let’s say that two of these properties are fire(p,t) and water(p,t) (where p is position and t is time). One of the rules that a sorcerer could define would be ∀p ∀t. ¬(fire(p,t) ∧ water(p,t)), i.e. fire and water cannot be at the same place at the same time. Then the rendering phase would iterate over each position in the world and ask whether fire(p,t) is true, and if so, draw some fire there, and do the same for water.
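
As a sketch, the rendering pass could be as dumb as this (query and draw are stand-ins; the real query would go to the logic engine described below, and the real drawing would go to the screen):

    import Control.Monad (forM_, when)

    type Pos  = (Int, Int)
    type Time = Int
    data Prop = Fire Pos Time | Water Pos Time

    -- Stand-in: the real version would ask the logic engine.
    query :: Prop -> Bool
    query _ = False

    -- Stand-in: the real version would draw to the screen.
    draw :: String -> Pos -> IO ()
    draw what p = putStrLn (what ++ " at " ++ show p)

    -- Ask about each position and draw whatever properties hold there.
    render :: [Pos] -> Time -> IO ()
    render grid t =
        forM_ grid $ \p -> do
            when (query (Fire p t))  (draw "fire" p)
            when (query (Water p t)) (draw "water" p)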

This requires a logic engine almost, but not quite, entirely unlike Prolog. This logic engine must have the following property: if a formula’s truth value can be derived in a finite amount of time, then the engine must derive it in a finite amount of time (crossing my fingers that such an algorithm is even possible). Prolog most certainly does not have this property. Handling arbitrary logical forms would be ideal, but I am okay with restricting the types of formulas that can be constructed. For example, universal quantification may be disallowed in queries, and rules may be required to be universal implications (so the rule “fire exists” would be disallowed).

Thanks to Gödel, we know that there will either be a formula that can be proven both true and false, or a formula that cannot be proven either way. If fire(p,t) happens to be one of these formulas, I’m not sure what to do. We can’t just let the engine diverge (infinite loop).

The first step in my process would usually be to define a normal form for the formulae. For example, Prolog defines rules as Horn clauses. However, when you ask Prolog a question, it either returns “yes, this is true” (and gives you assignments to your variables) or “I cannot prove this”. My engine must return “yes, this is true”, “no, this is not true”, or “I cannot prove this either way” (and of course, “I’m still trying to prove this, but I’ve taken too long”). For example, if you have a rule p(x) ⇒ q(x), the engine can either use p(x) to prove q(x) or use ¬q(x) to prove ¬p(x). It is not clear that Horn clauses have a property that makes this easy.

Hmm, actually Horn clauses have a property that makes this quite easy. Let’s say you have the formula ∀x. (∃y. p(x,y) ∧ q(x,y)) ⇒ r(x). Note that this is a Horn clause. We can apply the contrapositive and do some massaging to obtain the formula ∀x ∀y. (¬r(x) ∧ p(x,y)) ⇒ ¬q(x,y). Note that this is also a Horn clause. If there are variables that do not appear on the right side of the implication, they will be existentially quantified on the left, as the Horn clause requires. So basically, you can naively transform the Horn clause A ∧ B ⇒ C into ¬C ∧ A ⇒ ¬B and also into ¬C ∧ B ⇒ ¬A. You can derive negation rules for every predicate a rule depends on.
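
To make that concrete, here is a tiny sketch of the transformation, in a rule representation invented just for this post:

    -- Literals and Horn-style rules, in a representation invented here.
    data Lit = Pos String | Neg String deriving (Eq, Show)

    neg :: Lit -> Lit
    neg (Pos p) = Neg p
    neg (Neg p) = Pos p

    -- A rule [A,B,...] :=> C: the premises jointly imply the conclusion.
    data Rule = [Lit] :=> Lit deriving Show

    -- From A,B => C derive -C,B => -A and -C,A => -B: one contrapositive
    -- per premise, each still in Horn form.
    contrapositives :: Rule -> [Rule]
    contrapositives (premises :=> conclusion) =
        [ (neg conclusion : rest) :=> neg p | (p, rest) <- pick premises ]
      where
        pick []     = []
        pick (x:xs) = (x, xs) : [ (y, x:ys) | (y, ys) <- pick xs ]

    -- contrapositives ([Pos "p", Pos "q"] :=> Pos "r")
    --   == [ [Neg "r", Pos "q"] :=> Neg "p"
    --      , [Neg "r", Pos "p"] :=> Neg "q" ]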

With a breadth-first evaluation strategy (which is necessary for the “derivability” constraint above), you can attempt to derive multiple things at once and stop at the first one you reach. So to ask for the truth of a statement P, you simply try to derive P and ¬P at the same time. You go with whichever one comes up first, and you cross your fingers and hope that its negation is not also provable. (If the negation isn’t provable, a search for it will spin its wheels forever; if it is provable, you will eventually find it and expose an inconsistency. That is, the engine can show that the rules are inconsistent, but it can never show that they are consistent.)
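
Here is a sketch of that race. The proof search itself is hypothetical; the assumption is that it emits a productive stream of steps, Nothing while it is still looking and Just a derivation the moment it finds one:

    -- Each search yields a productive stream of steps: Nothing means
    -- "still looking", Just prf means a derivation was found.
    race :: [Maybe p] -> [Maybe p] -> Maybe Bool
    race (Just _  : _)  _              = Just True    -- P was derived first
    race _              (Just _ : _)   = Just False   -- not-P was derived first
    race (Nothing : xs) (Nothing : ys) = race xs ys   -- one step on each side
    race []             (Nothing : ys) = race [] ys   -- P's search space ran out
    race (Nothing : xs) []             = race xs []   -- not-P's search space ran out
    race []             []             = Nothing      -- provable neither way

Asking for P means racing the stream for P against the stream for ¬P, and the “I’ve taken too long” answer falls out for free: just cap how many steps you are willing to consume.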

Ordinal Number Theory

The title is to be read where word concatenation is right-associative; i.e. Ordinal (Number Theory). We’re studying ordinals in my set theory class (and have been for a while because, well, that’s what set theory is), and I noticed that you could talk about ordinal divisibility. So I started to wonder what the infinite primes looked like. Let’s make a definition.

For ordinals α and β, α |L β if there exists an ordinal γ such that β = α×γ. This is to be read “α left-divides β”. Similarly, α |R β if there is a γ such that β = γ×α, and is read “α right-divides β”.

So what properties does ω have? Recall that 2×ω = ω, but ω×2 > ω. This is true for all finite values of 2 :-). Therefore, we can see that ω is a right-prime (has no right divisors other than 1 and itself), but every finite ordinal is a left-factor of ω (it is about as far from being a left-prime as possible).
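
To spell out that asymmetry:

    2×ω = sup { 2×n : n < ω } = sup { 2, 4, 6, … } = ω
    ω×2 = ω + ω > ω

That is, ω copies of 2 laid end to end still form a sequence of order type ω, while 2 copies of ω are strictly longer.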

ω+1 is both a right-prime and a left-prime. 2×(ω+1) = 2×ω + 2×1 = ω+2, so 2 |L ω+2 and ω+1 |R ω+2, so ω+2 is not a left-prime or a right-prime. In fact, for every finite ordinal n, n |L ω+n and ω+1 |R ω+n, so ω+1 is the largest prime of the form ω+n.

This is just a teaser. I encourage people to discover more about this theory and comment here :-). I will be, supposing school doesn’t get in my way.

Transactional Game

I’m reading about transactional memory at the moment. I’m doing a presentation on it on Tuesday, so I don’t have much time to talk. I just wanted to write down this game idea before I forget:

I’m picturing a turn-based board game, but the idea could be generalized to many different strategy genres. Picture Chess or Go. You and your opponent take turns making moves as usual. However, you have “transactional control”: you may start a transaction in which you make a series of moves without your opponent knowing what you are doing. Unlike ordinary transactions, you can see what your opponent does in the meantime (unless he is also in a transaction, of course). You may commit any time you like and make your moves visible to your opponent. Here’s the kicker: if your opponent commits before you do, and his moves interfere with yours, your whole transaction gets rolled back, and it is as if you never did any of it. So transactions are good for hiding information, but if they get too long, you are putting your position in the game at risk. It also encourages you to predict your opponent’s moves more, so that you can try to interfere with them.
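
Here is the one novel rule as a sketch, with types made up on the spot (what counts as interference would depend on the actual game):

    import Data.List (intersect)

    type Square = (Int, Int)
    type Move   = (Square, Square)   -- from, to

    -- Two move sequences interfere if they touch a common square.
    interferes :: [Move] -> [Move] -> Bool
    interferes mine theirs =
        not (null (squares mine `intersect` squares theirs))
      where squares ms = concat [ [a, b] | (a, b) <- ms ]

    -- When the opponent commits, an open transaction that interferes
    -- with the committed moves is rolled back: its moves never happened.
    onOpponentCommit :: [Move] -> Maybe [Move] -> Maybe [Move]
    onOpponentCommit committed (Just pending)
        | interferes pending committed = Nothing
    onOpponentCommit _ open = open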

That is all for now.

Syntax Tree Holes

I’ve had in my head a design for an experimental language called meme, whose ideas stem from my very rough concept of tagmemics. But the idea has evolved into a theory I’m trying to develop called “Holey Syntax Trees”. Basically, you are allowed to leave out constituents, and the language will infer what belongs there. You could consider it a generalization of Perl’s concept of the “topic”. For example, you could say:

    for each line in System.stdin.lines:
        print line

But you could also say:

    for each line:
        print

And it would mean the exact same thing. The “in System.stdin.lines” would be inferred from the name of your variable and the fact that System.stdin is in the “search path” for inference. The “line” argument to print would be inferred topically, as in Perl (the thing you’re most likely to be talking about is, in this case, the thing you’re iterating over).

I’ve started programming this project a couple times without it going anywhere. I’ve decided that the hole filler needs theoretical backing, because doing it heuristically is going to end up getting all the examples I thought of right and everything else wrong. To infer in a sensible way, the program will have to pull information from all kinds of places. So the first thing I’m going to try is to build a “partial AST”:

    for (each "line" HOLE) (line -> print HOLE)

And then use attribute grammars to transform the partial AST into a complete AST. Each function probably needs its own hole-filling rules (or the functions could be grouped into classes of similarly-inferring functions). So what do those rules look like?

Let’s just try to get this one example working, starting with print.

    App "print" ? → App "print" @topic

Where @topic might represent an attribute in the AG, and ? represents a literal hole. On to each.

    App "each" [x, ?] → App "each" [x, {search for collections named plural(x) or x}]

(My hand is getting tired from waving around so much.) Finally, for:

    App "for" [?, body] → App "for" [@topic, body]
    App "for" [coll, body] :- body.@topic = coll.@name

The second of those rules says that, in “for”, the topic of the body is equal to the name of the thing being iterated over. Hmm, I’d like to concretify these more, but I’m out of time.
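
For what it’s worth, here is roughly what a first cut of the filler could look like in Haskell; every name and rule here is provisional:

    -- A holey AST, in a provisional representation.
    data AST = App String [AST]   -- application of a named function
             | Var String
             | Hole               -- a constituent the programmer left out
             deriving Show

    -- Fill holes top-down, threading the current topic downward as an
    -- inherited attribute, in the spirit of the AG rules above.
    fill :: Maybe String -> AST -> AST
    fill topic (App "print" [Hole]) =
        App "print" [maybe Hole Var topic]   -- print ? => print @topic
    fill topic (App "for" [coll, body]) =
        App "for" [fill topic coll, fill (nameOf coll) body]
    fill topic (App f args) = App f (map (fill topic) args)
    fill _     ast          = ast

    -- for's rule: the topic of the body is the name of the thing
    -- being iterated over.
    nameOf :: AST -> Maybe String
    nameOf (App "each" (Var v : _)) = Just v
    nameOf _                        = Nothing

The rule for each, which searches for a matching collection, is where the real hand-waving lives, so its hole is left untouched here.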

Kids language and programming language

In my linguistics class, the professor contrasted two different forms of the same meaning:

    I went to the store.
    I bought some peanut butter.
    I took it home.
    I made a sandwich.

And:

    When I went to the store, I bought some peanut butter, which I took home to make a sandwich.

And it struck me that the former is a lot like imperative languages, while the latter is a lot like functional languages. Well, I’d say the latter is more like well-written Perl. Functional languages look more like:

    I made a sandwich with the peanut butter I bought when I went to the store.

And it really reflects the characteristics of these various paradigms. The imperative version and the Perl version are about equally understandable, but the Perl is smoother to read (I am speaking about well-written Perl here). The functional version is the most concise, but it takes some deciphering. The most interesting thing is that these are all valid constructs in English, and each type is used in moderation to achieve the most elegant prose. What I’m basically saying is that if you believe programming languages are like natural languages, then Perl’s philosophy is dead-on: you take a little bit from every paradigm in order to achieve the best program.
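
Just for fun, the analogy rendered in Haskell (the actions are invented for the example): the imperative telling is do-notation, one sentence per line, and the functional telling is a single nested expression read inside-out.

    -- Invented actions, just to mirror the English sentences.
    goToStore       :: IO ()
    goToStore       = putStrLn "went to the store"

    buyPeanutButter :: IO String
    buyPeanutButter = return "peanut butter"

    takeHome        :: String -> IO String
    takeHome        = return

    makeSandwich    :: String -> IO ()
    makeSandwich x  = putStrLn ("made a sandwich with " ++ x)

    -- The imperative telling: one sentence per line.
    errand1 :: IO ()
    errand1 = do
        goToStore
        pb  <- buyPeanutButter
        pb' <- takeHome pb
        makeSandwich pb'

    -- The functional telling: one nested expression.
    errand2 :: IO ()
    errand2 = makeSandwich =<< takeHome =<< (goToStore >> buyPeanutButter)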

Business Model

I talked to Bob Bruegel, a smart fellow who knows quite a lot about alternative business models, and asked him what a good model for Soylent Software would be, given that we dislike charging money, dislike copy protection, and would enjoy being open source. He pointed out that most libraries you can license are either open source or cost thousands of dollars; there is a good space in the market for affordable libraries for small game studios. So we would sell to game developers and distribute our games for free as demonstrations of our libraries. This seems like a good plan because most of our games have some interesting technical idea behind them (a fast minesweeper solver, a fluid dynamics engine, a musical improvisation engine), so those components can be marketed as libraries and the rest of the game can be open source.