# Concerto progress

I think I finally got past my composer’s block on my piano concerto. It’s still slow going: I have only 45 seconds more to show you than last time. Still, here is the current third movement. I plan to keep it low-tension for another thirty seconds or so, then build to a quick climax and calm back down for the main low-tension section. From there, we’ll see where it goes. Feedback welcome.

# No monotypes

Perl 6 has a very rich type system, even though it’s not done. As far as my knowledge goes (which in this area is not very far compared to some of the lambdacamels), it makes the fewest assumptions and imposes the fewest restrictions of any language’s type system. I’ll be writing today about a statement that seems obvious, but has some very interesting consequences (much like “the speed of light is constant in all reference frames”). That statement is “types are just sets”.

I used that statement when arguing with Damian about multimethod semantics. The argument was against derivation metrics: how is it possible to measure, with a finite number, how much one infinite set differs from another (assuming that they both have the same infinite cardinality)? My claim was that types have nothing special over sets; if they did, they might carry extra information that allows you to measure such a distance. So, let’s look at what that means.

For one, you can no longer talk about the “type of” a certain value. When you ask that question, you’re asking “what set does this value belong to?”, and of course there are infinitely many of them. So the best you can do is ask “does this value belong to this set?”¹.

(For the following example, assume that Perl 6 is pure-functional: that is, no side-effects allowed. We’ll incorporate those later.)

Consider this function:

```
sub foo (::T --> ::T) {...}
```

In Perl 6, that means that foo is a function that takes a value of type T and returns a value of type T, for any type T. Let’s look at that definition more mathematically: for every set T, x ∈ T ⇒ foo(x) ∈ T. You might as well not write the body of the function, for the only function this could possibly be is the identity! (Note that the only total function in Haskell with this signature is, in fact, id.)

If we are to write expressive type signatures, it’s pretty clear that we need to steal Haskell’s type classes. A type class is just a set of types, or a set of sets. It itself is a type. For instance, Numeric could be a type class that says that its member types obey closure under addition, multiplication, etc., along with things that Perl can’t prove, like associativity under addition and multiplication. This is why you have to declare type class instances. Then we can declare (pseudosyntax):

```
sub double (::T --> ::T | ::T (in) Numeric) {...}
```

And have it mean something other than the identity. It probably adds the argument to itself and returns the result. Since ::T obeys the laws of Numeric, you’re allowed to add it to itself and get a value of the same type back.
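The idea of a constrained generic signature can be sketched in Python, whose `typing.Protocol` plays a role loosely analogous to a type class here. This is only an illustration under that analogy; `Numeric` and `double` are made-up names, not real Perl 6 or Python APIs:

```python
from typing import Protocol, TypeVar

class Numeric(Protocol):
    # Member types promise closure under addition; laws like
    # associativity cannot be proven here either -- they are declared.
    def __add__(self, other): ...

T = TypeVar("T", bound=Numeric)

def double(x: T) -> T:
    # The constraint is what licenses a body other than the identity:
    # Numeric guarantees that x + x is again a T.
    return x + x
```

Without the `bound=Numeric` constraint, parametricity would again leave the identity as the only total implementation.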

How do we define such type classes, then? How about like this:

```
role Numeric ::T {
    multi infix:<+> (::T, ::T --> ::T) {...}
    ...
}
```

(Incidentally, that implies that you might define normal classes like so:

```
class Dog $fido {
    method bark () { say "$fido.name() says 'woof'" }
    method name () { "Fido" }
}
```

Which is an interesting, er, “solution” to the $self problem.)

And of course, this implies that you can probably have type-class classes, whatever those are.

¹You might think about asking “what is the smallest set that this value belongs to?”, but then you’d quickly realize that the answer would be the singleton set containing that value :-).

# LMN

That is, Luke, Max, and Namaste. Max leaves tomorrow, so we jammed tonight. This one deserves credits:

```
Luke Palmer - Keyboards, Violin?
Max Rebuschatis - Guitar, Violin?
Namaste - Conga
```

The “Violin?” is an instrument, and it is separate from the “Violin”, which was not played.

In my opinion, #3 was by far our best. I’ll also note that anyone who listens to #6 all the way through is clinically insane (there is no “music”), and anyone who listens to #5 all the way through needs to find something to do with his time (there is no music).

# More Max and Luke

Here are the recordings Max and I made on August 12:

The only one that I’ve attentively listened to the whole way through is #1. I started indexing the parts I liked about half way through, and I strongly suggest you check out about 48:00-53:30 (especially 51:00 onward). There’s good stuff before 34:00, but I don’t know where it is.

# Logically Backwards and Forwards

I tried to ask Ovid (the author of AI::Prolog) about forward-chaining logic, but I realized that I didn’t really know what I was asking. Let this post serve as a brainstorming session to find that.

A smart game AI would like to be able to make inferences based on what any particular NPC knows. For instance, if I own a house, and it contains a couch, it would be reasonable to assume that I own that couch. That’s easy using backward chaining logic in Prolog:

```
owns(X,Y) :- owns(X,Z), contains(Z,Y).
```
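The backward-chaining reading of that rule can be sketched in Python as a recursive query over base facts. This is a toy illustration of the chaining direction, not AI::Prolog’s API:

```python
# Base facts: what the NPC knows directly.
base_facts = {("owns", "me", "house"), ("contains", "house", "couch")}

def owns(x, y, facts=base_facts):
    # Base case: owns(X,Y) is a known fact.
    if ("owns", x, y) in facts:
        return True
    # Rule: owns(X,Y) :- owns(X,Z), contains(Z,Y).
    # Work backward from the goal, trying each container Z of Y.
    return any(owns(x, z, facts)
               for (tag, z, thing) in facts
               if tag == "contains" and thing == y)
```

The query starts at the goal (`owns(me, couch)`) and works back toward known facts, which is exactly what makes the “detect when this stops being true” problem below awkward.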

But there’s also the idea that knowledge changes. For instance, I’d like to specify something like (pseudo-syntax):

```
owns(me,X) --> !owns(me,X)  { GET MAD! }
```

That is, if a piece of knowledge enters the database that causes owns(me,X) to go from true to false for a particular X, obviously it has been stolen. I’ll say that the new knowledge invokes a predicate transition. So one way to do this would be, for every transition hook, to poll every so often and check whether the predicate has changed. That sucks, though. The way it should work is the opposite of general backward-chaining logic. It’s much more like dataflow.
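One way to make that concrete: a fact base that re-evaluates watched queries on every assert/retract and fires a hook only on a true-to-false transition. This is a minimal sketch with made-up names, not a proposal for AI::Prolog:

```python
class KnowledgeBase:
    """Toy fact base with dataflow-style predicate-transition hooks."""

    def __init__(self):
        self.facts = set()
        self.watchers = []  # [query, last_value, callback] triples

    def watch(self, query, callback):
        # Evaluate once now so future transitions can be detected.
        self.watchers.append([query, query(self.facts), callback])

    def _notify(self):
        for watcher in self.watchers:
            query, last, callback = watcher
            now = query(self.facts)
            if last and not now:  # true -> false: the predicate transitioned
                callback()
            watcher[1] = now

    def assert_fact(self, fact):
        self.facts.add(fact)
        self._notify()

    def retract_fact(self, fact):
        self.facts.discard(fact)
        self._notify()

# owns(me, couch), derived through a container as in the Prolog rule.
def owns_me_couch(facts):
    return any(("contains", z, "couch") in facts
               for (tag, _, z) in facts if tag == "owns")

kb = KnowledgeBase()
kb.assert_fact(("owns", "me", "house"))
kb.assert_fact(("contains", "house", "couch"))

stolen = []
kb.watch(owns_me_couch, lambda: stolen.append("couch"))  # GET MAD!

kb.retract_fact(("contains", "house", "couch"))  # the couch disappears
```

Note that this still re-runs the watched query on every update, which is exactly the cost questioned below: a smarter knowledge base would propagate only the changed fact through the rules that mention it.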

Hmm… that defines the problem pretty well. I’m just wondering if that’s the way I want to do it. Is it transitions that I am really interested in? I think it is. If it weren’t, then I suppose I would just query the expression and bind the value out.

So in order to define such a transition, I need to tell owns/2 that when it changes, I need to update and possibly act. Huh, that’s interesting. It seems to be a knowledge base problem more than a combinator problem. The only thing I suppose I’m wondering is if there’s a better way to update knowledge than to re-run the rule.

Well, that defined the problem. How’s that for unstructured prose?

# Manhattan distance MMD

Damian and I have been intelligently screaming at each other on perl6-language, finally really fleshing out the Manhattan distance MMD debate. It has settled down recently, after I discovered that Damian had been arguing against something that was almost, but not quite, entirely unlike my algorithm. I looked through the message history and realized that I had been arguing this algorithm this whole time, and I never said what it was! The most visible place I put it was in the pugs repository at the end of a document.

Anyway, I implemented my algorithm in a Perl 5 module, Class::Multimethods::Pure (the algorithm is also described there). The only current problem is that it is turtle slow (or whatever the expression is). I’ve never been very good at, or interested in, writing locally fast code: I prefer abstraction to raw speed. However, if I’m seriously proposing this algorithm for Perl 6, I’d better have a good algorithm for dispatching in sublinear time. And algorithm design happens to be one of the things I am interested in.

A multimethod is composed of a set of multimethod variants, which are essentially lists of parameters together with the corresponding code. Currently, when the module compiles, it sorts the set of variants into an ordered list of sets. That is, I go along that list, and for each set:

- If it has no elements which match the input parameters, I move on to the next set (or die “no method found” if there are no more sets).
- If it has exactly one matching element, I succeed and call that variant.
- If it has more than one matching element, I die with an ambiguity error.

So that means that whenever a variant a is more specific (see the docs above for a precise description of what that means) than a variant b, b is necessarily in a later set than a. So the most specific variants are way up front, and the more generic the variant you’re going to dispatch to, the slower the algorithm gets. That is probably unavoidable. Here’s the problem: by the time you get to the last set, you’ve asked each “does” question far too many times. Keep in mind that such questions can involve a subtype condition, which can (but shouldn’t) involve heavy computation.
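The dispatch loop described above can be sketched in a few lines of Python. Here `variant_sets` stands for the precomputed ordering and `matches` stands for the possibly expensive “does” test; the toy variants are made-up names for illustration:

```python
def dispatch(variant_sets, args, matches):
    # Walk the precomputed ordering, most specific sets first.
    for variant_set in variant_sets:
        matching = [v for v in variant_set if matches(v, args)]
        if len(matching) == 1:
            return matching[0]  # unique winner: call this variant
        if len(matching) > 1:
            raise TypeError("ambiguous method call")
        # No matches in this set: fall through to the next, more generic set.
    raise TypeError("no method found")

# Toy variants: (name, predicate on the argument).
variant_sets = [
    [("int variant", lambda a: isinstance(a, int))],
    [("generic variant", lambda a: True)],
]
does = lambda variant, args: variant[1](args)
```

The cost problem is visible here: every fall-through re-asks `matches` for a fresh set of variants, even when earlier answers already determine the result.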

The approach that I’m working out now is to build a DFA where the states are questions about “does” relationships and the transitions are “true” or “false”. I want to construct a DFA that asks as few questions as possible in order to determine whether there is a unique matching variant. The win is this: if you ask “does the first argument do C?”, and it does, and C is a subset of A, then you already know the answer to “does the first argument do A?”. Likewise, if you ask whether the first argument does A and the answer is false, then you already know that it doesn’t do C.
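That subsumption pruning can be sketched directly. Assuming a hypothetical little lattice where both C and B do A, recording one answer settles the related questions for free:

```python
# Hypothetical subtype lattice: maps each type to its supersets.
subset_of = {"C": {"A"}, "B": {"A"}}

def record_answer(known, type_name, answer):
    """Record one "does" answer plus every answer it implies."""
    known[type_name] = answer
    if answer:
        # x does C, and C is a subset of A  =>  x does A.
        for superset in subset_of.get(type_name, ()):
            known[superset] = True
    else:
        # x does not do A  =>  x cannot do any subset of A.
        for sub, supers in subset_of.items():
            if type_name in supers:
                known[sub] = False
    return known
```

A DFA built over these questions would consult `known` before asking, so each “does” test runs at most once per dispatch.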

That’s about as much as I know for sure. I’ve tried a few times to find a method of constructing these DFAs, to no avail. I’ve tried having each state be a set of candidate variants, with each question narrowing the set until you have a dominated set (one where a single variant is more specific than all the others). But consider this example:

```
B does A

variant 1: foo(A, A)
variant 2: foo(A, B)
variant 3: foo(B, A)
variant 4: foo(B, B)
```

If you ask “does argument 1 do B?” then you get the set {2,3,4}. From {2,3,4}, if you ask “does argument 2 do B?”, you get {2,3,4} again. Obviously variant 4 is correct, but a “set of viable variants” approach doesn’t keep enough information to tell you that.

If anybody has any ideas for this, I’d love to hear them.