Tag Archives: perl

My paradigm is better than your paradigm

On Twitter I participated in this short snarky exchange:

@stevedekorte – Threads sharing state by default is like variables being global by default.
@luqui – state is like globals :-)
@stevedekorte – @luqui only shared state – which is why FP ultimately fails – it trades comprehensibility for shared state optimizations
@luqui – @stevedekorte, wow, sneaking “FP ultimately fails” as if an obvious truth in a reply to a Haskell programmer
@stevedekorte – @luqui, a bit like sneaking in “[all] state is like globals” to an OO programmer? :-)
@stevedekorte – @psnively @luqui my only issue with FP is the decision to trade expressivity and reusability for less state while calling it progress

The conversation goes on (and on) between many twitterites, having a fun but serious argument about this and that benefit of this and that style. Dynamic/static types come up, OO vs. functional, usefulness, mathematical foundation, learning curves; none of the standard artillery is spared. What irritates me about this style of argument is all the sweeping adjectives that are (1) used with no definition, and thus impossible to support with evidence, and (2) synonymized with better.

In this post, I will draw attention to this irritating vocabulary, so that the next time you use it you can realize how little you are saying.

(Disclaimer: this post is not intended to criticize stevedekorte specifically, even though I identify two of the terms he used below. It was a long, typical programming zealot argument, and all parties involved were being equally dumb. :-))

Expressive

A person is expressive if he expresses himself — he has an idea and wants to write it down. So I would say a language is expressive if it allows or enables the programmer to be expressive. Languages that restrict expression are not expressive. So we have the following facts:

  • Dynamically typed languages are more expressive than corresponding statically typed ones, because statically typed languages forbid you from expressing some ideas.
  • Multiparadigmatic languages are more expressive than languages which are paradigmatically pure, because the former allow you to express yourself if you are not thinking within the framework.
  • A language which you are fluent in is more expressive than a language you do not know very well.

By these observations, we might guess that Perl is the most expressive language and Haskell the least.

Do you notice yourself already equating expressive with good, and thus perhaps feeling offended? Everyone wants an expressive language, right? Here are some reasons some programmers might not want an expressive language:

  • Most of my ideas are made of bullshit. Have you ever had a great idea, gone to write it in your blog, and realized it was nonsense because you were unable to write it? So writing is less expressive than thinking. Is thinking better than writing?
  • Every programmer has a different way of structuring their thoughts. An expressive language will bring out the differences in thought structure between programmers, and introduce impedance mismatches between programmers on a shared project.

I’m not arguing that expressiveness is bad. I’m just arguing that it doesn’t mean good, it means expressive.

Reusable

A language “is reusable” (to abuse language a bit) if code written in that language can be easily reused.

This “obvious” statement is hiding something very important; namely, reused how? For what? We are in an unfortunate situation in programming: code is designed to be reused in a particular way, and if you want to reuse it in a different way you are pretty much out of luck. An OO widget library is designed for the addition of new types of widgets, but if you want to reuse a program written in the library on a new platform you are in for a lot of work. A functional drawing library is designed so that you can transform and export your drawings in an open-ended variety of ways, composing new ways out of old ones; but if you need to draw a circle you have to build it out of lines, even if there is a much better way to draw a circle on your target. (This is essentially the expression problem).

An abstraction will always expose some things and conceal others. Different languages enable abstraction in different ways, which makes exposing certain things easier and others harder. The zealot will reply, “but in my experience, real-world programs are naturally organized around <insert preferred paradigm>, and <insert uncomfortable paradigm> doesn’t support that style as easily.” I would suggest that this zealot look deeper into the definition of “real-world” to discover its many facets of subjectivity. (What domain do you work in? Who architected the real-world software you have worked on, and what was their background? What kinds of programs do you consider not to exist in the real world, and what values are you using to minimize them?)

Easy to learn

A language is easier to learn than another language if it takes less time to become competent or fluent at programming in it.

I don’t think this one is as automatically synonymized with “good”. Haskell programmers are aware of how much relative effort was required to learn Haskell, and are usually grateful that they put in the effort. But all other things being equal, a language that is easier to learn ought to be better than one that is harder to learn.

The deadly omission in the above definition is that people are doing the learning. A language is easier or harder to learn for a single person, and that is entangled with their background, their approach, and their ambitions. When arguing “X is easier to learn than Y”, I encourage you to add one of the following phrases to the end:

  • for programmers who already know Z.
  • for people with a mathematical background.
  • for people with a non-mathematical background.
  • for children.
  • for me.

Or something similar. The following phrases do not count.

  • for almost everyone.
  • for people with a typical background.
  • for people who want to do useful things.

I’ll close this section with this remark: Haskell is the easiest language to learn, because I already know it.

Conclusion

I know I am frequently irritated by many of these kinds of words, and I’ve only mentioned three here. But you see where I am going. Respect the values of your fellow engineers. If you are a polyglot and like a paradigm, it probably comes from a set of values you have — a set of things you consider important, and that differ from the values of others. Concentrate on that; communicate your values, and try to understand the values of others. If you have toyed with a paradigm and quickly lost interest because of some surface feature (I have — e.g. I will throw out a language without support for closures) or impression, consider the possibility that you like what you like simply because it is familiar (other equivalent notions: easy, natural). Which is fine; some people like to tinker with their thinking patterns more than others. But remember that you have trained yourself to think in a particular way, so consider that your “objective” judgements about these ultimately human languages could be biased.

(For the record, that last phrase originally read: “all of your ‘objective’ judgements about one of these ultimately human languages are biased”, which is quite a bit stronger than what I had intended)


OO is not the One True Paradigm, but Haskell still sucks at it

I just read osfameron’s Readable Perl talk. It’s a pretty typical language advocation talk, nothing special, but it reminded me of Perl. Those who have not been reading my blog since 2005 may not know that I used to be a Perl nut. I was even on the Perl 6 design team, attended several design meetings with Larry Wall (and a couple of other notable geniuses), had the Perl 6 syntax memorized…. Quite insane I was.

I have been pondering recently the cognitive mismatch between the OO and functional approaches. The two have been fused in languages (see O’Caml), but I argue that code in such a fused language will mostly end up looking like one or the other, not the beautiful balance we would like.

My thesis is that the two support different models of the universe. The functional approach supports a “mathematical” view, where we know a lot about the structure of our data; the object approach supports an “empirical” view, where we know a lot about the data itself. Let’s use something I was playing with today as an example: the SK calculus.

The obvious functional approach is to define a data type representing the cases of the AST:

data AST = S | K | AST :@ AST

(The :@ is the name of an infix constructor; i.e. it is just a node with two subtrees)

Then to implement a function that, say, reduces the top level of a tree, we can analyze the data by pattern matching:

reduce (S :@ x :@ y :@ z) = Just $ x :@ z :@ (y :@ z)
reduce (K :@ x :@ y)      = Just $ x
reduce _                  = Nothing

Here we know a lot about what kind of argument reduce will get. Whatever it gets, we know that it will be either an S, a K, or an application. We then define what it means to reduce data of this form.

Now I could half-bake a standard retort showing how much better functional programming is by pointing out how awkward this would be in a typical OO language (and it would be). But that’s trying to apply the functional world-view, that we know a lot about the structure of our data, to an OO setting. I think a good OO design for this same problem would take quite a different form. I see it something like this:

interface Combinator {
    int arguments();
    AST apply(AST[] args);
}
class S implements Combinator {
    override int arguments() { return 3; }
    override AST apply(AST[] args) {
        return new Apply(new Apply(args[0], args[2]), new Apply(args[1], args[2]));
    }
}
class K implements Combinator {
    override int arguments() { return 2; }
    override AST apply(AST[] args) {
        return args[0];
    }
}
...

While I gave a working Haskell program above in four lines, my “good” OO solution (in whatever language this is… looks like C#) is much longer than that and is not even complete. I have left out the definition of Apply and the function which actually does the reduction. But I’m not bashing OO here (though please do understand if I bash OO syntax, which as the years go by seems to get more and more verbose). Instead, my point is that this particular problem is very well-suited to functional programming.

But this program has very different properties from the Haskell version. In particular, it is easy to add a new combinator, a new object, that does something different. Whereas in the Haskell program, adding a new primitive combinator would change the assumptions of every function that worked with combinators. Conversely, adding data manipulation functions which depend on particulars, namely whether something is an S or a K (or whatever else), involves touching every object to add that method. Whereas in Haskell, we can just add a new function. Astute readers will recognize this as the famous “expression problem”.

This trade-off starts to affect the way we proceed. If we were to implement identical functionality in the two programs, our approaches would diverge greatly. For example, today I added a function to invert an application. In Haskell, I just enumerated by cases: this is how you invert an S-reduction, this is how you invert a K-reduction, etc.

In the OO program I wouldn’t add a visitor though, that would be stupid. Instead I would create a new node type for variables, apply the transformation to a number of variables equal to the number of expected arguments, and analyze the positions of the variables in the result. I would end up with a function that can invert any combinator. That is, the natural next step in the OO example is to write more generic code than in the functional example.

Anyway, there you have what I consider a nice side-by-side comparison of the two approaches. Maybe by analyzing these two examples and where they led, you can start to see what I’m saying about the two cognitive models.

And this brings me back to Perl. The slides I read mentioned Moose, an object model module for Perl. It is rich: it supports inheritance, role composition, privacy, coercion, and many other tools. I think an OO system needs to have these tools: the OO world view counts on data having many facets, capabilities, and constituents, and concerns itself with what is possible when you know about certain ones. An OO programmer must be an expert at creating data with many facets, capabilities, and constituents. This is contrasted with functional programming, where everything is known about the data, and the focus is on manipulating it.

Haskell has no support for these tools. Its type system, although powerful, is too static. Haskell provides too many guarantees for OO to work; it wants to know too much about what’s going on. Similarly, I don’t consider C++, Java, or C# to be “powerful” OO languages. In fact, I might go so far as to say that a language with static types and OO is not properly embracing the philosophy. You’re creating rich, smart data. If your superclass guaranteed that a particular method was const, i.e. one that does not change the object it was called on, how are you supposed to smartly cache the results of that method? Well, with the mutable hack. But that’s a hack, essentially telling the type system that it should go take a hike because you know what you’re doing.

Perl and Smalltalk “get it”, in my opinion. They have simple models, but provide tools for looking inside and changing data on the fly. They are about creating and manipulating smart data, leaving the guarantees up to the naming of the methods and their intended use, because trying to force guarantees would restrict their ability to be smart. If you want guarantees, use Haskell; it will almost prove your code is correct for you. But Haskell has no smart data; that’s the only way it can provide guarantees. You can’t have case analysis of data structures and objects that can reverse-engineer your function at the same time.

That’s it: when you’re in OO, don’t design functionally. Keep your guarantees wide open, and focus on making smart objects.

More Multimethod Madness

Damian and I have been intelligently screaming at each other on perl6-language, finally really fleshing out the Manhattan distance MMD debate. It has settled down recently, after I discovered that Damian had been arguing against something that was almost, but not quite, entirely unlike my algorithm. I looked through the message history and realized that I had been arguing this algorithm this whole time, and I never said what it was! The most visible place I put it was in the pugs repository at the end of a document.

Anyway, I implemented my algorithm in a Perl 5 module, Class::Multimethods::Pure (the algorithm is also described there). The only current problem is that it is turtle slow (or whatever the expression is). I’ve never been very good at, or interested in, writing locally fast code: I prefer abstraction to raw speed. However, if I’m seriously proposing this algorithm for Perl 6, I’d better have a good algorithm for dispatching in sublinear time. And algorithm design happens to be one of the things I am interested in.

A multimethod is composed of a set of multimethod variants, which are essentially lists of parameters together with the corresponding code. Currently, when the module compiles, it sorts the set of variants into an ordered list of sets. That is, I go along the list of such sets, and for each set:

  • If it has no elements which match the input parameters, I move on to the next set (or die “no method found” if there are no more sets).
  • If it has exactly one matching variant, I succeed and call that variant.
  • If it has more than one matching variant, I die with an ambiguity error.
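
In code, the dispatch loop looks something like this (a sketch, not the module’s actual implementation; matches and call stand in for hypothetical methods on a variant):

    sub dispatch {
        my ($sets, @args) = @_;
        for my $set (@$sets) {
            # Ask each variant in this set whether it accepts the arguments.
            my @found = grep { $_->matches(@args) } @$set;
            next if @found == 0;                       # try the next, more generic set
            return $found[0]->call(@args) if @found == 1;
            die "Ambiguous method call";               # two or more matches in one set
        }
        die "No method found";
    }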

So that means that whenever a variant a is more specific (see the docs above for a precise description of what that means) than a variant b, b is necessarily in a later set than a. So the most specific variants are way up front. And that means that the more generic the variant you’re dispatching to, the slower the algorithm gets. That is probably unavoidable. Here’s the problem: by the time you get to the last set, you’ve asked each “does” question far too many times. Keep in mind that such questions can involve a subtype condition, which can (but shouldn’t) involve heavy computation.

The approach that I’m working out now is to build a DFA where the states are questions about “does” relationships and the transitions are “true” or “false”. I want to construct a DFA that asks as few questions as possible in order to determine whether there is a unique matching variant. The way I see this as a win is that if you ask the question “Does the first argument do C?”, then if so and C is a subset of A, you already know the answer to “Does the first argument do A?”. Likewise, if you ask if the first argument does A and it’s false, then you already know that it doesn’t do C.

That’s about as much as I know for sure. I’ve tried a few times to find a method of constructing these DFAs, to no avail. I’ve tried having each state be a set of candidate variants, and each question narrow the state set until you have a dominated set (where there is one variant in the set that is more specific than all the others). But consider this example:

    B does A
    variant 1: foo(A, A)
    variant 2: foo(A, B)
    variant 3: foo(B, A)
    variant 4: foo(B, B)

If you ask “does argument 1 do B”, then you get the set {2,3,4}. From that set, if you ask “does argument 2 do B”, you still get {2,3,4}. Obviously variant 4 is correct, but a “set of viable variants” approach doesn’t keep enough information to tell you that.

If anybody has any ideas for this, I’d love to hear them.

PCPP

Way back when Glop was going to be called GLIC[1], it was going to be an extension to C++ with closures and such. I started by trying to write a huge grammar for C++, which I would then extend with my own special rules.

I’ve just implemented a proof-of-concept framework for a very similar idea which, instead of being thousands of lines of code, is 150. I think I’m going to turn it into a module. It is pcpp, the Perl C(++) Preprocessor. You can define macros in terms of context-free rules (Parse::RecDescent format) and associate metainformation with lexical scopes. You can even make a macro affect other parts of the program (for instance, to declare your variables for you at the beginning of the block). It’s very simple, and only knows about blocks (it does nothing at the statement level). Still, I think it’s good enough to do (almost) everything I want to do with it.
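
To give an idea of the rule format (this is plain Parse::RecDescent, not pcpp’s own interface, which isn’t shown here):

    use Parse::RecDescent;

    # A tiny stand-alone grammar in the same rule format pcpp macros use.
    my $parser = Parse::RecDescent->new(q{
        block:     '{' statement(s?) '}'
        statement: /[^;{}]+/ ';'
    });
    defined $parser->block('{ int x; x = x + 1; }') or die "no parse";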

I’ll slap on a nice non-hash interface and make the parser handle a few more things, and I’ll have a neato module, which will probably end up getting used in games I write in C++.

[1] I now find myself amazed that I missed the super-cool name GLOC.

Parse::RecDescent vim trick

Just thought I’d share this. If you’re using Parse::RecDescent and editing with vim, you’d probably like your grammar to be syntax highlighted (at least I do). Here’s a nice little trick to fool vim (and I don’t consider this a bug; vim folks, don’t fix this!):

    my $grammar = <<'#'EOG';
    #\
    grammar goes here
    #'EOG

vim doesn’t grok ‘ inside heredoc quoting, so it thinks that the # is the end of the heredoc, and puts normal highlighting back after that.

Logic.pm preview

Due to requests from several people, I am releasing a not-yet-CPAN-ready module called Logic.pm. The story behind this module is long and boring. But it’s my blog, not yours, so I’ll talk about it.

Many of the problems that I have been solving recently in programming have involved backtracking: rule matching, a simple-minded expert system, pattern matching in argument lists (the first and third are for Perl 6). So I decided to write a backtracking engine module that would help me out. Since I couldn’t find a good namespace to put it in, it was just going to be called Backtracker. I began that on Saturday at about 1:00 pm, and finished at about 2:00 pm. You see, the backtracking algorithm is very simple in concept, and a module that implements it doesn’t buy you much. Essentially all a backtracker is is an object interface, plus two combinators: and and or.
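
To give the flavor, here is roughly what those two combinators look like in a success-continuation style (a sketch in the same spirit, not Logic.pm’s actual interface; here a “goal” is a sub that takes a success continuation and returns true if it ultimately succeeds):

    # "or": try the first goal; if it ultimately fails, backtrack and
    # try the second goal with the same continuation.
    sub alt {
        my ($g1, $g2) = @_;
        return sub { my $k = shift; $g1->($k) || $g2->($k) };
    }

    # "and": run the first goal, and on each of its successes, run the second.
    sub seq {
        my ($g1, $g2) = @_;
        return sub { my $k = shift; $g1->(sub { $g2->($k) }) };
    }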

Unsatisfied with my result, I decided that I was going to include a proof-of-concept implementation of my Argument Patterns proposal. In my prior art research, I downloaded gprolog, and started playing around. When I saw:

  | ?- append(X, Y, [1,2,3,4]).

  X = []
  Y = [1,2,3,4]

  X = [1]
  Y = [2,3,4]

  X = [1,2]
  Y = [3,4]

  X = [1,2,3]
  Y = [4]

  X = [1,2,3,4]
  Y = []

I was totally impressed. I decided that I must implement Prolog unification in my engine. One hour and 135 lines of code later, I had finished Logic::Data::Unify. Wow. It’s amazing how simple and elegant the extremely powerful unification algorithm is.
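
To show just how simple, here is a toy unifier in the same spirit (a sketch, not Logic::Data::Unify’s actual code; terms are plain scalars, array refs, or Var objects, and the occurs check is omitted):

    package Var;
    sub new { my $class = shift; my $x; return bless \$x, $class }

    package main;

    # Follow a term down to its representative under the substitution %$s.
    sub walk {
        my ($t, $s) = @_;
        $t = $s->{$t} while ref $t eq 'Var' && exists $s->{$t};
        return $t;
    }

    # Unify two terms; return an extended substitution, or undef on failure.
    sub unify {
        my ($t1, $t2, $s) = @_;
        ($t1, $t2) = (walk($t1, $s), walk($t2, $s));
        return $s                  if ref $t1 eq 'Var' && ref $t2 eq 'Var' && $t1 == $t2;
        return { %$s, $t1 => $t2 } if ref $t1 eq 'Var';
        return { %$s, $t2 => $t1 } if ref $t2 eq 'Var';
        if (ref $t1 eq 'ARRAY' && ref $t2 eq 'ARRAY' && @$t1 == @$t2) {
            for my $i (0 .. $#$t1) {
                $s = unify($t1->[$i], $t2->[$i], $s) or return undef;
            }
            return $s;
        }
        return $t1 eq $t2 ? $s : undef;
    }

For example, with $X and $Y fresh Vars, unify([$X, 2], [1, $Y], {}) binds $X to 1 and $Y to 2, which is exactly the kind of answer the append example above enumerates.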

Anyway, the status of the module: the logic subsystem works pretty well. It’s low on primitives, but those are pretty easy to write. There’s no syntax; you write everything out in essentially syntax-tree form (see examples/append.pl). There’s no documentation, but there are tests (and more than the one-test-for-every-ten-lines-of-code ratio that I like to maintain). The design is solid, but needs a little refactoring, as I’m seeing some limitations around custom backtracking control.

Here’s where I’m going: I have to implement some sort of global statefulness so you don’t have to pass that pesky $state object around. Then I’ll try to make some intuitive operator syntax based on overload (but that is a large challenge given the available operators). Maybe I’ll punt and make it look like lisp with the head moved out front. Then I plan to implement junctions using Logic, and finally multimethod dispatch. Then it’s CPAN time.

Two modules in one day

I just wrote and uploaded Perl6::Attributes to CPAN. It came out of frustration with code like this:

    sub populate {
        my ($self, $n) = @_;
        for (1..$n) {
            push @{$self->{organisms}}, Organism->new(rand($self->{width}), rand($self->{height}));
        }
    }

Which I can now write like this:

    sub populate {
        my ($self, $n) = @_;
        for (1..$n) {
            push @.organisms, Organism->new(rand($.width), rand($.height));
        }
    }

Much nicer, no?
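
For completeness: the module works as a source filter, so enabling the sugar is just a matter of loading it at the top of the class (assuming the conventional $self variable):

    use Perl6::Attributes;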

Hypotheticals in Perl 5

In Appendix C of Apocalypse 12, Larry speculatively mentioned the syntax:

    let foo();

Which would “try” foo, and then revert things back to the way they were before that if something failed in your block. Well, I’ve implemented a Perl 5 module, Data::COW (which hasn’t been indexed at my time of writing, but will be at your time of reading), which could, er, hypothetically let you do that. Interestingly, I implemented it so you could do that on a smaller scale for the logic programming module I’m writing.

First, assume that all external state in your program is accessed through %$state, and everything other than that is purely functional. This seems like a big assumption, and it kind of is, and it kind of isn’t. If you don’t change the state of any modules (a fairly easy thing to do, depending on what modules you’re using), it would be fairly trivial, though maybe not easy easy, to massage all your own state into a single global hash. (Note that all stateful things need not be in the hash: the hash is just your “root set”, corresponding quite closely to all of your globals and file-scoped lexicals.)
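
Concretely, the convention might look something like this (hypothetical names, just to illustrate the root-set idea):

    # All mutable state hangs off one global hash, the "root set".
    our $state = {
        organisms => [],
        width     => 800,
        height    => 600,
    };

    # Stateful code reaches through %$state rather than touching globals.
    sub add_organism { push @{ $state->{organisms} }, shift }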

Then I can easily write temp foo():

    my $old = $state;
    $state = make_cow_ref $state;
    foo();
    # remaining code in the block
    $state = $old;

foo() is allowed to change whatever it likes, and it will be restored (of course, only those things that can be copied can be restored) after the block ends. let() is approximately the same:

    my $old = $state;
    $state = make_cow_ref $state;
    eval {
        foo();
        # remaining code in block
        1 } or $state = $old;
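
To make this concrete, here is a toy, shallow version of the copy-on-write idea (an illustration only; Data::COW itself is more thorough and handles nested structures):

    package Tie::ToyCOW;

    # Reads fall through to the original hash; the first write takes a
    # shallow snapshot, so the original is never modified.
    sub TIEHASH  { my ($class, $orig) = @_; return bless { orig => $orig }, $class }
    sub _hash    { my $self = shift; return $self->{copy} || $self->{orig} }
    sub FETCH    { $_[0]->_hash->{$_[1]} }
    sub EXISTS   { exists $_[0]->_hash->{$_[1]} }
    sub STORE    { my $s = shift; $s->{copy} ||= { %{ $s->{orig} } }; $s->{copy}{$_[0]} = $_[1] }
    sub DELETE   { my $s = shift; $s->{copy} ||= { %{ $s->{orig} } }; delete $s->{copy}{$_[0]} }
    sub FIRSTKEY { my $h = $_[0]->_hash; keys %$h; each %$h }
    sub NEXTKEY  { each %{ $_[0]->_hash } }

    package main;

    sub make_cow_ref { my %h; tie %h, 'Tie::ToyCOW', shift; return \%h }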

It’d be sweet if Perl 6 could allow that kind of thing to operate on the global state instead of a single hash. Sweet indeed, but I don’t know how practically useful.

Continuation-passing style in Perl 5

While I was thinking about my latest Perl module, a logic programming framework, I realized that it would be really nice to have continuations. So after going through all the contortions of getting around that limitation using closures, I discovered that it’s possible to use continuation-passing style in Perl:

sub fibo {
    my ($x, $c) = @_; sub {
    if ($x < 2) { sub {
        $c->(1);
    }}
    else { sub {
        fibo($x-1, sub {
        my $f1 = $_[0];
        fibo($x-2, sub {
        my $f2 = $_[0]; sub {
        $c->($f1 + $f2);
    }})})}}}
}

my $next = fibo(10, sub { print "$_[0]\n"; exit });

# kernel
$next = $next->() while 1;

Don’t you just dream of programming like that? It’s scary how reminiscent of LISP that is… (Note that the big slew of braces at the end of fibo() has two closing parentheses in it). If you’re having trouble reading that, you’re not alone. But the basic idea is that each time you call a routine, it does something and then returns what it should do next. You can’t use traditional control structures like while() with this style, but you can get the same behavior by passing continuations around.