OO is not the One True Paradigm, but Haskell still sucks at it

I just read osfameron’s Readable Perl talk. It’s a pretty typical language advocacy talk, nothing special, but it reminded me of Perl. Those who have not been reading my blog since 2005 may not know that I used to be a Perl nut. I was even on the Perl 6 design team, attended several design meetings with Larry Wall (and a couple of other notable geniuses), had the Perl 6 syntax memorized… Quite insane I was.

I have recently been pondering the cognitive mismatch between the OO and functional approaches. The two have been fused in languages such as O’Caml, but I argue that code in such a fused language will mostly end up looking like one or the other, not the beautiful balance we would like.

My thesis is that the two support different models of the universe. The functional approach supports a “mathematical” view, where we know a lot about the structure of our data; the object approach supports an “empirical” view, where we know a lot about the data itself. Let’s use something I was playing with today as an example: the SK calculus.

The obvious functional approach is to define a data type representing the cases of the AST:

data AST = S | K | AST :@ AST

(The :@ is the name of an infix constructor; i.e. it is just a node with two subtrees)

Then to implement a function that, say, reduces the top level of a tree, we can analyze the data by pattern matching:

reduce (S :@ x :@ y :@ z) = Just $ x :@ z :@ (y :@ z)   -- S x y z  ~>  x z (y z)
reduce (K :@ x :@ y)      = Just x                      -- K x y    ~>  x
reduce _                  = Nothing                     -- no redex at the root

Here we know a lot about what kind of argument reduce will get. Whatever it gets, we know that it will be either an S, a K, or an application. We then define what it means to reduce data of this form.
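
Just to see it run, here is a throwaway driver (mine, purely for illustration) that keeps reducing at the root until reduce gives up:

-- Repeatedly reduce at the root until there is no top-level redex left.
headNormalize :: AST -> AST
headNormalize t = maybe t headNormalize (reduce t)

-- e.g.  headNormalize (S :@ K :@ K :@ (S :@ K))  gives  S :@ K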

Now I could half-bake the standard retort against OO, showing how much better functional programming is by pointing out how awkward this would be in a typical OO language (and it would be awkward). But that’s trying to apply the functional world-view, that we know a lot about the structure of our data, to an OO setting. I think a good OO design for this same problem would take quite a different form. I see it going something like this:

interface Combinator {
    int arguments();
    AST apply(AST[] args);
}
class S implements Combinator {
    override int arguments() { return 3; }
    override AST apply(AST[] args) {
        // S x y z  =>  x z (y z)
        return new Apply(new Apply(args[0], args[2]), new Apply(args[1], args[2]));
    }
}
class K implements Combinator {
    override int arguments() { return 2; }
    override AST apply(AST[] args) {
        // K x y  =>  x
        return args[0];
    }
}
...

While I gave a working Haskell program above in four lines, my “good” OO solution (in whatever language this is… it looks like C#) is much longer and still not complete: I have left out the definition of Apply and the function that actually does the reduction. But I’m not bashing OO here (though please do understand if I bash OO syntax, which seems to get more and more verbose as the years go by). The point is just that this problem is very well suited to functional programming.

But this program has very different properties from the Haskell version. In particular, it is easy to add a new combinator, a new object, that does something different, whereas in the Haskell program, adding a new primitive combinator would change the assumptions of every function that works with combinators. Conversely, adding an operation that depends on particulars, namely whether something is an S or a K (or whatever else), means touching every class to add that method, whereas in Haskell we can just add a new function. Astute readers will recognize this as the famous “expression problem”.
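
To make the Haskell half of that trade-off concrete, a quick illustration: a new operation over the closed AST type is just one more top-level function, and none of the existing code has to change.

-- A new operation, added without touching anything else: count the
-- primitive combinators in a tree.
size :: AST -> Int
size S        = 1
size K        = 1
size (f :@ x) = size f + size x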

This trade-off starts to affect the way we proceed. If we were to implement identical functionality in the two programs, our approaches would diverge greatly. For example, today I added a function to invert a reduction. In Haskell, I just enumerated the cases: this is how you invert an S-reduction, this is how you invert a K-reduction, etc.
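
Roughly, that case analysis has this shape (a sketch only; it assumes AST also derives Eq, and it shows just the S case, since inverting a K-reduction means inventing the argument that K discarded):

-- Invert an S-reduction by recognizing the shape  x z (y z).
invertS :: AST -> Maybe AST
invertS ((x :@ z) :@ (y :@ z'))
  | z == z' = Just (S :@ x :@ y :@ z)
invertS _   = Nothing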

In the OO program I wouldn’t add a visitor, though; that would be stupid. Instead I would create a new node type for variables, apply the combinator to as many fresh variables as it expects arguments, and analyze the positions of the variables in the result. I would end up with a function that can invert any combinator. That is, the natural next step in the OO example is to write more generic code than in the functional example.

Anyway, that’s what I consider a nice side-by-side comparison of the two approaches. Maybe by analyzing these two examples and where they led, you can start to see what I’m saying about the two cognitive models.

And this brings me back to Perl. The slides I read mentioned Moose, an object model module for Perl. It is rich: it supports inheritance, role composition, privacy, coercion, and many other tools. I think an OO system needs to have these tools: the OO world view counts on data having many facets, capabilities, and constituents, and it concerns itself with what is possible when you know about only some of them. An OO programmer must be an expert at creating data with many facets, capabilities, and constituents. Contrast this with functional programming, where everything is known about the data and the focus is on manipulating it.

Haskell has no support for these tools. Its type system, although powerful, is too static. Haskell provides too many guarantees for OO to work; it wants to know too much about what’s going on. Similarly, I don’t consider C++, Java, or C# to be “powerful” OO languages. In fact, I might go so far as to say that a language with static types and OO is not properly embracing the philosophy. You’re creating rich, smart data. If your superclass guarantees that a particular method is const, i.e. does not change the object it is called on, how are you supposed to smartly cache the results of that method? Well, with the mutable hack. But that’s a hack: essentially telling the type system that it should go take a hike because you know what you’re doing.

Perl and Smalltalk “get it”, in my opinion. They have simple models, but provide tools for looking inside and changing data on the fly. They are about creating and manipulating smart data, leaving the guarantees up to the naming of the methods and their intended use, because trying to force guarantees would restrict their ability to be smart. If you want guarantees, use Haskell; it will almost prove your code correct for you. But Haskell has no smart data; that is the only way it can provide those guarantees. You can’t have both at once: case analysis of data structures, and objects smart enough to reverse-engineer your functions.

That’s it: when you’re in OO, don’t design functionally. Keep your guarantees wide open, and focus on making smart objects.


8 thoughts on “OO is not the One True Paradigm, but Haskell still sucks at it”

  1. Take a look at Bart Jacobs’ work on coalgebra as a theoretical basis for object-oriented languages. In your SK example the Haskell solution is clearly algebraic; if you defined a coalgebraic solution, it would look a lot like a good OO solution. It can even be done in four lines of code ;)

  2. I haven’t looked at Jacobs’ work yet, but I’m going to guess. The only coalgebra I can think of for SK calculus would be something stupid like:

    class SK {
        bool isS();
        bool isK();
        Maybe<SK*SK> isApp();
    }
    

    Which I would not call a good solution. I suspect Jacobs’ work is not this simple-minded. I’ll look at it. Thanks!
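
    (In Haskell terms, by the way, that guess is just a state type together with a record of observations on it; transliterating it loosely:)

    -- The observations available on a state of type s.
    data SKObs s = SKObs
        { isS   :: Bool
        , isK   :: Bool
        , isApp :: Maybe (s, s)
        }

    -- A coalgebra for this signature is then just a map from states to observations.
    type SKCoalgebra s = s -> SKObs s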

  3. First, you complain that C++’s type system is too static for it to properly support OO, and argue for radically more flexible type systems. Yet when the C++ type system shows an inch of flexibility in its handling of const (using mutable), you immediately condemn it as a “hack”…

    Mutable absolutely pales in comparison to the vile hackery you can do in the kind of “smart” type systems you advocate.

  4. If one does not take my arguments as an attack on C++, they make more sense.

    One of my arguments was that static types and OO are at odds with each other. The existence of mutable is a point in favor of this argument, since mutable is, as you say, an inch of flexibility which subverts the type system. (Well, technically it does not; it is designed in. But conceptually it is declaring that a guarantee is being broken.)

  5. I disagree with your analysis, as such a design is definitely possible in Haskell, using type-classes.
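
    For concreteness, a rough sketch of what such a type-class encoding could look like (illustrative names only, plus an existential wrapper so a term can hold any combinator):

    {-# LANGUAGE ExistentialQuantification #-}

    -- Each combinator is its own type; the shared interface is a class.
    class Combinator c where
        arguments :: c -> Int
        apply     :: c -> [Term] -> Term

    -- A term is either a primitive combinator (hidden behind the class)
    -- or an application.
    data Term = forall c. Combinator c => Prim c
              | Term :$ Term

    data S = S
    data K = K

    instance Combinator S where
        arguments _ = 3
        apply _ (x:y:z:_) = (x :$ z) :$ (y :$ z)
        apply _ _         = error "S expects three arguments"

    instance Combinator K where
        arguments _ = 2
        apply _ (x:_:_) = x
        apply _ _       = error "K expects two arguments"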

    Additionally, there is an important distinction that I think you have missed between the two designs:

    An ADT defines a closed (sealed) type, which enables an open world of users of that type, who have access to any details they may need inside the type.

    An interface (in your case a class; in Haskell’s case a type-class) defines an open (extensible) type, but it forces the users of that type to go through a sealed interface: they can only access the type’s data through that limited set of operations.

    In your example, if you wanted to add another data constructor that significantly differs from the existing constructors, clearly you couldn’t mold it into the arguments/apply interface you have defined.
    It is in those cases that OO programs face a choice: use dynamic down-casts (and lose type safety), or extend the base interface class (in which case you’ve broken all the users, just as with the ADT).

    Haskell allows you to make the choice:

    A. Place a restriction on the guy defining the type, and liberate all of the users of the type (ADT)

    B. Place a restriction on the guys using the type, and liberate the type so it can be extended by anyone (Typeclass).

    OO, on the other hand, forces you to choose B.
