You heard me. Encapsulation is an obstacle to the reuse of code.

When I say encapsulation, I mean having a region of your program that knows or has access to some information about the implementation of something, and hiding that information from the rest of the program. If you have another definition of encapsulation, I’m not arguing against that.

Why do software engineers encapsulate? I claim it is for two major reasons: (1) to reserve the right to change the encapsulated code later without breaking anything, and (2) to minimize the propagation of assumptions through the program. To illustrate this, consider the following C code which uses a (supposedly) abstract `List` type:

```c
int sum(List* list) {
    int i;
    int accum = 0;
    int length = List_length(list);
    for (i = 0; i < length; ++i) {
        accum += ((int*)list)[i];
    }
    return accum;
}
```

This code *breaks encapsulation*. It assumes that `List` is represented as an array. So if we wanted to go back and change `List` to a linked list, because a linked list is more efficient in the majority of cases for which we are using it, `sum` would break. (In this example it would break very badly: it might not even segfault, but instead return some nonsense number.)

Here is an example of code that correctly respects encapsulation.

```c
int sum(List* list) {
    int i;
    int accum = 0;
    int length = List_length(list);
    for (i = 0; i < length; ++i) {
        accum += List_get_elem(list, i);
    }
    return accum;
}
```

If you're about to object that we should have exposed an iterator interface instead of an indexed get interface, because we knew we wanted to change it to a linked list, I preemptively respond that we didn't know that at the time. As in most code, when you are encapsulating, you don't usually know what you're going to come back to change -- if you did, why didn't you just write it like that in the first place?
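For concreteness, here is roughly what that hindsight-only iterator interface could look like, sketched in Java rather than the C above (the name `IntSequence` and this whole rendering are illustrative, not part of the original code):

```java
import java.util.Iterator;
import java.util.List;

public class IterSum {
    // Hypothetical iterator-style interface: sum no longer assumes random
    // access, so a linked-list implementation costs O(n) instead of O(n^2).
    interface IntSequence {
        Iterator<Integer> iterator();
    }

    static int sum(IntSequence seq) {
        int accum = 0;
        Iterator<Integer> it = seq.iterator();
        while (it.hasNext()) {
            accum += it.next();
        }
        return accum;
    }

    public static void main(String[] args) {
        // Any backing structure works, as long as it can hand out an iterator.
        IntSequence sample = () -> List.of(1, 2, 3).iterator();
        System.out.println(sum(sample)); // 6
    }
}
```

The point stands either way: choosing this interface up front requires knowing about the linked-list change before it happens.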

The two reasons above are noble. They promote flexibility and simplicity, so that the minor decisions we make do not ripple their way through our architecture, making it impossible to change or understand. It takes the pressure off.

Although I agree with the goals, I believe this is the wrong solution. I think we should turn the barriers inside-out, and use only encapsulation's dual: *abstraction*. When I say abstraction, I mean code that is defined *polymorphically* with respect to its assumptions. The above C code is abstract with respect to the `list` argument, since we can pass it any list we like. It is not, however, abstract with respect to the `List` *type* and its operations -- it fixes a single choice.
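Here is a sketch of what being abstract with respect to the `List` type and its operations could look like, in Java-style notation (the names `SeqOps`, `length`, and `get` are illustrative, not taken from the C code):

```java
import java.util.LinkedList;
import java.util.List;

public class AbstractSum {
    // The assumption sum needs, advertised explicitly as an interface.
    interface SeqOps<L> {
        int length(L seq);
        int get(L seq, int i);
    }

    // sum is now polymorphic in the sequence *type* itself, not just the value.
    static <L> int sum(SeqOps<L> ops, L seq) {
        int accum = 0;
        for (int i = 0; i < ops.length(seq); ++i) {
            accum += ops.get(seq, i);
        }
        return accum;
    }

    public static void main(String[] args) {
        // One implementation of the assumption: plain int arrays.
        SeqOps<int[]> arrayOps = new SeqOps<int[]>() {
            public int length(int[] a) { return a.length; }
            public int get(int[] a, int i) { return a[i]; }
        };
        // Another: java.util.LinkedList. The same sum code serves both.
        SeqOps<LinkedList<Integer>> listOps = new SeqOps<LinkedList<Integer>>() {
            public int length(LinkedList<Integer> l) { return l.size(); }
            public int get(LinkedList<Integer> l, int i) { return l.get(i); }
        };
        System.out.println(sum(arrayOps, new int[]{1, 2, 3}));
        System.out.println(sum(listOps, new LinkedList<>(List.of(4, 5, 6))));
    }
}
```

Nothing here peeks at a representation; `sum` only sees what it assumed, and any sequence type can be swapped in without touching it.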

Encapsulation and abstraction are duals in the following sense: let's say your program is defined in two parts A and B. A knows some information C, and B does not. There are two ways to look at this. A *encapsulates* C, or B *is abstract with respect to* C. That is, if you change C, you have only affected the specification of A, and B reacts polymorphically. So just divide your program at an information boundary -- one side is encapsulating, the other side is abstracting.

While mathematically they are two sides of the same coin, there is a very real software engineering difference between them. Let's say you have three functions which use your abstract interface: `sort`, `reverse`, and `frobnicate` (some complicated business logic that can't be written in 5 lines). With encapsulation, you might have this sort of usage graph:

    sort ------\
    reverse ----+--> List --> Array
    frobnicate -/

And then, in one fell swoop, you can change it to:

    sort ------\
    reverse ----+--> List --> LinkedList
    frobnicate -/

That's power, that's flexibility. But... it's lacking something. Now we have another part of our program which uses arrays, and I sure wish we could use that sort code we've already written. But we need that sort code for lists. Hmm, well we could copy and paste. But that sucks, maybe we could go back and make sort polymorphic in the list type. Yeah, that's the right way to do it.

But then why didn't we just do that in the first place? Look what happens when we do:

    sort -------> Array      sort -------> LinkedList
    reverse ----> Array      reverse ----> LinkedList
    frobnicate -> Array      frobnicate -> LinkedList

Look at all those combinations. Those are the possibilities for code reuse without changing anything. Imagine if every encapsulation you made was an abstraction instead. Your usage graph would be basically black with arrows. But these aren't bad dependency arrows, these are good reuse arrows. These are your possibilities for correctly using code that has already been written.

So I suggest: instead of advertising your guarantees and hiding details, **advertise your assumptions and abstract over details**. This has a pretty profound effect on the way your code is structured -- like I said before, it turns all the encapsulation barriers inside-out.

My inspiration for this idea came from studying the form of mathematical theorems. They advertise their assumptions prolifically: "given a ring R with (+) and (*) as operations, ...", as opposed to "a real number has (+) and (*) and some implementation details". This allows theorems to be maximally reusable, since although the mathematician was thinking about real numbers when he proved the theorem, he realized the same logic could work on any ring -- including the integers, modular integers, matrices, and new things that were discovered later.

Object-oriented programming seems to get close. One can consider a class (or an interface) as the specification of an assumption, and an instance as the implementation of an assumption. Then you just take parameters that correspond to your assumptions, which are nicely bundled into classes so you don't have to say "I use (+) and (*) and (-) and absolute value and ...". But this essential idea is muddied up by all sorts of crap, both in the language design and in popular usage.

I started to list the features we need to add/remove from OO languages to make them support this style, but it just got long and nitpicky. So I'll just say this: I think we should be using objects differently. Objects as implementations of assumptions, not as "agents". A natural number is not an object, *The Natural Numbers* is an object. As generics gurus are noticing, this:

```
interface Number {
    Number operator + (Number y);
}
```

has got to go. The assumptions are not specified properly -- you are requiring that + take any kind of number whatsoever, when we probably mean the same kind of number as on the left side. What does it mean to add *2 (mod 4)* to *7 (mod 13)*? Instead we are learning to write:

```
interface Numeric<N> {
    N operator + (N x, N y);
}
```

That's the right way. Notice that, although it is specified with different notation, `N` is just another abstract member of Numeric. We specify the set and its operations together, but the set is distinct from the interface. The interface is the name for the *assumption* of such a set. In some cases the interface and the set can be bundled into one, and it is these cases that provide most of the study material for OO design. But inside the soul of pop OO lies a form of modeling that is so much more powerful.
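To make "*The Natural Numbers* is an object" concrete, here is a hedged Java sketch of the `Numeric` idea; the names `ModArithmetic` and `sumAll` are mine for illustration, not from any particular library:

```java
import java.util.List;

public class NumericDemo {
    // The interface names the *assumption* of a set N together with (+).
    interface Numeric<N> {
        N add(N x, N y);
    }

    // "The integers mod m" as a single object: one implementation of the
    // assumption. An individual residue is just an ordinary value of N.
    static final class ModArithmetic implements Numeric<Integer> {
        final int m;
        ModArithmetic(int m) { this.m = m; }
        public Integer add(Integer x, Integer y) {
            return ((x + y) % m + m) % m;  // normalize into [0, m)
        }
    }

    // Code written against the assumption; it never sees how N is represented,
    // and both operands of add are guaranteed to be the *same* kind of number.
    static <N> N sumAll(Numeric<N> num, N zero, List<N> xs) {
        N accum = zero;
        for (N x : xs) {
            accum = num.add(accum, x);
        }
        return accum;
    }

    public static void main(String[] args) {
        Numeric<Integer> mod4 = new ModArithmetic(4);
        // 2 + 7 + 3 = 12, and 12 is 0 (mod 4).
        System.out.println(sumAll(mod4, 0, List.of(2, 7, 3))); // 0
    }
}
```

Note how the type parameter rules out the nonsense question from before: with `mod4` in hand, every operand already lives in the same `N`, so there is no way to ask it to add *2 (mod 4)* to *7 (mod 13)*.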

To reiterate, this is not breaking the guarantees of encapsulation -- anybody who uses `Numeric` gets encapsulation for free. They are not allowed to see the details of `N`, just as if you had encapsulated it. It's just that now you can swap out different N's in the same code, where you couldn't before.

I am pleased to see software engineering practice slowly converging on this already. I just thought I would point out the uniform rule behind it. Next time you write an encapsulation, ask yourself, what would it look like to make the users of this code abstract instead? Is it a small enough notational burden to do it that way? If not, what kind of language would allow it to be?