On why Goodstein sequences should terminate

In this post, I will give an intuition for the ridiculous theorem that all Goodstein sequences eventually reach zero. I call it ridiculous because the construction of a Goodstein sequence is so strange that it’s surprising anything can be said about it at all, and because it accelerates upwards so quickly that it is hard to imagine it ever coming back down. But after a little exploration, we will see what is happening with the structure of the numbers, and be able to picture an algorithm for computing how long it should take to come back down. Of course, the really interesting thing about Goodstein’s theorem is that it is closely tied to the consistency of Peano Arithmetic: PA cannot prove it, and it implies PA’s consistency. I won’t give an intuition for this part, because I don’t fully understand it yet.

To define a Goodstein sequence, we need to define hereditary base-n notation, which I will denote H-n. To write a number in H-n, first write it in base n, and then (recursively) write all the exponents in H-n. So, for example, to write 18 in hereditary base 2, we do:

18
2^4 + 2^1
2^(2^2) + 2^1
2^(2^(2^1)) + 2^1

To generalize to larger bases, we rewrite any exponent that is greater than or equal to the base.

Now we can define a Goodstein sequence starting at n. First, write n in H-2. Now, change all the 2’s to 3’s in that representation, compute the number, subtract 1, and rewrite in H-3. Change all the 3’s to 4’s, compute the number, subtract 1, and rewrite in H-4. Continue forever or until you reach zero.
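To make the construction concrete, here is a minimal Haskell sketch of hereditary notation and the base-bumping step. The names (HTerm, toH, fromH, goodsteinStep, goodstein) are my own, and this is an illustration of the definition above, not an efficient implementation.

-- A number in hereditary base-b notation is a sum of terms c * b^e,
-- where each exponent e is itself written hereditarily.
data HTerm = HTerm Integer [HTerm]   -- coefficient, exponent (a sum of HTerms)

-- Write n in hereditary base b (terms listed with increasing exponents).
toH :: Integer -> Integer -> [HTerm]
toH b = go 0
  where
    go _ 0 = []
    go k m =
      let (q, r) = m `divMod` b
          rest   = go (k + 1) q
      in if r == 0 then rest else HTerm r (toH b k) : rest

-- Read an H-b representation back as a number, but in base b'.
fromH :: Integer -> [HTerm] -> Integer
fromH b' = sum . map (\(HTerm c e) -> c * b' ^ fromH b' e)

-- One Goodstein step: bump the base from b to b+1, then subtract 1.
goodsteinStep :: Integer -> Integer -> Integer
goodsteinStep b n = fromH (b + 1) (toH b n) - 1

-- The Goodstein sequence starting at n, indexed by base starting at 2.
goodstein :: Integer -> [Integer]
goodstein = go 2
  where
    go _ 0 = [0]
    go b m = m : go (b + 1) (goodsteinStep b m)

For what it’s worth, take 6 (goodstein 4) evaluates to [4,26,41,60,83,109], matching the hand computation in the example that follows (just don’t wait around for it to reach zero).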

Let’s do an example. Let G be the Goodstein sequence starting at 4. I will start the sequence at index 2, so that the index in the sequence is the same as the base.

G(2) = 4 = 2^2
G(3) = 3^3 - 1 = 27 - 1 = 26 = 2 * 3^2 + 2 * 3^1 + 2
G(4) = 2 * 4^2 + 2 * 4^1 + 2 - 1 = 32 + 8 + 2 - 1 = 41 = 2 * 4^2 + 2 * 4^1 + 1
G(5) = 2 * 5^2 + 2 * 5^1 + 1 - 1 = 50 + 10 + 1 - 1 = 60 = 2 * 5^2 + 2 * 5^1
G(6) = 2 * 6^2 + 2 * 6^1 - 1 = 72 + 12 - 1 = 83 = 2 * 6^2 + 6^1 + 5
G(7) = 2 * 7^2 + 7^1 + 5 - 1 = 98 + 7 + 5 - 1 = 109 = 2 * 7^2 + 7^1 + 4
...
G(11) = 2 * 11^2 + 11^1 - 1 = 242 + 11 - 1 = 252 = 2 * 11^2 + 10
...
G(21) = 2 * 21^2 - 1 = 882 - 1 = 881 = 21^2 + 20 * 21^1 + 20
...
G(41) = 41^2 + 20 * 41^1 - 1 = 1681 + 820 - 1 = 2500 = 41^2 + 19 * 41^1 + 40

And so on. So the sequence goes 4, 26, 41, 60, 83, 109, … (A056193). Those who have followed this example closely may already be seeing why this sequence will eventually terminate.

Let’s introduce a device to understand what is happening to the structure of these numbers. At each step, most of the H-n structure stays the same; only the base is increasing. So let’s write each step with a symbol, say ω, in place of the increasing base. We will see what to do with the fringes in a bit. I will be essentially copying the last representation from above. This example would begin:

G(2) = ω^ω
G(3) = 2 * ω^2 + 2 * ω^1 + 2
G(4) = 2 * ω^2 + 2 * ω^1 + 1
G(5) = 2 * ω^2 + 2 * ω^1
G(6) = 2 * ω^2 + ω + 5
G(7) = 2 * ω^2 + ω + 4
...
G(10) = 2 * ω^2 + ω
G(11) = 2 * ω^2 + 10

Ah! Now the sequence looks much more regular! At each step we are simply subtracting 1, and if there is an ω at the right, we are replacing it with a finite number. The number we replace it with keeps growing with the index of the sequence, but each time it is finite, which “deconstructs” the structure a little bit. So even though the numbers are growing, the structure of the H-n representation will never be more “complex” than it was in the first step. I haven’t defined complex; for the sophisticates, it is the ordering on Cantor normal forms of ordinal notations.

We haven’t yet seen what happens with limits that are more complex than ω. For example, at some point, the G sequence will reach:

G(n) = n^2
G(n+1) = (n+1)^2 - 1 = n * (n+1) + n

In our ω-notation:

G(n) = ω^2
G(n+1) = n * ω + n

And indeed the structure has become a little simpler. I’m struggling to describe it in words, but I hope these examples have demonstrated how this sequence is always “decreasing in structure”, which is really all there is to Goodstein’s theorem.

A few technical notes: The H-2 representations of all possible starting numbers correspond to the Cantor normal forms, which are ordinal notations involving finite exponentiations of ordinals involving ω. So for Goodstein’s theorem to work, we need to know that the Cantor normal forms are indeed well-ordered, which is equivalent to ε_0 (the supremum of all the Cantor normal forms; the limit of ω, ω^ω, ω^(ω^ω), …) being well-ordered. There is a hint of the deep connection to ε_0; I wish I could say more about it.
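To pin down the ordinal bookkeeping, here is a hedged continuation of the earlier sketch (reusing my hypothetical HTerm, toH, and goodsteinStep): replace the base of an H-n representation by ω to get a Cantor normal form, and check that it strictly decreases at every step. The well-ordering of ε_0 is exactly what forbids an infinite run of such decreases.

-- Ordinals below ε_0 in Cantor normal form: a sum of terms c * ω^e, stored as
-- (exponent, coefficient) pairs with strictly decreasing exponents.  Under that
-- invariant, the derived lexicographic Ord coincides with the ordinal ordering.
newtype CNF = CNF [(CNF, Integer)]
  deriving (Eq, Ord)

-- Replace the base of an H-n representation by ω.  toH lists terms with
-- increasing exponents, so reverse to restore the decreasing-exponent invariant.
ordinalOf :: [HTerm] -> CNF
ordinalOf ts = CNF (reverse [ (ordinalOf e, c) | HTerm c e <- ts ])

-- The invariant behind the theorem: the associated ordinal strictly decreases.
decreases :: Integer -> Integer -> Bool
decreases b n = ordinalOf (toH (b + 1) (goodsteinStep b n)) < ordinalOf (toH b n)

For the example above, decreases 2 4, decreases 3 26, decreases 4 41, and so on should all come out True: ω^ω > 2*ω^2 + 2*ω + 2 > 2*ω^2 + 2*ω + 1 > …, and a strictly decreasing sequence of ordinals below ε_0 must be finite.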

Watering the blog

In order to keep this blog from shriveling up and dying, here’s an entry, even though I don’t know what it’s about.

My thoughts have been consumed, after a brief detour following my viewing of the excellent Feynman quantum electrodynamics lectures, by CodeCatalog and its consequences. Even though there have not been visible changes on the site, we have stuff in the oven. The biggest change in direction came when we decided to support the process of coding, rather than just the result, on the site. This has a lot of consequences for our model, so it is taking a while to follow the ripples, but we also get to support wicked stuff like this:

  • Automatically searching for snippets when you use an undeclared identifier (building an immediate vocabulary)
  • Providing codepad-style sample evaluation, with the ability to extract a sample evaluation into a test with a click.
  • Showing documentation for functions you are using and suggesting useful snippets based on correlations to other users (distant future).

I got sidetracked for a little while developing jsexp, an in-browser structural editor (i.e. one that works with code as a syntax tree rather than text). I had some bullshit reasons with which I convinced myself it was a good idea, but really I just wanted to make a structural editor. After that proved a greater challenge than my justification-utopia had laid out, I had to abandon ship and refocus on what we’re really trying to do. Thanks to Jude for kicking me in the butt and making me step back — er, I guess it would be forward. :-)

So we’re aiming to make it a nice place to edit code — not because it’s the world’s greatest editor (although we are using CodeMirror which is decently spiffy), but because the world’s greatest standard library will be loyally awaiting your command with no hassle. That’s valuable to me, at least — every time I have to integrate a new library it breaks my flowstate.

My thoughts have taken me wandering around this general area, returning to my catch-phrase and the opportunity of software organization it presents. How shall we organize an infinitely large (in the sense that it is open to expansion in all directions) codebase? This codebase can’t turn to mush as it gets big because its goal depends on it being big; how do we finagle that?

Anyway, up to my usual antics of long walks talking to myself, whiteboarding, staring at the ceiling, and occasionally not forgetting to water my blog.


The “Whole Program” Fallacy

This StackOverflow question has generated a buzz of zealous reactions in the Haskell community. Here are the important bits of the question:

I often find this pattern in Haskell code:

options :: MVar OptionRecord
options = unsafePerformIO $ newEmptyMVar

doSomething :: Foo -> Bar
doSomething = unsafePerformIO $ do
  opt <- readMVar options
  doSomething' where ...

Basically, one has a record of options or something similar, that is initially set at the programs beginning. As the programmer is lazy, he don’t wants to carry the options record all over the program. … Now each part of the program has to use unsafePerformIO again, just to extract the options.

In my opinion, such a variable is considered pragmatically pure (don’t beat me). …

In this post I will give my own zealous reaction.

To ask a question like this assumes something about the nature of software. The assumption is hiding in these phrases: all over the program, each part of the program. Here, the poster assumes that a program is a large, monolithic beast such that every part of it will need access to this variable, and yet the definition of this variable is not known to the programmer. That means that the program depends on this value. If we purely model the structure of the above example program, we see that every function depends on OptionRecord. So we have (taking the context of a compiler):

parse :: OptionRecord -> String -> AST
compile :: OptionRecord -> AST -> LinkerObject
simplify :: OptionRecord -> AST -> AST
freeVars :: OptionRecord -> AST -> Set Variable
safeName :: OptionRecord -> Set Variable -> Variable -> Variable

These are perhaps not the cleanest signatures for parse, compile, and simplify, but they are conceivable in the real world. There is some junk — surely not all three of those functions depend on every option of OptionRecord. It would be cleaner to declare that they depend on exactly the things they actually depend on.
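As a sketch of that last point (with hypothetical option types and placeholder definitions of my own, not the poster’s code), here is what it looks like when each function declares only the options it actually reads:

import Data.Set (Set)
import qualified Data.Set as Set

-- Placeholder types, just enough to make the sketch compile.
newtype AST          = AST String
newtype LinkerObject = LinkerObject String
newtype Variable     = Variable String deriving (Eq, Ord)

-- Hypothetical slices of OptionRecord: each function names its real dependencies.
data ParseOptions   = ParseOptions   { languageExtensions :: [String] }
data CompileOptions = CompileOptions { optimizationLevel  :: Int }

parse :: ParseOptions -> String -> AST
parse _opts = AST

compile :: CompileOptions -> AST -> LinkerObject
compile _opts (AST s) = LinkerObject s

-- freeVars never needed any options, so it takes none and stays reusable.
freeVars :: AST -> Set Variable
freeVars (AST s) = Set.fromList (map Variable (words s))

The particular records here are invented; the point is only that the signature now tells you exactly what each function can be affected by.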

But the problem becomes much more unsettling at freeVars. freeVars takes an OptionRecord — staying true to the original problem description, it must, because it or a function it calls may end up depending on options. But what on earth could a global OptionRecord determine about a free variables function? Perhaps there are multiple ways to find free variables — do we count type variables, what scoping mechanism to use — but those are not global options. Different functions will require different behaviors out of that function depending on what they are doing.

We even get such pathologies as shortestPath :: OptionRecord -> Graph -> Node -> Node -> [Node] — a plain, simple, reusable graph algorithm which somehow depends on this global options record. We have no way of telling the compiler — or, more importantly, ourselves — that this algorithm really has nothing to do with the specific compiler we are implementing. Somewhere deep in shortestPath's call chain, there is a call to some function which calls an error function which depends on one of the options. Suddenly this beautiful, well-defined function is not reusable. To take it out and use it in another project means to include OptionRecord in that project, and OptionRecord has things about compiler and type system extensions, configuration files, who-knows-what, but certainly nothing to do with graphs. Sure, we can go and dig out the OptionRecord and replace it with a record more suited to the program we are reusing the code in. But then you have to go read, understand, and mutate code that you just want to work, please, so you can get on with your project. We have all suffered the head-throbbing pain of integration problems. This is their source.

When I think of software as thousands of lines of specification for something, my mind jumps to problems like the original question. How am I going to write something so huge purely without it being really inconvenient? I see the need for global options, often global state, things ending with Manager (often a global trying to convince you it is a good abstraction), big systems talking about big ideas which are only applicable to my project.

But I have begun to think about software another way. Consider 100 lines. That is potentially a lot of information. The only reason 100 lines is not very much in the code world is because we lack the vocabulary to say what we mean. We are caught up in the details of manipulating lists of identifiers, building sorting trees, defining what we mean by “first” in this or that context. Could you describe your project in 100 lines of English? Perhaps not, but you could get a lot closer than with 100 lines of code.

I’m beginning to think that my latest greatest software project should be as small as possible. I need to build up vocabulary so I can describe it in a small space, but that vocabulary is not a part of my project. That vocabulary belongs to everyone in the same-ish domain of software. And nobody else cares about the meaning of OptionRecord.

When I think of software this way, the idea that I need to pass around an OptionRecord as a parameter to every function in my project starts to make sense. Every part of my project depends on its options, but my project is just a little thing that is standing on the shoulders of the giants of vocabulary used to get it there. I don’t mind passing options around between a few functions after I load them.

This is an ideal. I hope to move software closer to this ideal with my CodeCatalog project. Your current project probably cannot be phrased in a few hundred lines right now. But think about what it would look like if you structured it as a small main program + a lot of supporting vocabulary. Think of the supporting vocabulary as something any project could use. What does that mean for the modularity, reusability, and adaptability of your code? What does it mean for the clarity of your specification? What does it mean about the concept of “architecture”?


Sociocracy Game

This is going to be a short, idea-jot post. I have been reading up a storm about sociocracy. The Wikipedia article does not really do it justice. It is a very interesting system of governance based on cybernetic principles that claims to reach decisions that are in the interests of many more people than a voting democracy does. See We the People: Consenting to a Deeper Democracy if you are interested in more details. It does not need to be universal; it can spring up in specialized circles, and those can gradually “link together” to connect everyone. These are all desirable properties.

But I am no genius, so I cannot foresee all the degenerate cases that might come up, and I am picturing some possible degenerate cases that may never come up in practice. That’s why I want to give it a trial run. So I want to start a game in which we organize sociocratically in a toy country with toy issues to see how the dynamics play out. You could play a game where you get to be a political leader! It would be nice to have as many people as possible. And it doesn’t matter if not everyone cares — in fact, in real life, most people do not get involved in political decisions, so perhaps a healthy dose of apathy would be good for the realism of the game.

If you are interested in this experiment, join the announce group so you can know when we’re ready to start. Also feel free to share ideas in the comments.


Information Economics

Some say the information revolution happened in the 1970s upon the advent of the personal computer, some say it happened in the 90s when the internet reached critical mass. These were incredibly important events in the history of humanity, but I claim the information revolution has seen only its beginnings.

Consider the case of the automobile. The transportation revolution did not happen overnight in 1897 when Rudolf Diesel built the first diesel engine. Instead, it happened gradually as the world changed around the people as a result of this new technology. Oldsmobile and Ford refined the process of creating cars, and with that transformation came a new kind of economics based on the assembly line. The transportation revolution came to its apex with the construction of the interstate highway system, after which where a person lived and where he worked became decoupled.

We have seen the analog of the advent of the combustion engine and the beginning of Ford’s innovations. “The Information Superhighway” is superficially related to the creation of the highway system, but I think of that as an echo of the revolution of the highway system, not the revolution of the information age. We are at the creation of the assembly line, before it gained wide adoption. Google is Ford.

The reason is that the information age is about information, which is a totally different kind of beast from the traditional commodities around which our economy is built. As recently as five years ago, businessmen tried to charge $50 for ready-made software sitting on a shelf as if it were a television or a bag of rice. But that is an ancient conception that completely fails to reflect the economics of what a software package is.

To see this issue clearly, we have to step back from our personal conceptions of money as a thing which allows us to live and operate in society, and think about it in terms of what it was when it was created: an abstraction for trade, which served to make society as a whole more efficient. Money is about allocating scarce resources to where they will most benefit society. It isn’t perfect at doing that job, but it is pretty damn good all things considered. It works way better than communism, which puts that allocation in the government’s hands.

Back to the television and the software package. A single television requires resources to produce. When you buy a television from the shelf, you are communicating “a television has value to me, please continue to allocate resources to produce televisions”. As society moves beyond the need for new televisions, people stop buying them, money (an abstraction for resources) stops flowing to the manufacturer of the televisions, and the company shrinks or dissolves so that those resources can be allocated somewhere where they can more benefit society.

Try to apply this story to software. Software costs resources to produce initially, but after it is on the shelf, all the resources it will ever consume have already been spent during its development, modulo its useless and wasteful packaging. Now there is a component of continuing to improve the software, but the cost of improving the software is not proportional to the number of users the way the cost of producing televisions is proportional to the number of people who want televisions. While treating software as a commodity does serve to compensate the creator for producing the software, when seen from the perspective of the economy as a whole rather than a single business, it makes no sense. The idea of commodity software is a misallocation of resources.

Fortunately, the idea of commodity software is gradually fading away. We have mostly done away with the wasteful practice of putting software — essentially free to reproduce — into boxes, which have a cost to reproduce and are only advertisements until they are thrown away by the purchaser. But the model persists in the App Store, among other places. But note how the great Giants of the age are no longer using this model. Apple is profiting off of others using this model, but they are not using it directly. Google and Facebook will have nothing to do with it. Microsoft is dying a slow, painful death.

While there is a social realization that the old commodity model isn’t working anymore, it is not clear to me that anyone sees where it is headed. Google has hit a sweet spot where they can provide value to everyone without charging consumers any money — by collecting data about people, they make it easier for producers and consumers to connect when they stand to benefit from each other, and they found a nice place to skim compensation off of that arrangement. Google essentially has one very valuable product. Apple’s business model is basically that of a hardware company. But how does a typical software company work in the new age of information?

To explore this idea, I will take the vantage point of looking at society as a whole and follow the scent of efficient resource allocation. Resources are required to produce software in the first place: we need ideas, programmers, testers, and marketers. After the software has been conceived of, written, and tested — that is, at the point when the consumer uses the software — all the resources required for producing the software have already been expended. It is nonsense to charge people at this point; society would benefit more if you simply gave your software away, because the cost of doing so is (almost) zero. We need a way to pay for ideas, programmers, testers, and marketers. The resources required for providing a product are proportional to the difficulty of its creation, not the scale of its distribution.

I picture a combination of Kickstarter and an economic extension of UserVoice due to John De Goes. Allow people to pledge money for the improvement (or creation) of a product or feature, to be paid when that feature is usable. The features that are most valuable to people will have the most money pledged to them, providing incentive for the company to develop those features. We are now allocating resources where they need to be: in improving the product, rather than paying for the vacation of somebody who created valuable software in the past, somebody whose mind and expertise would be more beneficial to society developing or improving their product. This is just one idea; I’m certain there are other models that will accurately reflect information economics as well. In particular, this model compensates those who implement an idea, but not those who came up with the idea in the first place, which leaves room for improvement.

Observe how this new type of model has shifted the economic emphasis to one derivative higher. People are compensated for continuously improving their product and creating new products, rather than having one great idea and banking on it. This may frighten innovators: their great innovations now stand to make them less money; we now need to constantly work to create value instead of sitting atop a great idea allocating resources. But look at it from society’s perspective: we are coming up on an age of immense growth, in which every worker in the economy is seeking not just to continue the status quo, but to improve it! Everyone is an innovator or an enabler of an innovator. And this all comes from software being free to copy. When something is free to copy, everyone should have equal access to it. Any other way is short-changing society.

It’s time to stop clinging to software as if it is consumed when it is used. There is an economic boom waiting to happen, if we just let information resources flow the way they want to.

Another way to support the new economy is to Flattr this. ;-)

Announcing CodeCatalog

I’d like to share what I’ve been working on instead of homework for the past month: codecatalog.net. From the FAQ:

CodeCatalog is a lot like a wiki for source code. We aim to socially build a database of high-quality, well-documented, instantly-usable code in a variety of languages.

It is the fruit of my thoughts’ recent focus on reusability. I noticed that because of purity, Haskell was very close to being able to have a “database of all reusable code ever”, and I kept focusing on what language features were missing from other languages, preventing that. After sharing the idea with Max (cofounder of Hubris Arts), we determined that the main missing feature was, er, a database of all reusable code ever. CodeCatalog is the first buddings of an attempt to create one.

Check out the FAQ for a glimpse of our philosophy. Haskell sophisticates might be able to see beyond my simple wording into the underlying vision — that was my intent at least. I’ll write a post soon describing our deeper plans in more detail. We’ll be working on this all summer, and if we meet our goals, by the end of the summer it should be pretty freaking cool. :-)

Oh, the code that we have on the site so far is mostly Python and Javascript, mostly because that’s what we wrote the site in and we were eating our own dogfood while developing it. But Haskell is also supported.

Anyway, fellow Haskell community, I encourage you to check out the site. We would appreciate any feedback you have about it. There’s a UserVoice tab on the right if you’re into that kind of thing. Thanks. :-)

And if you would like to support us financially, you can always Flattr this.

The essence of metastrategy

A recent post on Less Wrong, Levels of Action, reminded me of a game I created whose dynamics I wanted to explore. I still have not explored the dynamics to a great level of depth, but I thought it would be interesting to the nerdy community that reads my blog.

The idea came after playing Castle Wars 2. In that game you try to build your castle as tall as possible while keeping your opponent’s castle as short as possible. The basic game dynamic is an action/meta-action trade-off: (oversimplifying) you can play a card to gain 10 bricks, or you can play a card to gain one brick per turn for the rest of the game. I was surprised by the amount of subtlety derived from such a simple dynamic, and I recommend the game to anyone wanting to kill an hour. It’s not the best game ever, but it’s not as trivial as it first seems.

I wondered what would happen if I removed the cards, the weapons, the defense from that game and replaced them with more levels of this same dynamic. Here’s what I came up with.

You can play it with a chessboard and poker chips (my old game design standby). You don’t need 6 of the rows of the board. Each player plays on a side of the board, and has eight squares which we will label, from left to right, 1 to 8. Each square can have up to eight chips in it. The goal of the game is to get eight chips in the eighth square. Here is how play proceeds:

On your turn, place a single chip in any of your squares. Your opponent does the same. Before each turn, “cancel out” any chips that both players have on corresponding squares. That is, if you have 4 chips on the 5th square, and your opponent has 5 chips on the 5th square, remove 4 chips from both, so that you have none and your opponent has one. Then (still before your turn) duplicate each square to the next higher position and truncate down to 8. So before this action if your eight squares had these values:

0 0 1 4 5 2 0 3

Then after this action, the state of your board will be:

0 0 1 5 8 7 2 3

Another way to think about it is that you slide a copy of your board one position to the right and add (then truncate).

Then place your new chip, and your opponent takes his turn. That’s it. The first player to eight in the eighth square wins.
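To pin down the rules, here is a small Haskell sketch of the per-turn bookkeeping, under hypothetical names of my own (Board, cancel, grow, won):

type Board = [Int]   -- eight squares, left to right; each holds 0 to 8 chips

-- Cancel out chips that both players have on corresponding squares.
cancel :: Board -> Board -> (Board, Board)
cancel mine theirs =
  unzip [ (m - k, t - k) | (m, t) <- zip mine theirs, let k = min m t ]

-- Slide a copy of the board one position to the right, add, and truncate to 8.
grow :: Board -> Board
grow b = zipWith (\old slid -> min 8 (old + slid)) b (0 : b)

-- The goal: eight chips in the eighth square.
won :: Board -> Bool
won b = last b >= 8

With these definitions, grow [0,0,1,4,5,2,0,3] gives [0,0,1,5,8,7,2,3], matching the example above.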

Despite this game’s simplicity, I have been unable to devise a good strategy for it. The strategy for the game seems to revolve around estimating the length of the game. If you know how many turns the game will last, it is fairly easy to determine how to play optimally. But how long the game will last is not so easy to determine.

Try it out, think about it. Let me know if you discover anything.


Change a-stirring

As is the nature of college life, my consciousness is filled with lots of lost ideas, floating around like strands of RNA trying to construct their counterparts. This post is an attempt to write some of them down, just to bring my awareness to them and perhaps to crossbreed with others’ random ideas. No, I’m not in a biology class, though you wouldn’t know it from this paragraph.

I feel like I am in the midst of a spiritual awakening. I am certainly epiphanizing a lot. Knowing my history, I am probably just in an epiphanizing mood, and the true learning I am acquiring from this time in my life will become clear to me in about half a year. I feel an increased devotion to integrity and “right action” (perhaps dharma is an appropriate word). I am developing my ability to approach all situations from a place of love and compassion toward myself and others. This involves taking opportunities to stretch when I realize I am avoiding something from a place of fear or discomfort, taking time to mini-meditate and focus, and creating external reinforcement for my internal goals (this was a huge realization for me).

Last night I had an epiphany about mortality. It was the most potent realization I have had that my consciousness ends. That whatever I do in my life, at some point I will no longer be able to observe the consequences, that the world will continue without me. So even if I do help to change the world for the better (as we all wish to), at some point I will have to trust it to continue for itself. It makes me want to make my impact now, because every day that passes is a missed opportunity to do something meaningful. Anyway, this sentiment is tired, I have read it many times before, and I have to arrive at it myself for it to mean something. Without loss of generality I assume that I am similarly failing to communicate it to my readers.

This state of mind has stirred up an internal conflict. I went back to school to finish the degree I started, and now I feel like I have something to say and am too busy with school to devote energy to saying it. Do I wait until I finish school? I am just now breaking a detrimental pattern in my life: that of waiting for some other event to happen before I do the right thing. It’s a form of procrastination. I see it coming out through my school dilemma: I really want to work on this economic responsibility project, but not until I finish school. Eighteen months from now, who knows whether my passion for this project will persist.

I am holding to my commitment. I am certain that school seeded the ideas that pushed me in this direction. Perhaps I can trust that continuing school will help me continue to refine the idea, so that when I finally finish my picture will be so clear there will be no stopping me. Indeed, one quality of my current idea is that I don’t really know how to execute it — I have some ideas, but none that is obviously the right way to go.

My recent thoughts about social issues share a common thread: the stable equilibrium. The areas in society where we have the most trouble (from my present, incredibly biased perspective) are those areas that resist gradual change. For example, relating to the theme of my last post, Americans are spending their money irresponsibly because there is not a good source of information about what constitutes responsible spending according to each person’s values. Many people know what kinds of companies they would like to support and which they wouldn’t, but the information to take that goal and derive which products to support is not available. The reason this is a stable equilibrium is that companies must choose to reveal this information, and in the current climate, any company that chooses to reveal this information puts itself at a disadvantage with respect to the ones that do not. If the culture shifts such that most companies reveal this information, then it looks suspicious not to reveal yourself and you put yourself at an advantage by revealing your details. But there is no mechanism to get from the former to the latter — the former reinforces itself. Indeed there are many powerful people with a vested interest in maintaining the status quo.

A similar pattern arises in the state of public education. Schools are judged and funded by standardized tests. It is the opinion of many progressive educators that the nature of the tests is too broad and shallow, and reinforces a superficial, fact-memorization-based education, the kind that doesn’t really produce competent students. But any school that experiments with its methods will deviate from these tests before it has a chance to show long-term achievement, and thus get its funding cut (or if it is a single teacher, the teacher will get fired by No Child Left Behind). The system discourages variation and reinforces its own status quo, so its problems cannot be evolved out by the natural forces of gradual variation and competition.

Examples of such equilibria abound. Reforms must be taken to the top, which is a dangerous place to reform because the effects are so sweeping. Reforms that should be taken to the top are those that enable competition and variation, so that gradual improvement is allowed to happen. Unfortunately, for the above two examples, I cannot think of what such a policy would be. (The latter could theoretically be addressed by privatizing education — indeed many private schools have very effective new methods — but there is a cacophony of social issues that comes with that, which includes, among other things, further widening the class gap).

In order to focus discussion, I will save my (relatively few) technical ideas for another post.



The Almighty Dollar

@luqui – Buy the change you wish to see in the world.

This Gandhi rip-off tweet is a summary of an idea that I would like to share in more detail. Last week I got sniped and swindled by a street peddler for Children International, a charity organization, largely due to a weakness of boundaries I had at the time. I wasn’t really feeling charitable, and I just wanted him to stop talking at me without my being rude. The most immediate way out, to my uncreative brain, was to sign up and cancel later. I might have felt more charitable if it weren’t so much money — $30 per month — but I was sure that I was going to cancel, and put it on my immediate to-do list.

When I had a moment to clear off some items on the list, as I was looking up the number to call, I pictured the phone call. Having already done research and found that the organization was legitimate and even efficient, there was no excuse there. Then I was going to say “I just can’t afford it right now”… which would be a lie. I am a poor college student, but I am still privileged by college, and I spend about $300 per month on food. Saying I couldn’t afford $30 per month is just outright wrong. Lying to a charitable organization is beyond my (comparatively flexible) morals.

I couldn’t find a way out that was consistent with my self-image. Thus, I haven’t canceled, and I don’t think I’m going to. Faced with the inability to prove that I shouldn’t spend this money, I began to search for reasons why I should. And the above twitter quote is the one I found, in a nutshell.

America is a severely capitalist nation. It has a fair amount of socialism mixed in, but it is still one of the most money-driven countries on this planet. We criticize big corporations in general for being immoral, corrupt, greedy entities that are ruining the world. They have great power, and they wield it in offensive ways. Damn them! Clearly pure capitalism can’t support a humanitarian world.

I used to believe this. But let’s think about it: from where is their power derived? From the economy of their country, of the world. They have tons of money and power because we give it to them. They provide us with valuable services, and we in turn compensate them with money, which is essentially equal to power in a capitalist society. And then we complain when they use that power in a way that offends us. So we are not really unconditionally giving them power: we wish to say “you can have the power to do things we agree with”. Not really power at all. We want to use their services without compensating them.

A dollar is a vote! It is a unit not only of trade, but of trust. But we routinely buy products from companies we do not trust. And no wonder they do evil things… we gave them a symbol of our trust without actually trusting them. We “the proletariat” are the ones who are ultimately producing the value in this nation, and we are collectively being compensated for it. Taken as a whole, we have enough power to match or exceed any corporation (I haven’t done the research, but I think the principle is pretty easy to agree with). We are being very irresponsible with our money — our tokens of trust — giving it to people who we know are evil. We are creating the evil in the world, simply by being fast and loose with our money.

It is widespread knowledge at this point that Monsanto is a profoundly evil corporation. They produce genetically modified plants, and then claim ownership of any plant that crossbreeds with them (using their money to out-lawyer any farmer who disagrees). With the chaotic nature of seeds in the wind, left unchecked they could eventually claim ownership of every plant in the world by seeding a single field on every continent. They buy out politicians to allow them to pollute acres and acres of United States land. The world would be a better place without them — their technology is great, but their behavior as an entity is abhorrent.

A conversation comes up in a grocery store about the evil of Monsanto, and while complaining about and loathing the evil in the world, the shoppers pick up the cheapest loaf of bread and put it in their basket, thereby handing the evil in the world another token of their trust. If the world refused to buy any Monsanto-derived food, Monsanto would die. Poof, evil extinguished! The choice in what food you buy is asking you a question — do you prefer cheap food, or a moral world? If you buy Monsanto-derived food, you are saying you prefer cheap food. And the world really does listen.

I want to live in a world in which every person has an equal shot at equal lives (if you want more, you get less of a shot). But that hardly describes the Earth. Is my desire for this ideal Earth greater than my desire for three Subway sandwiches per month? Could I put up with putting in a little more effort to make my food in exchange for helping the world to achieve this goal? Would it be worth it to you? If you say yes but don’t pay for it, you are lying. This isn’t hyperbole. I consider failing to “put your money where your mouth is” an outright lie.

How many proud Americans lie every day? Are you one of them? I still am. I am working to change that in myself.

Do your research! Pay for things made by companies whose behavior is agreeable to you. Don’t just look at the price. Tell the truth about your vision for the rest of the world, the future, even just a little bit. We, the hard-working, moral people of this planet wield most of the power. Let’s use it responsibly.

Buy the change you wish to see in the world.

(Do you value posts like this one? Well… Flattr this :-)

Connectedness

I’m a student of the University of Colorado again. I’ve gone back to finish my bachelor’s in mathematics, which essentially involves fulfilling a bunch of core requirements. I’m going to start the discussion by mixing my experience of one class (religions of south Asia) with a concept from another class (connectedness from topology).

Last spring I took my (now ex-)girlfriend on a trip to Hawaii. While we were there, we attended a weekend immersive class on Sanskrit. The class was very “new-agey” — we chanted and meditated, in addition to learning Devanagari (the Sanskrit/Hindi alphabet) and something about Indian religion. The ideas combined with the approach fascinated and inspired me. I have never been much of a religious person; the religious ideas I had heard of always sounded a bit naive and silly. But this new approach gave me a glimpse of another way of looking at the world: the words of the Bhagavad Gita played with the gods, using them half as entities, half as concepts. The philosophical ideas, language, and religion we studied were clearly inseparable, all connected and synthesized into a single world view. Further, this world view seemed to incorporate my objections to the naivety of western world views — emphasizing the duality in all things, focusing not so much on right and wrong but on purpose and spirit, using the malleability and metaphor of truth.

My curiosity whetted, I enrolled in a class about Hinduism at the university. So far it has been a disappointment. What drew me to these ideas in the first place was the connectedness and duality — the yin and yang, so to speak — I perceived in the world view. And we have started by drawing thick lines categorizing the different approaches to divinity. An especially potent event in bringing to my attention my disappointment with the class occurred during our discussion of Bhakti. The professor began to describe the philosophy of Bhakti: that connecting with the divine is about love and devotion, that the details of ritual are not as important as a true spiritual devotion to god. Immediately after this description, the professor wrote on the board BHAKTI RITUALS. Um, teacher, did you not feel that just now? How did you build your immunity to cognitive dissonance?

We have been categorizing, deconstructing, and analyzing this beautiful philosophy as if we were engineers. After the class I suspect I will know many facts, but have no understanding. If I were to talk to a yogi, he would consider me no closer to understanding his spirituality than any other American out of the hat. This is disappointing, since I don’t consider myself to have learned something until I understand it. We have a Hindu temple here in Boulder; I hope to find a way to study there and use the class as a supplement.

But why I am really writing this post is to help me to grip a vague sense I felt as I was processing after the BHAKTI RITUALS class. I am in a topology class this semester, and we are learning set-theoretic point-set topology. The constructivist in me winces every few minutes, lamenting the non-computability of everything we are discussing. I think the same cognitive orientation is fueling my dissatisfaction with the Indian religions class and my taste for constructivism. Classical mathematics seeks to separate the world into true and false, existence and nonexistence, equal and inequal. The inclusion of the law of excluded middle as obvious is evidence of this, as is the surprise felt by the mathematical world over Gödel’s incompleteness theorem. “What? We can’t eventually separate everything into two categories?!”

If you ask a set theorist whether ℕ = ℚ, they will probably say they are not equal (although they have equal cardinalities). If you ask a type theorist whether ℕ = ℚ, they will say “huh?”. The question cannot be answered, for we must consider what it means to treat 1 : ℕ as a ℚ, and we don’t know how to do that — not without a function that shows how. Indeed, in constructivism we have to be careful when talking about real numbers, since the set of observations matters, i.e. it matters how we look at them. And for any reasonable construction of the reals, their connectedness falls out of the constructivism of the theory: we cannot separate them into two categories in any way. A set theorist can, and has to define himself into a more realistic world where he can’t, using the mechanism of topology.

Mathematicians are probably getting upset at me or thinking I’m an idiot. This isn’t a mathematical post, it’s philosophical, thus my fuzzy intuitive discussions. If you have the desire to leave an emphatic corrective comment at this point, maybe take a step back and try to make out the landscape with me. I don’t consider any of this true, I’m just trying to get a feel for the philosophically general idea of connectedness, outside of a particular formal system. I have the impression that we can think of the world — the real one or the mathematical one — this way and it might lead to a more accurate, if less “clear-cut”, way of thinking.

The pure untyped lambda calculus is connected in the Scott topology. This fact has fascinated me since I heard of it, trivial though it might be. We are used to adding traditional totally disconnected types to the lambda calculus and pretending bottoms don’t exist. I have been curious about what it would look like if we embraced this connectedness and extended lambda calculus with connected concepts. They may play more nicely in a connected system. I still have not made any concrete progress on this idea, but it appeals to me as potentially beautiful and powerful. Maybe we are computing in an awkward way without realizing it.

Did you like this post? Accelerate the flow of Karma :-) Flattr this