The great artist, the passionate learner, the hubristic engineer. In the world of accomplishment, each of us comes to terms with our own lack of greatness in
just a sec, got a text
our own lack of greatness in different ways. Some of us… um, where was I? Some of us see our lives as a struggle against our demons, causing us to procrastinate and take time on things that are not truly important. Others have convinced themselves of their comfort with their place in society, happy to be working on a fun project with fun people, quietly dying of not making a real difference.
Oh did you see that #occupywallst had bigger attendance today than ever before? I think it’s a pretty cool movement, even though they don’t really have a message. No, I mean maybe I’ll go someday soon, but I’m just interested to see how it develops.
So, yeah, as I was saying. One does not need to be a creative type to be a slave to our modern emptiness: the search for real love, life fulfillment, or a family to devote yourself to. There is always something standing between your life as it is and the life of your dreams. Even those great people we idolize are living in this disparity: the life they want is something other than the life of greatness we perceive. It is the only way we have motivation to grow.
However, I do not mean to dismiss it. There are traps. For years at a time we may find ourselves piddling away our time on something that is not important — a time we could be using for growing. It reminds me of a youtube video I saw. sec. It was something about a
sorry about that, g/f im’d me.
Right, youtube. What was I looking up? Ugh, that’s frustrating. Oh well.
What I’m saying is that we never reach the potential we set for ourselves, but we can still become great. There are ways to refocus our energy not on distracting ourselves from realizing our shortcomings, but on achieving pieces of the ever-changing life goals. Just because you can’t reach it doesn’t mean you can’t move nearer. But it does require us to step back for a moment and see, is this one of those piddling times? Are we truly, deeply satisfied with our progress along our path right now? Or are we hiding, in a world where
I have been going through an intense period of self-discovery and reconstruction. I realized that the path in life I have been following for the past several years is not working for me; it is not resonating with me, and it is taking me somewhere I don’t want to be. This realization was revealing itself to me at the same time as a complicated and heartbreaking end to a (short) relationship unfolded, and everything I believed crashed down and came into question. I was a programmer with no desire for a computer, I was a calm communicator behaving violently, I was an atheist experiencing God.
I believe that I am now picking up my pieces and realigning with my dreams — dreams I had forgotten or dumbed down. I’ve believed this several times during the past weeks, only to find another layer collapsing beneath me, so I may be full of shit. But all I can do is to use the best information I have now. It’s a very interesting, emotional time for me.
One of the axioms that crashed during this experience was the idea that I have any control over what happens in the world. This began as a grounded life principle: my attempts to control life only led to more suffering, so I should surrender to the flow of the world. It percolated up to my intellect, combining with the studies of physics I was using to distract myself from my emotions, eventually leading me to the confusing world of philosophy that I love to entertain.
The idea that there is something physically more to a human being than a physical system is something I consider absurd. The conventional non-spiritual idea is that you put more and more molecules together and suddenly a light turns on called consciousness. Humans have consciousness, dogs probably do, lizards perhaps not as they are simple stimulus-response machines, bacteria have no brains so certainly not. Associated with consciousness is the ability to make decisions as an independent entity: free will. Cognitive scientists are madly in search of the magical light that turns on consciousness, a holy grail in our search to understand ourselves.
My developing position — I won’t call it a belief, but I’ll say I am considering it and its implications seriously — is to reject the above narcissism. I see what we define to be consciousness as a gradual increase in sophistication of these biological machines. There is not self-awareness and self-unawareness, merely a band of sophistication in which we communicate that there is a definite “I” and that it is aware of itself. We can communicate that to ourselves, by having a little simulated conversation in our brains in which we say such things to some abstract person.
My experience, particularly at the end of the aforementioned relationship, showed me that a great deal of my self-awareness — my free will — is a hoax. I listened, I reasoned, I concluded the best action. I watched as a ridiculous prediction took hold of my reasoning process. I watched as I carried out, in a state of mental contradiction, the opposite of what I had concluded. I watched myself crying, simultaneously astonished and unsurprised by the way things actually unfolded. I saw myself not as a single unified “I”, but as an ensemble of communicating (or not) decision-making machines, combined with a mechanism retroactively justifying my ridiculous actions.
That free will I was so convinced I had struck me as a process, always living a moment in the past, existing to analyze and retrain my unconscious decision-making processes for the future. I was a sophisticated machine, but a machine. I am governed by the same laws as a rock tumbling down a landslide. When asking whether it is possible that I will not push publish in a few minutes and share my thoughts with the world, I’m expressing not a set of decisions available to me, but a state of uncertainty about what my action will eventually be.
I was walking down the mall and had the strongest urge to pick up a brick and throw it through a window. Jail schmail, money schmoney, I just wanted to do something nuts to release the pressure. And I did not; I watched the urge pass, frustratingly, as I didn’t carry out the action I had pictured so strongly. I couldn’t; my consciousness is not a decider but a justifier, and the action was not there to justify. There was no immediate reason it could come up with for why I didn’t — I was even disappointed that I didn’t. One might view this post as the belated conclusion of my justifier of that situation — that it does not in fact have control of my actions.
I see the universe as a great continuous four-dimensional tapestry, of which I have the capability to view only a little slice at a time. I cannot ground the idea that there is some “I” which can cause the tapestry to be altered while existing within it. What could “altered” even mean in this situation: altered from what? I have been seeing this as a physicist studying something external for quite some time, but to incorporate it, to understand it as something I am part of, is taking me to a whole new place.
On twitter I participated in the short snarky exchange:
@stevedekorte – Threads sharing state by default is like variables being global by default.
@luqui – state is like globals :-)
@stevedekorte – @luqui only shared state – which is why FP ultimately fails – it trades comprehensibility for shared state optimizations
@luqui – @stevedekorte, wow, sneaking “FP ultimately fails” as if an obvious truth in a reply to a Haskell programmer
@stevedekorte – @luqui, a bit like sneaking in “[all] state is like globals” to an OO programmer? :-)
@stevedekorte – @psnively @luqui my only issue with FP is the decision to trade expressivity and reusability for less state while calling it progress
The conversation goes on (and on) between many twitterites, having a fun but serious argument about this and that benefit of this and that style. Dynamic/static types come up, OO vs. functional, usefulness, mathematical foundation, learning curves; none of the standard artillery is spared. What irritates me about this style of argument is all the sweeping adjectives (1) used with no definition, thus impossible to support with evidence, and (2) synonymized with better.
In this post, I will draw attention to this irritating vocabulary, so that the next time you use it you can realize how little you are saying.
(Disclaimer: this post is not intended to criticize stevedekorte specifically, even though I identify two of the terms he used below. It was a long, typical programming zealot argument, and all parties involved were being equally dumb :-)
Expressive

A person is expressive if he expresses himself — he has an idea and wants to write it down. So I would say a language is expressive if it allows or enables the programmer to be expressive. Languages that restrict expression are not expressive. So we have the following facts:
- Dynamically typed languages are more expressive than corresponding statically typed ones, because statically typed languages forbid you from expressing some ideas.
- Multiparadigmatic languages are more expressive than languages which are paradigmatically pure, because the former allow you to express yourself if you are not thinking within the framework.
- A language which you are fluent in is more expressive than a language you do not know very well.
By these observations, we might guess that Perl is the most expressive language, Haskell is the least.
Do you notice yourself already equating expressive with good, and thus perhaps feeling offended? Everyone wants an expressive language, right? Here are some reasons some programmers might not want an expressive language:
- Most of my ideas are made of bullshit. Have you ever had a great idea, gone to write it in your blog, and realized it was nonsense because you were unable to write it? So writing is less expressive than thinking. Is thinking better than writing?
- Every programmer has a different way of structuring their thoughts. An expressive language will bring out the differences in thought structure between programmers, and introduce impedance mismatches between programmers on a shared project.
I’m not arguing that expressiveness is bad. I’m just arguing that it doesn’t mean good, it means expressive.
Reusable

A language “is reusable” (to abuse language a bit) if code written in that language can be easily reused.
This “obvious” statement is hiding something very important; namely, reused how? For what? We are in an unfortunate situation in programming: code is designed to be reused in a particular way, and if you want to reuse it in a different way you are pretty much out of luck. An OO widget library is designed for the addition of new types of widgets, but if you want to reuse a program written in the library on a new platform you are in for a lot of work. A functional drawing library is designed so that you can transform and export your drawings in an open-ended variety of ways, composing new ways out of old ones; but if you need to draw a circle you have to build it out of lines, even if there is a much better way to draw a circle on your target. (This is essentially the expression problem).
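The asymmetry can be sketched with a toy functional-style drawing library (all names and the API here are invented for illustration, not taken from any real library):

```python
from dataclasses import dataclass
import math

@dataclass
class Line:
    x1: float
    y1: float
    x2: float
    y2: float

def circle(cx, cy, r, segments=16):
    # With Line as the only primitive, a circle can only be approximated.
    pts = [(cx + r * math.cos(2 * math.pi * k / segments),
            cy + r * math.sin(2 * math.pi * k / segments))
           for k in range(segments + 1)]
    return [Line(*p, *q) for p, q in zip(pts, pts[1:])]

def translate(drawing, dx, dy):
    # Adding a new *operation* is easy: one function over the closed data type.
    return [Line(l.x1 + dx, l.y1 + dy, l.x2 + dx, l.y2 + dy) for l in drawing]

# Adding a new *primitive* (a true Circle) is hard: every existing operation
# (translate, export, ...) would have to be taught about it. An OO design
# has exactly the opposite asymmetry: new primitives easy, new operations hard.
```

The design is reusable in the direction it was built for, and resistant in the other; neither style escapes the trade-off.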
An abstraction will always expose some things and conceal others. Different languages enable abstraction in different ways, which makes exposing certain things easier and others harder. The zealot will reply, “but in my experience, real-world programs are naturally organized around <insert preferred paradigm>, and <insert uncomfortable paradigm> doesn’t support that style as easily.” I would suggest to this zealot to look deeper into the definition of “real-world” to discover its many facets of subjectivity. (What domain do you work in? Who architected the real-world software you have worked on, and what was their background? What kinds of programs do you consider not to exist in the real world, and what values are you using to minimize them?)
Easy to learn
A language is easier to learn than another language if it takes less time to become competent/fluent programming in that language.
I don’t think this one is as automatically synonymized with “good”. Haskell programmers are aware how much relative effort was required to learn Haskell, and are usually grateful that they put in the effort. But all other things being equal, a language easier to learn ought to be better than one harder to learn.
The deadly omission in the above definition is that people are doing the learning. A language is easier or harder to learn for a single person, and that is entangled with their background, their approach, and their ambitions. When arguing “X is easier to learn than Y”, I encourage you to add one of the following phrases to the end:
- for programmers who already know Z.
- for people with a mathematical background.
- for people with a non-mathematical background.
- for children.
- for me.
Or something similar. The following phrases do not count.
- for almost everyone.
- for people with a typical background.
- for people who want to do useful things.
I’ll close this section with this remark: Haskell is the easiest language to learn, because I already know it.
I know I am frequently irritated by many of these kinds of words, and I’ve only mentioned three here. But you see where I am going. Respect the values of your fellow engineers. If you are a polyglot and like a paradigm, it probably comes from a set of values you have — a set of things you consider important, and that differ from the values of others. Concentrate on that; communicate your values, and try to understand the values of others. If you have toyed with a paradigm and quickly lost interest because of some surface feature (I have — e.g. I will throw out a language without support for closures) or impression, consider the possibility that you like what you like simply because it is familiar (other equivalent notions: easy, natural). Which is fine; some people like to tinker with their thinking patterns more than others. But remember that you have trained yourself to think in a particular way, so consider that your “objective” judgements about these ultimately human languages could be biased.
(For the record, that last phrase originally read: “all of your ‘objective’ judgements about one of these ultimately human languages are biased”, which is quite a bit stronger than what I had intended)
I have been working my way through Volume III of Feynman’s lectures, the one on quantum mechanics. A few months ago I watched his Quantum Electrodynamics lectures for the lay public and I was fascinated by the beauty and simplicity of the presentation. Now I want to dig deeper.
The basic idea is summarized in the quote (can’t find its source, probably Feynman though :-): “Everything that can happen, does. Physics is then reduced to the problem of finding out what can happen.” This is not philosophical many-worlds garbage postulating the existence of infinitely many alternative universes (I will get to that), but instead the interpretation of the Lagrangian form: if you want to find the probability amplitude of some event, you just add up the amplitudes for all the different ways it could happen. The generality of the principle is astounding, and making only very weak additional assumptions it is possible to completely derive the workings of electrons and photons (except for the mass of the electron, which is still a mystery). The rule is not just for electrons and photons though; those are just the easiest kinds of particles to get at. The entire universe works this way: the amplitude of an event is the sum of all the ways (including classically absurd ones) it could happen.
In the beginning of my studies, I was constantly tripped up by my conception of time. In the double slit experiment, a photon interferes with a version of itself leaving the excited atom at a different time. It was very hard to picture this when I was still attached to my idea of time and causality. This is the logic of the universe, not the dynamics. That is, we aren’t really computing the amplitude of an event to happen so much as the amplitude that, given some assumptions are true, some other thing about the universe will be true. We phrase the double slit experiment like this: given that this atom is excited at t0, what is the amplitude that this other atom is excited at t1? There is no notion of happening or the flowing of time, it’s just a connection between statements about the universe. Realizing this was an important step in my understanding. Of course, the way that these two atoms are connected does involve time — that manifests itself in the different “ways it could happen” and thus affects the amplitude.
Ok, so we have this logic which connects facts about the universe together as amplitudes, which are complex numbers. How do we take these amplitudes and get some information we can use? The rule is: the probability of an event, er I mean, a fact, is proportional to the absolute square of the amplitude. Simple enough. So you set up an experiment and calculate the amplitudes for all the different ways it could come out (you have to calculate all the ways, because the probability is only proportional, so you need to normalize them so they sum to one — I find this unsatisfying). Then you do the experiment, and what actually happens at the end of the experiment is one of those ways, proportional to the absolute square of the amplitude for that way.
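The two rules — add the amplitudes for every way it could happen, then take the absolute square at the end — can be sketched numerically. This is a toy calculation; the path lengths and wavelength are made up for illustration:

```python
import cmath

def amplitude(path_length, wavelength=1.0):
    # A unit-magnitude amplitude whose phase is set by the path length.
    return cmath.exp(2j * cmath.pi * path_length / wavelength)

# Two ways for the photon to arrive (say, through slit 1 or slit 2):
# sum the amplitudes first, take the absolute square only at the end.
destructive = abs(amplitude(10.0) + amplitude(10.5)) ** 2   # paths half a wavelength apart
constructive = abs(amplitude(10.0) + amplitude(11.0)) ** 2  # paths a full wavelength apart

print(destructive)   # ~0: the two ways cancel
print(constructive)  # ~4: the two ways reinforce
```

Squaring each amplitude before summing would give 1 + 1 = 2 in both cases; the interference lives entirely in the order of those two operations.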
This is extremely unsatisfying to me. Almost all of the resources I have used for learning QM have described it this way and left it at that. I’m pretty sure it’s because nobody really knows the answer to the next question: when, exactly, do you take the absolute square? If you take it too soon, e.g. before “the experiment” is over, then you will lose the interference effects and will not get an accurate answer. But you can’t just delay taking it forever, because then you only ever have amplitudes, not probabilities. There is this arbitrary barrier between the “quantum” world and the “real” world, and that’s when you take the absolute square. This is intentionally ignoring the idea that your experiment apparatus, your measuring devices, etc. are all governed by the quantum logic above as well, because that is too hard to think about. This is the piece I am determined to understand; I am interested in QM philosophically, not practically, so it is not at all satisfying to me to say “it works in practice, get used to it.”
The theory of quantum decoherence provides half of the answer. It shows how this interpretation of the barrier is equivalent to the state of the experimental apparatus (including the state of you, the scientist performing the experiment) becoming entangled with what happened in the experiment. Eventually the whole universe gets entangled with the result of the experiment and that’s what “really happened”. God got a bunch of amplitudes for the way the universe could be; he took their absolute squares, rolled the dice, and picked one. Now the arbitrary boundary has been pushed out as far as it can go — to the edges of spacetime — instead of being between experiment and apparatus. Quantum decoherence shows a sort of compositionality of this quantum logic. This is getting more satisfying.
I love it because it is right on the edge of my ability to conceptualize. All the “decisions” in the entire universe could go this way or that, and if they both lead to the same thing and have opposite amplitudes, they could interfere with each other and make that thing impossible. It is because the universe is a chaotic system, that small changes give rise to large changes, that we can’t observe quantum interference on large scales. These little decisions are very unlikely to lead to the same state. Entropy gives rise to the classical world.
When I get really deep into philosophizing, I explode into the annoying considerations of consciousness. Perhaps God did not pick a universe at random, but our consciousness did. Our memory must conceive of time linearly (it would violate entanglement not to), and that’s why we think there is a single “chosen” universe instead of the explosion of all possibilities. But whether all possibilities exist or there is a single universe chosen at random is likely not an observable distinction, so it is merely fodder for pipe dreams.
If there were some device that could measure some things about the universe, without disturbance, set up in such a way as to negatively interfere with itself when its measurements were “undesirable”, it could potentially control the way the universe would go. Now you see where the title of this post comes from. I have not been able to sketch this device as a black box, nor fully understand why it should be impossible. I suspect it has something to do with the uncertainty principle, the derivation of which I have yet to completely understand.
Quantum Mechanics is fascinating to me, and I am trying to keep my mind open to the deep, philosophical, passionate curiosity it invokes without descending into the insanity of a quantum crackpot. It is a challenge.
I am technically enrolled in classes again, but my heart isn’t in it. I have a middle school teaching practicum (awesome), and a writing class (great), and a larger dose of boring bullshit. Are these two classes worth $6,000? Hardly. I am on the edge of dropping out again. My life has been exploding into worlds of hippie love and search for purpose. I am high in the clouds, riding the shifting winds.
My writing class has shown me a wonderful outlet: freewriting. It is my source of peace these days. I sit down, get comfortable, and start moving my pen. Whatever happens, my pen has to keep moving. Sometimes I just repeat the same word over and over — I have filled up an entire page with a single word. Sometimes I go on a deep symbolic exploration of my subconscious, exploring dungeons of memories. Sometimes I write nonsense stories, sometimes I just write nonsense. Sometimes I focus on the shape of my letters, sometimes I write as slowly as I can, sometimes I write as quickly as I want. I have hypnotized myself in this book.
It is so much more visceral than thinking. I am forced to stay with a thought long enough to finish a sentence. A thousand other thoughts arise and fade away before I get to that blasted period. If I were just thinking, one of those thoughts would have brought me back to the deeply emotional experience of breaking up with a recent love, and I would spiral down for hours. But when I am writing there is more force, more intention behind the thoughts, even as I go off into an unconscious trance. A passing whim is less capable of derailing me. Or sometimes I will be in the middle of an exploration when she comes into my mind — Moriah, I write — and having acknowledged her presence I can return to where I was.
It is important that it be handwritten. I love watching the shapes of my letters change as my mental state changes; sometimes it looks like a doctor’s rushed scribbles, sometimes it is deeply sensuous and feminine. I can evoke a sense of liberation by ignoring the margins, filling up the entire page. I can get a sense of spaciousness by closing my eyes and letting my words land where they will. Sometimes I feel the urge to draw a picture (I have drawn very little in my life). I have freewritten on a computer before, but it has never felt like this.
My right brain is waking up again. Welcome back.
I’m in a postmodernism class at university this semester. Despite the eyerolling of my peers when I mention the topic, I find it very interesting so far. Today’s class got me thinking about some math.
In the past, I have worked a lot with frameworks for doing programming and math. I wanted to find a fundamental system in which all my ideas could be expressed; I latched on to Martin Bunder’s IΞ and IG as being particularly simple and beautiful. I used to be a compulsive programming language designer, searching for simple, beautiful ideas with which to express all programs. I still have the urge now and then, but never follow through anymore.
In today’s pomo class, we were talking about the postmodern view of modernism (as I suspect we will be for quite some time): having some framework in which to conceive of the world, of all of history, and viewing that framework as fundamental. If that framework begins to crack, then we throw it out and make a new framework (calling the new one “modern” this time) in which to conceive of everything. Postmodernism views this cycle as one that continues indefinitely in this stage of history.
Some of the culture of mathematics follows this trend (ZFC, NBG, Martin-Löf type theory, intuitionism), but programming languages especially follow this trend. We are always in search of new frameworks in which all programs can be expressed: structured programming (Dijkstra), object-oriented, functional. There is leakage, but programmers tend to throw out old models with cracks and adopt new ones as if there will be no more cracks. There seems to be a swing toward the functional recently — dare we imagine that it has any cracks?
At this point I suspect some of you are already developing your indignant responses. As programmers, we are entrenched in modernist ideals. I seem to be criticizing this cycle, and surely if I am criticizing then something must replace it. Whatever replaces it is going to be just another cycle. Indeed, as I indulge in my postmodernistic ramblings (which I assure you I have no confidence in — it is only the second week), in the back of my mind I keep imagining a language which transcends these problems. But of course it does not, because it is itself just another fundamental system. The key for me, and all of us trapped in this mode of thought, is merely to observe without trying to replace it with something better, something new.
Another possible response is that this cycle of reinvention is the only way progress can be made. That is a very good point, but in it we are taking for granted the assumption that progress exists and is desirable.
Programming is a difficult thing to talk about in this context, because it is a medium that we use to create things, and we can (at least roughly) measure how good we are at creating things in various frameworks. Being good at creating things is an implicit goal for a programming system, which comes out of the idea that creating things is progress and progress is good. Mathematics may have a nature that will make this exploration clearer, and then we can perhaps take our realizations about mathematics and apply them back to programming.
You might say that mathematics is about progress. After all, the ultimate pursuit of mathematics is to prove as many theorems as possible; or so those who study mathematical frameworks would have you believe. I have never been enamored with math as a competition to prove theorems; I like it because it alters (not “improves”) the way I think. I replace constructs with which I used to represent ideas with new ones. I used to think of real numbers as “fractions with extra stuff”, now I think of them as programs incrementally refining information. It has led me to think more topologically and less algebraically. Is more topologically better? I doubt it; it is just more recent. It permits seeing analogies that we could not previously see, but at the same time it hides analogies that we could previously see (though we do not notice that half, because we have already developed our intuitions about those using the previous framework). Mathematicians of tomorrow may only think topologically, and they will find more analogies by “discovering” algebraic thinking.
Programming is not so different. I do love functional programming, but that is because I have taken a lot of time to develop those ideas. There are object oriented programmers who are very talented at expressing ideas in that language, and I cannot understand their designs (or, perhaps more accurately, why their designs should be considered better than many nearby designs). Good ol’ procedural structured programming is still a good way to communicate to a computer how to do something fast. As the future drives forward, the past is erased; when it is forgotten its niche will re-emerge and it will be re-invented.
In this post, I will give an intuition behind the ridiculous theorem that all Goodstein sequences eventually reach zero. I call it ridiculous because the construction of a Goodstein sequence is so strange that it’s surprising anything can be said about it at all, and because it accelerates upwards so quickly that it is hard to imagine it ever coming back down. But after a little exploration, we will see what is happening with the structure of the numbers, and be able to picture an algorithm for computing how long it should take to come back down. Of course, the really interesting thing about Goodstein’s theorem is that it is equivalent to the consistency of Peano Arithmetic (and thus PA cannot prove it). I won’t give an intuition for this part, because I don’t fully understand it yet.
To define a Goodstein sequence, we need to define hereditary base-n notation, which I will denote H-n. To write a number in H-n, first write it in base n, and then (recursively) write all the exponents in H-n. So for example, to write 18 in hereditary base 2, we do:
18 = 2^4 + 2^1 = 2^(2^2) + 2^1 = 2^(2^(2^1)) + 2^1
To generalize to larger bases, we rewrite any exponent that is greater than or equal to the base.
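That recursion is short enough to write out directly. Here is a small Python sketch (the caret-and-parenthesis rendering is just one choice of output format):

```python
def hereditary(n, b):
    """Render n in hereditary base-b notation as a string."""
    if n == 0:
        return "0"
    terms, i = [], 0
    while n > 0:
        n, d = divmod(n, b)   # base-b digit for exponent i
        if d:
            coeff = "" if d == 1 else f"{d} * "
            term = str(d) if i == 0 else f"{coeff}{b}^({hereditary(i, b)})"
            terms.append(term)
        i += 1
    return " + ".join(reversed(terms))   # largest exponent first

print(hereditary(18, 2))  # 2^(2^(2^(1))) + 2^(1)
print(hereditary(26, 3))  # 2 * 3^(2) + 2 * 3^(1) + 2
```

Note that the exponents, not the coefficients, are what get rewritten recursively: the coefficients are already less than the base.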
Now we can define a Goodstein sequence starting at n. First, write n in H-2. Now, change all the 2’s to 3’s in that representation, compute the number, subtract 1, and rewrite in H-3. Change all the 3’s to 4’s, compute the number, subtract 1, and rewrite in H-4. Continue forever or until you reach zero.
Let’s do an example. Let G be the Goodstein sequence starting at 4. I will start the sequence at index 2, so that the index in the sequence is the same as the base.
G(2) = 4 = 2^2
G(3) = 3^3 - 1 = 27 - 1 = 26 = 2 * 3^2 + 2 * 3^1 + 2
G(4) = 2 * 4^2 + 2 * 4^1 + 2 - 1 = 32 + 8 + 2 - 1 = 41 = 2 * 4^2 + 2 * 4^1 + 1
G(5) = 2 * 5^2 + 2 * 5^1 + 1 - 1 = 50 + 10 + 1 - 1 = 60 = 2 * 5^2 + 2 * 5^1
G(6) = 2 * 6^2 + 2 * 6^1 - 1 = 72 + 12 - 1 = 83 = 2 * 6^2 + 6^1 + 5
G(7) = 2 * 7^2 + 7^1 + 5 - 1 = 98 + 7 + 5 - 1 = 109 = 2 * 7^2 + 7^1 + 4
...
G(11) = 2 * 11^2 + 11^1 - 1 = 242 + 11 - 1 = 252 = 2 * 11^2 + 10
...
G(21) = 2 * 21^2 - 1 = 882 - 1 = 881 = 21^2 + 20 * 21^1 + 20
...
G(41) = 41^2 + 20 * 41^1 - 1 = 1681 + 820 - 1 = 2500 = 41^2 + 19 * 41^1 + 40
And so on. So the sequence goes 4, 26, 41, 60, 83, 109, … (A056193). Those who have followed this example closely may already be seeing why this sequence will eventually terminate.
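The whole procedure fits in a few lines of Python (a sketch, not an efficient implementation; `bump` rewrites a number in hereditary base-b and replaces every b with b+1):

```python
def bump(n, b):
    """Write n in hereditary base-b notation, then replace every b with b+1."""
    if n == 0:
        return 0
    result, i = 0, 0
    while n > 0:
        n, d = divmod(n, b)                  # base-b digit for exponent i
        result += d * (b + 1) ** bump(i, b)  # the exponent itself is rewritten in H-b
        i += 1
    return result

def goodstein(n, steps):
    """Return the first `steps` terms of the Goodstein sequence starting at n."""
    terms, b = [], 2
    for _ in range(steps):
        terms.append(n)
        if n == 0:
            break
        n = bump(n, b) - 1
        b += 1
    return terms

print(goodstein(4, 6))  # [4, 26, 41, 60, 83, 109]
```

Running it reproduces the terms worked out above. (Don’t ask it for the full sequence starting at 4: it reaches zero, but only after an astronomically long time.)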
Let’s introduce a device to understand what is happening to the structure of these numbers. At each step, most of the H-n structure stays the same, the base is just increasing. So let’s write each step with a symbol, say ω, instead of the increasing variable. We will see what to do with the fringes in a bit. I will be essentially copying the last representation from above. This example would begin:
G(2) = ω^ω
G(3) = 2 * ω^2 + 2 * ω^1 + 2
G(4) = 2 * ω^2 + 2 * ω^1 + 1
G(5) = 2 * ω^2 + 2 * ω^1
G(6) = 2 * ω^2 + ω + 5
G(7) = 2 * ω^2 + ω + 4
...
G(10) = 2 * ω^2 + ω
G(11) = 2 * ω^2 + 10
Ah! Now the sequence looks much more regular! At each step we are simply subtracting 1, and if there is an ω at the right, we are replacing it with a finite number. The number we replace it with keeps growing with the index of the sequence, but each time it is finite, which “deconstructs” the structure a little bit. So even though the numbers are growing, the structure of the H-n representation will never be more “complex” than it was in the first step. I haven’t defined complex; for the sophisticates, it is the ordering on Cantor normal forms of ordinal notations.
We haven’t yet seen what happens with limits that are more complex than ω. For example, at some point, the G sequence will reach:
G(n) = n^2
G(n+1) = (n+1)^2 - 1 = n * (n+1) + n
In our ω-notation:
G(n) = ω^2
G(n+1) = n * ω + n
And indeed the structure has become a little simpler. I’m struggling to describe it in words, but I hope these examples have demonstrated how this sequence is always “decreasing in structure”, which is really all there is to Goodstein’s theorem.
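For the sophisticates, the “decreasing structure” can be made precise with a self-contained sketch (again, all names are my own): represent Cantor normal forms as nested sums of ω^e * c terms, map each step’s hereditary base-b representation to such an ordinal by sending b to ω, and observe that the resulting ordinals strictly decrease even as the numbers grow.

```haskell
-- A Cantor normal form below ε0: a sum of terms ω^e * c, where the
-- exponents e are themselves ordinals, kept in strictly decreasing
-- order of exponent. With that invariant, the derived lexicographic
-- Ord on the term list agrees with the ordinal ordering.
newtype Ord0 = Ord0 [(Ord0, Integer)]  -- [(exponent, coefficient)]
  deriving (Eq, Ord, Show)

ilog :: Integer -> Integer -> Integer
ilog b n = if n < b then 0 else 1 + ilog b (n `div` b)

-- toOrd b n: the ordinal of n's hereditary base-b representation,
-- with the base b replaced by ω everywhere.
toOrd :: Integer -> Integer -> Ord0
toOrd _ 0 = Ord0 []
toOrd b n = Ord0 ((toOrd b e, c) : rest)
  where
    e = ilog b n
    c = n `div` (b ^ e)
    Ord0 rest = toOrd b (n `mod` (b ^ e))

-- The Goodstein step: rebase, then subtract one.
bump :: Integer -> Integer -> Integer -> Integer
bump _ _  0 = 0
bump b b' n = c * b' ^ bump b b' e + bump b b' r
  where
    e = ilog b n
    c = n `div` (b ^ e)
    r = n `mod` (b ^ e)

goodstein :: Integer -> [Integer]
goodstein = go 2
  where
    go _ 0 = [0]
    go b n = n : go (b + 1) (bump b (b + 1) n - 1)

-- The ordinals attached to the first steps of G strictly decrease;
-- well-ordering says such a descent cannot continue forever.
ords :: [Ord0]
ords = zipWith toOrd [2 ..] (take 10 (goodstein 4))

strictlyDecreasing :: Bool
strictlyDecreasing = and (zipWith (>) ords (tail ords))
```

The first two ordinals are ω^ω and 2ω² + 2ω + 2, just as in the ω-notation table above, and the comparison confirms the descent.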
A few technical notes: The H-2 representations of all possible starting numbers correspond to the Cantor normal forms, which are ordinal notations built from finitely iterated exponentiations of ω. So for Goodstein’s theorem to work, we need to know that the Cantor normal forms are indeed well-ordered, which is equivalent to ε0 (the supremum of all the Cantor normal forms; the limit of ω, ω^ω, ω^ω^ω, …) being well-ordered. There is a hint of the deep connection to ε0; I wish I could say more about it.
In order to keep this blog from shriveling up and dying, here’s an entry, even though I don’t know what it’s about.
My thoughts have been consumed, after a brief detour following my viewing of the excellent Feynman quantum electrodynamics lectures, by CodeCatalog and its consequences. Even though there have not been visible changes on the site, we have stuff in the oven. The biggest change in direction came when we decided to support the process of coding, rather than just the result, on the site. This has a lot of consequences for our model, so it is taking a while to follow the ripples, but we also get to support wicked stuff like this:
- Automatically searching for snippets when you use an undeclared identifier (building an immediate vocabulary)
- Providing codepad-style sample evaluation, with the ability to extract a sample evaluation into a test with a click.
- Showing documentation for functions you are using and suggesting useful snippets based on correlations to other users (distant future).
I got sidetracked for a little while developing jsexp, an in-browser structural editor (i.e., one that works with code as a syntax tree rather than as text). I had some bullshit reasons by which I convinced myself it was a good idea, but really I just wanted to make a structural editor. After that proved a greater challenge than my justification-utopia had laid out, I had to abandon ship and refocus on what we’re really trying to do. Thanks to Jude for kicking me in the butt and making me step back — er, I guess it would be forward. :-)
So we’re aiming to make it a nice place to edit code — not because it’s the world’s greatest editor (although we are using CodeMirror which is decently spiffy), but because the world’s greatest standard library will be loyally awaiting your command with no hassle. That’s valuable to me, at least — every time I have to integrate a new library it breaks my flowstate.
My thoughts have taken me wandering around this general area, returning to my catch-phrase and the opportunity of software organization it presents. How shall we organize an infinitely large (in the sense that it is open to expansion in all directions) codebase? This codebase can’t turn to mush as it gets big because its goal depends on it being big; how do we finagle that?
Anyway, up to my usual antics of long walks talking to myself, whiteboarding, staring at the ceiling, and occasionally not forgetting to water my blog.
This StackOverflow question has generated a buzz of zealous reactions in the Haskell community. Here are the important bits of the question:
I often find this pattern in Haskell code:

    options :: MVar OptionRecord
    options = unsafePerformIO $ newEmptyMVar

    doSomething :: Foo -> Bar
    doSomething = unsafePerformIO $ do
        opt <- readMVar options
        doSomething'
      where ...
Basically, one has a record of options or something similar, that is initially set at the programs beginning. As the programmer is lazy, he don’t wants to carry the options record all over the program. … Now each part of the program has to use unsafePerformIO again, just to extract the options.
In my opinion, such a variable is considered pragmatically pure (don’t beat me). …
In this post I will give my own zealous reaction.
To ask a question like this assumes something about the nature of software. The assumption is hiding in these phrases: all over the program, each part of the program. Here, the poster assumes that a program is a large, monolithic beast such that every part of it will need access to this variable, and yet the definition of this variable is not known to the programmer. That means that the program depends on this value. If we purely model the structure of the above example program, we see that every function depends on OptionRecord. So we have (taking the context of a compiler):
    parse :: OptionRecord -> String -> AST
    compile :: OptionRecord -> AST -> LinkerObject
    simplify :: OptionRecord -> AST -> AST
    freeVars :: OptionRecord -> AST -> Set Variable
    safeName :: OptionRecord -> Set Variable -> Variable -> Variable
These are perhaps not the cleanest signatures for parse, compile, and simplify, but they are conceivable in the real world. There is some junk — surely not all three of those functions depend on every option of OptionRecord. It would be cleaner to declare that they depend on exactly the things they actually depend on.
But the problem becomes much more unsettling at freeVars. freeVars takes an OptionRecord — staying true to the original problem description, it must, because it or a function it calls may end up depending on options. But what on earth could a global OptionRecord determine about a free-variables function? Perhaps there are multiple ways to find free variables — do we count type variables, which scoping mechanism do we use — but those are not global options. Different callers will require different behaviors out of that function depending on what they are doing.
We even get such pathologies as shortestPath :: OptionRecord -> Graph -> Node -> Node -> [Node] — a plain, simple, reusable graph algorithm which somehow depends on this global options record. We have no way of telling the compiler — or, more importantly, ourselves — that this algorithm really has nothing to do with the specific compiler we are implementing. Somewhere deep in shortestPath’s call chain, there is a call to some function which calls an error function which depends on one of the options. Suddenly this beautiful, well-defined function is not reusable. To take it out and use it in another project means to include OptionRecord in that project, and OptionRecord has things about compiler and type-system extensions, configuration files, who-knows-what — certainly nothing to do with graphs. Sure, we can go and dig out the OptionRecord and replace it with a record more suited to the program we are reusing the code in. But then you have to read, understand, and mutate code that you just want to work, please, so you can get on with your project. We have all suffered the head-throbbing pain of integration problems. This is their source.
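A toy sketch of the way out (all names here are hypothetical, not from the original question): instead of threading the whole record through a generic function, pass exactly the value it consumes. Then only the code that loads the options ever mentions the record.

```haskell
-- Hypothetical options record with one field our algorithm reads.
data OptionRecord = OptionRecord { optLimit :: Int, optTarget :: String }

-- Entangled: a generic numeric fold that drags the whole record along
-- just because a helper deep inside reads one field.
sumCapped :: OptionRecord -> [Int] -> Int
sumCapped opts = sum . map (min (optLimit opts))

-- Disentangled: the signature states exactly what the function depends
-- on. It is reusable in any project, with no OptionRecord in sight; a
-- caller that has loaded the options simply passes optLimit in.
sumCapped' :: Int -> [Int] -> Int
sumCapped' limit = sum . map (min limit)
```

Both compute the same thing; the difference is only in what the type signature claims the function needs, and therefore in where it can be reused.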
When I think of software as thousands of lines of specification for something, my mind jumps to problems like the original question. How am I going to write something so huge purely without it being really inconvenient? I see the need for global options, often global state, things ending with Manager (often a global trying to convince you it is a good abstraction), big systems talking about big ideas which are only applicable to my project.
But I have begun to think about software another way. Consider 100 lines. That is potentially a lot of information. The only reason 100 lines is not very much in the code world is because we lack the vocabulary to say what we mean. We are caught up in the details of manipulating lists of identifiers, building sorting trees, defining what we mean by “first” in this or that context. Could you describe your project in 100 lines of English? Perhaps not, but you could get a lot closer than with 100 lines of code.
I’m beginning to think that my latest greatest software project should be as small as possible. I need to build up vocabulary so I can describe it in a small space, but that vocabulary is not a part of my project. That vocabulary belongs to everyone in the same-ish domain of software. And nobody else cares about the meaning of OptionRecord.
When I think of software this way, the idea that I need to pass around an OptionsRecord as a parameter to every function in my project starts to make sense. Every part of my project depends on its options, but my project is just a little thing that is standing on the shoulders of the giants of vocabulary used to get it there. I don’t mind passing options around between a few functions after I load them.
This is an ideal. I hope to move software closer to this ideal with my CodeCatalog project. Your current project probably cannot be phrased in a few hundred lines right now. But think about what it would look like if you structured it as a small main program + a lot of supporting vocabulary. Think of the supporting vocabulary as something any project could use. What does that mean for the modularity, reusability, and adaptability of your code? What does it mean for the clarity of your specification? What does it mean about the concept of “architecture”?
This is going to be a short, idea-jot post. I have been reading up a storm about sociocracy. The Wikipedia article does not really do it justice. It is a very interesting system of governance based on cybernetic principles that claims to achieve decisions that are in the interests of many more people than a voting democracy does. See We the People: Consenting to a Deeper Democracy if you are interested in more details. It does not need to be universal; it can spring up in specialized circles, and those can gradually “link together” to connect everyone. These are all desirable properties.
But I am no genius, so I cannot foresee all the degenerate cases that might come up, and I am picturing some possible degenerate cases that may never come up in practice. That’s why I want to give it a trial run. So I want to start a game in which we organize sociocratically in a toy country with toy issues, to see how the dynamics play out. You could play a game where you get to be a political leader! It would be nice to have as many people as possible. And it doesn’t matter if not everyone cares — in fact, in real life most people do not get involved in political decisions, so perhaps a healthy dose of apathy would be good for the realism of the game.
If you are interested in this experiment, join the announce group so you can know when we’re ready to start. Also feel free to share ideas in the comments.