
Sociocracy Game

This is going to be a short, idea-jot post. I have been reading up a storm about sociocracy. The Wikipedia article does not really do it justice. It is a very interesting system of governance based on cybernetic principles that claims to reach decisions serving the interests of far more people than a voting democracy does. See We the People: Consenting to a Deeper Democracy if you are interested in more details. It does not need to be universal; it can spring up in specialized circles, and those can gradually “link together” to connect everyone. These are all desirable properties.

But I am no genius, so I cannot foresee all the degenerate cases that might come up, and I am picturing some possible degenerate cases that may never come up in practice. That’s why I want to give it a trial run. So I want to start a game in which we organize sociocratically in a toy country with toy issues to see how the dynamics play out. You could play a game where you get to be a political leader! It would be nice to have as many people as possible. And it doesn’t matter whether everyone cares — in fact, in real life, most people do not get involved in political decisions, so perhaps a healthy dose of apathy would be good for the realism of the game.

If you are interested in this experiment, join the announce group so you can know when we’re ready to start. Also feel free to share ideas in the comments.


The essence of metastrategy

A recent post on Less Wrong, Levels of Action, reminded me of a game I created whose dynamics I wanted to explore. I still have not explored them in great depth, but I thought the game would be interesting to the nerdy community that reads my blog.

The idea came after playing Castle Wars 2. In that game you try to build your castle as tall as possible while keeping your opponent’s castle as short as possible. The basic game dynamic is an action/meta-action trade-off: (oversimplifying) you can play a card to gain 10 bricks, or you can play a card to gain one brick per turn for the rest of the game. I was surprised by the amount of subtlety derived from such a simple dynamic, and I recommend the game to anyone wanting to kill an hour. It’s not the best game ever, but it’s not as trivial as it first seems.

I wondered what would happen if I removed the cards, the weapons, the defense from that game and replaced them with more levels of this same dynamic. Here’s what I came up with.

You can play it with a chessboard and poker chips (my old game design standby). Only two of the board’s rows are used, one per player. Each player plays on a side of the board and has eight squares, which we will label, from left to right, 1 to 8. Each square can have up to eight chips in it. The goal of the game is to get eight chips in the eighth square. Here is how play proceeds:

On your turn, place a single chip in any of your squares. Your opponent does the same. Before each turn, “cancel out” any chips that both players have on corresponding squares. That is, if you have 4 chips on the 5th square and your opponent has 5 chips on the 5th square, remove 4 chips from both, so that you have none and your opponent has one. Then (still before your turn) duplicate each square into the next higher position, truncating each square at 8 chips. So before this action, if your eight squares had these values:

0 0 1 4 5 2 0 3

Then after this action, the state of your board will be:

0 0 1 5 8 7 2 3

Another way to think about it is that you slide a copy of your board one position to the right and add (then truncate).

Then place your new chip, and your opponent takes his turn. That’s it. The first player to eight in the eighth square wins.
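
To make the update concrete, here is a minimal Haskell sketch of the two bookkeeping steps; the names and the list representation are my own, not part of the game’s definition:

```haskell
-- A board is a list of eight chip counts, square 1 first.
type Board = [Int]

-- The "slide, add, truncate" step: each square duplicates into the next
-- higher square, and every square is capped at eight chips.
grow :: Board -> Board
grow b = map (min 8) (zipWith (+) b (0 : b))

-- The cancellation step before a turn: chips on corresponding squares
-- wipe each other out one for one.
cancel :: Board -> Board -> (Board, Board)
cancel mine theirs = (zipWith sub mine theirs, zipWith sub theirs mine)
  where sub a b = max 0 (a - b)
```

For the example above, grow [0,0,1,4,5,2,0,3] gives [0,0,1,5,8,7,2,3].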

Despite this game’s simplicity, I have been unable to devise a good strategy for it. The strategy seems to revolve around estimating the length of the game. If you know how many turns the game will last, it is fairly easy to determine how to play optimally. But the length of the game is not so easy to predict.

Try it out, think about it. Let me know if you discover anything.


If you’re going to have upgrades in your game, don’t be so dumb about it

Every week, I find myself spending some time checking out the newest interesting-looking flash games on Kongregate. Most of them end up not being so interesting, but every once in a while I come across a well-done innovative game (Level Up! and Little Stars for Little Wars are two that seem to be memorable at the moment). But I also get a chance to see the same games made over and over again in different skins.

You know, I don’t have that much of a problem with the same game being made over and over. It’s like speciation: the game randomly varies its little choices to find what works best. What annoys me is when I see a bad decision repeated again and again, for which the only reason I can think of is convention. Today I will discuss the top such bad decision on my list: upgrades.

Upgrade systems are becoming quite ubiquitous among flash games. Play the game for a while and accumulate points, then spend the points on upgrades to make yourself more powerful. Upgrading yourself and feeling your power increasing is a great feeling, and providing a choice of which upgrades to buy allows the game to (ostensibly) diverge into many different games with unique dynamics, depending on the player’s choices. However, there is a common design that stifles this divergence.

Let me explain with a typical example. You start with a ship that has everything in level I. Between battles, you may level up one of three tracks — Guns, Engines, or Shields — according to the cost matrix below.

Guns:    I: 5 damage        | II: 10 damage (+100%), $100      | III: 15 damage (+50%), $500        | IV: 20 damage (+33%), $2500
Engines: I: 1 m/s           | II: 2 m/s (+100%), $200          | III: 3 m/s (+50%), $500            | IV: 4 m/s (+33%), $1000
Shields: I: absorb 2 damage | II: absorb 3 damage (+50%), $400 | III: absorb 4 damage (+33%), $1600 | IV: absorb 5 damage (+25%), $6400

I have added a percentage, roughly estimating how much more badass you will be after acquiring each upgrade. For independent gameplay elements, upgrades usually multiply your strength by a factor (rather than adding a constant): if you make yourself shoot three times as fast and also shoot three times as many bullets, then you will be nine times more powerful. In this post, I am addressing the particular way the prices are distributed. Cost always seems to increase geometrically along each track. Sometimes, as is the case here, returns (expressed as a factor of increase) will also diminish along a track, and sometimes they will stay constant, but they seldom increase.

So let’s say I’m the kinda guy who likes tanks: slow with lots of firepower. In my first two turns I have invested $600 into firepower, and now here are my choices:

Upgrade                    Cost per % increase
Guns IV, +33%, $2500       $75
Engines II, +100%, $200    $2
Shields II, +50%, $400     $8

Comparing the cost/benefit ratio in the second column, engines appear four times better than shields, and shields appear roughly nine times better than guns. Guns are a terrible investment, and will remain so until engines and shields are also at level III. This upgrade system only enables multiple styles of gameplay for people who are terrible with their money (a.k.a. people who are playing far below optimally) — shrewd investors will notice that the right strategy is (roughly) to get everything to level II, then everything to level III, then everything to level IV: getting uniformly stronger throughout the game, rather than specializing toward one style.
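
To see how strongly the numbers push toward uniform leveling, here is a rough Haskell sketch of the “shrewd investor”: it just buys whichever available upgrade has the lowest cost per percent. The data layout and names are mine; the numbers are from the example cost matrix above.

```haskell
import Data.List (minimumBy)
import Data.Ord (comparing)

-- For each track: (cost, % increase) for levels II, III, IV, in order.
tracks :: [(String, [(Double, Double)])]
tracks =
  [ ("Guns",    [(100, 100), (500,  50), (2500, 33)])
  , ("Engines", [(200, 100), (500,  50), (1000, 33)])
  , ("Shields", [(400,  50), (1600, 33), (6400, 25)]) ]

-- Greedy purchase order: always take the cheapest cost per percent.
purchaseOrder :: [(String, [(Double, Double)])] -> [String]
purchaseOrder ts
  | all (null . snd) ts = []
  | otherwise           = best : purchaseOrder ts'
  where
    avail = [ (name, cost / pct) | (name, (cost, pct) : _) <- ts ]
    best  = fst (minimumBy (comparing snd) avail)
    ts'   = [ (n, if n == best then drop 1 ls else ls) | (n, ls) <- ts ]
```

Running purchaseOrder tracks buys Guns II, Engines II, and Shields II before touching any level-III upgrade; specialization never comes out ahead on this first-order analysis.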

I can imagine this convention emerging from deciding that upgrades are bought with points before considering the consequences. After that decision has already been made, the designer observes that as you get more upgrades, it becomes easier to earn the same number of points. Therefore, upgrades had better get more expensive over the course of the game; otherwise it will either be way too long before you get your first upgrade, or you will finish them all off too fast.

Another way of phrasing that is: to achieve an upgrade system that encourages diversity and specialization, you have to discard the assumption that upgrades are purchased with points (or somehow design around an increasing rate of point gathering, but that’s no fun because people like to see ever bigger numbers). As an alternative, I suggest separating scored points from upgrade points. Award upgrade points by time (say, one at the end of each level), or by the inverse of the typical point-acquiring growth rate. For example, let’s say each upgrade adds 1 to the multiplier. Every point the player gets is multiplied by the multiplier first and then added to his score. So, assuming for simplicity that he gets one point in between each upgrade, his score as a function of the number n of guys he kills will be \sum_{i=1}^{n} i = n(n+1)/2 = O(n^2), and you should award upgrade points at quadratically increasing scores1, say 100, 400, 900, 1600, 2500, 3600, ….
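
Here is a tiny Haskell sketch of that arithmetic, under the same simplification (one kill between each upgrade, each kill worth one base point times the current multiplier); the function names are mine:

```haskell
-- Score after n kills when the multiplier is 1 on the first kill, 2 on the
-- second, and so on: 1 + 2 + ... + n.
scoreAfterKills :: Int -> Int
scoreAfterKills n = n * (n + 1) `div` 2            -- O(n^2)

-- So upgrade points should be handed out at quadratically growing scores.
upgradeThresholds :: [Int]
upgradeThresholds = [ 100 * k * k | k <- [1 ..] ]  -- 100, 400, 900, 1600, 2500, ...
```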

After you have this corrected upgrade points currency, assign each upgrade a cost in the same order of magnitude, no matter how deep it is in the tree. Then the player will have a bunch of choices which are all approximately equal according to the first-order cost/benefit analysis, and to pick the right one, he has to think about (gasp!) what they actually do!


1 I have a beef with Geometry Wars 2, which uses the aforementioned multiplier system, because extra lives and bonuses are awarded at exponentially increasing scores. The ratio of an exponential to a quadratic is still exponential, so it gets exponentially harder (essentially impossible after a point) to earn bonuses as the game goes on. I have an inkling that the designers did not realize this, probably due to a common incorrect perception that any superlinear function is “exponential”.

New Year’s Resolutions: Produce, Believe

I bring you two personal experimental hypotheses for 2010.

I am a Haskell module author. Constituting my released modules are those ideas which resisted the least when I opened the text editor. But two of them, data-memocombinators and graphics-drawingcombinators, have gained some popularity, and I feel rewarded for having written them.

Most of my ideas functionalize pieces of Haskell programming that are currently imperatively minded, as you can see with the two aforementioned. But FRP, a particularly greedy such idea, has been stealing my tuits away from the others. I have envisaged functional command lines, package management, event handling, persistence, testing, and probably more that presently slip my mind. This brings me to my first new year’s resolution for 2010: Produce! It’s time to start coding these up instead of letting one very hard problem block the solution of moderately hard ones. Kicking off the year, I rewrote graphics-drawingcombinators to have a stronger semantic foundation, becoming version 1.0.0 (thanks Peaker and sinelaw for egging me on!).

My second resolution addresses an irrational fear I noticed a few days ago. The story begins with Hubris Arts, the game studio my friend Max and I started in July. We released our first game in November. We are in development on our second (codename “R4”) and prototyping our third (codename “Evo”). All of our development so far has been in C# with XNA, for a familiar reason: it was the path of least resistance. But as I prototype Evo I see all my beloved functional design patterns leaping out at me from between the lines of imperative blasphemy. So far the implementation seems a great fit for Haskell, but I am uneasy. Some part of me doesn’t believe that Haskell can do it. Haskell’s role in my life so far has been that of a language for beautiful koans and interesting experiments, but not for Real Software. But why not? My only evidence is my own fear.

Thus the second resolution: Believe in Haskell! I have decided to do Evo in Haskell. It is only by tackling Real Software that a language matures — it may not be ready now, but by doing it anyway, we will make it ready. If we can do it, that will be a wonderful success for the language, perhaps permanently parting us from C# and its von Neumann clutches. If we can’t, at least we’ll have reasons why not — ideas for things to improve — instead of unsupported uneasiness and lack of confidence.

Gravmari Released!

After fifteen weeks, Hubris Arts has completed its first game, Gravmari, directed by Max Rebuschatis.

The game is a journey through the depths of space in a strange and alien ship called the Playertoid. It was inspired by Katamari Damacy, Kubrick’s 2001, the efforts of NASA, and, most importantly, dreams of floating through the cosmos. We wanted to convey the feeling of infinity, both in zoom and scope, and wonder at the mysteries of the stellar wasteland.

Best played with the lights off, with headphones (or a nice sound system) and an Xbox 360 controller. Mouse and lights on also supported.

Collision Detection with Enneatrees

Many of my games boil down to, at some level, a large collection of circles (or spheres) interacting with each other. I use circles for collision detection, and otherwise for organization whenever I can, because the results look more natural than with boxes. If you organize using boxes, your simulation will tend to “align” to the axes, which nature surely does not do.

So naturally, detecting when two of these circles are touching each other is one of the larger computational problems that my games face. And bewilderingly, I have not found a good solution anywhere that covers circles of a wide variety of sizes. That is what this article is about.

If your circles are all about the same size, and in particular fairly small in comparison to the needed accuracy of the algorithm, the obvious and de-facto solution is to use a quadtree. That is, start with the whole screen and divide it up into 4 regions, and do so recursively until you have a nice spatial arrangement of your objects, categorizing objects by which region their center lies in.

When you have a circle and want to see what other circles are touching it, you just look for other objects in the smallest region. Well, except you are not a point, you are a circle, so you might penetrate the edge of the region, in which case you need to look at the adjacent region as well. Well, except other circles might penetrate the edges of their regions, so you actually need to look at all adjacent regions. Well, except you might be really big, so you need to look in all the regions that you touch. Well, except other objects might be really big, so… you need to look in all the regions.

A quadtree has not bought us anything if circles can be arbitrary sizes; collision detection is still linear. You could say that the number of very big circles is usually small (because they would collide away their competition), so you can put an upper bound on the size of circles in the tree and then just do a linear check on the big ones. But that is an awful hack with a tuning parameter, and it doesn’t generalize. Shouldn’t the right solution work without hacks like that? (For the fellow game developers in my audience: despite your instincts, the answer is yes. :-)

I have worked with quadtrees before, and saw this problem a mile away for my game with circles of many different sizes. The first variation I tried was to put circles into regions only if they fit entirely. Circles on the boundaries would be added to the lowest branch that they could fit into. Then to check collisions on an object, you look at where it is in the tree, check all subregions, and then check all superregions (but not the superregions’ subregions). You will get small things in your vicinity on your way down, and large things on your way up. And it’s still logarithmic time… right?

Wrong. Suppose that every object in the world lay on the horizontal and vertical center lines. Even very small objects would accumulate in the top node, and no region subdividing would ever occur. This degenerate situation causes linear behavior, even though just by shifting the center of the region a little bit you would get a sane subdivision. It’s not just a “let’s hope that doesn’t happen” thing: there are a lot of bad situations near it, too. Any object lying on those center lines will be checked when any other object is testing for its collisions, because it is on the way up. This did, in fact, become a major issue in my 10,000 particle simulation.

But if we could somehow salvage the original idea of this solution without the degenerate case, then we would get logarithmic behavior (when the field size is large relative to the radii of the particles).

To understand this solution, I would like you to turn your attention to one dimensional simulations. Here the previous solution would be subdividing the real line into a binary tree.

But the new solution divides intervals into not two, not four, but three child intervals. However, not in thirds like you would expect: (0,1) is divided into (0,1/2), (1/4,3/4), and (1/2,1) — the intervals overlap. Any “circle” (in 1-D, so it’s another interval) with diameter less than 1/4 must fit into at least one of these subregions: if it fits in neither the left nor the right half, it must straddle the point 1/2, and anything that small straddling 1/2 lies inside (1/4,3/4).

We use that property to organize the circles so that their size corresponds to their depth in the tree. If you know the diameter of the circle, you know exactly how deep it will be. So now when we are looking downward, we really are looking for small circles, and when we are looking upward we are looking for large ones, and there is no possibility of an accumulation of small irrelevant circles above us that are really far away.

However, on your way up, you also have to check back down again. If you are in the left portion of the (1/4,3/4) interval, you might intersect something from the right portion of the (0,1/2) interval, so you have to descend into it. But you can prune your search on the way down, cutting out “most” (all but a logarithm) of it. For example, you don’t need to check the interval (1/16,1/8) and its children even though you are checking its parent (0, 1/2) — it is too far away.

When you are deciding which region to put a circle in and you have a choice, i.e. the circle fits in multiple subregions, always pick the center one. When the circles are in motion, this gets rid of the degenerate case where a circle bounces back and forth across a deep, expensive boundary.
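
Here is a rough 1-D Haskell sketch of the structure described above; the 2-D enneatree alternates this split horizontally and vertically. The representation and names are mine, and a “circle” is a (center, radius) pair with positive radius:

```haskell
-- The three overlapping children of a region (lo, hi):
-- left half, middle half, right half.
children :: (Double, Double) -> [(Double, Double)]
children (lo, hi) = [(lo, mid), (lo + q, hi - q), (mid, hi)]
  where q   = (hi - lo) / 4
        mid = (lo + hi) / 2

-- A circle fits in a region when it lies entirely inside it.
fits :: (Double, Double) -> (Double, Double) -> Bool
fits (c, r) (lo, hi) = lo <= c - r && c + r <= hi

-- The depth at which a circle is stored: descend while it still fits in some
-- child, preferring the middle child to avoid bouncing across a deep boundary.
-- Region widths halve with depth, so the depth is essentially a function of
-- the circle's diameter.
storageDepth :: (Double, Double) -> (Double, Double) -> Int
storageDepth circle region =
  case [ ch | ch <- middleFirst (children region), circle `fits` ch ] of
    (ch : _) -> 1 + storageDepth circle ch
    []       -> 0
  where middleFirst [l, m, r] = [m, l, r]
        middleFirst xs        = xs
```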

If you generalize to 2D, you end up cutting a region into 9 subregions (thus the name enneatree). The way I did it was to alternate cutting horizontally and vertically, so that I didn’t have to have 9-way if-else chains. That had more subtleties than I had expected. I have yet to find a straightforward, elegant implementation.

The algorithm seems to work well — I am able to support 5,000 densely packed circles in C# without optimizations.

The inspiration for this algorithm comes from the field of computable real numbers (infinite-precision — not merely arbitrary-precision — numbers). You run into trouble if you try to represent computable reals as infinite streams of bits, because some interactions might be “non-local”; i.e. you might have to look infinitely far down a stream to find out whether a bit should be 0 or 1. Those guys solve the problem in the same way I did, by dividing intervals into 3 overlapping subintervals and using this 3-symbol system as their number representation.

I see a glimmer of a connection between the two problems, but I can’t see it formally. Maybe it would become clearer if I considered infinite collections of infinitesimal objects needing collision detection.

New game: “SpaceBattle”

Over the last week, I took a break from Dana and wrote a Geometry Wars-style action game. The main idea the game explores is virtuosity. Essentially, there is so much to control that it takes our puny human brains a long time to comprehend it all; the game is engineered to have a very long learning curve.

[Screenshot: spacebattle1]

SpaceBattle.zip | SpaceBattle.ccgame (I don’t know which one of these will work — you might have to have XNA game studio installed for one or both of them…)

Here are some examples of gameplay dynamics:

  • “Bomb enemies” (more precisely, enemies with a bomb component) explode in a few units radius, and if you are nearby, you die. On the other hand, any other enemies nearby also die, and because you control the spawn location of new enemies (they always spawn opposite the direction you are shooting), you can use this to your advantage.
  • “Strong enemies” take three hits to kill, so generally you don’t want them around. However, if you have the prism gun, then the strong enemies refract and split your bullets, making many more of them. So there is incentive to keep the strong enemies around to use to your advantage if you pick up a prism gun.
  • Usually there is too much going on to keep it all under control. So you can shoot “time rings” (ripped off from Braid) which slow down the world around them, to keep the part you are not currently dealing with from evolving too fast. Using it effectively requires foresight and a good idea of what’s going on on the entire screen, in addition to the “micro dynamics” of shooting and dodging.

Each of the enemies’ sound effects layer together rhythmically to create the music for the game.

I’m pretty happy with this game. It is fun and addictive, and like Geometry Wars it has an affinity for inducing flow states and causing you to do things you didn’t know you were capable of. Be warned though — this game is not for the faint of heart: it is very difficult. If you feel like devoting a few hours to a game to learn it well enough not to die in the first 30 seconds, this is the game for you :-). That is, after all, the point. (Geometry Wars players will have an easier time, because the basic mechanic is the same, so they can start on the second layer.) My high score is 2715; consider that a challenge to beat! :-)

Best played with an Xbox 360 controller. There is a keyboard and mouse mode (keys: wasd, 123, spacebar), but I have not playtested that mode very much, because I’m developing on a laptop with a touchpad. Gross. To use the keyboard and mouse mode, you need to enable that control scheme in the options.

The game was developed in C# using Microsoft’s XNA game toolkit.

Braid!

I just had my mind blown by the trial of Braid, by Jonathan Blow, which just came out on Xbox Live Arcade. This is the most interesting puzzle game I have played in many years. It’s a platformer about playing with time, and it incorporates this very effectively to allow clever solutions to puzzles which seem impossible. Not very many games get my money these days, but this one does!

Bravo!

Making Twilight Imperium a bit less random

Twilight Imperium is a marvelous board game introduced to gamedev by Hagan Koopman (“King Koopman?”) some time ago. It is ridiculously complicated, has a 60 page dense rule book, and takes 15 hours to play if everybody already knows how to play. I used my roommates’ TI parties as an excuse to go on hikes with Jenn.

But after months of my roommates tirelessly raving about it, my interest was piqued and I played the next two games (you need one game to “get it”… quite a learning investment). And it is excellent. It plays like the strategy of every game you’ve ever played mixed into one: economics, politics, military, alliances, short-term deals, bidding, deception, pedantry. The complex social networks that emerge are something to behold.

But I have a problem with it: the combat sucks. You spend all this money and effort over many turns to build a massive army and decide to invade someone else’s space. And it comes down to dice rolls. Hundreds of dice rolls. There are some decisions to be made, but mostly they are trivial and can be described with an algorithm in half a page. You might as well be playing on a computer and say “resolve!” and it says “Luke wins!”. Combat! In a space warfare game!

Tonight at gamedev we did a tactical strategy game jam, and our group (Travis Offtermatt, Jude, and I) attempted to create an alternative that was more strategic. After some playtests, I think the game is quite a success. Fast-paced (by design, lest we yet lengthen this hideously long game), enjoyable, balanced, and quite in harmony with the outcomes of the dice if both players are playing at the same level (namely: noobs, since we just invented it).

It is played on a chess board, or any grid for that matter, with the TI combat pieces: Fighters, cruisers, destroyers, dreadnaughts, carriers, and war suns. Players begin on opposite sides of the board, and arbitrarily configure as many pieces as they like from their fleet in the two rows closest to them. Players take turns moving a single piece and then shooting with it if possible. After each turn a player may place more pieces from his fleet in the closest two rows.

Each piece has a “movement” attribute, which is the number of spaces that can be moved in a single turn. Movement is horizontal and vertical, not diagonal, not ever through a space that another piece is on, and backtracking is allowed. Furthermore, a piece is facing in the direction that it last moved; i.e. opposite the direction it came from. Pieces can only change orientation by moving, there is no in-place rotation.

Each weapon also has a range, which is the distance it can shoot. For range-1 weapons, the target has to be directly horizontally or vertically adjacent, and also in the correct orientation. Things generalize naturally to longer ranges: the target has to be in the same line as the orientation of the weapon (which, for dreadnaughts for example, is not always the same as that of the ship). For example, fighters can only shoot in the direction they are facing; they cannot move and then shoot in a different direction than they were moving.

Finally, war suns and dreadnaughts have an additional hit point. When you shoot one of these, it is turned on its side and continues to fight as normal. If you shoot one that is already on its side, then it dies. All the other pieces die immediately when shot. Shots can fire through friendly pieces without doing harm, but must stop and detonate at the first enemy piece on their path.

The characteristics of each of the pieces follow:

  • Fighter: movement 2; weapon: forward, range 1; 1 hit point.
  • War Sun: movement 1; weapons: 3 shots, each to any space in the surrounding 5×5 square; 2 hit points.
  • Cruiser: movement 3; weapon: forward, range 1, and it can shoot (once) at any time during its motion, not just after moving; 1 hit point.
  • Destroyer: movement 3; weapon: forward, range 1, or, once per battle (turn it on its side after this is done), an “anti-fighter barrage” that destroys all fighters in the 2×3 rectangle in front of the unit; 1 hit point.
  • Dreadnaught: movement 2; weapon: forward, range 2, or on both left and right simultaneously with range 2; 2 hit points.
  • Carrier: movement 1; weapon: forward, range 1; 1 hit point.

A few technical notes: since keeping units off the board can be a way to hide, if your unit gets to the other end of the board it may fire “off the edge” of the board to attack any of the pieces that are not yet in play. Further, each player may have at most 10 fighters on the board at a time, since that’s how many fighter pieces there are. The others must be kept off the board until some of the ones on the board die.

Enjoy your newly enhanced ludicrously long game!

Git Revolution

I’ve been learning about git, and have generally been excited about the distributed model. I mean, I used svk, but a robust distributed model is quite exciting. Tonight I had a completely crazy idea: to make a game around git.

I brainstormed and I brainstormed, many brains rained. And then it hit me: this is perfect for my interesting-concept-but-needs-work game, Revolution.

A slight refresher: Revolution is an open-ended strategy/role-playing board game; i.e. people have roles, and they can do things consistent with their roles and their resources, but are not restricted to strict game rules. Whether they are capable of doing something is determined by an impartial game master. We did a few play-tests, and it was interesting, but it was frustrating for many people. This was because the core game dynamics were not very rich (intentionally, I wanted the player-created content to create the richness), but also because players felt cheated by the GM because other players were getting advantages for “unfair” moves that they didn’t think of. A lot of balancing decisions had to be made by the GM in real-time, and it’s very easy to disagree with those if they work out in your opponent’s favor.

My current idea, which I hope is mature enough to play-test tomorrow, is to do away with the GM in favor of a GM-like game dynamic. And that game dynamic will be provided by git!

Here’s the idea: everybody gets a repository which has the state of the world in it. Everyone also has files somewhere describing their role (some kind of player-specific ability perhaps) and what resources they currently have. And then during a game round everyone makes changes and commits them with a description of why they were capable of making that change. During the game round people also share their repositories with each other, and people merge in changes as they see fit. A soft rule is that you shouldn’t merge changes which don’t make sense; i.e. if a farmer builds an international airport, nobody should merge that. But they could if they wanted, it’d just be weird.

So there are a bunch of worlds floating around which probably disagree with each other. What is the real state of the world? I think it’s just the most popular commits. So if more than half of the player branches have incorporated a commit, it gets merged into the “real world” branch. And the real world branch is what determines if people have accomplished their (hidden) goals to score points. I’m not sure the goal/point thing still makes sense, but that’s all I can think of for now.

Git should make this super easy. For ease of communication, I’ll probably just set up a central repository with one branch per player. Players are allowed to modify whatever the hell they want, so commit hooks are not necessary. The only thing I need is a script which finds the most popular commits and merges them.
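
As a sketch of that script, here is roughly what the popularity count could look like in Haskell, assuming we can obtain the set of commit ids contained in each player branch (for example via git rev-list). Everything here is a hypothetical illustration, not a finished tool:

```haskell
import qualified Data.Map.Strict as M
import           Data.Set        (Set)

-- Commits present in strictly more than half of the player branches;
-- these are the ones to merge into the "real world" branch.
popularCommits :: [Set String] -> [String]
popularCommits branches =
  [ commit | (commit, count) <- M.toList tally, 2 * count > length branches ]
  where
    tally :: M.Map String Int
    tally = M.unionsWith (+) [ M.fromSet (const 1) b | b <- branches ]
```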

All that’s left is the hard part: the actual game dynamics…