Imagine for a moment that you’ve been dropped into some random spot in the Himalayas. Your goal is to reach the top of Everest. If you just keep going up at every opportunity, will you reach it? Probably not. Ignoring for a moment all the issues of food, oxygen, and human strength, there’s another problem: you probably landed closer to some other mountain. If you insist on always going up, you’ll probably ascend that mountain instead, with Everest still towering above you and no way to reach it without going down first. You will have reached what your freshman math class called a “local maximum.” Since any optimization problem can be viewed as finding a path through the graph defined by some function, this same hill-climbing behavior, and this same potential to reach a dead end, occurs in many other fields, from the technical to the political.
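The trap is easy to demonstrate in code. Below is a minimal sketch (the terrain function and peak positions are invented purely for the illustration): greedy ascent always finds a peak, but only the peak nearest its starting point.

```python
# A bumpy 1-D "mountain range" invented for illustration: three peaks,
# with the global maximum at x = 60.
PEAKS = {20: 5, 60: 10, 90: 4}  # position -> height

def height(x):
    # The terrain at x is the tallest peak's slope reaching that point.
    return max(h - 0.2 * abs(x - p) for p, h in PEAKS.items())

def hill_climb(x):
    """Greedily step toward higher ground until no neighbor is higher."""
    while True:
        best = max((x - 1, x, x + 1), key=height)
        if best == x:
            return x  # a local maximum: every neighbor is downhill
        x = best

print(hill_climb(15))  # strands you on the minor peak at x = 20
print(hill_climb(55))  # only a nearby start reaches the global peak at 60
```

Nothing about the algorithm changes between the two runs; only the starting point does, and that alone decides whether the summit reached is the summit.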

Another concept similar to a local maximum comes from game theory – a “Nash equilibrium” is a set of strategies, one per player, from which no player can improve their own outcome by changing only their own strategy. To show how broadly applicable such concepts are, consider the article that brought all of this to mind for me – “Removing Roads and Traffic Lights Speeds Urban Travel.”

Using hypothetical and real-world road networks, they explain that drivers seeking the shortest route to a given destination eventually reach what is known as the Nash equilibrium, in which no single driver can do any better by changing his or her strategy unilaterally. The problem is that the Nash equilibrium is less efficient than the equilibrium reached when drivers act unselfishly—that is, when they coordinate their movements to benefit the entire group.

The solution hinges on Braess’s paradox, Gastner says. “Because selfish drivers optimize a wrong function, they can be led to a better solution if you remove some of the network links,” he explains. Why? In part because closing roads makes it more difficult for individual drivers to choose the best (and most selfish) route. In the Boston example, Gastner’s team found that six possible road closures, including parts of Charles and Main streets, would reduce the delay under the selfish-driving scenario. (The street closures would not slow drivers if they were behaving unselfishly.)
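Braess’s paradox can be verified with nothing more than arithmetic. The numbers below are the classic textbook network, not figures from Gastner’s Boston study:

```python
# Classic Braess network: 4000 drivers travel Start -> End via A or via B.
# Start->A and B->End are congestible: they take (cars / 100) minutes.
# A->End and Start->B are wide roads: a flat 45 minutes regardless of load.
DRIVERS = 4000

# Without a shortcut, the equilibrium splits traffic evenly, and each
# route costs the same:
per_route = DRIVERS / 2
before = per_route / 100 + 45  # 20 + 45 = 65 minutes

# Now add a free shortcut A -> B. For any individual driver,
# Start->A->B->End is never slower than either old route, so every selfish
# driver takes it, piling all 4000 cars onto both congestible links:
after = DRIVERS / 100 + 0 + DRIVERS / 100  # 40 + 0 + 40 = 80 minutes

print(before, after)  # 65.0 80.0 -- adding a road slowed everyone down
```

Running the logic in reverse is exactly the article’s point: deleting the shortcut removes the selfishly irresistible route and drops everyone’s commute from 80 minutes back to 65.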

The game-theory version of this problem is a bit more like a maze than a mountain climb, with certain directions precluded by the self-interest criterion rather than by a limited field of view, but the potential for a hill-climbing approach to lead to a dead end is essentially the same. Anybody who actually paid attention in that freshman math class would know that more complex functions tend to have more local maxima (and minima). Since few functions are as complex as those describing economic behavior, the graph of any economic function tends to be especially fraught with dead ends. This is why so much of economics is BS. A statement that “going in this direction leads upward” might be true in a purely local context cherry-picked by the person making it, yet utterly useless in the part of the graph that actually reflects reality. In particular, understanding these freshman-level mathematical concepts blows a huge hole in the claim that individuals pursuing self-interest will produce a globally optimal outcome such as widespread economic prosperity.

To illustrate this divergence between selfishness and optimality, I invented a little game called the “2-4-6” game. Two players each roll a die a fixed number of times, adding each roll to their score, but with two twists.

  1. If the last digit of a player’s current score is 2, 4, or 6, only a roll of exactly that number will increase their score; any other roll adds nothing, leaving them stuck.
  2. A player who is not stuck (either before or after rolling) may “donate” one point from their roll to a stuck player, getting them unstuck. The donation can never cause the donor to become stuck, even if it leaves the donor with a score ending in 2, 4, or 6.

The important dynamic here is that donating always involves both an absolute and a relative loss for the donor, compared to not donating. The donor loses one point, and the recipient gains at least two – going from an effective roll of zero to their original roll plus one. This also means that a donation always increases the two players’ combined score. I’ve even written a Python script to simulate the results, both when players choose to help each other and when they decline (as the “selfish is good” contingent would claim is best). The result is that players in the selfish variant usually score about 125 over 100 rolls, whereas players in the altruistic variant usually score about 140. The most important difference is this: even the “losers” in the altruistic variant do better than the “winners” in the selfish one.
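A sketch of such a simulation, under one reading of the rules: rule 2’s protection is modeled as a one-roll immunity for the donor, and a donation makes the recipient’s otherwise-blocked roll count, plus the donated point. Different readings of the edge cases will shift the exact averages.

```python
import random

def trap(score):
    """Return the roll needed to escape if the score ends in 2, 4, or 6."""
    d = score % 10
    return d if d in (2, 4, 6) else None

def play(altruistic, rolls=100, rng=None):
    """Play one 2-4-6 game and return the two players' final scores."""
    rng = rng or random.Random()
    score = [0, 0]
    immune = [False, False]  # rule 2: a donor cannot be stuck by donating
    for _ in range(rolls):
        roll = [rng.randint(1, 6), rng.randint(1, 6)]
        gain = [0, 0]
        for i in (0, 1):
            need = trap(score[i])
            if immune[i] or need is None or roll[i] == need:
                gain[i] = roll[i]  # the roll counts toward the score
            immune[i] = False      # immunity lasts for a single roll
        if altruistic:
            stuck = [gain[0] == 0, gain[1] == 0]
            for i in (0, 1):
                j = 1 - i
                if stuck[j] and not stuck[i]:
                    gain[i] -= 1           # donor gives up one point
                    gain[j] = roll[j] + 1  # recipient's roll counts, plus one
                    immune[i] = True       # and the donor cannot get stuck
        score[0] += gain[0]
        score[1] += gain[1]
    return score

# Compare the two variants over many games with matched dice:
for label, mode in (("selfish", False), ("altruistic", True)):
    games = [play(mode, rng=random.Random(seed)) for seed in range(500)]
    avg = sum(sum(g) for g in games) / (2 * len(games))
    print(label, round(avg, 1))
```

However the edge cases are resolved, the qualitative result holds: the altruistic variant’s average score is consistently higher, because a one-point donation spares the recipient an expected half-dozen wasted rolls.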

The moral, of course, is that it’s always important to consider not only the local effects of a decision, but also what kind of system a multitude of similar decisions will create. Whether the subject is throughput in a congested computer network or GDP in a national economy, a system in which individual actors sometimes “sacrifice” immediate self-interest for the sake of creating or maintaining a better system often yields better results for everyone, including those making the so-called sacrifice.