I’ve watched cliques form online for quite a few years – on Usenet, on LambdaMOO, in various web-based message boards, on IRC, even on M-Net a long time ago. The processes seem remarkably similar, so I thought I’d collect some of my thoughts on the subject for future reference. Although this stuff is relevant to both (small group) sociology and game theory, I’ll be keeping it pretty informal.
Whenever an “outsider” comments on or complains about a clique online, it’s treated like a conspiracy theory. What I’d like to point out right away is that clique formation does not require any conspiracy, or even collusion, but occurs in a fully decentralized way. The central observation here is as follows:
The only thing that is necessary for a clique to form is the existence of a small set of people less willing to criticize each other than to criticize others.
Note that the above represents a minimal requirement for clique formation. Cliques often share many other features, but those can be considered incidental rather than necessary. Two elaborations of the above are particularly important:
- The size of a group is most usefully measured not by its membership count, but by its members' combined activity level. In many online communities there are vast disparities in activity levels, so an effective clique can form with very few actual members.
- The conflict avoidance within the clique need not be based on friendship or fellow feeling (though it often is). It could just as easily be mere mutual respect, or even mutual fear and wariness.
The “accidental” cliques formed solely by the above factors are very weak indeed. However, once they form, other mechanisms serve to strengthen them. For example, imagine that Alex and Barbara are both members of such a loose clique. If Alex and Barbara disagree, they do so with a certain amount of restraint. However, if Alex disagrees with Charlie, who is outside the group, things might be less restrained. In the natural way of the Internet, Alex and Charlie’s disagreement might well escalate into an all-out flame war. At the same time, Barbara might have her own issues with Charlie and be expressing them just as openly. Barbara might even take the opportunity to express dislike for Charlie by pointing out what a jerk he must be for picking fights with Alex; such shows of support strengthen the bonds between clique members while actively excluding non-members.
Over time, Alex and Barbara might notice that their shared interactions are less confrontational than their separate interactions with others. If both experience frequent conflict with “obnoxious newbies,” their shared interaction might be significantly more positive (or less negative) than average, making them even more kindly disposed toward one another. Furthermore, Alex might notice that Dawn and Eugene, who are also clique members, never seem to be in conflict with Barbara. Because Alex has a certain amount of respect for Dawn and Eugene, their opinion of Barbara matters, and the effect is to make Barbara – in Alex’s eyes – even more a member of what is quickly congealing into a community. Taboos against aggression within the tribe run deep. Lastly, if there are many “barbarians at the gate” – and on the Internet there usually are – clique/community members might gain favor by “manning the walls” and helping to expel unwanted newcomers (thereby exhibiting behavior that would in another context be considered highly undesirable).
Over time, the boundaries between clique and non-clique become self-reinforcing. Members’ actions increase a sense of belonging to a community, which in turn constrains or guides members’ actions, and so on. What’s interesting is that the clique members might not even realize or admit their own roles in the process, and in fact accusations of cliquishness are often met with hot denial. “If you only knew…” say the members, unaware that their own reluctance to criticize clique members’ behavior – however motivated – contributes organically to the clique’s formation and maintenance. There is no forest, only trees.
At a certain level, this all reduces to game theory, and specifically to the Iterated Prisoner’s Dilemma, or IPD. A common observation in the IPD is that groups of cooperating agents within an “ecology” of varying strategies will often come to dominate the landscape. If one were to graph interactions, using blue to represent “cooperation” interactions (acceptance, encouragement, sharing) and red to represent “defection” (exclusion, attack), one would generally see more blue clumps forming than red. This is because cooperation is an attractive force that encourages clumping, while defection is a repulsive force that encourages dispersal. Cliques are, in a way, the darker side of cooperation.
Another observation from the IPD is that “Tit For Tat” or TFT is a nearly-optimal strategy. This strategy consists of two elements:
- In the first interaction with an individual, cooperate.
- In subsequent interactions, do what they did last time.
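The two rules above are simple enough to fit in a few lines. As a sketch (the function name and the `"C"`/`"D"` move encoding are my own conventions, not anything standard):

```python
def tit_for_tat(opponent_history):
    """Decide the next move given the opponent's past moves.

    opponent_history: list of the opponent's previous moves,
    "C" for cooperate or "D" for defect.
    """
    if not opponent_history:
        return "C"  # first interaction: cooperate
    return opponent_history[-1]  # otherwise, mirror their last move
```

Note that the function only ever looks at `opponent_history[-1]` – this is the “memory free” property discussed below.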
TFT seems to strike an eerily effective balance between rewarding cooperation and punishing defection. Perhaps more interestingly, it’s “memory free”: it remembers nothing beyond the last interaction, and therefore embodies no notion of history or reputation. “Opportunistic” strategies that try to take advantage of TFT’s “naivete” generally fail, almost invariably once a “critical mass” of TFT agents is cooperating among themselves.
The lessons for online communities should be clear:
- A policy of greeting newcomers with hostility (“defect first” in IPD terms) is sub-optimal.
- Over-reliance on history or reputation to guide one’s reactions is also sub-optimal.
In short, then: cliques and their associated double standards are bad. Participants in online communities, even those who don’t consider themselves to be members of a clique, should be wary of behaviors that contribute to clique formation and maintenance. In particular, they should be ready to give newcomers the benefit of the doubt, and to challenge clique members for their misbehavior, without reference to past history. The current behavior, not the person and their history, should be central.
Some might argue that ignoring reputation is a bad idea, that sophisticated reputation-management and “web of trust” models have been developed and shown to work. My response to that is twofold:
- Those models work for computer systems, but we’re dealing here with people. They’re different.
- A little knowledge is dangerous. While a sophisticated reputation model might indeed work better than the simple suggestions given above, a naive reputation model is what created the clique problem in the first place. Sophisticated models are usually too complicated for people to apply them consistently (see previous point), so we might well be better off without reputation than with a broken model for it.