Lessons from the MUD
March 28, 2017
Anonymity and technology mix badly. While most countries require you to pass a test of skill before driving a car, our most dangerous tool, and even in the US firearm owners are licensed and their identities recorded in many states, any fool can appear on Twitter or Facebook under a fictional or falsified identity and act abusively towards the cyborgs they encounter there. However, eliminating anonymity by forcing the use of public identities is a heavy-handed solution that would almost certainly prove insufficient to eliminate the problem, as Brian Green has carefully outlined. But there are lessons to be learned from earlier digital public spaces that offered anonymity yet suffered far less abuse, and these lessons put a different slant on the problem.
The Multi-User Dungeons, or MUDs, began as spaces for creative adventures, greatly conditioned by the play of the pivotal tabletop role-playing game Dungeons & Dragons. These imaginary worlds were made entirely of databases of text, arranged to create the impression of connected rooms and spaces, within which players could move around and encounter one another. Players would join a MUD using network protocols from the early days of the internet (typically telnet), creating an account with a name that would become their identity in the shared space of the game world. The MUDs went on to provide the basis for graphical games such as EverQuest and World of Warcraft, which achieved tremendous commercial success.
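To make this concrete, here is a minimal sketch in Python of how such a text world might be stored: rooms as records of descriptive text, linked by named exits. The room names, the `move` command, and the `world` table are illustrative inventions, not drawn from any particular MUD codebase.

```python
# A minimal sketch of a MUD-style text world: rooms as text records,
# linked by named exits. All names here are illustrative, not taken
# from any real MUD's code.

from dataclasses import dataclass, field

@dataclass
class Room:
    name: str
    description: str
    exits: dict[str, str] = field(default_factory=dict)  # direction -> room key

# The 'database of text' is just a mapping from room keys to Room records.
world = {
    "tavern": Room("The Prancing Toad Tavern",
                   "A smoky common room. The village square lies east.",
                   {"east": "square"}),
    "square": Room("Village Square",
                   "A cobbled square. The tavern lies west.",
                   {"west": "tavern"}),
}

def move(current: str, direction: str) -> str:
    """Follow an exit if one exists in that direction; otherwise stay put."""
    room = world[current]
    if direction in room.exits:
        current = room.exits[direction]
        print(world[current].description)
    else:
        print("You can't go that way.")
    return current

location = "tavern"
location = move(location, "east")  # prints the square's description
```

Everything a player 'saw' was text retrieved from records like these, which is part of what made such worlds so cheap to build and extend.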
A player coming to a MUD for the first time was likely to have been invited by someone else, and as such was not strictly alone. Nonetheless, players typically entered the text world as individuals, and since players connected at different times they were often (if not always) alone. New players were unknown to the existing players, so there was always an element of uncertainty about the arrival of someone new. Even so, the community surrounding each MUD, typically a few hundred players, generally welcomed newcomers, and an air of hospitality prevailed in most MUD communities. Abusive players, then as in the larger digital spaces today, were a minority, and would quickly come into conflict with the more responsible players, who would report them to the administrators, typically titled Wizards.
The Wizard system provided legislative, judicial, and executive power within the MUD. While the first Wizards were those who set up the software and provided the hardware to run the game, many MUDs used a democratic system to elect additional Wizards, who worked as a collective to maintain order and community. Legislative acts concerned the code of conduct, and thus set the boundaries of acceptable behaviour; such matters were always resolved by the Wizards working together, and generally involved consulting the wider community of players as well. Judicial and executive power was expressed by taking action against troublemakers: in many MUDs, miscreants could be 'toaded', which reduced a character to a powerless amphibian. Wizards would hold tribunals to determine whether any such punishment was just. Although I have heard of some instances of 'corrupt Wizards', my own experience showed the Wizard system to be highly effective at minimising abuse in MUDs.
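Purely by way of illustration, and assuming nothing about any real MUD's implementation, toading can be sketched as a privileged command: a Wizard flag gates who may invoke it, and its effect is to strip the target of their standing in the world. The names `Character` and `toad` and the messages below are hypothetical.

```python
# An illustrative sketch (not any actual MUD's code) of 'toading' as a
# Wizard-only command that strips a character of their powers.

from dataclasses import dataclass

@dataclass
class Character:
    name: str
    is_wizard: bool = False
    toaded: bool = False

def toad(actor: Character, target: Character) -> str:
    """Executive power: only Wizards may toad, and Wizards are judged by peers."""
    if not actor.is_wizard:
        return "Only Wizards wield that power."
    if target.is_wizard:
        return "Wizards may only be judged by their fellow Wizards."
    target.toaded = True
    return f"{target.name} is reduced to a powerless amphibian."

merlin = Character("Merlin", is_wizard=True)
griefer = Character("Mordred")
print(toad(griefer, merlin))  # refused: the actor holds no wizardly power
print(toad(merlin, griefer))  # the sanction takes effect
```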
While on the surface MUDs were play spaces, in practice the division between game and communication system blurred, especially since MUDs provided one of the first networked text communication systems that, unlike the telegram, required no manual delivery. As such, many attracted a community of 'players' using them solely as a chat interface. These were the original chatrooms, since players would typically congregate in a room of the MUD's fictional world to engage in conversation. This occasionally caused tension with other members of the community who were using the game differently, but for the most part it was a fact of life in MUDs that some people were there solely to chat, and the facilities for doing so were expanded in MUD codebases as the 1990s progressed.
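The mechanics behind this were simple, as the following hedged sketch suggests: a 'say' command needs nothing more than a record of who is present in a room, relaying each message to everyone else standing there. The `occupants` table and the `print` stand-in for a network socket are illustrative assumptions, not any particular MUD's design.

```python
# A sketch of why a MUD room doubled as a chatroom: 'say' simply relays
# a line of text to everyone whose character stands in the same room.

from collections import defaultdict

# room key -> set of connected player names (stand-ins for network sessions)
occupants: defaultdict[str, set[str]] = defaultdict(set)
occupants["tavern"].update({"Alice", "Bob", "Carol"})

def say(speaker: str, room: str, message: str) -> None:
    """Broadcast a message to every other player in the speaker's room."""
    for player in occupants[room]:
        if player != speaker:
            # In a real MUD this would be written to that player's socket.
            print(f"[to {player}] {speaker} says: {message}")

say("Alice", "tavern", "Anyone up for the dungeon tonight?")
```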
The MUD was the direct precursor to Facebook and Twitter, which descend from earlier imitations of the chatroom concept, such as AOL's offering, which lacked the fictional world but kept the name. Yet abuse in MUDs was comparatively rare, and rapidly resolved by Wizards whenever it occurred. Anonymity may still have fostered abuse, but MUDs had systems in place both to guard against it and to discourage it from happening in the first place. The most effective deterrent against online abuse is community, and the MUDs fostered this far more than the latest digital public spaces do.
Thus while a new MUD player might arrive alone and unknown, they were never unguarded, both in the sense of being protected from the abuse of others and of being watched for signs of abusive conduct. Conversely, a 'tweep' (as a user of Twitter is colloquially termed) is alone, unknown, and essentially unguarded, and these are the optimal conditions for abuse to fester. Twitter has an abuse reporting system, but it is distant and bureaucratic, with no community to handle warnings and reports, and no community-engaged Wizards to act as magistrates.
Here we have three different senses of 'anonymous', all of which contribute to cyber-disdain, and thus to a greater risk of cyber-cruelty. To be alone in a digital public space is to lack a community; crucially, 'follows' and 'friends' do not mark the authentic social bonds of a community relationship but merely open communication channels. To be unknown is to be anonymous in the sense of having a concealed identity, a situation that fosters abuse if it is not offset by community relations. Lastly, to be unguarded marks an invisibility to the systems of justice within a digital public space, a situation worsened by being unknown, and by being alone.
Thus Facebook's requirement to use conventional identities (to eliminate being unknown) is insufficient to stop abuse, both because its users are mostly alone and unguarded, and because the size of its membership means that in random encounters cyborgs are still effectively unknown to each other. This is the fertile soil in which abusive behaviour online grows: as cybernetic networks increase in scale, community breaks down, since humans can maintain viable communities at a scale of hundreds, never at a scale of billions. Two Facebook users, even with public identities, are effectively unknowable to each other, and little short of managing encounters in ways most would find intolerable can solve this problem. Guarding against problematic behaviour is more tractable when there is a village-scale community to engage, respond, and react, while at planetary scale even robot-assisted magistrates are rendered impotent by the sheer scope of the network.
Anonymity is the root of online abuse, but there are at least three senses of this term that matter. We tend to focus on unknown anonymity, and thus miss the importance of alone anonymity and unguarded anonymity. My emphasis on being alone may seem misplaced. For instance, in his discussion of the problems of anonymity, Brian reports that “people in groups are more likely to transgress.” I agree with this claim, even though this may seem to run counter to my suggestion that alone anonymity is a key part of the problem. However, Brian’s point concerns ‘mob mentality’, and a mob is not a community in any relevant sense. Indeed, precisely what makes a mob dangerous is that people are alone together when they are a part of it – and this anonymity of the crowd (which also operates fairly innocently in audiences for musicians and so forth) becomes dangerous when the people concerned are also unknown and unguarded, as happens all the time in our digital public spaces.
When Sherry Turkle raises concerns about the way we are alone together online, she is not talking about the mob as such, but her work emphasises the same concern: the undermining of authentic community by the current design features of the new communication systems. Yet different designs produce different results. It is notable that blogs, which assign executive power to the blog owner alone (and thus come close to being unguarded), and are ambiguous on the question of being unknown (since it is the blog owner's choice how they identify), still manage to be less of a locus for abuse than the large-scale digital public spaces, because bloggers are never alone. Forums can tolerate contributions that are alone and unknown because they are not unguarded, thanks to the presence of moderators, who can work effectively because the scale of the network of contributors is manageable. When a moderator 'knows that such-and-such is a troublemaker', it means that particular cyborg is not anonymous in the sense of being unguarded. Different solutions to fostering cyber-respect (and minimising cyber-cruelty) hinge upon these different senses of anonymity.
What does not work – indeed, cannot work – is expecting our robots to plug the gap caused by scaling networks beyond human capacity to form a viable community. Abuse will remain endemic on Facebook and Twitter for as long as their cyborg participants can be functionally alone, effectively unknown, and inadequately guarded. If there are solutions to this problem, and it is not clear that there are, the most important lessons to learn are those revealed by the stories of the MUDs, the pioneering digital public spaces, the first cyborg communities of their kind.
With grateful thanks to Peter Crowther, both for feedback on this piece, and for running UglyMUG, a game that changed both his life and my own. Also, my thanks to Brian Green for his outstanding summary of the relationship between privacy and anonymity, which this piece responds to.