Extreme Protest

Where should we draw the line between protest and violence? Are forms of suicide acceptable means of protest? In short: how far should one be willing to go when protesting?

Let us begin at the least contentious act of extreme protest, and work our way to the most shocking. 

The hunger strike was considered a legitimate form of protest by Gandhi, and indeed he employed several during his time in prison. Although the idea that this action is non-violent is debatable, it remains one of the few forms of protest available to someone who has been imprisoned. The British authorities were anxious not to allow Gandhi to die while in jail, as it would reflect poorly upon them internationally, and create a martyr – and quite possibly provoke a retaliatory uprising.

The goal of a hunger strike or any other extreme act of protest is to draw attention to the cause – to get attention from the media (and hence the world) for what is being protested. In this respect, a hunger strike is an effective means of protest. 

More extreme is self-immolation – the act of setting oneself on fire and burning to death as protest. This is almost as extreme as protest gets. While in a hunger strike one is not committing suicide but endangering one’s life, in self-immolation one is trading one’s own life for the hope of publicity for a cause. In the short term it works: news services do report these horrifying events. But it is questionable whether the cost of a human life is worth a brief dose of attention, especially when less extreme measures (such as public nudity) can bring even more attention.

Consider the following instances of self-immolation: 

Norman Morrison burning himself to death outside the Pentagon in November 1965 to protest the Vietnam War. Roger Allen LaPorte doing the same one week later outside of the UN headquarters.

Jan Palach self-immolating in January 1969 in protest against the August 1968 Soviet invasion of Czechoslovakia.

Sándor Bauer protesting the Communist regime in Budapest in 1969.

Márton Moyses self-immolating in Brașov in February 1970 in protest against the Communist dictatorship in Romania.

And finally, Malachi Ritscher, a Chicago musician who self-immolated in November 2006 in order to protest the War in Iraq.

How many of these have you heard of? How many do you remember? Do you believe that these people helped their cause? Could they have done more for their cause if, instead of committing an act of suicide-as-protest, they had contributed in other ways?

The trouble with direct acts of suicide as protest is that while they are certainly shocking, they are not necessarily effective: people tend to see anyone willing to commit suicide in this way as mentally unbalanced, thus undercutting any value in bringing attention to the cause being pursued.

And if self-immolation is of questionable value as an act of protest, what of suicide bombings, and other forms of protest that involve not only suicide but murder? Here, the effectiveness of the action is not merely reduced; it becomes counter-productive: while committing murder certainly increases the news-worthiness of your protest (suicide bombings make the news tickers almost every day), it decreases any sympathy people may have for your cause.

Since the goal of extreme acts of protest is to draw attention to your cause, and it is desired that this attention bring more people into support of your position, neither self-immolation nor suicide bombing can be viewed as a wise course of action – and suicide bombing is certainly far worse: not only do you damage your cause, you entrench the culture of those you murder into deeper opposition.

Neither murder nor suicide (with the possible exception of the threat of suicide from a hunger strike while imprisoned) is an effective form of protest. Since they are not effective, they should never be conducted.

For Suyi.


Civil Disobedience (5): The Velvet Revolution

1989 was an incredible year. In January, the Soviet Union finally began to pull out of Afghanistan, and by February 15th they completed their withdrawal. On March 27th, the first free elections for the Soviet parliament took place. And perhaps most memorably of all, on November 9th, East Berlin opened the roadblocks and allowed its people to cross into West Germany unhindered for the first time in decades. The following day, ecstatic Germans began tearing down the Berlin Wall, signifying the end of the Cold War.

On Friday November 17th, riot police suppressed a peaceful demonstration by students in Prague, then the capital of Czechoslovakia. The students in question were members of the Socialist Union of Youth, and deeply opposed to the Communist Party’s oppressive leadership. However, they had been reluctant to demonstrate for fear of reprisals. The clash with the police ignited vehement opposition to the Communist rule of Czechoslovakia, and a series of strikes followed.

Protests in the streets grew daily: within days, crowds of 200,000 had swollen to half a million. The students went on strike, and were joined first by actors and theatre employees in Prague, then later by the whole of the nation. A two-hour general strike on Monday 27th November was supported by three quarters of the population. By the end of the year, Václav Havel – a former dissident – had been appointed President, and Communist rule of Czechoslovakia came to an end.

Known as the Velvet Revolution, probably in reference to the velvet ropes in the theatres where opposition groups met, it is also known in Slovakia as the Gentle Revolution. Although a complete success in achieving its goals through civil disobedience, it was not without its costs. In particular, the people of the newly freed nation were unable to agree upon a name, a situation which eventually resulted in the dissolution of Czechoslovakia, and the formation instead of two separate states – the Czech and Slovak Republics – in the so-called “Velvet divorce”.

The Velvet Revolution was not the only successful application of civil disobedience in 1989 – in fact, it was in many respects the culmination of a series of revolutions sometimes called the “Autumn of Nations” (in reference to the “Spring of Nations”, the series of revolutions in 1848). This remarkable period of history takes in the Solidarity movement in Poland (which began in 1980, but culminated in 1989), the Singing Revolution and the “living chain” in the Baltic states of Estonia, Lithuania and Latvia, and the bloodless revolution in Bulgaria that removed the Communist government there.

A decade later, further non-violent revolutions followed. In 2000, the Bulldozer Revolution in Serbia brought an end to the rule of the dictator Slobodan Milošević (a campaign of civil disobedience that began with, and was guided by, a student organisation called Otpor – many of whom were avid players of computer strategy games). Three years later, the Rose Revolution in Georgia peacefully overthrew Eduard Shevardnadze, and the year after that Ukraine rose up in the famous Orange Revolution, as Ukrainians refused to accept the results of an election that had been thoroughly corrupted by voter intimidation and fraud. Perhaps a million Ukrainians protested in Independence Square, in the centre of the Ukrainian capital of Kiev, despite the freezing December weather.

The effectiveness of the so-called “Colour Revolutions” of the early 2000s even spread to nations outside of Europe. Following the assassination of opposition leader Rafik Hariri in 2005, the people of Lebanon conducted organised civil disobedience to demand the withdrawal of Syrian troops, ending more than thirty years of occupation. This “Cedar Revolution” was the first non-violent revolution to be conducted in a Muslim nation.

Civil disobedience has gone from being a mere idea in the mind of Thoreau, to a powerful method for change forged by Gandhi and refined by Martin Luther King, Jr., to an unstoppable political tool – one that effectively stripped away the power and influence of the Communist party, contributed significantly to the fall of the Soviet Union, and offered the possibility of change further afield. When people want to live in a democratic state, civil disobedience provides the tools they need to make it happen.

There are, of course, situations in which civil disobedience would not work. An activist such as Gandhi living in Nazi Germany would have been swiftly eliminated. But this does not give us the answer to the more interesting question: could a person like Gandhi living in Germany in the 1920s and 1930s have prevented the rise of the Nazi party in the first place?

As a tool for change, non-violent resistance is superior to the alternative in a great many ways – not least of which is the capacity to earn the sympathy and support of the world for the goal being sought. Terrorist organisations employing violence and fear to pursue their agenda should seriously consider abandoning such extreme measures, tactics which turn the people of the world resolutely against their cause. Non-violent resistance is more effective than terrorism, and offers our best hope for lasting peace.

Next week, the final part: The Duty to Dissent


Freedom to Fail

Why must players beat every challenge laid before them to progress within a game? This structure suits many gamer hobbyists, but what about mass market players? Why should they jump through the same hoops as the hobbyists? In short: should we give players the freedom to fail?

The games industry, as we have discussed before, has built itself into its current state by making games which are effective at supplying the emotion of fiero – triumph over adversity. But, as we discussed before in Emotions of Play Revisited, not all players are suited to games of this kind. In fact, I hypothesise that a significant but unknown proportion of players feel disappointment (sadness) when they fail, rather than the anger that motivates the fiero-seeker to throw themselves against challenges until they achieve victory, and the endorphin hit they crave.

A recent study by Dr. Jesper Juul (not currently published – I’ll provide a link when it is) on “hardcore” gamers and failure demonstrates that these players seem to require failure as part of their enjoyment. This result lies well within our expectations – if the player wins without failure, there is little or no sense that a challenge has been overcome, and thus little or no fiero is provided. The study does not repeat the experiment with “casual” gamers, and thus it remains an open question whether this finding applies across the entire audience for games. My firm suspicion is that it does not, but at present I cannot prove this contention.

For several years now, my company and I have been experimenting with game structures that provide new ways to balance the needs of the fiero-seeking hobbyist against the less demanding requirements of a mass market audience. The principal structure we have been investigating is the fail-continue structure, which I talked about before in the context of Air Conflicts, and its semi-sequel Attack on Pearl Harbor. The idea here is that the game is organised in such a way that the player does not need to complete a sequence of challenges to progress; they need only reach a minimal degree of success in order to proceed.

In many respects, the fail-continue structure used in these games seems to be a success – no reviewer has marked the games down in this regard, and the need of obsessive-compulsive players to repeat a challenge until they are victorious is met by a “retry?” option. However, unlike the vast majority of games on the market, the player is not forced to retry until they succeed. They can, if they prefer, proceed with the game anyway, the cost of failure being represented in these games by the loss of a plane. (Players who crash constantly run out of planes, and have to retry the campaign, but they are never required to repeat a specific mission until they master it.)
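
To make the structure concrete, here is a minimal sketch in Python of a fail-continue campaign loop. All the names here are hypothetical illustrations of the structure, not code from Air Conflicts, and the success rate is an arbitrary stand-in:

    import random

    def play_mission(name):
        # Stub standing in for actually flying the mission (arbitrary success rate).
        return random.random() < 0.6

    def wants_retry():
        # Stub for the optional "retry?" prompt offered to fiero-seekers.
        return False

    def run_campaign(missions, planes=5):
        index, reserve = 0, planes
        while index < len(missions):
            if not play_mission(missions[index]):
                reserve -= 1                      # failure costs a plane...
                if reserve == 0:
                    index, reserve = 0, planes    # ...losing them all restarts the campaign
                    continue
                if wants_retry():                 # the player may retry, but is never forced to
                    continue
            index += 1                            # progress follows success and failure alike
        print("Campaign complete")

    run_campaign(["Mission 1", "Mission 2", "Mission 3"])

The key point of the sketch is that the only loop the player can get stuck in is the campaign as a whole, never an individual mission.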

However, publishers have resisted this application of fail-continue structure, in one case even claiming that a game using such a structure “wouldn’t even be a game” (despite expressly intending the game in question for a casual audience). We are even getting some resistance from our development partners. There is no real comprehension among people in the games industry as to why such a structure might be beneficial to a wider audience, yet I remain confident that there is a sound commercial benefit to approaches of this kind.

Platform games since Mario 64 have almost uniformly used a structure that requires less than perfect success to continue: checkpoints in game progress require the player to collect some, but not all, of the tokens available in the game space in order to continue (usually about 7 in 10, hence the “70% rule”). This structure was a huge step forward from earlier platform games, which generally required that challenges be completed in strict sequence, and I believe that a fail-continue structure has the potential to take this even further.
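
As a sketch, the rule amounts to nothing more than a threshold check (hypothetical names; the exact ratio varies from game to game):

    def checkpoint_open(tokens_collected, tokens_available, threshold=0.7):
        # The "70% rule": most, but not all, of the tokens are required.
        return tokens_collected >= threshold * tokens_available

    print(checkpoint_open(7, 10))  # True - the player may proceed
    print(checkpoint_open(5, 10))  # False - more tokens are needed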

For instance, in Air Conflicts, I believe one need only complete about a third of the missions in order to reach the end of each campaign – and ‘completion’ need not mean success; the player is also allowed to retreat from a battle in preference to being killed. Gamer hobbyists facing this structure play the game like they would any other: they want to succeed, they want to get fiero, so they push themselves against the challenge in the hope of winning. However, mass market players have an option in this structure they would never normally have – facing a challenge they cannot beat, they can just ignore it and carry on. A narrative structure (in this case, the story of World War II) links together the progression; it feels natural to most players that the war goes on, whether or not you succeeded in your assignment.

The only viable argument against fail-continue I have heard thus far is that it would allow a person to complete the game too quickly by just killing themselves. True enough – but such a player would have none of the experience of the play of the game, and would gain nothing from doing so. Furthermore, it is possible to set gateway targets for progression between sections of play such that the player does need to reach some degree of performance to continue (as happens in Air Conflicts) or, perhaps even better, such that the next section unlocks either from success or from the passage of time (when you have played the previous section for more than two hours, you automatically unlock the next section).
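
Such a dual gateway could be expressed as simply as this (hypothetical names again; the two-hour gate is just the figure from the example above):

    def section_unlocked(challenge_beaten, hours_played, time_gate=2.0):
        # Unlock on success *or* once enough time has been invested,
        # so no single challenge can permanently block progress.
        return challenge_beaten or hours_played >= time_gate

    print(section_unlocked(True, 0.5))   # True - unlocked by success
    print(section_unlocked(False, 2.5))  # True - unlocked by play time
    print(section_unlocked(False, 1.0))  # False - keep playing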

Videogames in their current form train the player to perform certain tasks through repetition and rewards for success, in a manner similar to the way one trains an animal to perform. The player is expected to complete the task set, to learn the skills the game is trying to teach. The idea that the game might be organised in such a way as to allow the player to get out of this enforced training – to avoid being made to jump through hoops – is alien to most gamer hobbyists, trapped as they are in the idea of videogames being entirely about the powerful drug-like hit of victory in the face of struggle.

Yet the mass market isn’t about fiero and hoop jumping. If we look at, for instance, Bejewelled, it is an error to say this is simply a remake of Tetris Attack – the older game was built very much like a conventional gamer hobbyist experience, while Bejewelled has been simplified to the point of triviality in the eyes of most hobbyists. It remains compulsive and fun to mass market players because what this audience is looking for is very different indeed. Consider also the way the Brain Age games (which, let's not forget, have sold more than 8 million units each) unlock new content based on the number of days the player has been playing, irrespective of their degree of performance.

If games for the hobbyists are designed as training programs for beating specific challenge sequences, games for the mass market need a different paradigm, quite possibly many different paradigms. On the one hand, we have the simple compulsiveness of PopCap’s titles, including Bejewelled, and also the more traditional puzzle designs of something like Brain Age. But I believe there is another paradigm that can be gainfully applied: that of the amusement park.

When someone goes to an amusement park, they expect to be entertained; to be delivered fun experiences with zero challenge beyond the patience to wait in line, and the willingness to face one’s fears in the case of certain rollercoaster rides. The global amusement park industry is worth about $25 billion, making it quite similar in size to the videogames industry (worth around $32 billion and still growing). I believe that if the games industry wishes to reach deeper into the mass market than it currently does (the casual games industry is currently worth only about $2 billion), it needs to look to the amusement park industry for guidance. Combining amusement park sensibilities with compelling virtual worlds (and a little narrative glue to hold it all together) could uncover a whole new market for videogames.

Freedom to fail means allowing players the opportunity to continue in a game whether or not they can beat all the challenges. This can happen in many different ways – through further use of the “70% rule”, or the reduction of the margin to 50% or less; in the use of fail-continue structures requiring only a minimal level of performance in each stage to progress; in the use of game structures which unlock both in response to the completion of challenges, and also in return for the player’s investment of time (a model already deployed on a small scale in Tetsuya Mizuguchi’s Rez).

Perhaps freedom to fail doesn’t take things far enough – perhaps there are players out there who want freedom from failure, who, like the patrons of an amusement park, are just looking for some fun and excitement in a form in which they cannot possibly fail, and in which the only challenge is to experience the maximum amount of enjoyment. Without further studies into the mass market players who lie beyond our current awareness we can’t know for certain that these players are out there, but there are surely signs of players beyond our usual reach – players who perhaps avoid videogames because they see the anger-contorted faces of the fiero-seeker and know that that experience is not for them.

Gamer hobbyists will always find plenty of developers and publishers willing to deliver what they want to play, but it’s about time the games industry looked beyond its habitual obsession with fiero and explored other models for play and entertainment. The money is there to be discovered, as the billion dollar franchise The Sims proves. When are we going to start seriously pursuing it? 

The opening image is Frustration by Laura Walker Scott, which I found here. As ever, no copyright infringement is intended and I will take the image down if asked.


New Poll: Freedom to Fail

This new poll ties in with today's Games post. Most of the people who come here are gamer hobbyists, so I don't expect to find many advocates for Freedom to Fail. It'll certainly be interesting to see how many - if any - choose this option.


The Prime Directive

In the many Star Trek series, the Prime Directive is the guiding principle of the United Federation of Planets. In essence it states that there can be no interference in the internal affairs of other civilisations; that the development of a culture should be permitted to proceed at its own pace, unguided by external intervention. The term was introduced in the original 1960s series, but much of the refinement of the concept took place in the later Star Trek: The Next Generation.

Just as the United Federation of Planets is a thinly veiled analogue of the United Nations, so the Prime Directive corresponds to a concept in political philosophy known as Westphalian sovereignty. The term originates from the 1648 Peace of Westphalia, which many academics assert was the beginning of the modern concept of nation-states. There is much debate among political theorists and historians as to the validity of this claim, but this sideline is unnecessary for discussion of the concept.

There are three elements commonly ascribed to Westphalian sovereignty. Firstly, the principle of the sovereignty of states – that is, the right of political self-determination. This establishes the concept of a sovereign state. Secondly, the principle of legal equality between states, which determines that no sovereign state possesses legal ‘high ground’ with respect to another. Finally, the principle of non-intervention of one state in the internal affairs of another. The parallels with the Star Trek Prime Directive are apparent, with civilisations (principally portrayed as other planets) taking the role of the sovereign state, such that we might consider Westphalian sovereignty as “the real world Prime Directive”.

Naturally, people are divided as to the wisdom of maintaining such a principle. Much criticism has been given in recent years to models of international relations which take the Westphalian notion of the nation-state as a unitary actor as an axiomatic principle. Arguments against this view note that treating all nations as separate entities overlooks the cultural enmeshment of the modern age, as well as other vital points of interconnection, such as the fact we all make use of the same environmental system as our “global life support”.

Since the 1990s, the issue of Westphalian sovereignty has come under particular scrutiny in connection with military interventions (both proposed and actual) in Yugoslavia, Afghanistan, Iraq, Sudan and elsewhere. One proposed justification for violating the non-fictional Prime Directive is humanitarian grounds: that the prevention of imminent genocide (for instance) provides a just basis for overriding the principle. One dangerous consequence of this thinking occurs when people reason that the absence of democracy foreshadows future humanitarian crises, and thus claim that the absence of democracy itself creates a humanitarian justification for intervention. The fallacy of this reasoning can be found in the continued occurrence of humanitarian crises in democratic nations. For example, what was the aftermath of Hurricane Katrina in New Orleans but a humanitarian crisis within a democratic state?

Another such criticism relates to the idea of a failed state, a charge usually levelled against Afghanistan prior to the NATO invasion of 2001. In such cases, it is often argued that there is no coherent sovereign state, and thus intervention is justified both on humanitarian grounds and on the basis that such failed states represent a threat to neighbouring countries in particular, and the entire world in general. This represents a slightly less contentious situation: since the first element of the Westphalian concept is political self-determination, any situation where this is absent may provide a viable cause for action. How the absence of this state of affairs should be determined by people external to the region in question remains a matter of debate.

Even within the context of Star Trek: The Next Generation, the idea that one should not intervene on humanitarian grounds proves difficult to uphold, and many Star Trek fans find the application of the Prime Directive in its more extreme cases to be quite absurd. For instance, in the episode “Homeward” the crew of the Enterprise is expected to stand idly by as a primitive civilisation becomes extinct as a result of an atmospheric disaster – in this situation in particular, fans were vocal in declaring the Prime Directive immoral in its application. 

The desire to bypass Westphalian sovereignty and allow for humanitarian intervention has led to the development of new concepts in political philosophy that would be open to such actions, in particular the newly evolving concept of contingent sovereignty. In this model, nation-states are assumed to have certain obligations, including the protection of their citizens; thus when a government makes war on its own people or conducts genocide, it forfeits its claim to non-intervention. The appeal of such a variation – a Subprime Directive, if you will – is apparent.

Kofi Annan, speaking in 1999 as UN Secretary General, advanced the view that there was a developing international norm that “massive and systematic violations of human rights wherever they may take place… should not be allowed to stand”, and furthermore that frontiers should not be considered an absolute defence behind which nation-states could commit crimes against humanity with impunity. In many respects, such a view can be seen as an evolving response to the atrocities of the Holocaust in Nazi Germany, and the commitment the world made at that historic juncture, but has thus far failed to live up to: Never again.

The problem with this emerging concept of contingent sovereignty is in its application, and critics have been quick to point out the degree to which it affords powerful countries, such as the United States and China, too much leeway to invade sovereign states on the basis of their own private judgements of the moral standing of the country in question. It naturally follows that if contingent sovereignty is to be validly developed, it must be applied within the context of a transnational organisation such as the United Nations: no single country can be allowed to make the judgement call that results in the denial of sovereignty to another.

Whether one supports or opposes the idea of the Prime Directive (both in a fictional and a non-fictional context), now is the ideal time for discussion of the application of Westphalian sovereignty or any principle that might replace it, such as contingent sovereignty. Since the latter depends upon violation of human rights as its yardstick, it inescapably follows that renewed discussion of human rights on the global stage is a necessity before attempting to codify new principles of sovereignty in international law. Certainly, we must not accept a hypocritical situation in which one nation-state violates the sovereignty of another while simultaneously committing gross violations of human rights statutes it has publicly ratified. We should take any and all necessary steps to ensure that such gross abuses will not be tolerated. 

The opening image is an oil painting by Iranian artist Farnaz Tahbaz, which I found here. As ever, no copyright infringement is intended, and I will take the image down if asked.


Quick Question

We're rushing headlong towards ethics of war now... I wanted to ask: would people prefer to debate the invasion of Iraq separately from ethics of war in general terms? Or are the two so inextricably linked at this point in time that we would do just as well to discuss ethics of war in the context of Iraq? Let me know your preference!

In other news...

  • Just to confirm what Greg Williams noted in the comments last week, Super Mario Galaxy is structured in such a way that you don't need to suffer very much at all to unlock the final boss. I'll be putting together a critique of this game when my wife and I finish it.
  • Just seen there's a new Mario Kart on its way for the Wii... it's a shame people knock Double Dash so much; I had more fun with that than any Mario Kart except the DS version.
  • Can anyone tell me if Metroid Prime 3 avoids painfully difficult bosses at the end? I enjoyed the first game, but the last two bosses poisoned it for me and made me extremely reluctant to go back to another. Anyone got an opinion on the difficulty of finishing the third one?
  • I still haven't booked my ticket to San Francisco... I don't know why, but this year GDC feels like a chore and not a pleasure.
  • Incidentally, I have some more substantial game material on its way, I just haven't had time to draft it - we've been very busy lately.

Five Worlds

We think of our planet as one world, and it can be beneficial for us to share a sense of solidarity with all our fellow human beings, wherever they might be. But there is a negative aspect to the one-world philosophy – it overlooks differences in culture that are as important to individuals as our commonalities. There are many ways we might segment our world, all of which will be somewhat arbitrary; here I divide the planet into five worlds, each representing a different set of cultural forces – a different superculture, if you will.

We are quite familiar with thinking about the West as an entity – that is, Europe and North America (the region that philosopher Charles Taylor suggests might be better considered the North Atlantic, although this omits Australasia, which arguably belongs in this group). This region has been heavily influenced by Christianity, and indeed could perhaps be considered post-Christendom. Because of the tremendous successes in supporting freedom of belief in these places, I suggest we can consider this the Secular World – not because religion has lost its influence here, but because religion has been allowed to become a private matter, and thus the common experience that is shared is founded on secular principles. This Secular World contains approximately a quarter of the human population.

Only one of the five worlds I propose is larger than the Secular World – the region which (in population terms) is dominated by China and India, and which I shall call the Middle World, in reference to China’s name for itself as “the Middle Kingdom”, but also in reference to the way India is caught between the great history of the Chinese region and the influence of the Secular World. This is a region dominated by Dharmic religions (such as Buddhism, Hinduism and Sikhism), which stress individual duty. The superculture of this world is radically different from the Secular World in many ways, and it contains about half of the human population. 

The next largest world is that of the nations of Africa (and, for convenience, I shall include Oceania here as well). All human life began on the continent of Africa, and only later spread to encompass the planet, and for this reason I shall call this the Original World. Here, more than anywhere else, traditional tribal cultural practices are the major influence on the superculture, something it can be very difficult for people from the Secular World to appreciate (the word ‘primitive’ is too often used in this context). This is the least wealthy of all the worlds, and contains approximately one sixth of the human population.

South America contains about half as many people as Africa. The term Latin America has been used to describe this region (because the principal languages spoken are the Romance languages Portuguese and Spanish, which directly descend from Latin), and since the primary religion in this part of the planet is Catholic Christianity, I shall term this the Latin World. It contains just one twelfth of the human population.

Finally, the smallest world of all is that of the Muslim nations. Here, there is great affluence thanks to the huge oil supplies that exist below its many deserts, but there is also an enormous disparity of wealth – much of the money is in the hands of rich elites. This Muslim World has become a flashpoint for a number of reasons: both the Secular World and the Middle World have a rapacious need to acquire oil to fuel their economies, and nearly two thirds of the world’s oil can be found here. But additionally, the superculture of this region is greatly unsettled, and still developing. The Muslim faith requires renovation from within in order to recapture the vital message of its teachings and strip away nasty feudal habits such as honour killings (which are non-Qur'anic in origin), or perversions of Islam such as the use of violence against civilians (which is expressly prohibited in the Qur'an). These cultural problems dishonour Islam’s grand history and central teachings of liberation, moderation and peace. This world contains one twenty fourth of the human population.

Each of these worlds represents a distinct superculture, with different problems to address and varied solutions to be found. The future of our planet lies not in an attempt to produce a single global culture, but in the fostering of an attitude of tolerance within and between each of these five worlds.


Civil Disobedience (4): A Theory of Justice

The success of the civil rights movement in the United States brought Gandhi’s achievements back to the nation that had first suggested this kind of resistance as a means of bringing change, but there was still considerable opposition to this approach. Many viewed the idea of rebelling against the government in this way as both criminal and undemocratic. A notable exception was the political philosopher John Rawls.

Political philosophy had been languishing in neglect for some time before Rawls renewed interest in the subject with the publication of his seminal work A Theory of Justice in 1971. The main thrust of the work was an attempt to reconcile liberty and equality in a principled manner, leading to Rawls’ idea of “justice as fairness”. The book rigorously develops a case for distributive justice (the allocation of wealth), which ultimately fails because of the number of assumptions upon which it rests – the most fatal being Rawls’ idea that all people could be treated as risk-averse, an assumption that does not stand up to scrutiny.

Despite its flaws, the book renewed interest both in political philosophy in general, and in the application of Immanuel Kant’s work to the field in particular. Prior to Rawls, Kant had been somewhat relegated to an intellectual curiosity, but Rawls’ idea of an “original position” (a thought experiment in which the nature of society is agreed by participants who have no knowledge of their actual status in society) did much to revitalise interest in Kantian ethics.

A Theory of Justice is important to the history of civil disobedience as it was perhaps the first widely read academic book to seriously explore and defend non-compliance as a legitimate democratic option. In the book, Rawls states:

Although one believes in the soundness of a democratic constitution and accepts the duty to support it, the duty to comply with particular laws may be overridden in certain situations where the collective judgement is sufficiently unjust…

Rawls defined civil disobedience as “a public, non-violent, conscientious yet political act contrary to law usually done with the aim of bringing about a change in the law or policies of the government.” This definition was narrower than the one originally used by Thoreau, and Rawls wanted to be clear that by narrowing the definition he was not suggesting that this was the only form of legitimate dissent; he opted for precision in order to be able to discuss the issues more accurately.

Rawls also made it clear that in being civilly disobedient it wasn’t necessary to break the specific laws being protested – indeed, he noted that in the case of unjust laws for treason, it would be wholly inappropriate to commit treason in order to protest such an injustice! Rather, acts of civil disobedience could violate minor laws (such as traffic ordinances or trespass) with the goal of drawing attention to the greater injustice. He further noted that if all minorities were to pursue their objectives through civil disobedience at the same time it could be deleteriously disruptive to the order of society, and thus suggested that co-operation between minorities with different agendas might improve the effectiveness of acts of civil disobedience by scheduling them at different times. This perhaps offers an overly optimistic view of the realities of such situations, but the opinion is consistent with Rawls’ general position all the same.

Parallel to his definition of civil disobedience, Rawls also defined conscientious refusal as “non-compliance with a more or less direct legal injunction or administrative order.” It is likely this additional point was made partly in reference to the military draft that had triggered mass protests and widespread conscientious refusal in the Sixties. As examples, he cited Thoreau’s refusal to pay taxes because of his objection to how his money would be used, a pacifist refusing to serve in the armed forces, and a military officer refusing to obey an order that they believe is “manifestly contrary to the moral law as it applies to war.” In the case of conscientious refusal, Rawls notes that one’s action will be known to the authorities (he preferred to call covert infractions conscientious evasion, as in the case of runaway slaves).

It may seem at first glance that Rawls’ contribution was trivial – merely providing robust definitions and a simple analysis of consequences. However, by including sections on civil disobedience and conscientious refusal in his work on “justice as fairness”, Rawls was explicitly legitimising non-violent resistance in philosophical and political terms. This was perhaps the first time it was coherently recognised that the establishment of democratic government was not, in and of itself, sufficient for justice or freedom. Rather, it was necessary for individuals to take action and protest in the face of injustice.

Meanwhile, on the other side of the Atlantic, the gradual diminishment of the Cold War was creating bold new opportunities for non-violent revolution to prove itself as a viable agent of change.

Next week: The Velvet Revolution 


Rob Hubbard

This month's Round Table is on the subject of music in games. I found I had precious little to say, until I read what Max of World Maker had contributed to the Round Table. Since then, there's something I haven't been able to get out of my head...

I remember fondly my many years playing on the Commodore 64, the home computer system first introduced in 1982. One particular aspect of its design was considerably ahead of its time, namely the SID sound chip. While other machines from this time were constrained to simple beeps and boops, the Commodore 64 could produce three-channel music with a variety of different effects. Yet when I think back to all the great soundtracks in Commodore 64 games, a startling fact emerges: almost all of them were composed by one man: Rob Hubbard.

In 1985, Hubbard approached Gremlin Graphics with samples of his work, and Gremlin commissioned him to write his first game soundtrack: Thing on a Spring. For its time, this platform game was quite remarkable in its own right, but it was its quirky, bouncy theme tune – a blend of violin, electric guitar and playful basslines – that has perhaps remained its most memorable legacy. Even to this day, I can still hear the tune in my head, a situation resulting from the many, many hours that the loop must have played as I made my way through the game.

Over the next three years, Hubbard was to record an incredible sixty-six game soundtracks, many of which are widely regarded as the best of their era, including Monty on the Run, Thrust, Sanxion, International Karate and Zoids, the melody of which was based upon a track by obscure synthesizer legend Larry Fast. Like the tune from Thing on a Spring, I can’t escape the music from Zoids – it took me more than eight hours of continuous play to complete the game, all set to the same loop of music. The music was even more memorable than the game.

These days, short musical loops are out of favour. The storage media for games has become so capacious that modern games have vast multi-track soundtracks, rather than just a single tune, and most players wouldn’t tolerate a repetition of the same few minutes of music over and over again. Yet I fondly look back upon this time, both for the inventiveness of the games, and the creativity of the musicians who worked under such limitations and still produced exceptional work.

An incredibly detailed history of Hubbard’s work on the Commodore 64 can be found here.