What makes something a role-playing game? The Essence of RPGs was a serial in three parts running here at ihobo.com that offered an answer to this question by tracing the essence of these games to two sets of player practices, rule-play and role-play. Each of the parts ends with a link to the next one, so to read the entire serial, simply click on the first link below, and then follow the "next" links to read on.
Here are the three parts of The Essence of RPGs, each of which begins with a link to the corresponding part of the source serial:
From the earliest days of the tabletop role-playing game, there were two main camps for how the story-play would operate, two different sets of player practices for role-play, neither of which was specified by the game itself. The first, and the one I was involved in right from the start, could be called dramatic role-play, a form that takes its influence from storytelling and mythology – the kind of psychological patterns identified by Joseph Campbell as the heroic monomyth (or ‘hero’s journey’). In dramatic role-play, what is most interesting is how characters relate to one another, and as a result those of us engaged in dramatic role-play very quickly realised that the dice were a liability more than they were an asset. We learned to fudge dice rolls for dramatic effect, and never regretted it. Characters in our games still died, but they died as a consequence of their actions, not as a result of mere random chance.
‘Rule-play’ is in essence a focus upon character advancement. This might go to the extremes of ‘min-maxing’ (making decisions solely for the purpose of maximizing benefit) or it might be a more subtle focus upon the pleasures of gaining new powers and capabilities. Indeed, Dungeons & Dragons created a near-infinite array of things to acquire! It is this breadth of options that underpins rule-play, and that can make a game fit the descriptor ‘role-playing game’ even when the elements of role-play are slender. What distinguished tabletop role-playing games as systems from the games that existed prior to Dungeons & Dragons (if we ignore the player practices, and hence what the game actually consists of in play) was the presence of a complex ontology. This philosophical term ‘ontology’ refers to the study of being, but it has acquired a sense in information technology of cataloguing and classifying what exists. In role-playing games that serve rule-play well, what exists is a great many things!
For any given game, its lineage is a network of games (and other artworks) that contribute to the inherent qualities of that game. It is akin to inheritance in biology, except games exchange their constituent elements in a manner more akin to bacteria than to mammals – leading to the impossibility of identifying any strict concepts of genus and species. What’s more, these relationships aren’t best understood in terms of material factors. It may be tempting to relate games by their designs, but this is an abstraction of what is being passed on between games: player practices. Whereas sports are founded upon the principle of conserving such practices over time, most games thrive on experimenting with new combinations of player practices, either by baking them into the artefact by design, or by supporting the creation of new player practices within the accompanying fictional world. Whichever way you look at it, we can’t just treat a game as a sterile material object: an artefactual reading is always an incomplete reading.
You can read the entirety of Part One: Children of TSR over at ihobo.com. This serial is dedicated to anyone who has ever played one of my role-playing games, whether on a computer or on a tabletop. Long may these games thrive!
For some time now, I've been complaining to anyone who will listen that double blind peer review of academic papers is a broken game. It’s broken, because reviewers are anonymous and unaccountable for their reviews, and anyone who has used the internet knows how dangerous it is to give anonymous and unaccountable people the power to hurt others. Supposedly, we shouldn't worry about the potential abuse because academics are classy human beings. Except they're not, they’re all-too-human and everyone knows it – especially the academics! Indeed, if academics were virtuous people and not petty, narrow-minded, power-trip nerds, double blind peer review wouldn’t be necessary, because we could trust virtuous academics to do their job well. But they don’t. And blind peer review helps them abuse their position.
In principle, a double blind review prevents bias by making the reviewers unaware of the people being reviewed, and protects the reviewers from potential backlash from the people they have reviewed. The result is that nobody is supposed to know who anyone is, which is not a situation likely to bring out the best of humanity! In fact, as many have pointed out, it is usually easy to identify the authors of an anonymous paper, and certainly simple to separate research done at well-funded campuses from that done at poorer (i.e. ‘foreign’) universities. This rather undermines the supposed benefits of blind peer review. Indeed, rather than eliminating bias, what double blind peer review does is allow reviewers free rein to exercise their personal biases by dismissing papers based solely upon their own prejudice or ignorance, without ever having to be held accountable for it. It is well documented that blind peer review blocks the publication of new research that runs against established dogmas, even when the new research is absolutely correct. (I wrote about this in The Mythology of Evolution).
I’d like to use a concrete example. I had a paper in review for two years at a fairly prestigious game studies periodical (if that isn’t an oxymoron...). When I eventually got the rejection, one reviewer had been dutiful in assessing the paper I had submitted. The other dismissed it out of hand based on his misreading of the paper, with a series of falsely construed judgements about its content. For instance, it was objected that Grand Theft Auto should not have its roots traced to Dungeons & Dragons, as I suggested. In such a situation, where a peer reviewer disagrees with an assertion, their obligation is to ask the author to back up the unsupported claim in revision. (The argument for this particular point, incidentally, appears in tomorrow’s post at ihobo, and only takes one sentence). You do not simply reject papers that you disagree with if you are a virtuous scholar. The trouble is – and this is the root of all evil here – there are too few virtuous scholars for blind peer review to be anything other than a nest of vipers. Anyone publishing outside of a small handful of close-knit disciplines will be able to share stories as bad as (or even worse than) what I am mentioning here. Just take a look at Rebecca Schuman’s Slate article from last year.
The fix is easy: make blind peer reviewers accountable. It’s very simple to do so. Each paper receives at least two peer reviews, so simply have each reviewer rate the quality of the other reviewer’s review. Then make aggregated reviewer scores publicly available every year via a central repository, like the ones already being used for reviews (e.g. EasyChair for conferences). This means peer reviewers have an obligation to provide quality peer review, which at the moment they mostly do not do. I say this as a peer reviewer who has been frequently praised for the quality of my reviews. But then, the papers I am assigned are reviewed to a high standard because I review every paper in the expectation that I might have to face the author. Indeed, I attach to my peer reviews a statement that I consider blind peer review unethical, and will waive my anonymity if permitted. It never actually happens, but knowing that I am reviewing in a situation that even might put me face-to-face with those I am reviewing encourages me to be a virtuous scholar. Anonymous unaccountable reviewers, as anyone with even a passing experience of the internet can attest, will always be problematic.
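The aggregation step in this proposal – each review rated by the paper's other reviewer, with scores averaged per reviewer per year – is simple enough to sketch. Note that the record layout and the 1-to-5 rating scale below are purely my own illustrative assumptions, not the schema of EasyChair or any real repository:

```python
from collections import defaultdict
from statistics import mean

# Each record: (reviewer_id, rating the OTHER reviewer gave this review, year).
# The identifiers and the 1-5 scale are hypothetical, for illustration only.
reviews = [
    ("reviewer_a", 5, 2015),
    ("reviewer_a", 4, 2015),
    ("reviewer_b", 2, 2015),
    ("reviewer_b", 3, 2015),
]

def aggregate_scores(records, year):
    """Average the peer-assigned ratings each reviewer received in a given year."""
    by_reviewer = defaultdict(list)
    for reviewer, rating, record_year in records:
        if record_year == year:
            by_reviewer[reviewer].append(rating)
    return {reviewer: mean(ratings) for reviewer, ratings in by_reviewer.items()}

print(aggregate_scores(reviews, 2015))
# e.g. reviewer_a averages 4.5, reviewer_b averages 2.5
```

The point of publishing only the annual average, rather than individual ratings, is that a reviewer's anonymity on any single paper is preserved while their overall conduct still becomes visible.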
In the absence of a change to the rules of blind peer review, it will remain fundamentally unethical and a broken game. We all deserve more than this. But who has the courage to turn against the status quo and help bring about the revolutionary changes to established academic practice that are desperately needed?
Review of Timothy Williamson’s Tetralogue, published by Oxford University Press, ISBN 978-0-19-872888-7.
Throughout Western philosophy, the dialogue has served honourably as a means of expressing arguments accessibly whilst still bearing subtleties. One particular use of the dialogue approach has been to lampoon a weak argument by contrasting it to stronger positions. This is the form of Hume’s remarkable Dialogues Concerning Natural Religion, published posthumously in 1779, and also of Galileo’s famous 1632 Dialogue, which sets out a (partially correct) argument for a heliocentric cosmos. Hume has Cleanthes express an orthodox position that is artfully dismantled by the subtler Demea and Philo, the latter of whom probably represents Hume’s own views. Similarly, Galileo has Salviati represent his own position, Sagredo as a layman whose role is to be won over, while Simplicio (whose name is often taken to imply ‘simple-minded’) espouses the views of the medieval scholastics. Galileo uses this conversation to argue in favour of Copernicus, which he unfortunately supports with a model of tidal behaviour that transpired to be entirely incorrect.
We forget, sometimes, that Galileo was a philosopher – a natural philosopher – since the concept of a scientist was not invented until several centuries later. His Dialogue is an intriguing example not because of the kinds of philosophical subtleties found in Hume’s final book but because of the different interpretations that have since been placed on its characters. The conventional view, as espoused by Arthur Koestler in 1959, is that Simplicio is “the clown who is kicked in the pants” – a view that helps explain why Galileo’s former ally, Pope Urban VIII, turned upon him. But others, such as Joseph Agassi, argue that this overstates the matter: for a start, Galileo was a faithful Catholic, and was surprised that his fortunes turned sour as a result of this book. Thus from the one text we have competing accounts of the truth: either Galileo was a quarrelsome egoist (as Koestler has him), or he was a noble advocate for clarity of thought with a mere touch of vanity (as Agassi has him). Who is right and who is wrong?
This question, taken in its most general form, is the basis for Timothy Williamson’s short but intriguing text Tetralogue. The marketing department at Oxford University Press even smother the cover with the words “I’m Right, You’re Wrong”, which unfortunately creates the impression that this is the title of the book – which in turn implies that Williamson thinks he’s right, and everyone else is wrong. Indeed, despite the suggestion of the book’s blurb that it “invites readers to make up their own minds about who is right and who is wrong”, it’s very hard to imagine anyone thinking that Tetralogue’s Simplicio is right, or its Salviati is wrong. This is a book, rather like Galileo’s Dialogue, that is out to make a point, and the only ambiguity concerns our conclusions about its author.
The book is expressly stated to offer an accessible introduction to philosophy, for which it is well-suited – although it also claims to pose “serious questions” for “old hands”, which is a harder assertion to fathom. Not that experienced philosophers won’t find interesting material here – far from it! Williamson interjects a great many insightful observations into the conversation. But the rhetorical force of the book as a whole presents a case for epistemic and moral realism that is not going to create any fresh discussions among those who already adhere to these positions, and is pragmatically incapable of converting the reader from rival positions because it never truly takes those alternatives seriously.
The story commences as an argument between two travellers upon a train – Sarah, a hard-headed rational positivist, full of faith in the superiority of the sciences, and Bob, who believes his neighbour is a witch responsible for his garden wall collapsing on his leg. Into the fray steps Zac, the relativist, who attempts to negotiate between the two but instead ends up with philosophical egg on his face. Later, they are joined by the brusque and implacable Roxana, who applies logic to everyone’s statements in order to reveal their internal fallacies. Anyone with even a passing familiarity with Williamson’s work can probably guess how this is going to go: since Williamson is a first-class logician committed to realism, Bob and Zac take the Cleanthes/Simplicio position of clown, while Roxana stands for Williamson’s professional knowledge as a charmless Philo/Salviati, and Sarah as Demea/Sagredo gets to make a few points that the author supports, but is largely there to be rationally persuaded of the truth.
It is often noted that Galileo allowed Simplicio to be charming, presumably so that there would be something positive about him. Bob and Zac are equally afforded likeable temperaments; Zac functions as the buffoon, whose pants have a big red ‘X’ painted upon them, while Bob serves mostly as comic relief. Both characters are significantly problematic because neither shows any sign that the author takes seriously the positions they are supposed to represent. By comparison, Sarah is rather irritating since she is glibly smug about the correctness of her viewpoint (even when her justifications for it fall apart), and Roxana has been given the personality of sandpaper, perhaps because making her pleasant as well as the mouthpiece of an experienced, professional philosopher would have been overkill.
Despite Bob being constructed as a person who believes in witchcraft, there can be little doubt that he stands for theism – or at least the kind of popular theism that the media in the United States (and intellectuals in the UK) delight in parading around as a purported paradigm case for religion. This becomes clear when, in the first part of the book, the question of teaching witchcraft in schools is raised – a sideways reference to Intelligent Design. His belief in witchcraft provides a smokescreen in this respect; since few if any theists believe in witchcraft, Bob allows (Williamson seems to assume) the folly of ‘superstition’ to be revealed in his naive arguments. The character fails because no theist will recognize themselves in Bob, and coming at these ways of life from a position of epistemic realism it is impossible to adequately understand why anyone could live this way.
But Zac does no better – indeed, he fares far worse, in part because Williamson makes the interesting point that in the disagreement between positivists and theists (Sarah and Bob in the book), at least both sides respect each other enough to say that the other is wrong. Zac, with his pop-philosophy relativism, is presented as a perpetual back-pedaller whose position is always qualified with “my point of view” – the post-modern retort that like the realist’s “that’s just your opinion” serves primarily to set up a higher ground that is denied to everyone else. Zac cites Nietzsche and Wittgenstein in quips and interjections in a way that makes it very difficult to believe he has read either, and therein lies the biggest problem with the entire conversation: if Williamson does not understand the purpose of Wittgenstein’s Philosophical Investigations, or the challenges to rationality posed by Nietzsche, his philosophical credentials come into question. But if he does understand their work and simply misrepresents them through Zac’s voice, then his arguments here are fundamentally dishonest.
Tetralogue is first and foremost an attack on relativism, which is painted here as a vaguely reasoned striving for ‘liberation’ dressed up as diplomacy. Yet it is impossible to find the views of any philosopher who has waved that particular banner in what Zac says. Feyerabend, the arch-relativist, is not adequately represented here, and neither is what Bruno Latour termed ‘relative relativism’ anywhere on display, although Williamson would appear to accord with Latour’s claim that Feyerabend-style relativism presupposes an absolute position solely to reject it. Rather, Zac is a woolly-headed post-modernist who seems far more interested in being liked – and ultimately getting laid! – than anything else. Perhaps this is a caricature of common or garden post-modernists, but even if this is so there is something seriously misleading in positioning this against Williamson’s logic skills. There are many great points about the weaknesses of generalized relativist claims in Roxana’s dismantling of Zac, but the arguments thus undermined are not those advanced by any contemporary voice in philosophy. Williamson comes across as ducking the argument.
The result is that the author doesn’t seem to be taking on any serious opponents, and the thrust of the conversation is for Roxana to purify Sarah’s imperfect realism in order to rise above the relativistic bad habits she has unknowingly fostered. Here, Williamson is on much firmer ground: attacking the fact-value distinction and its consequences shows the lunacy involved in positivistic positions that reject ethics as a mere matter of personal choice or try to subjugate morality under some implausible totalizing rubric. On such matters, he has many allies, including both Latour and Mary Midgley, who some forty years ago gave strong reasons for rejecting the arcane split between facts and values. If Williamson’s use of clear logic to make similar points gets this across to more people, that alone would make this book worthwhile, although it should be acknowledged that rejecting strong forms of relativism does not constitute any de facto case for realism.
All of this may make it seem as if Tetralogue is not worth your time – but actually, I found it an irresistible read. From the moment it arrived in my hands I was compelled to push onwards, even though (perhaps especially because!) I have substantial disagreements with Williamson’s positions. This brings us back to Koestler and Agassi’s competing views of Galileo. According to Williamson, at most one of these people can be correct: on any given proposition, someone is right, and someone is wrong. Thus spake classical logic – the fate of truth is tied to the principle of contradiction. But it is possible, indeed plausible, to reject the extreme forms of both relativism and realism and recognize aspects of truth in competing claims, especially since on most topics differences in propositions reflect differences in the practices used to establish their meaning. This is Wittgenstein’s insight, and it is not permitted to enter into Tetralogue’s conversation at all. In its absence, it feels as if the author’s argument, like Galileo’s, goes awry by seeing all too clearly where his opponents go wrong, but failing to perceive the limitations of his own case.
My new laptop arrives this week, and blogging will be disrupted while I configure it for my purposes. In the meantime, a short cautionary tale.
I had originally bought a new laptop over a year ago from Amazon Marketplace. It arrived, and I used it for a month or two before a fault developed in the screen. I returned it to the vendor… and never saw it again. Amazon wouldn’t give us any details of the vendor, and told us to contact them via the email contact forms. We sent these to the vendor every week. Eventually it became apparent he didn’t even have it anymore.
In the UK, when you purchase with a credit card you have additional consumer rights under Section 75 of the Consumer Credit Act 1974, which means that when you can’t resolve a problem with the vendor, you can claim from the credit card company, which is ‘jointly and severally liable’. So we claimed via Barclaycard. They paid us some titbits (without ever admitting liability) but ultimately we had to take it to the Financial Ombudsman Service for resolution. The Ombudsman ruled against us last week: buying from Amazon Marketplace did not constitute a creditor-debtor arrangement with the vendor, so our consumer rights were null and void.
Surprised that everything could go so wrong with a purchase from a big name company like Amazon and a big name credit card like Barclaycard, I mentioned it to a contact at the BBC who passed it on to Radio 4’s consumer affairs show You and Yours. They ran my story on Friday. Amazon then ‘generously’ decided to pay us the cost of the laptop. So we did eventually get our money back… but only as PR damage control. I advise everyone to be cautious about big purchases on Amazon Marketplace, as Amazon do not take full responsibility for these transactions, and in the UK they disrupt your consumer rights. As ever, caveat emptor.
A while back, I went to a wedding in Texas. I distinctly remember flying into Dallas, as there was a terrific thunderstorm and from above the clouds I could see the lightning strikes as dazzling patches exploding in the darkness, like some science fiction artillery was bombarding the ground. Yet that is not what I remember most vividly about that trip. The night before the ceremony, I was staying in a motel with various guests for the wedding who had come over from the UK. One of them must have been in a confessional mood, as he told me all about how he had recently been debilitated for several months with crippling depression. He suggested to me that there needed to be a change in the way people see depression; that it should be taken to be more like a cold, something anyone can get that passes.
I could see he needed support at that time, so I bit my tongue. But what I wanted to say was: “How dare you, who have just had one short bout of depression, come to me who have spent my whole life wrestling with it and suggest that we should be downplaying depression and making it out to be something small and trivial like a cold? You’re just a damn tourist in my homeland and have no respect for what it’s like to live with something like this. Your whole reason for making it out to be ‘like a cold’ is because you recognize there’s a stigma attached to ‘mental illness’ and you want permission to have briefly experienced what those of us who live with it must endure without you having to bear that stigma.” I let it slide. But I never forgot that conversation.
What presumably motivated him to make his argument was an attempt to push back against the prevalent but never-voiced default rhetoric of depression, indeed, of any kind of variation from the perceived norms of mental function: ‘they’re not right.’ This is the rhetoric that makes depressives, schizophrenics, manic depressives and so forth outsiders, because they aren’t normal, they’re not ‘like us’ (whoever ‘us’ might be). This is the rhetoric that makes admitting to depressive tendencies a career-ending event for some people, that leads others to end their lives, and that can never really be eliminated because it is never really espoused. It is a fear of shadows and the unknown, thriving on ignorance and the erroneous perception that normality is both possible and desirable.
Against this default rhetoric, the wedding guest’s view (which he also connected with some remarks by Prince Charles) was to downplay depression as something less serious. But this ‘like catching a cold’ rhetoric is not really an improvement. Because depression is only anything like a cold for those people whose brush with it is a passing affair. I don’t want to belittle those experiences – depression, as anyone who has gone through it knows, is a serious business. But it’s precisely because it is a serious business that a rhetoric that downplays its seriousness can’t really be an improvement on the mob mentality of the ‘not right’ rhetoric. What deeply offended me about this perspective is that at its heart it is trying to make it okay to suffer depression by splitting off incidents of depression from ‘mental illness’ in general. But this means that it is equally a rhetoric of exclusion: it allows some to escape the claim of ‘not right’, but it still condemns the rest of us to banishment.
There is a third rhetoric of depression that is in common circulation, that of the medical establishment. According to this, depression (or major depressive disorder, or clinical depression) is a mental disorder caused by irregular behaviours of the neurotransmitters serotonin, norepinephrine, and dopamine, sometimes described informally as a ‘chemical imbalance’. It is treated through the application of anti-depressant drugs of various kinds, perhaps with additional counselling or cognitive behavioural therapy. In effect, this ‘mental illness’ rhetoric of depression views those suffering from it as faulty and in need of fixing. What I find striking about this perspective is that it insists on seeing depressive people as broken, just as the other two rhetorics do. Whether it’s because they’re ‘not right’, it’s just ‘like a cold’, or it’s ‘mental illness’, the three rhetorics of depression are united in ensuring that depression, and ‘mental illness’ in general, are set up as something that is not normal. Depressive people are broken people.
But these rhetorics are all misleading. Because depression, despite its unpleasantness, despite its severity, despite its radical effects on people’s lives, is normal. It’s completely normal. As normal as falling in love with your high school sweetheart and marrying them, and actually less rare (eight times more common in the United States), whatever the movies may suggest! The problem is, we are still operating with the eighteenth-century mythos of Nature, one that tends to equate ‘natural’ with normalcy, and presupposes that there are natural norms against which everything else is measured. Deviations from the natural are thus judged as wrong in some way. This is the origin of the hermeneutics of ‘sickness’, which we are not obliged to accept for all that we have saturated ourselves with this mythology.
A hermeneutic is a principle of interpretation, and applying this term to medicine is heresy. But I have always fancied myself a heretic, especially in situations where resistance is rare. I am not, however, entirely alone in this matter, and first heard it suggested by the philosopher Charles Taylor, who was drawing on radical ideas by the renegade Catholic priest, Ivan Illich. Illich saw clearly the transformation in our mythology of health and dying in the last few centuries, and set out serious problems with our contemporary perspectives in Medical Nemesis, a book that became required reading for many studying to become medical practitioners. Alas, it does not appear to have in any way halted the trends that it argued against.
The general criticism I am advancing here has nothing to do with the efficacy of contemporary medical techniques, because it is not about treatment at all: it is about diagnosis. The medical establishment’s mission is to find a box to place everyone into. If you go to your doctor with any kind of complaint whatsoever, you will either be assigned an appropriate box for your sickness, or you will be referred to a specialist who can find an even more exacting box for you. But you will be found a box, or you will be condemned to infinite tests to attempt to establish that box. And the number of boxes available has steadily increased, without any question about the merits of categorising people in this way. As a simple empirical observation, our ever-improving medical knowledge has utterly failed to halt depression: it has increased in incidence throughout the last century, and is ten times more prevalent than it was seventy years ago.
There are good reasons to question medicalisation in general, particularly when the commercial practices of pharmaceutical companies are taken into account, but in the case of ‘mental illness’, it should be even clearer that something has gone horribly wrong. People on the autism spectrum or those with Down’s syndrome have unique ways of being in the world, but we are in no way obligated to view these phenomena as sickness, which is always a negative judgement. Recall that as recently as 1974, homosexuality was classified as a ‘mental illness’, and is still claimed as such in some places according to the mythos of Nature that perceives ‘normal’ as some idealised state of being that it is therefore wrong to deviate from. Declassifying homosexuality was a first step, but it cannot end there.
What truly bears a claim to ‘normal’ for life is diversity, variation, and change – the concept of ‘species’ that suggests that every kind of organism can be perfectly standardized is no longer plausible, and has not been for quite a while. We are at the fringes of completely overturning the old mythos of Nature and normalcy, but it cannot be done without substantial revisions to the hermeneutics of ‘sickness’. But here, our deference to doctors and medical science is a barrier in part because, as Mary Midgley observed, they have inherited the powers that used to be ascribed to priests. There is a non-religion of medical power that generates the psychological effects of blasphemy far more reliably than traditional religions these days. Woe betide anyone who challenges the authority of doctors to classify us into categories of sickness!
Yet there is resistance. It comes with the term ‘neurodiversity’, which grew out of autistic people adopting the phrase ‘neurotypical’ as a contrast case. Similarly, it comes under the banner of ‘mental diversity’, a term I broadly prefer since the prefix ‘neuro-’ is clearly intended to draw upon the authority of the sciences, and this issue is not a matter of measurement or experiment, but one of ethics and understanding. These alternative rhetorics (and the movements behind them) make it easier for me to talk about depression, although even now I do so with feints and allusions and without much in the way of detail. This is still uncomfortable territory for me, and for reasons that have much more to do with the third rhetoric than the first: those who argue against medical orthodoxy are seldom welcome. Yet the standard explanation of depression that treats it as akin to a mechanical fault cannot suffice as a general explanation since it utterly fails to account for our depression epidemic. We urgently need fresh perspectives.
In my life, experiences of so-called ‘mental illness’ have made me stronger, nourished my creativity, and led me to life-long friendships. It helped that I vowed to reject suicide as an option at an early age because I could see all too clearly how devastating it would be to others. It may also have helped that I rejected pharmaceutical interventions in favour of methods that mostly built my virtues rather than my dependencies. I flatter myself that I have taken control of depression, for all that I still occasionally stumble. But I recognize that what has worked for me will not necessarily work for others. For some, pharmaceuticals have helped them establish a life worth living. For others, drugs have simply incapacitated them for a while. There is no adequate pattern here that is beyond questioning, and no-one – not even depressives themselves, and least of all doctors! – can claim ownership of depression and force everyone into just one box.
This is perhaps the greatest problem with depression: although the experiences we mark with this term have common traits, they do not share common circumstances. They reflect different biologies, different psychologies, different environments, different ways of living and thinking and being. The statistical methods of the sciences are not suited to understanding such variety; they thrive on picking out the common patterns. But boxes built to one size cannot fit all situations, any more than averaging our measurements will produce a garment everyone can wear. The essence of mental diversity is that we are all different, and we cannot be subsumed into generic models of sickness and treatment. Medicine provides us valuable options, but so too does religion, non-religion, and every other source of practices and community. We each have to find our own path. And this is true, regardless of how ‘normal’ we might think ourselves.
The opening image is And I Can Feel Your Pain, by Ruth Batke, which I found here, where it had been on sale. As ever, no copyright infringement is intended and I will take the image down if asked.
It’s been a week since Terry Pratchett passed, and I have still failed to write about it. The news upset me more than I would have anticipated, given that I have not even seen Terry since 1999. I have a closer relationship with Rhianna, his daughter: we’ve made games together, and meet up for a chat whenever we get the chance. The extent of my engagement with Terry was merely that I worked at a company making games in his Discworld universe.
But then, Terry was the first editor I worked with, and also the best of them all. I not only learned a tremendous amount from him, but in many respects he was the foundation of my career as a game designer and writer – if it had not been for the critical success of Discworld Noir, I could not have founded International Hobo. Erasing Terry from my life would mean destroying all that I am, leaving only the wispy aspirations and tedious angst of the person I was before I started work at Perfect Entertainment.
Yet, I have failed to write my farewell. At heart, my problem is that I feel like a fake drawing on my repertoire of Pratchett anecdotes to wax lyrical about a man I was privileged to work with but could never actually know. I feel as if writing about these experiences would be the act of someone desperate to garner fame from proximity – it would solidify my position as an H-list celebrity while exposing my insignificance; the shadow, glibly praising itself for its proximity to the flame. I feel certain I will write about my experiences with Terry at some point, but I cannot do it now without feeling worse than I already do.
I will say goodbye to Terry some day, but it seems I cannot do it today. Yet I can already thank him, as I did in the last letter I wrote to him, for having such an extraordinarily positive effect on my life. And I will miss him, even though he was never really there. This, I suppose, is the effect that any great writer has through their work.
Over on ihobo today, my reply to two missives from Jed Pressgrove and Chris Billows examining the role of the tutorial in videogames in terms of different motivating ideals of politeness. Here’s an extract:
The challenge in creating an adequate tutorial is the complete absence of knowledge we possess of the actual people who will be learning to play our game. Pitch the level of detail too low, and there will be players confused by what is expected of them. Provide too much detail and those players who are skilled in figuring things out will be irritated... What is particularly bemusing about creating tutorials is that if you watch a player learn to play a game from over their shoulder, you might not need to say more than a sentence or two in order to put them on the right track. But this is because we have the intelligence to interpret the problems a player encounters and provide appropriate guidance: there is no adequate way to transfer this skill to a computer!
Philosophers almost always start by saying: “I want to see what being is, what reality is. Now, here is a table; what does this table show to me as characteristic of a real being?” No philosopher ever started by saying: “I want to see what being is, what reality is. Now, here is my memory of my dream of last night; what does this show me as characteristic of a real being?” No philosopher ever starts by saying “Let the Requiem of Mozart be a paradigm of being”, and seeing in the physical world a deficient mode of being, instead of looking at things the other way around, instead of seeing in the imaginary, i.e., human mode of existence, a deficient or secondary mode of being.
I cannot claim to have started by placing an artwork like Mozart’s requiem as a ‘paradigm of being’, but I certainly did not begin philosophy by asking about being and reality. The starting point for my philosophy was nonsense, and specifically the defence of nonsense – perhaps even, as the Discordians affirm, ‘nonsense as salvation’. The origins of this can be seen in my account of Moore’s Paradox and the Belief in False Things (2009): being told, in outright denial, that I did not believe things that I knew were false set me on a certain path. The most recent step on that path was to reformulate knowledge in order to view it as a practice.
How can nonsense possibly provide an adequate foundation for anything? Consider that to gain an authentic understanding of something we must be willing to fully experience it. If we start with conceptions of truth, nonsense is simply the vast array of things we must discard as irrelevant (often, without any serious examination of what we are dismissing). Most people live in precisely this fashion; other ways of being in the world are nonsense by default, and can be ignored or derided. But if we start from nonsense – from a close study of nonsense – we discover how suspicious truth appears when it is taken as a miraculous starting point. Truth is a worthy destination, but we cannot simply expect that it was there fully formed at the point of departure. Beginning at nonsense, on the other hand, allows us to gain a proper appreciation for truth – and indeed, for nonsense itself.
The Castoriadis quote also resonated with me by stressing the importance of imagination – which has been the consistent theme in this stage of my philosophy (what I’m tempted to term ‘my immature philosophy’, although I might not live long enough to make fine wine from my own grapes). In this regard, I would like to add another connection to this chain in the form of the British philosopher Alfred North Whitehead, who in a lecture series from 1925 spoke of the relationship between the ‘realm of possibility’ and actuality:
It is the realm of alternative suggestions, whose foothold in actuality transcends each actual occasion. The real relevance of untrue propositions for each actual occasion is disclosed by art, romance, and by criticism in reference to ideals. It is the foundation of the metaphysical position which I am maintaining that the understanding of actuality requires a reference to ideality.
This philosophy – where understanding actuality actively requires thoughtful consideration of possibility – seems to face towards where Castoriadis is gesturing when he talks of seeing in the physical world “a deficient mode of being”. Here, I join with these other voices in believing in the unbounded value of striving towards ideals, even if so doing is a kind of nonsense.