Make-Up

We would be hard pressed to find a more diverse industrial network than that which manufactures make-up: plant oils, the crushed bodies of insects, mineral dust, ground-up herring scales... the make-up cyborgs wear upon their faces a macabre concoction worthy of the most fantastical fairy tale witches' brew.

I don’t have that much personal experience with make-up... my wife doesn’t use it, and although I’ve used a little on stage or for fancy dress parties and the like, it’s a small part of my life. Yet I have not failed to notice the number of people who ritualistically ‘put on their face’ in the morning and then, just as laboriously, remove it at night. Next to this practice, the obligations of Islamic prayer (salat) or a Christian monk’s canonical hours seem almost tame!

It would be easy to take a judgemental stance here: why do you need to cover your beauty with paint? Why be anxious about showing the face underneath? Yet there is something to the rituals of make-up that people draw strength from - and this is nowhere near as gendered as it may first appear. American football players may not spend as much time crafting their faces, but they still put on ‘warpaint’ before they hit the field! What’s more, unlike that conspicuous prosthesis of the wealthy elites, the suit (cyborg #36), which must be expensive to serve its role, make-up is an affordable option for the majority of people on our planet. Compared to designer clothes and shoes, grotesquely overpowered cars, or jewellery, make-up provides a way to project a persona that is not merely aping the conspicuous excess of the commercial aristocracy. In the simple act of painting a face lies a tool for self-confidence that allows for its own excellences.

Yet I am mindful of Baz Luhrmann’s admonition: “Do not read beauty magazines, they will only make you feel ugly.” There is truth in this claim, a tyranny of image maintained by elites for their own benefit, and one that bulldozes the inherent beauty of ethnicity with conventions rooted in whiteness. Beyoncé straightened her beautiful hair to conform to the norms of white beauty. Make-up is primarily made for lighter skin tones... those with a darker complexion not only struggle to find suitable make-up (should they wish to use it), they are repeatedly and systematically sidelined in the image industry, reinforcing a set of beauty stereotypes that at heart celebrate a prescribed image of whiteness.

As make-up cyborgs, your excellences lie in how you choose to present your face to the world. But this act is also a form of subjugation, insofar as the cybernetic network distributing images of ‘beauty’ provides the dogma upon which each face finds its normative foundation. In this way, our faces become not our own possessions, but a means of possession by demons beyond our individual control: doubts and anxieties. The cybernetic network of all faces carries the baffling risks of every labyrinth.

A Hundred Cyborgs, #94


Drugs

We tend to think about legal vs illegal drugs - forgetting, it seems, the harms caused by (legal) medication, alcohol, or tobacco, and the good that unexpectedly comes from (illegal) intoxicants. One sensible constraint applies to all drugs, medicinal or recreational - moderation. Those drugs that cannot be used in moderation, such as heroin, crack cocaine, or meth, set the interpretive pattern that tars all illegal drugs with the same brush. Yet it is legal drugs - paracetamol (acetaminophen) and alcohol - that cause the majority of fatal health problems, mainly via liver damage, and tobacco is not far behind.

Because ‘drugs’ are so broad a category, it can be hard to make general moral or behavioural claims about the many different kinds of drug cyborgs. But as my earlier discussion of pubs (cyborg #45) suggested, the unseen benefits of drugs lie far more in the community they foster and support than in their individual effects - at the level of the individual, we can say nothing more than ‘stick to small doses’ or abstain entirely (again, regardless of whether the drugs in question are legal or illegal). But what interests me about the cybernetic networks surrounding drugs of all kinds is the potential for them to develop into communities of care.

As a small example, in the UK, US, Denmark, France, and the Netherlands I have encountered networks of people bound together by their sharing of modest amounts of marijuana. Sometimes this occurs illegally, as a friendly distribution network, but at the level of those at the bottom of the network there are none of the movie-style organised crime clichés (although gritty stories occasionally drift down the supply chains...). Sometimes, as with Freetown Christiania in Copenhagen, it occurs in a legal grey area, with the residents not only providing hash or bud that remains technically illegal, but also fairly-priced food and accommodation for about a thousand residents in an ‘anarchist commune’. In the US and the Netherlands I have also seen it from inside the law, with cafes (coffee shops) and retail establishments providing the point of supply, and once again a community of support maintained, although often less intimately than when the community is bound together by the need for a certain discretion in their activities.

No such community of care exists, however, when (legal) drugs are purchased from a supermarket. Rather, there is just a small health warning label on alcohol, tobacco, or painkillers - and in the latter case, warnings rarely if ever read, despite negative health effects (chiefly liver damage) that dwarf those of all illegal drugs taken together. I am forced to suggest not only that we should avoid being distracted by legal history when we wish to assess the moral and behavioural influence of the various (legal and illegal) drugs, but further to ask whether what we have declared legal is in any way the innocent side of the fence it is so often assumed to be.

A Hundred Cyborgs, #93


Deplatforming

A basic pillar of contemporary social justice campaigning is deplatforming, which entails collectively piling complaints, economic pressure, or threats of non-violent reprisal onto a venue that is hosting a speaker whom an interest group opposes. It’s effective, because venues depend upon punters for their livelihood.

Each deplatforming cyborg is a network of concerned individuals brought together to engage in a power struggle against another cyborg - the venue (its staff, building, and technology) and an individual, who is being deplatformed. In this match-up, the deplatforming cyborg network has an insurmountable advantage on the political battlefield. A brave university official might just stand up for free speech and resist the pressure being mounted; a commercial venue will almost always fold. Deplatforming is thus a powerful tool for silencing individuals we disagree with. Perhaps the question ought to be: should we be silencing those we disagree with?

I have called ‘cybervirtue’ those positive qualities we possess when we become part of a technological network. Alas, the deplatforming cyborg is not and cannot be cybervirtuous (which is not the same as claiming that this course of action is not permitted). The behaviour deplatforming encourages is ‘not listening’, i.e. censorship, and the only reason this even appears just to proponents of deplatforming is that the people being censored are those whose ideas are deemed so offensive that censoring them is judged the only acceptable course of action. The argument in favour of deplatforming is therefore the necessity of halting evil, not that doing it will make us good people. In other words, the ends justify the means - and we ought to be very careful about anything that relies upon this principle, since the goodness of ends cannot justify the immorality of means.

The term ‘fascist’ is bandied about far too liberally these days - pun intended - but few regimes today are as brutal and oppressive as Mussolini’s fascists. A key part of the fascist ideology, however, was the forcible suppression of opposing ideas. It should cause us at least some pause when we realise that deplatforming is in no way guaranteed to be used for causes we deem just, and indeed is tailor-made for the kind of fascism-light that is popular today in the nations that once stood for liberty. I have already witnessed from afar figures from all corners of the political spectrum being deplatformed by those on both the left and the right. If tolerance is a virtue we value, we cannot foster it by deplatforming. We must allow those we disagree with to speak, else how can we challenge those ideas we wish to overcome, such as the ideologies of the bigot in myriad forms, both liberal and conservative? I fear deplatforming has empowered bigotry far more than it has done good in the world, and even if you do not cherish freedom of speech as much as I do, I encourage you to reflect upon whether a world where no outrageous suggestions may be voiced is a good world.

A Hundred Cyborgs, #92


GoogleApple

A GoogleApple cyborg is a cybernetic organism consisting of a human coupled with either an Android smartphone or an iPhone. The human feels like the most significant part of this network, which is odd considering that GoogleApple adds in thousands of robots and humans in order to make this cyborg work. In fact, even though the humans forming GoogleApple cyborgs are by far more numerous than the humans working directly or indirectly for Google or Apple, they are still less numerous than GoogleApple’s robots - there’s at least one of their smartphones per human, plus the rest of their cybernetic network behind them. In other words, as GoogleApple cyborgs, we are a minority component in a vast cybernetic network.

We tend to focus on the choice - do I buy an iPhone or an Android smartphone? - and thus miss the more salient point that it makes very little difference what we choose in this regard. Either way we’re becoming a GoogleApple cyborg, a being that can in seconds transact with the internet and will indeed do so frequently. The smartphone permits us instantaneous escape - and impels us to do so recurrently. The GoogleApple cyborg is thus one of the most distracted creatures that ever lived, although exceeded in this regard by various other combinations of beings and things, such as the heroin needle cyborg.

It is not fashionable to talk of ‘duties’ these days, yet it is extremely fashionable to assert random wished-for concepts as rights. But rights come from agreements, not emotional whims, and rights are inherently duties. To have a right is to say that everyone has a duty. But in the context of a smartphone you have no rights, and consequently GoogleApple has no duties towards you - even if you die because you were distracted by your smartphone, legal responsibility for your death largely falls to you. That’s because you have moral duties towards yourself that are not rights, because they do not spring from agreements.

Both Aristotle and Kant suggested that one of our duties towards ourselves is to pursue our own excellences. A smartphone can help with this - I write almost all my philosophy on my pocket robot, runners use theirs to monitor their performance, and dieters track calories to bring their eating under control. But to GoogleApple, activities that pursue your excellences are entirely interchangeable with those that squander your time and intelligence. Algorithms that curate suggestions ‘for you’ are more fairly described as curating ‘for them’ - propagating the money-making apps, the distracting apps.

I don’t imagine we’re heading into a future world without smartphones, but I do fantasise about escaping life as a GoogleApple cyborg. I dream of a cybervirtuous smartphone, a robot that brings out the best in those that partner with it. But I am sceptical that such a thing could come from GoogleApple. I have to wonder: would every cybernetic network that might replace this one fall inevitably into the same traps...?

A Hundred Cyborgs, #91


Game Mechanics vs Player Practices

Earlier this week, I waded in on an interesting discussion that broke out on Twitter about the use of the terms 'game mechanics' and 'game systems' - definitely worth a read if you're into discussions around game terminology or have an interest in the history of game design. Here's an extract:

And that's where and why it all goes wrong for everyone trying to 'fix' game mechanic as a term. Because both 'game mechanic' and 'game system' are concepts from tabletop game design where rules are explicated in written form by necessity. It is a 'game mechanic' in D&D that the D20 is used to resolve a percentile hit chance with 5 percentile increments, and this is part of the 'game system' for combat resolution. It is arguably a 'game mechanic' that pluses on weapons add to both to-hit and damage – a mechanic consisting of a great many rules, not all of which appear along with the combat system in the rulebooks. In other words, for a tabletop game, everyone saying 'a game system' is made of 'game mechanics' (bonus points if you spot that 'game mechanics' are also made of 'rules') is continuing the practices of tabletop game design that flourished in the 1960s and reached a turning point in the 1970s – just in time for videogames to join the party and make everything much more confusing!

You can read the entirety of Game Mechanics vs Player Practices over on ihobo.com.


Fifteen Years Today!

Hard to believe it was fifteen years ago this day that I first began typing my extraneous thoughts into blog posts. It's been quite a journey! In the time since I began, I've published a great many philosophy books that began as posts here on Only a Game. Alas, I have also watched discussion on the blog fall from its peak as a lively forum for exchange of ideas to the current wasteland of comments, where a month in which I get more than one short comment is exceptional.

So I frequently ask myself: why continue?

The answer is that I have to write, and it feels right that I should share those writings publicly, where at least the possibility exists for discussion. I do long for more discourse and less narrowcast, but it is clear that the golden age of blogging is over, and while I am still a member of the Republic of Bloggers, the exchange of blog-letters is a rare event - although always cherished. My especial love and gratitude to Chris Billows and Jed Pressgrove for their exchanges with me, which have been of inestimable value.

Sometimes it is suggested that I move to Medium. I am wary. I cannot move my blog to a place that is not my own, and the prevailing quality of all social media is that the content is at the mercy of the mob. I have not ruled it out, but the weight of fifteen years of practice keeps me doing it the way I always have. And I am sceptical that changing platform will resolve the problem with discourse, since the problem with discourse is the core problem of the early twenty-first century.

So, what now?

Firstly, I must complete the epic two-year project that is A Hundred Cyborgs, which is now just ten cyborgs away from concluding. (I know what most of the final ten will be, but I've no idea what #100 will be!) After that, I intend to go back to freeform blogging for a while, as I have throughout June. I've been enjoying it. It would be nice to be read as well as to write, but to write is still enough for me. I might have a go at rethinking intellectual property law in the spirit of Kant's rethinking of law in the context of morality. I'm also always up for suggested topics, since when someone suggests a topic it means I am about to engage in discourse. And this is the point of the exercise.

Will there be another fifteen years of Only a Game? Let's find out.

A Hundred Cyborgs begins again, for the final time, next week.


Concerning Your Opinion

It is perhaps surprising that corporations are so interested in our opinions of their products and services, given that simultaneously we contend that opinions don’t matter. Or at least, other people’s don’t.

We charge ourselves with the capacity to discern the truth, and also to know where our perspective should not expect universal assent. However, we rarely assume that other people have this same, basic power - at least until they show themselves capable of producing similar conclusions to us, and are therefore ‘of like mind’. To say that this reinforces our inflated impression of our skills at discerning truth from opinion is an understatement. It’s engrained. Other people’s opinions are, for the most part, merely something irrelevant we can dismiss.

Yet corporations spend large amounts of money collating opinions - if opinions are worthless, why bother? “Ah,” (we say with confidence), “they gather data and use scientific techniques to derive the truth.” It’s a plausible hypothesis as far as it goes, but it radically overstates the matter. Marketeers just need to know what people will pay money for, and to do that they need opinions, for there is no truth about what you must buy. If you want to know what people will pay for, you need to get at the truth buried in mere opinions.

But what if this description is only part of the truth of the matter? What if, in collecting opinions, the business world had developed a very crude way of successfully distilling the essence of opinions into truth? And what if there was a way for us to come at this truth for something more valuable than mere market knowledge...?

Wheat from Chaff

Over the millennia, a diverse set of philosophers developed our understanding of truth in ways that still influence how we think. Our current views come more-or-less directly from the Enlightenment philosophers, such as Immanuel Kant or Mary Wollstonecraft, who examined how it was that we grasped the reality of our existence. The Enlightenment philosophers contrasted thinking subjects to ‘inanimate’ objects, but in later years the subtlety of this philosophy was ground down into an all-too-simple split between ‘objective knowledge’, which was true, and ‘subjective knowledge’, which wasn’t. From there, it’s just a short hop to tossing all ‘subjective knowledge’ in the trash as ‘mere opinion’.

Oh dear. How spectacularly this philosophy went awry as it fell into common usage and today’s philosophers largely withdrew from life to argue about minutiae with one another... We were left thinking we can effortlessly split objective from subjective, that we can focus on the truth provided by objectivity, as if we merely had to sort the wheat from the chaff. Trouble is, there’s nowhere near as much wheat as we like to think...

The Belgian philosopher and chemist Isabelle Stengers suggests that objective knowledge is, as its name implies, the knowledge of objects. The sciences devise methods of getting objects to reveal their secrets, through experiments. But the grain bins of scientists are not full to bursting, because the kind of knowledge we can tease from objects is rather thin. Atomic weights, boiling points, equations of motion, quantum mechanical relationships... we would be hard pressed to apply this kind of knowledge to anything more than making what we like to call ‘technology’.

Objective knowledge about our technology is similarly thin, and does nothing to warn us of potential dangers or abuses. Researchers can give us nuclear bombs, automobiles (by far our biggest killer technology in practice), mustard gas, social media, armed drones, crack cocaine, gas chambers... and not only these, of course, but other, less problematic technologies as well - yet those involved in inventing our tools are rarely able to discriminate the harmless from the dangerous until it is too late. The same practices that give us new technologies do not provide any means of assessing them except in objective terms, which are generally inadequate for any important assessment. We can measure how much energy a nuclear warhead can give out - but that does not help us to make the important decisions about atomic weapons, like whether we should ever use them.

Objective knowledge is not only thinner and less impressive than we imagine, it is strictly meaningless. That’s because giving meaning is the activity of thinking subjects - and we threw away all that as chaff.

Now we are getting to the madness brought about by monomaniacal focus upon the objective. We were always deceived when we thought objective knowledge must be the sole thing that was important because importance could never be anything so thin and dry as objective knowledge. To determine whether something is important requires subjective knowledge - what knows the grain better than the wheat itself? We forget that before we ground it up, that worthless chaff was the wheat plant itself, the thing that mattered.

We focus on the grain because that’s what’s useful to us, since we can grind it into flour. The chaff does nothing. Yet before we got involved, the chaff was the thing that ought to have mattered - the thing that was alive. The same logic that rendered objective knowledge into all that seemed to matter fooled us into making life the thing we discard. From that one mistake, all the problems of the last century flow.

Importance and Being Earnest

If you knew every possible datum of objective knowledge you would still be entirely clueless about what matters. That’s because making a decision about what does and doesn’t matter requires more than dry facts. It requires the capacity that beings possess to assess what is or is not important, and judging how important anything is requires context, a world in which these kinds of questions take on a meaning.

Even a plant can respond to importance: it twists and unfurls to face the sun - since for a plant, sunlight is important. Humans are capable of much more complex assessments of importance, but that doesn’t mean we’re especially good at it. Our intuitions about what is or isn’t important are routinely misguided, we latch onto things that feel important but aren’t as vital as they seem, and we entirely miss things that in hindsight are revealed as important. No amount of scientific research can solve this problem, for the sciences can only ask the questions and develop the methods for getting at objective knowledge, while judging importance is always the knowledge of subjects. Which means that when we assess importance, we are forming opinions, and therefore that opinions must, in fact, matter far more than we tend to think they do.

This inescapable truth, that importance is not and can never be objective knowledge, does not mean that the sciences cannot help us in making good decisions. On the contrary, they can be vital, and are frequently central to the decisions we make. But the worth of research towards providing a foundation for judgement does not change the fact that the judgement itself is never something as precise and flat as objective data. It requires skilled judgement. It requires opinions.

When we focus on the huge difference in degree between those few things we can establish as objective knowledge and the uncertainty of opinion, we become enamoured with the apparent perfection of certainty. But certainty is the wrong measure for judgements. We fret about whether our judgements are true, even though this isn’t something we can ever determine in advance. Even if we retreat to a weaker conception, such as how probable it might be that our judgements will pan out as expected, we’re still playing the same game of trying to assess the knowledge of subjects by the logic of objects. The relationship between our opinions and the truth is real, but we cannot access it directly (a point Kant brilliantly explored). So we need a different standard when we’re assessing opinions to the ones we might use to judge the work of the sciences.

Do we have these tools, these ways of judging our judgements? Yes, we do, and we’ve already seen a crude form of it - when a business surveys their market to establish whether something is going to sell. Their method works because the corporate world has learned to trust that with a suitably proportionate sample of earnest respondents, opinions can be synthesised into fair judgements. What’s more, there’s already a method for rendering political judgements in this way. No, not by referendums, which are merely polls designed to reach pre-defined conclusions (objective logic mistakenly trying to ‘tame’ opinion by its own dry standards), but by letting the governing decisions fall to anyone via the drawing of lots (sortition). Rather than a democracy of politicians who are chosen for their qualifying ability to win elections, this would be a democracy of anyone, even those with no qualifications, who are chosen solely at random. This is what democracy originally meant - rule of everyone and no-one.

If you recoil in horror at the thought of political decisions made this way, pause to consider whether the skills politicians require to win elections are likely to be those that help them render good judgements, and why, if we do not trust the opinions of strangers, we trust people’s lives to a jury of twelve random people when they are on trial. When it comes to assessing the relative importance of arbitrary matters, why should we not expect a suitable random sample to be up to the task? After all, a great many scientific investigations rely upon this very same technique, the collective judgement of a representative sample. It is as if we trust the opinion of others only when we call it ‘science’ but not when we call it ‘government’.

Collective Opinion

This resistance we experience when we confront the idea of democracy leaves us with more than a few options for how to respond.

Firstly, we could dismiss democracy as too idealistic, and argue that the kind of rule by elites we have is the best we can do. This choice effectively rejects democracy outright, and sides instead with injustice as an alleged compromise. But there is nothing to compromise with. We have rejected democracy on this path, and chosen to lend our support to feudal capitalism instead.

Alternatively, we can make some attempt to rescue democracy through an imagined intervention that gives more power and influence to intellectual forces (most likely that nebulous catch-all ‘scientists’). And while it is wise to hope for a world where those making the governing decisions take the work of researchers into account, the danger of this compromise is to wish for a grand theocracy of science, where those who make objects speak through remarkable methods are falsely attributed a greater capacity to govern. This is to trade one set of elites for another whom we have no decisive reason to trust will do any better.

Eventually, when all the ways we can compromise and thus betray democracy have been revealed as only slight variations upon our voluntary oppression, we are forced to ask: how could we make democracy work? Only now are we in a strange new world where we can ask questions like ‘how many people is enough to render a fair judgement?’, ‘is twelve a good number for strangers trying to reach agreement?’, or ‘could sets of juries collate more representative decisions than a single group?’ Only then are we future advocates for democracy, capable of facing the truth about opinion... that it is enough, more than enough, to govern amidst uncertainty. Democracy should be at no greater risk of bad decisions in the hands of the unqualified than it is in the clutches of those whose only qualification is their bitter experience of the brutal battleground of opinion we have currently chosen - that is, the elections we have chosen instead of democracy.
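These questions can actually be poked at numerically before any constitution gets changed. Here is a toy simulation - every number in it is invented purely for illustration, and it models nothing about any real electorate - of how often the majority view of a randomly drawn panel matches the majority view of the whole population it is drawn from:

```python
# Toy sketch: how often does a random panel's majority match the population's?
# The population below is entirely hypothetical (55% of a million people
# favour option A); only the sampling logic is the point.
import random

random.seed(1)
POPULATION = [True] * 550_000 + [False] * 450_000  # 55% favour option A

def panel_majority_matches(panel_size):
    """Draw a random panel and check whether its majority favours option A."""
    panel = random.sample(POPULATION, panel_size)
    return sum(panel) > panel_size / 2

for size in (12, 101, 1001):
    trials = 2_000
    matches = sum(panel_majority_matches(size) for _ in range(trials))
    print(f"panel of {size:>4}: majority agrees with the population "
          f"about {100 * matches / trials:.0f}% of the time")
```

With these made-up numbers, a panel of twelve sides with the overall majority only a little over half the time on a 55/45 question, while a panel of around a thousand almost always does - which is at least one crude way of starting to answer 'how many people is enough?'.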

Mary Wollstonecraft offered this advice to the reformers of her own time:

Men and women must be educated, in a great degree, by the opinions and manners of the society they live in. In every age there has been a stream of popular opinion that has carried all before it, and given a family character, as it were, to the century. It may then fairly be inferred, that, till society be differently constituted, much cannot be expected from education.

Perhaps all this is too terrifying to contemplate. But I don’t understand why it should be, unless we’ve misunderstood what democracy means, what actually living as equals might mean. The truth is that our opinions matter, and need to matter more than we let them. We need to learn to disagree. We need to learn to live together. We need to remember that we are equal. The truth is important - but then, so are our opinions. Perhaps it’s time to take our opinions a lot more seriously than we have been.

The opening image is a detail from The Last Messenger to Democracy by Aramis Marchetony, which I found here. As ever, no copyright infringement is intended and I will take the image down if asked.


How To Be Yourself

Perhaps the first mistake we all make as individuals is to think that we know how to be ourselves. When we object to someone else that "nobody can be me but me" we're being entirely truthful, but we should not deduce from this that being yourself is easy.

The Danish philosopher, Søren Kierkegaard, puts it beautifully:

There is a fear of letting people loose, a fear that the worst will happen once the individual enjoys carrying on like an individual. Moreover, living as the individual is thought to be the easiest thing of all, and it is the ethical that people must be coerced into becoming. I can share neither this fear nor this opinion, and for the same reason. No person who has learned that to exist as the individual is the most terrifying thing of all will be afraid of saying it is the greatest.

The individual person isn't a loner survivalist cut off from society, but one being among others amidst whom they live. When we angrily desire our individuality, what we are hungering for is an escape from the ties that bind us to these other beings that intersect our lives – but this we cannot achieve except through the self-destructive intervention of breaking these ties one-by-one. Every time you resort to this drastic step, you sever yourself from another piece of your individuality, for it is all these random, circumstantial connections to other beings and things, places and people, that are the raw materials from which your life is built. Without them, you are not an individual, you are nothing, both because it is these circumstances that brought you to life and kept you alive ever since, and also because who you are flows from where you are coming from.

Now it is difficult for me to speak about this question of becoming yourself, because I do not want it to sound as if I am claiming that I know how to be you better than you do. Obviously, I don't even know who you are as I write this! Rather, what I am trying to do is offer a warning that being yourself is much harder than it sounds. It is always a dangerous game, giving advice, and often disastrous when advice is given in anger or haste, and the last thing I would ever want to do is interfere with anyone's exploration of how to be themselves. Besides, as Kierkegaard warns, whenever we try to tell others how to be themselves we "betray ourselves by our instantly acquired proficiency, and fail to grasp the point that if another individual is to walk the same path, they have to be just as much the individual and can, therefore, be in no need of guidance, least of all from anyone anxious to press their services upon others…"

However, I can see little harm in pointing out that whatever being yourself is going to entail, it might help to understand what you are...

What You Are

We tend to assume we know what kind of thing we are – yet there are many different choices for understanding what you are, all of which can work out for certain people and any of which can lead to disaster when undertaken thoughtlessly.

Take the case of disbelieving in the reality of your existence. If you come to think that you don't really exist because you are just an illusion brought about by an elaborate hoax of your biology, then there is no possibility of being yourself because there is no you to be. This seems like a terrible start to any process of self-discovery! Yet this self-negating way of understanding what you are could also be illuminating, as it is to Buddhists and Hindus whose conception of appearances as essentially illusory offers a way of discovering yourself through a denial that your thoughts and desires are the most important part of your existence. In this, as in so much in life, the same assumptions can lead to radically different conclusions.

Most likely, you view yourself as a consciousness inhabiting a body, with the latter generating the former via the biology of neuron connections that grants you free will and powers of imagination. In which case, your view is not terribly different from that of people who lived hundreds or even thousands of years ago, apart from the name given to the kind of thing you are. As the British philosopher Mary Midgley made clear:

When the sages of the Enlightenment deposed god and demystified Mother Nature, they did not leave us without an object of reverence. The human soul, renamed as the individual – free, autonomous, and creative – succeeded to that post, and has been confirmed in it with increased confidence ever since. Though it is not now considered immortal, it is still our pearl of great price.

The danger in buying into a purely individual conception of who you are is that it will make your existence appear to be something emanating solely from inside your mind. But that's not the case – who you are and what you are may have its locus of experience inside your mind, but it is constituted and sustained by the network of connections and situations I mentioned above, the raw materials from which you make yourself. We take great risks with our selfhood, therefore, if we think of what we are as something wholly sealed inside our heads.

Inside Out

Whatever way you settle upon for understanding what you are, you then have to negotiate the tension between what is apparently inside (your mind, your memories) and what is apparently outside (your social connections, your lived environment). Psychologists have finally started to come around to the idea that your mind is partly constituted by this exterior environment. Compelling recent concepts like 'enactivism' and 'embodied cognition' explore a path cleared by philosophers, especially the German philosopher Martin Heidegger. Heidegger saw our situation as one of being thrown into a world, the circumstances we are born into being the very condition for discovering what we mean by ourselves.

But how do we distinguish between inside and outside? Many teenagers try to break ties with their family or the traditions of their birth culture as an act of asserting their individuality... but the rejection of these relationships becomes in itself an act of participation, participating in exile, if you will. Active rejection of family or tradition still defines the inner self in these cases precisely by that rejection. Rather than severing that connection, we simply take upon a different form of connection – that of opposition or withdrawal.

To navigate this problem requires that we have access to some concept of what is good or right for us, but this cannot simply be to act on our hunches – that would risk removing ourselves from any viable standards of judgement. Our ability to make accurate judgements depends, after all, upon our tools for thinking (our languages and terminology) which are sustained by communities of practice. It is for this reason that the Canadian philosopher Charles Taylor explored an "ethic of authenticity" that emerged in the last century or so:

To know who I am is a species of knowing where I stand. My identity is defined by the commitments and identifications which provide the frame or horizon within which I can try to determine from case to case what is good, or valuable, or what ought to be done, or what I endorse or oppose. In other words, it is the horizon within which I am capable of taking a stand.

This is part of the reason why encounters with new communities of practice can be so transformative – whether it is a religious tradition from outside of our prior experience, a community of care based around a sexuality or gender identity we had not previously considered as applying to us, a medical diagnosis that connects you to other people with whom you share a commonality of experience, or a political faction that speaks to you from outside of your prior assumptions, the discovery of who you are frequently involves a voyage outside of your mind and into revelatory new connections with others.

Yet each encounter of this kind also risks deceiving us – especially when we have actively broken ties to our previous communities. The discovery of a new network of care that we can see ourselves belonging to is alluring, because as social creatures we crave belonging even though other humans fundamentally annoy us (as the Prussian philosopher Immanuel Kant remarked, we are "sociably unsociable"). But this inherent appeal of belonging to something cannot resolve the question of whether the identity we are trying on is an authentic solution to the problem of ourselves. By the same token, nobody watching 'from the outside' is going to be able to decisively determine what is and isn't authentic on our behalf. We are all inside and outside the same boats in this regard.

The danger of treating the dizzying array of possible identities presented to us as merely a buffet or a shopping catalogue to choose from is the risk of failing to notice how each encounter with every possibility of understanding ourselves is going to have an effect on who we are becoming. If we think of who we are as just a single identity where we simply have to browse the shelves until we find "the right one", we will end up reducing ourselves to a mere caricature of who we could be if we took the time to discover authentic connections with all the many facets of who we are and might be.

Paradoxically, discovering how to be yourself requires other people, both as examples to understand, and as a sounding board as we work through the challenges of understanding how the different shards of who we are fit together into a coherent whole. Even if you were "born this way", you still needed to learn about 'this way' by seeing these possibilities for existence acted out in others. Identities are sustained by their communities – and counter-intuitively, they are strengthened by the opposition of other communities that deny their legitimacy, for we are never bolder than when we feel threatened.

The problem of being yourself has no quick fix, and certainly cannot be solved by ordering your new self online. It requires you to do the work, thinking and feeling through your existing connections and communities, taking on new potential aspects of yourself with care and not rushing the process of discovery by letting your enthusiasm for the new lure you away from parts of who you are that are far more important than their humdrum familiarity might suggest.

How do you discover how to be yourself? The same way we learn anything: you watch other people become themselves, and then try to make some of what you encounter work for yourself. Sometimes it will. Sometimes it won't. Sometimes it will seem impossible that this could be you, but you may still later come to see how it all fits together. It's a mystery to solve, and only you can solve it – but you will have a much greater chance of success the more you listen to others and recognise that you can only be yourself with others. Alone, you are trapped 'inside' with your fears and your anger – only together can we find ourselves.

Prepare yourself for the adventure of a lifetime.

The opening image is an untitled painting by KwangHo Shin, which I found here. As ever, no copyright infringement is intended and I will take the image down if asked.


Doctor Who and the Cancellation Monster

Is Doctor Who in danger of being cancelled? Well, first let's make the obvious point that it was already cancelled, back in 1989. So the question probably ought to be: is it in danger of being cancelled again?

To find out, join me on a fascinating journey through the last decade and a half of New Who...


The Ratings of Doom

If you've been keeping your ears to the Doctor Who rumour machine recently, you will doubtless have stumbled upon something declaring that current showrunner Chris Chibnall has "killed the show", and that doom is imminent. These rumours of impending disaster aren't actually that new – British newspaper The Sun has been reporting "lowest viewing figures ever" for Doctor Who for about a decade now… but that's a publication that loves to make titillating scandal out of anything and everything, and it's wise to take its stories with a grain of salt.

The truth is: Doctor Who's ratings have fallen steadily over the last decade or so. As indeed have the ratings for all television shows. A key reason for this is that broadcast television is now facing increasing competition from streaming services such as Netflix, and the battle for eyeballs has never been fiercer. Take a look at the following chart, which shows the number of viewers (in millions) for another British television institution, the gritty soap EastEnders, versus the numbers for Doctor Who. Both show a decline in both the highest and the lowest viewing figures for each year from 2005 to 2019, with a number of spikes in the Who ratings corresponding to special events like the 50th Anniversary in 2013 or the debut of the first female Doctor in 2018. If we look at the ratings, it's fairly clear that Doctor Who is actually holding off the rot at least as well as (if not better than) EastEnders.

Eastenders vs Doctor Who

But viewing figures are not the most interesting metric when it comes to judging the 'health' of a BBC show with respect to cancellation. That's because Auntie Beeb, as a public broadcaster, is willing to concede to the idea that not all of its programmes need to be as successful as Doctor Who at attracting viewers. For the last forty years, the BBC has commissioned reports on Audience Appreciation, presented as a number out of 100 known as the Appreciation Index (AI), which is calculated by getting people to rate shows out of ten, averaging the scores, and then multiplying the mean value by 10. Take a look at a graph of New Who's AI scores from 2005 to present:

Doctor Who AI Scores

The BBC doesn't like to share these figures for individual shows… but they do for Doctor Who because, well, the AI scores are consistently good. If you look at the image above, it looks like a steady decline, which indeed it has been (more on this below), but it looks dramatic only because I condensed the vertical axis on the chart so we could see the changes clearly. Here's the same data shown with a zero point of origin on the vertical axis.

Doctor Who AI Scores.with origin

It's almost completely flat.

That's because New Who has never scored below 75 on its Audience Appreciation - even Love and Monsters managed to score a 76 (indicating an average audience review score of 7.6), which is below BBC One's target average of 81 but is still a very respectable score for any TV show to get.
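If you want to see the arithmetic concretely - along with the axis effect from the two charts above - here is a minimal Python sketch. The handful of ratings and the gently declining series are invented for illustration; only the mean-times-ten calculation and the zero-origin comparison come from the discussion above.

```python
# Sketch of the Appreciation Index arithmetic and the axis-truncation effect.
# All of the rating values below are hypothetical.
import matplotlib.pyplot as plt

def appreciation_index(ratings_out_of_ten):
    """Mean of the individual ratings (each 0-10), scaled to a score out of 100."""
    return 10 * sum(ratings_out_of_ten) / len(ratings_out_of_ten)

print(appreciation_index([8, 9, 7, 6, 8]))  # average of 7.6 -> an AI of 76.0

# A made-up series of per-episode AI scores drifting down a few points.
episodes = list(range(1, 21))
scores = [86 - 0.3 * n for n in episodes]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(episodes, scores)
ax1.set_title("Condensed vertical axis")  # the decline looks dramatic
ax2.plot(episodes, scores)
ax2.set_ylim(0, 100)                      # zero point of origin: almost flat
ax2.set_title("Zero point of origin")
for ax in (ax1, ax2):
    ax.set_xlabel("Episode")
    ax.set_ylabel("AI score")
plt.show()
```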

Some quick disclaimers… the BBC changed data supplier in 2012, switching to online surveys and increasing the sample size from 1,000 to 20,000. This presumably means the later data is more accurate than the earlier data (in so much as any thermometer for aesthetic judgement can be accurate!), but there's no discernible impact from this particular change. Also, these ratings are based on people who watched the shows as they were broadcast and were exposed to all the surrounding media fuss; they were not taken in a controlled environment (neither would there be any point in doing so). This means, as we'll see shortly, there are some fascinating hype effects on the AI ratings for Doctor Who.
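As a rough check on that 'presumably more accurate' intuition, the standard error of a mean rating shrinks with the square root of the sample size, so even the smaller sample was already fairly precise. The sketch below assumes individual ratings vary by about 2 points on the 0-10 scale - a guess for illustration, not a BBC figure.

```python
# Back-of-envelope: sampling noise in an AI score at the two sample sizes.
# ASSUMED_SD is a hypothetical spread of individual ratings, not BBC data.
from math import sqrt

ASSUMED_SD = 2.0  # assumed standard deviation of a single 0-10 rating

for n in (1_000, 20_000):
    se_out_of_ten = ASSUMED_SD / sqrt(n)  # standard error of the mean rating
    margin_ai = 2 * 10 * se_out_of_ten    # ~95% margin on the 0-100 AI scale
    print(f"n = {n:>6}: AI score uncertain by roughly +/- {margin_ai:.1f} points")
```

On those assumptions the noise is around a point at worst, which fits the observation that the change of supplier left no discernible mark on the scores.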

How low would AI have to drop before the show would be at risk of cancellation? Well, it's widely discussed that at the start of Sylvester McCoy's time as the seventh Doctor in 1988, the classic show pulled in Appreciation Index scores of 60, with Bonnie Langford's character of Mel being singled out for particular dislike. As the BBC report stated:

56% of respondents who answered a questionnaire on the Paradise Towers story wished – as seemed likely at one point during the course of this adventure – that she had been eaten.

As a start, then, we can eliminate any risk of the BBC cancelling a programme that is still pulling in average AIs of 80+, especially one that is (in the wake of what happened to Top Gear) their only remaining flagship brand for export. The scores would have to drop by 20 points or so for this to be a risk, and that's not even remotely close to what's happening.

But there's a lot more we can glean from trawling through the AI scores with a curious eye…

Arc of Entertainment

The annotated version of the blue chart (below) gives us an intriguing peek into what the audience for this show has thought about individual episodes, and allows us to take a closer look at what we might call the Arc of Entertainment for New Who. After some digging, I have several hypotheses as to the behaviour of the AI scores that might be interesting to mull.

Doctor Who AI Scores.Annotated2

Let's start with the lowest points. These are all during Russell T. Davies and Julie Gardner's run. One is the oft-unpopular Love and Monsters, with its played-for-laughs monster, the Abzorbaloff. Honestly, while it may seem quite amazing that this one still pulled in a 76, this is a hugely inventive episode, and the genesis of the 'Doctor-lite' episodes that soon after gave us the outstanding Blink. In many respects, this is a perfect example of Davies' strengths and weaknesses – it's wildly creative (giving us an entirely new format for a Doctor Who episode), while also being hugely indulgent (it's sometimes rumoured that the monster is modelled on real-life Doctor Who superfan Ian Levine...). It divides fans, and the score of 76 reflects that divide – scores of 8 and 9 are being averaged with much more dismal values.

But then we get the other two episodes to score 76: Rose and The End of the World. Now among New Who fans, these are simply not stories that provoke obvious ire, and it seems to stand in some need of explanation as to why Rose in particular could rival Love and Monsters as allegedly the worst New Who episode according to Appreciation Index scores. However, there is a highly likely explanation for this. When New Who began to air, its audience included a great many classic Doctor Who fans, along with a (larger) number of newcomers with little or no experience of the franchise. Dedicated Whovians did not, on the whole, like Rose… it felt like a weak rehash of Spearhead from Space. Similarly, The End of the World took some flak for seemingly writing over some parts of the backstory (although, let's be honest, the Whoniverse has always been self-rewriting, as Moffat arguably parodies in The Big Bang). My suspicion is that these low ratings from (some) classic fans dragged these early episodes' AI scores down – but after that, the detractors simply stopped watching, and the AI scores began to rise.

Another thing the AI scores reveal about Russell T. Davies' stint as showrunner is that Davies mastered the build. I personally never liked his slightly slapdash way of building up to a climax at the end of each season, but I was weaned on Chris Claremont stories where the building up was far more textured (it's far easier to plan long-term stories in comics than in TV shows). The bottom line is, Davies' method worked, and it did so despite largely hermetically sealed episodes, which is an incredibly difficult balancing act. The Parting of the Ways (series 1 finale), Doomsday (series 2 finale) and The Stolen Earth/Journey's End (series 4 finale) all show huge spikes that speak of the immense satisfaction viewers felt as Davies' arc stories paid off. (The series 3 finale also peaked above the rest of its series, but only just.)

But hang about – The Stolen Earth/Journey's End aren't just an end-of-series spike, they're the highest-rated episodes of New Who ever at 91 approval. How is that possible? Call me a humbug, but those stories are pretty weak (at least some other Whovians agree with me on this). Yet they have two enormous advantages. Firstly, it was highly publicised that David Tennant was leaving, and Tennant was (and is) so popular that this hype train left its impact (even though it would be a year and a half before Davies and Tennant would actually bow out). Plus, bringing back both Captain Jack and Sarah Jane was an honest-to-goodness crowd-pleaser, and even more so for fans of Davies and Gardner's spin-off shows. Crossovers often do well in the short term. Hindsight doesn't always look back upon them kindly, though.

State of Gradual Decay

And so to Steven Moffat, a much more consistent showrunner than Davies in many respects. His only AI dud is Mark Gatiss' Sleep No More at 78 (just above Davies' three barrel-scrapers with 76). It's another format-breaker like Love and Monsters, and it also divides fans. (I don't much like it, but I probably wouldn't single it out over other lacklustre moments in Capaldi's tenure). Moffat proved weaker with his series arc plotting than Davies, though, and his only end-of-series spike is his first one, The Big Bang. Look at the crash immediately afterwards with A Christmas Carol. Ouch. Loveable Matt Smith can't rescue this one from being a bummer. Moffat does, however, achieve something that Davies never quite managed – a spike for a series opener, Asylum of the Daleks. Another crowd-pleaser with a great story and tons of fan service (yes, you can buy my love with a Special Weapons Dalek), and the back door debut of Jenna Coleman to boot. Moffat's 50th Anniversary specials The Name of the Doctor/The Day of the Doctor are a huge spike at the end of series 7 - although we're 'only' talking 88 here, which is an utterly fantastic AI score that other shows would kill for, but amazingly not the 91 that Davies' fake-out exit achieved. And this is with David Tennant reprising his role as the Doctor, of course…

That failure to crest above the wave, despite frankly blowing all of Davies' finales out of the water in terms of writing and production quality, is a sign of something that is going to seriously afflict Moffat's time as showrunner… audience fatigue. The AI scores are in a trajectory of consistent decline from the moment Moffat takes over – don't be fooled by the story that it's Chris Chibnall and Jodie Whittaker that kicked off a downward turn, the slowdown has been a decade in the making, and it's at its most tangible with Moffat's second Doctor, Peter Capaldi. Deep Breath, Capaldi's debut, is a notable dip from where the show had been previously (admittedly the anniversary was a tough act to follow!), and Capaldi's only readily apparent peak is World Enough and Time (which, incidentally, I love), and that wasn't the finale for series 10, but merely the set up. The final two Capaldi episodes drop 2 points apiece, bowing out at just below where he came in. It is also during this period that Doctor Who merchandise sales drop, although to be fair, they peaked in Tennant's run and never recovered, just like Dalek toy sales in the sixties.

Doctor Who AI Scores.Annotated2

Finally, Chibnall and Whittaker. (I've duplicated the annotated chart here so you don't have to scroll so far to check it.) Here, the AI scores start to become seriously up-and-down, but the highs of 83 are directly in line with the fatigue effect that began with Matt Smith and made itself a comfortable rut during Capaldi's time in the TARDIS. Jodie Whittaker's debut in The Woman Who Fell to Earth is very healthy in AI terms, on par with stronger Capaldi/Moffat episodes certainly, and there's another bump with Spyfall (which personally I'm less fond of) – and then there's the gobsmacking Fugitive of the Judoon, which weirdly scores the same 83 approval as the first Whittaker story despite wildly greater love from the supportive parts of the fan-base. Something's definitely up here – what?

There are at least two factors that can help explain this pattern. Firstly, the contrasting down-strokes. The Tsuranga Conundrum, The Battle of Ranskoor Av Kolos ("Worst. Finale. Ever." - although it's not actually a bad episode), Orphan 55, Praxeus, and Can You Hear Me? all dip notably into the high 70s, although none go below Orphan 55's disappointing 77 (justified in my view). When a show can't consistently satisfy the audience there's none of the hype boost that Tennant/Davies secured, and it means that better episodes score lower than if they were surrounded by stronger work.

The other problem is the fan revolt. I don't know what proportion of the Whovians at large is involved, certainly not enough to make the AI scores do anything other than preserve the general trend that began with Moffat's ascension to the throne, but as with those 76 scores for the first two New Who episodes, it inevitably drags your AI scores down when you offend the fanbase. And whereas classic fans gave it up in two episodes, Whittaker/Chibnall haters apparently keep on watching, even though they're not enjoying themselves. But if this is a factor, it's a reminder that the disgruntled fans are in the minority, otherwise we really would be facing cancellation, instead of a continuation of the same general pattern since Tennant, Davies and Gardner left. Also, if we acknowledge this factor, we have to wonder what the scores would be if the rebels just stopped watching...

Who vs Who

Finally, I want to share this intriguing chart which shows all the episodes by each Doctor in order, even though taken out of context this will seem like priceless ammunition for the anti-fans. Frankly, we're all free to tell whatever story we wish, but what we see here offers a much more intriguing tale than just the long-running trend of decline.

Doctor vs Doctor AI Scores

Obviously, Whittaker is below Capaldi who is below Smith who is below Tennant. That's the audience fatigue effect I was talking about as much as anything (go back and check the blue charts above to confirm this). But we can look at this story very differently, not as a competition but as a means of drawing more general conclusions about the AI patterns of New Who.

  • Christopher Eccleston fights from 76 up to a whopping 89, and goes out with a genuine bang. He and Davies (and Julie Gardner – she really ought to get more credit than she does) literally save the show here with this fight from "should be on BBC Two" to "centrepiece of Saturday night telly".
  • Tennant is wildly variable at the start, dipping to his 76 low in his first year in the role, but then wins the battle for hearts and minds and climbs all the way to his triumphant exit.
  • Matt Smith comes in higher than Tennant (he's riding on the love for Tennant, and the expectations that Moffat's episodes under Davies/Gardner had justifiably created), but it's all gently downhill from here, the show understandably unable to get back the national moment of hype created by Tennant's departure, even though the 50th anniversary gives it a great shot.
  • Capaldi's pattern is very much like Smith's, but further deflated by the fatigue effect – he too gets his exit peak, but it's slightly premature, his final two appearances both sliding downwards.
  • Jodie Whittaker's trace is nowhere near as stable as Capaldi's, but it's about the same jaggedness as Tennant and Smith at the same point, taking the overall trend in AI decline into account. It's almost as if – and stop me if you've heard this before – the show just went through a major transition and is still feeling out the consequences of that change...

I know I won't convince the anti-fans with any of this analysis, but that isn't really why I undertook this investigation. I was merely curious as to what the Audience Appreciation Index values would reveal if I examined them with a little data visualisation. Having shown you the results, let me finish with three claims that neatly bookend everything discussed above:

  1. Doctor Who is in no danger of cancellation. If you're hoping for this out of spite, I'd suggest stopping watching is a better strategy, because then you'll be genuinely in control of your own experience of Doctor Who going forward. (Big Finish would love your support if you do ditch the TV show.)
  2. New Who audience scores have steadily fallen since David Tennant announced his departure. The slide in AI scores began with the 'Year of Specials' (2009) and continued throughout Moffat's run and beyond. However, the falls are merely a few points in size, and the overall pattern remains essentially flat.
  3. Whittaker/Chibnall have not killed the show, and their next series together could potentially outperform Capaldi/Moffat at the equivalent point in their run, but the heavy-handed moralising will need to be handled more elegantly (all the low points in series 12 featured prominent 'lessons').

This is all rather exciting to me – I would rather a showrunner who is willing to take risks with new writers and suffers a few duds than someone who is consistent in quality but whose shows are unambitious. The funny thing is, it seems like all the New Who showrunners fit that description, one way or another. Whatever happens next, I'm definitely not bored of the show… I'm ready for more.

What about you?

All data quoted belong to the BBC.


How to Disagree

So you want to learn to disagree... congratulations! There are very few of us interested in mastering this skill rather than, say, learning how to be right or how to be rich. Learning how to disagree is one of the most challenging undertakings any human faces - rocket science is easy by comparison (it's just medium difficulty algebra), and brain surgery is a doddle (you just learn to forget that what you're manipulating is a person).

Now let me make it clear that phrases like "let's agree to disagree" are not any part of learning to disagree. That particular statement is more about avoiding disagreement, that is, not talking about disagreement. This is actually a great strategy because it's easier to avoid talking about contentious subjects than to learn to talk when you disagree, and a vast number of cities around the world have won centuries of peace by marking certain conversations as off limits. Our trouble is, we've rigged everyone up to a half-baked hive mind called variously 'the internet' or 'social media' (or some such) and consequently made 'not talking' practically impossible. When you can't avoid talking about things, you have to learn to disagree. But we haven't.

Why is disagreement so challenging? It's because everyone must proceed from the basis that they are correct about at least some proportion of what they think, or else be paralysed with uncertainty and anxiety. We learn to defer on some topics (e.g. few astrophysicists master plumbing) and to trust our own judgement on others. To put it another way, we develop a sense of where we can find truth and where we can trust someone else to do so. When we undermine our ability to deal with the truth, we also tend to get anxious - or else furious, since as mammals we can always fend off our fears with anger. So the truth, no matter what it might be, is very important to our well-being.

You may have heard it said that we are now in a post-truth world. What rubbish! Truth is exactly as it always was: complex, unwavering, glimpsed only in shadows. What characterises our current situation is not an absence of truth, but an inability to disagree about it - and everyone is afflicted with this problem. That you can point to obvious examples of lies and deception is rather beside the point, as it only shows that (as always) ambiguities can be exploited by opportunists. If you're sick of the 'post-truth world', what's needed is a path through the cacophony of disagreement - not by dreaming that 'once everyone acknowledges the truth' (i.e. agrees with you) everything will be miraculously resolved, but by trying something rather more radical instead.

Three Principles of Dissensus

Back in December 2019, I posted a very short tweet that I always wanted to get back to. The tweet read:

Three Principles of Dissensus

1. You are allowed to disagree.
2. The truth you find in your position is not necessarily evidence of the falsehood of those you disagree with.
3. Truth emerges through resolving discrepancies, and never from insisting on a single interpretation.

A consensus is when there is widespread agreement, while a dissensus is when there is widespread disagreement. The French philosopher Jacques Rancière suggests dissensus is essential to anything we might call democracy, and I wholeheartedly concur. Indeed, I don't see any authentic way to claim democracy while also claiming that no disagreement is permitted. As a result, learning to disagree is not only essential for our understanding of truth, it is foundational to any conception of democracy, even the laughable kinds we're currently stuck with.

First Principle: "You are allowed to disagree"

This sounds so obvious you'd think it wasn't worth stating. But it's absolutely crucial, the cornerstone of disagreement, and it's very, very hard to accept this in practice. There will either be numerous topics upon which you cannot bear disagreement (abortion, war, taxes, gender/sex, declawing, meat etc.) or you will have already thrown in the towel on the truth and be unable to accept that any robust agreement is possible in any context - a problem Nietzsche warned us was coming, but that we didn't really understand, and still don't.

The biggest problem with learning to disagree is what psychologists call 'cognitive dissonance', the vehement rejection of something outside of our understanding, since our minds will work to reimagine those who disagree with us in ways that allow us to ignore their humanity (especially when we don't speak to these people face-to-face). Take any of the big political divides of our time (it won't matter which country you live in) and you'll find two main sides positioning themselves against each other, backed up with rhetoric that either claims their side of the non-debate is of especial importance, or denigrates the other side as somehow subhuman.

Allowing people to disagree with you is difficult. I very much doubt that you are as good at it as you think you are - I've had to work for decades to get even half-way good at it, and I still feel like an amateur.

"But what about such-and-such" you say. Yes, you can still disagree about this. The Dalek wants to exterminate you and all non-Daleks, and while you certainly won't let that happen without a fight you can allow them to hold their wildly xenophobic views while still insisting, quite rightly, that they are not entitled to kill you or others, not to mention preventing them from doing so (although ideally not by exterminating them...). Even a Dalek could be part of a democracy, provided it could lay down its weapons. And that's what most of us are lacking: that willingness to disarm, to let other people disagree.

Second Principle: "The truth you find in your position is not necessarily evidence of the falsehood of those you disagree with."

Provided you can accept that other people hold views that are (at least partly) valid yet different from your own, the next problem is recognising that you both could be right, but in different ways. We all find truth in a unique and unrepeatable fashion, but we don't necessarily need to doubt that we are all finding (some) truth. The trouble is, we consistently act as if a great many disagreements have a single answer... working from that premise, wherever you strongly feel the truth of your own position, anyone who doesn't hold that view must be wrong.

But that's not necessarily the case. Even someone who denies that 2+2=4 might be correct - for instance, while thinking about numbers in base 3, where 2+2=11 (the same quantity, four, just written as one three and one unit).
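If it helps to see that worked through, here is a minimal sketch in Python (purely illustrative; the function name is my own, and nothing here comes from the original essay) that renders the same sum in different bases:

    # Render a non-negative integer in a given base, to show that the sum of
    # 2 and 2 is written "11" in base 3 and "4" in base 10 - the same quantity
    # in two notations. (Illustrative sketch only.)
    def to_base(n: int, base: int) -> str:
        if n == 0:
            return "0"
        digits = []
        while n > 0:
            digits.append(str(n % base))
            n //= base
        return "".join(reversed(digits))

    print(to_base(2 + 2, 3))   # "11"
    print(to_base(2 + 2, 10))  # "4"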

Beware of assumptions founded on mathematics and logic, though - in what's called Boolean logic something must be either true or false, but that's solely because that's how those terms are defined in that particular mathematical system. But the situations facing us are never that simple, and even when we can model part of the world through numbers, getting to the truth of what those numbers mean is more complex than merely stating the mathematics. Even the algebra of physics involves interpretation, as the great twentieth-century researchers discovered, and conversely even statistics, the most interpretative domain in mathematics, can successfully reveal aspects of the truth when it is deployed with care.

You almost certainly have part of the truth. What you logically cannot have are those other parts of the truth you haven't yet considered - and these are denied to you until you can allow for the possibility of discovering fragments of truth in viewpoints that appear wildly misguided to you.

Third Principle: "Truth emerges through resolving discrepancies, and never from insisting on a single interpretation."

Both the Sufis and the Hindus tell a story of blind people describing an elephant differently based on the part of the animal they're touching - there is more wisdom in that story than we like to admit, but there's also a risk of us taking this tale too literally. We assume, for instance, that since we can find no way of reconciling rival claims to whatever we hold to be true, there is no conceivable elephant to be found. But imagine that what was being described is not an elephant but, say, a rainbow at a waterfall. This will provoke responses just as varied from those using touch and sound to encounter it - "a rainbow is wet", "a rainbow roars", "a rainbow is slippery". From the point of view of someone who sees the rainbow, it's impossible to understand why anyone would make such ludicrous claims, yet the people speaking are only mistaken about the name they are using for the thing they're describing.

Whenever we insist upon only one valid interpretation of a situation, we are shutting down the possibility of finding the truth. Instead, we are gambling that the interpretation we have already chosen cannot be improved... even though we have no way of knowing how much of the puzzle we have managed to assemble. By learning to find the truth held in other people's perspectives, we can resolve discrepancies that at first we will not even notice...

First, we have to be able to disagree, then we have to allow for the possibility of truth coming down apparently incompatible paths. Only then can we start to put it all together... and at no point can we be certain that there is not something else - something not yet revealed to us - that we will still need to take into consideration.

Learning to Disagree

Now if you've taken any of this advice seriously, you might be worried about the awesome challenge of being able to disagree with everyone. Fortunately, you don't need to disagree with everyone directly (in fact, you cannot, simply because there are so many of us). Besides, learning to disagree is a community endeavour, since everyone you can successfully disagree with becomes a part of your own community, no matter how distant their connection to you.

The challenge as I see it is to find those arguments where people cannot manage to disagree, and try to find new ways of approaching the conflict. Those arguing probably won't want to stop and listen to what others have to say - they'll likely be preoccupied trying to exterminate their own Daleks, or at the very least discredit them, and that will keep them very busy. Also, you probably won't know how you can help (and if you know with certainty how to help, you definitely need to reconsider the second principle, above).

So... just listen. And keep practising how to disagree until we all finally master it.

The opening image is a detail from Hilma af Klint's Svanen (The Swan).