Sustaining the Present

Silent Running

What approach to technological development gives humanity a chance of reaching the Tenth Millennium? This is a serious and significant question, and no greater challenge faces the imagined ascenturians than the problem of technology. The ascenturian is bound to hold ideals that make possible the continuation of present diversity into the one hundredth century after civilisation's founding. That requires a far more critical relationship with our tools (and the networks that embed them) than almost any that can be found today.

I have said that the ascenturians have opponents but not enemies, and among their most prominent opponents are the accelerationists. This term, coined by science fiction writer Roger Zelazny, describes anyone who thinks it is in humanity's best interests to accelerate technological development, whether they believe that high tech offers us salvation or, as in some Marxist variants, a catalyst for social collapse and thus revolution. Either way, this is an absurdly reckless philosophy. To assume that accelerating technological expansion is the means for humanity to survive (much less thrive) for another 48 centuries is to adopt an existential version of the Martingale gambling strategy - a foolhardy succession of 'double or nothing' bets pursued in desperation until (by chance) you win and cover your losses. That anyone would think this a sensible way to approach our shared world is really quite astonishing, and entails every farcical implication that this metaphor alludes to: gambling the future of both our species and our planet.
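
The metaphor is worth making concrete. Here is a minimal simulation sketch (purely my own illustration - the function and its parameters are invented for the purpose, not drawn from any accelerationist source) of a Martingale gambler playing against a finite bankroll:

```python
import random

def martingale_run(bankroll=100, base_stake=1, win_prob=0.5, episodes=1000):
    """Play repeated Martingale episodes against a finite bankroll.

    Each episode doubles the stake after every loss until a single win
    recoups all prior losses (net +base_stake), or until the next
    doubled bet can no longer be covered."""
    for _ in range(episodes):
        stake = base_stake
        while stake <= bankroll:
            bankroll -= stake
            if random.random() < win_prob:
                bankroll += 2 * stake  # one win covers every prior loss, plus base_stake
                break
            stake *= 2  # 'double or nothing': escalate after each loss
        else:
            # The losing streak outran the bankroll: no way to cover the next bet.
            return bankroll
    return bankroll

# Each episode wins a pittance... until one unlucky streak strands the gambler.
results = [martingale_run() for _ in range(1000)]
busted = sum(r < 100 for r in results)
print(f"{busted} of 1000 gamblers ended below their starting bankroll")
```

Run it and nearly every gambler ends up stranded: each episode nets a tiny profit until a losing streak arrives that the bankroll cannot absorb. With finite resources, eventual ruin is close to a mathematical certainty - which is precisely the wager the accelerationists propose to make with a finite planet.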

In The Virtuous Cyborg I suggested that we will misunderstand who and what we are today if we don't acknowledge that we are cyborgs embedded in the networks of technology required for us to live as these hybrid beings. It is the networks, rather than the technology, that make the difference here. A hammer, once made, requires nothing further to be an effective tool. A computer, conversely, is worthless without the networks of electricity, programming, and communication that are required to make such a device appear useful - and that conceal, with effortless denial, the tremendous collective labour required to maintain these networks. There is no such thing as an individual cyborg: there are only the networks, and the cyborgs ensnared within those networks.

I do not think it possible, plausible, or desirable to expect us simply to give up this state of being, to which we are addicted even as we deny it, although we can certainly admire the Amazonian tribes that have not yet been incorporated into our industrial world, or the Amish communities that manage to exist alongside it. Yet we still need ways to think differently about our relationships with technology, since the ones we have are colossally destructive in almost every dimension. It is both fascinating and disturbing that we are so swept up in this passion for the technological that we fail to see the ongoing catastrophe attached to it. Writing in 1993, Raimon Panikkar named this disaster 'technocracy', and saw clearly that acceleration was right at the heart of the crisis, boldly comparing it to cancer, the uncontrolled growth that destroys an organism.

The 'energy crisis' is a helpful way of exposing the problem. Supposedly there is a 'crisis' because we are not able to make enough 'energy'. Thus, some who lean towards accelerationism, like British physicist Brian Cox (who was in the year ahead of me when I studied Astrophysics at Manchester University) suggest that 'the energy crisis is over if you want it to be' - meaning, if we invested sufficiently in nuclear fusion research, we could 'solve' the energy crisis. Firstly, this is another bet: we do not know how fusion pans out as a power source on the scale of fifty years, let alone on the scale of fifty centuries. Secondly, and more importantly, having greater access to 'cheap' energy is not likely to solve any crisis we are currently facing. It is far more likely to facilitate even greater environmental damage once the limits on power supply that currently constrain industrial growth are removed. If there is an 'energy crisis' it is that we want too much energy, not that we cannot make enough.

Technocracy transforms our tools from craftsmanship to dominion, and this analysis is not specific to the one problem of 'energy': it is the essential problem of technology. We cannot know the consequences of developing new tools until the problems have already manifested (and in some cases, not until it is too late to prevent them). Furthermore, we assume every crisis can be solved by throwing more technology at it, ignoring how the creation of ever more varieties of technical solutions reproduces the problematic circumstances we are attempting to resolve. Plastic-eating fungus sounds like a wonderful application of technology to an environmental problem we created - I wonder what we will end up creating to eat the fungus...?

I keep returning to Martin Heidegger's reflections on the problem of technology, because the German philosopher's thinking successfully exposes this issue at its conceptual root. Once we adopt 'technology' as our understanding of how we relate to the world around us, once we surrender to technocracy, everything is reduced to standing reserves, resources to be exploited. The 'energy crisis' came about because we developed new tools to burn fossil fuels, which were limited resources. Yet what made this 'limit' was the technological mindset - I might even say the accelerationist mindset. It is all very well saying we have plenty of deuterium for nuclear fusion, but it is still a limited resource no matter how plentiful it may seem now. In twenty centuries' time, who knows what degree of energy supply might be needed to power the projects an accelerated technology wishes then to pursue...?

The term 'sustainable development' is bandied around so much today that we don't seem to recognise that it is a contradiction in terms. 'Development', meaning industrial development, is by definition unsustainable, for it is wed to the idea of standing reserves to be co-opted to our purposes. It is no good reversing the concept - 'sustainable regression' is hardly plausible either! Yet our fictional ascenturians require a principle of the sustainable in some form or another, because clearly we cannot expect to reach the Tenth Millennium without attaining some kind of relationship with our world that can be maintained over the long term.

Accelerationists sometimes fool themselves with the mythology of fleeing the planet - 'we can't make it work here, so we must escape into space!'. What a bizarre proposition that is... since we cannot manage to devise any form of sustainable living arrangement for the planet we have, abandoning it is self-evidently a doomed strategy. Lynn Margulis is far from the only scientist to have recognised that if we cannot solve the problems of continued human existence here on Earth, there is no possibility at all of surviving in space, let alone reaching other worlds. This delusion is merely the brutal colonial arrogance of the Age of Sail wed to the devastating industrial ignorance of today. How dreadful to imagine that all we have to do is blast off into space and find other pristine life worlds to barbarise, even assuming that these planets are out there to be found by creatures like us, with the resources we might be able to squander on such a project. I love science fiction stories, but I will not pretend that they are realistic when it comes to colonising the universe.

Accepting that most of us cannot live like the Amish, and accepting also that a latter day 'technology freeze' is just as unthinkable (there will be no Neo-Amish movement for more reasons than one), we are left with the question of what ideal we can offer to cyborgs such as ourselves. We are now so dependent upon our electrically-powered network of devices that a former colleague of mine could suggest in all sincerity that losing internet service was akin to being 'sent back to the Stone Age'. To escape the trap of technocratic dependency, I suggest that our fictional ascenturians require something like the following principle of sustainability:

Reject accelerating technology for perfectible techniques

'Perfectible' is the vaguest of ideas here, the blank canvas onto which new visions can be projected. I will not claim to know all this means or implies; I merely suggest that some kind of ideal of 'perfectible techniques' is within reach for us cyborgs. With it, we might stand some small chance of avoiding the risks of accelerating technological development, and the colonial ransacking of our world that comes with it.

Why bring up colonialism in the context of technology? There is a continuity here we tend to overlook. For instance, when we think of Africa we tend to forget that a great many of the problems of that continent were caused by enforcing European-style economies onto places that had previously lived in a sustainable fashion. Once the possibility of selling natural resources abroad had been established through colonisation, these nations could no more kick the habit of exploiting their standing reserves than Europe could - and the United States is only the most successful European colony, in no way an exception to this pattern. Technocracy - the ideology of technology - is the science fiction heir to colonial expansion.

Yet even if the imaginary ascenturians are somehow able to escape the finger trap of technological acceleration, even if they are able to reimagine their tools as perfectible rather than infinitely expanding, they still have to face the problem of population. On this front, the environmental movements have run aground, for once again the 'left' has the insight to identify problems but is utterly incompetent at negotiating solutions. Meanwhile, the 'right' are baffled and amused by the way that prophets of environmental disaster make strange statements around the idea of 'not bringing a child into this world', meaning both 'I will sacrifice my reproductive rights for my environmental ideals' and also 'the degraded environment of the science fiction dystopia in my mind is not a fit place to bring a child'. What madness this seems from the perspective of the 'right', who alone seem to understand that the family is not going anywhere! In refusing to acknowledge this, all attempts at environmentalism grind to a nearly hopeless halt.

As already pointed out, it will not do to imagine that a population crisis can be solved by fleeing the planet so that we can enthusiastically overpopulate other worlds, or become refugees in space as in the gloriously gloomy 1972 movie Silent Running (pictured above), or as a kind of Battlestar Galactica in reverse. My best estimate of the number of humans that can live on Mars without support from Earth is zero, so the availability of millions of other planets 'out there' doesn't have a hope of resolving this issue. Even if this is an underestimate, a strategy that depends upon overcoming all the disadvantages entailed in living in environments hostile to our kind of life cannot possibly be saner than defending all the advantages our home planet already provides us.

Colonising other planets appeals because it provides a fantastical scope for future growth, yet this expansion is precisely the problem. “If there aren’t enough people for Earth, then there definitely won’t be enough for Mars,” laments Elon Musk (father of six), using 'enough' in just about the opposite sense usually applied to our vast population. All but unseen behind all these population issues is the centrality of the family to the human experience. The 'left' has traditionally opposed the concept of 'family' because of ideological commitments that, as usual, are well-intentioned but hopelessly impractical. The Communist attempt to neutralise the family as a political entity (which has unfortunate roots in Plato's political philosophy) failed utterly, and if we can just be honest about our temperament as a species for a moment it is not hard to appreciate why.

It follows that in addition to a principle to guide sustainable tools, we need a principle to guide sustainable families, which I might call a principle of normalisation, in the sense of 'stabilising' and also in the sense of 'making normal', i.e. making acceptable. Once again, since our ascenturians are purely imaginary beings, science fiction people for a future we can imagine but not truly envisage, this normalisation principle does not have to actually answer the problems, it only has to sketch the ideal that is somehow to be achieved by an ascenturian-influenced collection of societies. Something like the following, vague though it may be, might suffice:

Achieve neutral population growth without abandoning families

While a great many objections could be raised to this principle, it is important to remember that the founding concept of the ascenturians is not just that we imagine a people capable of reaching a century of centuries of civilisation, but also that these people would seek to ensure that the diversity of human experience today might survive and thrive into the Tenth Millennium in some form. The family is undeniably part of that diverse experience, and the rage of those who fell out with their parents and joined bubble cultures out of spite for their upbringing has no bearing on how this problem is to be resolved. Any number of non-familial cultures can co-exist with the human family as classically understood, provided that across all of the societies and bubble cultures being collected together, neutral population growth can be attained.

I am often accused of idealism, and I am indeed absurdly proud to be still able to muster any idealism at all in this bleak and ignorant world we have made together. But I do not think the ideal of neutral population growth is fanciful, and neither do I think the traditional concept of a family cannot be squared with it. If we consider Europe as just a single example, it attained neutral population growth almost effortlessly, without any extreme eugenic interventions or governmental interference. Neither is it necessary to invoke draconian legal restraints such as 'maximum two children per family' or the like... not every person wants to have any children, and it is this factor more than anything else that has allowed Europe to reach neutral population growth. Not a limit on 'children per person', but a balance point between those making families, and those choosing a different path.

If we talk about 'sustaining the present', it can sound as if we are asking for stasis, or the misleading 'right' ideal of preserving tradition, which as I have already pointed out really means remaking those traditions in a sequence of new forms. But sustaining the present is one of the most imaginative science fiction thought experiments we can take up, as Dune already exposed. To imagine that humanity in all its current diversity is still thriving after another 48 centuries is nothing short of incredible - and such a fictional world need not and should not be closed to new ways of being, new bubble cultures and societies we cannot imagine now, and that we do not need to think about in order to undertake this challenge. We don't need to understand the worlds to come to prepare for them, which is fortunate, because the future lies forever beyond any hope of accurate predictions.

What 'sustaining the present' means is solving the problem that the accelerationists bungled with their reckless Martingale strategy of hoping that the next technology is the one that pays for all the social, economic, and environmental harms of the previous set of tools. It means coming at these problems from a new angle, one that need not mean giving up our ingenuity or our familial affiliations. It just requires us to stop seeing the future as a place we are racing to get to as fast as we possibly can... We will get there at one second per second as we always have. Yet to get there at all, we cannot continue treating our world as a standing reserve to be exploited, nor keep pretending that new techniques are morally or environmentally neutral options in a toolbox we are expanding 'for the benefit of all humanity'. Surely we know now that industrial technology was anything but beneficial for 'all humanity', let alone for all life on our world? Singling out the technological 'wins' while wilfully ignoring all the accompanying technocratic 'losses' is simply dishonest.

Our imaginary ascenturians know how to sustain the present, and thus know how they can reach the future. All we might have to do to help them come into being is abandon our utterly naïve dependence upon 'the next big thing' to save us. You cannot save cyborgs by further imprisoning them in their networks of dependency. And the first free cyborg, if such a thing can ever come to pass, has every hope of becoming the first ascenturian.

Next week: Restoring the Commons


Assembling the Future

Galactic Senate, Episode II

What kind of societies have any hope of reaching the Tenth Millennium? This is perhaps the first question facing any future ascenturian, an adherent to a wholly fictitious philosophy that seeks to ensure that humanity, in all its contemporary diversity, survives to the one hundredth century after civilisation began. More than a thought experiment, ascenturian philosophy is a collective science fiction story that we can discover how to write only by sharing our fragmentary ideas and fantasies about what might be entailed in the next 4,800 years of human history.

From the very premise of this imaginary movement, one thing should be clear: an ascenturian has to be able to think differently about time. Since the return of the ancient ideal of democracy during the Enlightenment, we have been divided into two prevailing political relationships towards time: on the one hand, those rooted in tradition (the political right) act in the memory of times past; while on the other side, those seeking to build a better world (the political left) act in the hope of a time to come where the social problems of today have been transcended. The last couple of centuries have been characterised by this apparent conflict between two political stances that are, in principle at least, wholly compatible.

Indeed, if we take this left-right divide as originating in 1789, when the French National Assembly seated delegates in favour of change on the left, and those in favour of traditional order on the right, we will find that for the first century following this new arrangement, it was broadly recognised as beneficial. Writing in 1859, John Stuart Mill remarked:

In politics... it is almost a commonplace, that a party of order or stability, and a party of progress or reform, are both necessary elements of a healthy state of political life... Each of these modes of thinking derives its utility from the deficiencies of the other; but it is in a great measure the opposition of the other that keeps each within the limits of reason and sanity.

Why did this arrangement break down? To a fair approximation, we can say that it could not survive the collapse of dialogue between these opposing political stances. As long as debate between these diametrically opposed positions was possible, their tension provided a workable engine for balancing change with stability. But once partisan politics gave up the process of debate, the arrangement crumbled into a new system whereby, rather than a balancing co-operation, the party divide became ever more adversarial, with each faction seeking nothing from their opponents but defeat in the next election. We have seen this most clearly in the United States, where bipartisan discussions have become rarer and rarer, and the country lurches between vastly incompatible political visions that alternate between enacting and dismantling, as each administration bulldozes whatever the previous one tried to build.

What our fictional ascenturians need from us isn't necessarily a path back to the balance of power maintained by debate between opposing political sensibilities. Nonetheless, they perhaps will need a principle whereby recovering that equilibrium is possible - and ideally, a principle that leaves open the possibility that there are other ways of maintaining both a link to the past and a path to the future. All that might be needed is a temporal principle, something that captures what was once taken for granted in the left-right political divide without shackling these imaginary people to a state of affairs bereft of opportunities to innovate or adapt. This principle need offer nothing more than a synthesis of the temporal relationships of the old left and right:

Act in the memory of time past and the knowledge of time to come

This feels too simplistic to do any work at all! And yet it also rules out organising the entirety of society in such a way that either the past or the future can be ignored or forgotten. If accepted, this is a principle that negates the most unhinged aspects of contemporary political life by revealing them to be either destructive to the continuity of knowledge by angrily razing the past to ashes, or unable to think about the future as anything but a continuation of the past.

The traditional right always suffered a certain preposterousness in its way of insisting upon the timelessness of traditions that had, upon any reasonable historical examination, constantly renewed and renovated themselves. If we look at the twenty centuries of Christianity, the twenty-six centuries of Buddhism, or the forty centuries of the Hindu traditions, we will not find anything like a consistent set of practices running across that entire span of time. But what these religions (and indeed, all other traditional religions) achieved was a capacity to reimagine their practices in a way that preserved key aspects of their community of practice over time.

The traditional left, however, is perpetually at risk of a revisionism that sees all traditions as either obsolete or outright evil. As a result, it drifts into viewing religion as something that must be eliminated. True, certain 'outsider' religions have been nominally supported as a matter of diversity, but the left has remained resolute in its opposition to traditional Christianity, viewed under inescapable suspicion as the 'majority religion' and thus a principal political foe. Yet contrary to the ubiquitous knee-jerk critiques of religions, it is precisely the adaptability of established religions that has preserved their practices over time. This is not something we should seek to downplay.

Our imagined ascenturians need commit to no religion as such, yet neither do they require any prohibition against religion. Indeed, given the adaptability and longevity of religious practice, we should expect religion to be some part of what the fictional ascenturians live with. We do not need to resolve any of the details of how this might work (it could work in countless different ways!) as long as we accept that this openness to different practices for living would need to be part of any ascenturian society. Furthermore, we should be able to see that it is not enough for there to be an acceptance of 'a religion'; the door needs to be kept open for any and all religions capable of being sustained in an ascenturian world.

It is not clear that any religious community need be excluded on this basis. The Amish Mennonites form what might be called a 'bubble culture', separated from the conditions of life experienced by almost everyone else living around them. But their chosen isolation and maintenance of seemingly 'older' forms of religious practice does not exclude them from being part of the wider social collectives of the United States, the national framework they live within. To our imaginary ascenturians, the Amish are yet another source of adaptability and diversity - their apparent displacement in time from a contemporary perspective is neither here nor there. Neither is their apparent rejection of 'the future' problematic. On the contrary: reflect upon the situation of the Amish carefully and it becomes clear that they do not reject their future at all. They have chosen to preserve it. It is we 'English' who have chosen a way of living that is anathema to having a future.

This example hopefully illuminates an important point about the suggested temporal principle: it is not a problem for people to form their own bubble cultures inside collective society, but it would be disastrous to impose such a bubble culture as the entirety of social existence. The recent tendency of young protestors to seek to tear down certain statues, to reject and eliminate certain books, and to generally attempt to sanitise the past is exceptionally problematic from the point of view of creating an adaptable 'future-proof' culture. It is one thing to form a bubble culture for your own purposes, as the Amish do. It is quite another to attempt to enforce this bubble culture on everyone, especially if the attempt entails severing our links to the past. This expressly violates the temporal principle proposed above, and thus cannot be part of any imagined ascenturian philosophy as I am laying it out.

Numerous objections might be anticipated at this point, justifications for why the sins of the past must be washed away in the purge of the present. But it is simply unnecessary to deal with the past in this manner. The circumstances of other cultures (past or present) are frequently disturbing or horrific to those who dwell within different sensibilities. But a knowledge of the future must draw against the memory of times past or else risk making the same mistakes over and over again. We can allow anyone who wishes to reject encounters with the past to form a 'bubble culture' that isolates them from engaging in unpleasant confrontation with what once was, perhaps. But we must not make the error of attempting to deny our histories to whoever wants or needs them.

The present is more than just a way station to the future, where we can tear up the tracks we have already travelled upon as extraneous... the present is the only vantage point we have to view the landscape of human time, and the relative visibility of the past when compared to the inherent unknowability of the future makes it a valuable source of adaptability and diversity that defends the future through the very act of remembering. This remembrance cannot be sanitised without the risk of forgetting the lessons of history... yet that history is never fixed, never static. Like religion, history is always capable of being adapted to the present, and this does not require us to cast out what was previously said or thought. On the contrary, each successive version of history within any given culture contributes to the greater whole of our collective understandings of the past.

The proposed temporal principle is not enough to complete the science fiction story of the ascenturians. They need to be empowered to accept diversity, for it is in that diversity that adaptability thrives. But this cannot be achieved by creating a catalogue of identities that are henceforth approved and permitted, and then demonising dissent from this (the 'intolerant tolerance' I criticised in Chaos Ethics). On the contrary, even our understandings of diversity must be diverse. To hold together any fictional society, something more than a coalition of bubble cultures is required. There must be some capacity for a social framework between those cultures or, equivalently, there must be broader, more inclusive bubble cultures that can span between them.

This suggests the basis for a social principle that supports the temporal principle, and the most necessary aspect of this principle must be that it is ambiguous as to what kind of cultural bubbles can be collected together. What we require is a plurality of collectives, founded on the reciprocal relationships between their members. The members of any such collective belong to one another, rather than being merely a bag of individuals unrelated by anything except identity characteristics. This reciprocity is implied in the whole concept of a 'collective' anyway, but it does not hurt to make it clear that a collective of unrelated individuals is inherently contradictory.

These collectives need also to be collected, and as a result we can surmise that at least some proportion of them must either behave in an inclusive manner to other collectives, or else be able to stand in solidarity with them despite their disagreements. Just as the temporal principle can be seen as descending from the historical 'left-right' split, this social principle can also draw against a political legacy: the rightful condition (Recht in German philosophy), the foundation of contemporary democracy, where citizens share in legal rights that secure their mutual freedom. Our promises to uphold 'human rights' and defend 'civil rights' began as a rethinking of the rule of monarchy by the Enlightenment thinkers, and was perhaps their greatest success. Sadly, after less than a century, these rights have ended up being trashed by the 'right' (who do not wish to extend rights to those declared their military enemies) and are now also dismissed by the 'left' (who do not wish to extend rights to those declared their medical enemies).

Although the prequels are the least popular of the Star Wars movies, they do an excellent job of exploring the fragility of our commitments to democracy and the rightful condition, and the dangers of falling into imperial dogmatism. Who can have missed the biting satire entailed in the future Emperor making his 'emergency powers' speech to the Galactic Senate in Episode II: Attack of the Clones (pictured above)? Palpatine declares with mock-modesty:

It is with great reluctance that I have agreed to this calling. I love democracy... I love the Republic. But I am mild by nature, and I do not desire to see the destruction of democracy. The power you give me I will lay down when this crisis has abated, I promise you.

Lucas was lampooning the military imperialism of the 'right', but this critique applies just as well to the medical imperialism of the 'left' - and to much more besides. Whenever our democracy fails, we are all eligible to share in the guilt. Yet rather than seeking to hide from our chosen side's failure to uphold civil rights, rather than resorting to excuses about what was militarily or medically 'necessary', perhaps we can take this opportunity to recognise that when it comes to defending rights, we all failed together. It is all too easy to let the invocation of a crisis weaken the foundations of civil society when fear distracts us from our ideals. As Lucas recognised, and before him Asimov and Herbert too, the decline and fall of the Roman Empire still holds important lessons for us today.

In the challenge of upholding our promises of human rights, we failed. But that doesn't mean we cannot try again. We do not necessarily need to restore the rights agreements of the mid-twentieth century - more inclusive promises could be made instead - but the ascenturians need some framework for promising solidarity. What is required is some principle that permits us to assemble different cultures, different values, different histories, different medicines, different sciences into a collective, or a set of collectives. This may look shockingly like giving up on truth, but this free expression of difference entails no such thing: it is merely an invitation to recognise that our access to truth is conditioned by where we stand - and clearly we do not all stand in the same place.

We should reject 'alternative facts' - a weak excuse for inventing fables of convenience - and abandon 'post-truth' fatalism, which implies no access to the truth whatsoever. We simply need to acknowledge that since you can only stand in one place, you can only see the truth from one angle. We thus require discourse with others stood elsewhere in order to assemble the truth with any degree of accuracy. The sciences are not some magical exception to this crucible of viewpoints: every successful scientific investigation has depended upon the participation of community perspectives for its validation. It is only by engaging with different interpretations of the evidence that the truth can be investigated, and we dogmatically enforce any scientific claim at our peril.

As an imaginary principle of assembly for ascenturians I suggest this modest proposition:

Assemble a plurality of reciprocal collectives of any viable kind.

Perhaps this principle also needs to entail a requirement of inclusiveness, but such inclusiveness is hard to codify without risking the collapse of the very plurality required to ensure diversity and adaptability. Rather, it seems as if the purpose of 'assembling a plurality' already expects that some element of inclusion will be required, without having to set out the conditions under which it must occur. The problem with the movements that have followed in the wake of 'the left' is precisely that the conditions of inclusion have been set as mandatory. We do not need and should not want this kind of narrow-minded 'inclusion', which is always an invitation to hate those excluded because they do not include as we do. Rather, the forms of inclusion our fictional ascenturians require can be of 'any viable kind'. It is not for us to dictate to them the conditions by which their society of societies and culture of cultures can be assembled.

These two principles - the temporal principle, and the principle of assembly - provide a starting point for the fictional world of the ascenturians, a fictitious people capable of undertaking the imaginary second leg of the journey of human civilisation towards the Tenth Millennium. But to give them any hope of completing that adventure, they need more than this. To have some hope of success, they must be capable of defending themselves from those threats they make for themselves, and that requires something that is utterly unthinkable for us humans today: sustainable technology.

Next week: Sustaining the Present


The Tenth Millennium

Dune - Sandworm

With Denis Villeneuve's adaptation of Dune having once again revived interest in the world's highest-selling science fiction novel, it is a fitting time to ponder the question of whether and how humanity might survive long enough to enjoy any kind of 'far future'. What kind of future life do we want? How might we prepare a path towards such a future? Which principles might we need to reach a worthwhile future together? All these questions can benefit from reflection upon the themes and significance of Frank Herbert's extraordinary science fiction saga.

The story debuted in Analog magazine's December 1963 issue as the three-part serial "Dune World", and concluded in 1965 with the five-part "The Prophet of Dune" in the same publication. The collected edition - then entitled Dune - would go on to win the inaugural Nebula, and share the 1966 Hugo award with Roger Zelazny's "...And Call Me Conrad" (also later republished as a novel, This Immortal). Dune is a remarkable novel on many fronts, but considered from the perspective of world building it is especially notable for the creative manner in which Herbert ensures that human politics can provide the core to his saga by performing the opposite operation to most sci-fi writers: he imagines new technology in order to limit technology.

Dune is set in the year 10,191 AG, the tenth millennium after the founding of the Spacing Guild, which is generally taken to be roughly 20,000 years into our future. Herbert's interests in these stories are political, humanitarian, and ecological - but to tell the kind of tale he wishes to tell, he needed to ensure that a tale set this far into our imaginary future could still be grounded on humanity as we understand it today. This is a common theme in far future science fiction, but it is by no means a given: H.G. Wells' 1895 novella The Time Machine is probably the most famous of the early flights of fancy to imagine a future for humanity that is not grounded in contemporary human experience. Wells projected the social circumstances of his time through the lens of evolutionary theory, presenting a future world where the child-like Eloi, descended from the Victorian upper classes, are preyed upon by the savagely bestial Morlocks, descended from the working classes. Political themes from Wells' day are transformed via scientific models into a new fictional setting.

Herbert takes the opposite tack to Wells, creating in his imagination a mechanism to preserve humanity as it is, so that his story can be about today's humanity for all that it is set in the far future. Pivotal to this process is the backstory of the Butlerian Jihad (not, unless Dune was even more prophetic than anyone previously thought, anything to do with Judith Butler). The Butlerian Jihad is a thousand-year war against "thinking machines" leading to the zealously protected maxim that 'humanity must not be replaced'. This effectively removes computers, robots, and artificial intelligence from the world of Dune and ensures humans occupy all the key roles in the resultant society. The Mentat, or 'human computer', replaces the strategic role of computers. The Guild Navigators take over the complexity of space travel from computers. The Bene Gesserit take over social control from computers. In all cases, humanity is given centre stage, which means in all cases the political landscape is manipulated by the power-hungry.

Along parallel lines, the inclusion of the Holtzman Shield is a technological riposte intended to foil futuristic weaponry. High velocity projectiles cannot penetrate the shimmering blur of these shield generators (rendering conventional firearms obsolete) while futuristic weapons (lasguns in Herbert's world) cause a devastating nuclear reaction when fired upon shields - something that indeed happens during the escape from Arrakeen in the book, but that is elided in Villeneuve's adaptation. The result is that this imagined technology allows Herbert to set up a feudal future universe where the vast armies of interstellar powers can still be required to engage with swords and knives, simply because a technological invention that the author has imagined serves the world building role of limiting the significance of future technology.

Dune should not be seen as an unprecedented change in the flow of science fiction, but rather a premeditated response to the previous generation of science fiction authors, often called the Golden Age of Science Fiction. Herbert's saga in this sense can be understood as a calculated reaction against Isaac Asimov's Foundation series, both of them inspired and underpinned by Edward Gibbon's 18th century history The Decline and Fall of the Roman Empire. The Foundation stories had also been first published in the very same magazine as "Dune World", but under its earlier title of Astounding - originally and revealingly titled Astounding Stories of Super-Science. Asimov's tales are the first to focus upon 'Galactic Empire' as a sci-fi setting, and the author consistently revelled in the unlimited power of scientific technique, especially by imagining a future social psychology so accurate that it can predict the future - at least until a mutant 'Mule' disrupts its presuppositions.

This imagined capacity of the sciences as future-tracking, as I have frequently pointed out, is not something plausibly possessed by any authentic scientific methodology - so much so that I might impishly dub such faith 'sci-dolatry'. This unwarranted fantastical power attributed to scientists has been stoked by science fiction for much of its two centuries of practice, and the trend is even more widespread today. The last few years have added a tragic irony to this unfortunate tendency, since nothing disrupts the ability of scientific work to inform decisions quite as much as premature certainty that the answers have already been revealed. It would be wrong to assume that scientific research didn't or shouldn't possess a pivotal role in informing political decisions - but it remains exceptionally dangerous to presume that politics can take scientific topics into its partisan realm without utterly corrupting them. Every attempt to do so destroys a scientific field's unique and collective capacity to assemble an objective description, which is to say, to uncover the truth of objects.

In its own way, this corrupting influence of politics on truth is at the heart of the parables central to Dune, and it is against Asimov's sci-dolatry that Herbert seeks to wage metaphysical jihad. Tim O'Reilly's 1981 book on Herbert's work quotes the author himself as he directly marks out his criticism of Asimov's Foundation:

History… is manipulated for larger ends and for the greater good as determined by a scientific aristocracy. It is assumed, then, that the scientist-shamans know best which course humankind should take… While surprises may appear in these stories (e.g., the Mule mutant), it is assumed that no surprise will be too great or too unexpected to overcome the firm grasp of science upon human destiny. This is essentially the assumption that science can produce a surprise-free future for humankind.

Throwing his gauntlet into the face of the Golden Age science fiction authors and their heirs, Herbert invites us into an entirely different kind of science fiction, one that is much better suited to the needs of the present than its earlier rival. Far from Astounding Stories of Super-Science, Herbert's invitation is to ponder both humanity and non-humanity through clear scientific understandings without falling into the kind of decadent self-aggrandisement that marks the fall of both the historical Roman Empire and the future-historical Padishah Empire of Dune.

Nowhere is this invitation to think beyond the narrowed vision of sci-dolatry clearer throughout the Dune saga than in the role of ecology in the wider sweep of the books. The new field of ecology was gaining prominence at the time that Herbert was writing, and he had been commissioned to write articles about the subject that were instrumental in forming his thinking about Arrakis. On the one hand, Herbert realised that ecology entails the understanding of the interconnectivity of everything in any given environment. On the other, he recognised that the human attempt to steer processes for their own benefit comes with inevitable unseen costs. Thus, in the later Dune novels, the sandworms become endangered and even skirt the brink of extinction because the terraforming of Arrakis (the attempt to make it more Earth-like) destroys the desert and therefore disrupts the ecological balance.

Facing the challenge that Herbert's novels open up to us means thinking about the future differently. Whereas the world of Dune is conditioned by the narrative requirements of the story Herbert wanted to tell (as with any exercise in world building), our imaginary visions of our own far future are conditioned solely by the requirements we hold for those unrealised possible worlds. We can therefore confront ourselves with a question that flows inexorably from the story logic of Dune: what would we need to imagine in order for humanity to reach the Tenth Millennium?

If we say that civilisation (whether understood as cities, or as writing, or as both) dates back to roughly 3,200 BC, that would place our current time into the Fifth Millennium, some 52 centuries since civilisation began. We are thus a little over halfway towards the one hundred centuries - a century of centuries - that would bring us to the Tenth Millennium. What might we need to ensure that happens...?
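
For anyone who wants that sum spelled out, here is a back-of-envelope check (taking 2021, the year of Villeneuve's film, as 'now' - an assumption made purely for the arithmetic):

```python
# Centuries elapsed since a founding of civilisation c. 3200 BC, with 2021 AD
# as the assumed 'now' (there is no year zero, but at this resolution the
# off-by-one does not matter).
centuries_so_far = (3200 + 2021) // 100   # 52 centuries: the Fifth Millennium
centuries_to_go = 100 - centuries_so_far  # 48 centuries still to travel
print(centuries_so_far, centuries_to_go)  # -> 52 48
```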

This is a science fiction premise that bears some semblance of hope for a significant impact on the way we confront our political problems today. We face not only a rising factional distrust that threatens to overwhelm the narratives of human solidarity that inspired us just half a century earlier, but parallel threats to the very conditions of knowledge brought about on the one hand by the undermining of the open practices of scientific investigation, and on the other by a technocratic imperium that is laughably under-diagnosed whether we emblazon its banner with the name 'Big Tech', 'Capitalism', 'the West', 'the Global North', or indeed 'Sci-dolatry'. Arguing over the fault lines in contemporary thought is only making matters worse: we need to try to imagine a different way of coming at our collective problems.

To take on this sci-fi storytelling summons at its fullest extent is to ask more than just 'what if humanity's descendants survive the other half of this arbitrary temporal journey?' Such a vision is only to wonder whether our biological kin have another 48 centuries left in them, and in this regard even The Time Machine has already been there. But what if we up the ante? What if we are asked to imagine not only the survival of descendants of our own species, but the actual survival of this specific species, Homo sapiens? What if we further ask how we might help ensure that this happens while keeping all the diversity of human existence we know today?

In other words, what if we were to try to philosophically unravel the world building exercise of Dune with an eye towards turning our imagination towards that least exercised muscle of the science fiction anatomy: utopianism. There is a danger in this word, 'utopia', insofar as it attempts to mark a rarely imagined 'good future'. What we need right now is not so much the fine details of any projected perfection, but rather a set of ideals that might guide us towards any possibility of future perfection. It does not even have to be perfect (it surely never will be). It would be enough to find some way of securing what is good now, such that we might still find that good in our imagined Tenth Millennium.

To begin with, our philosophical world building might benefit from establishing an opponent to spar against, just as Foundation provided the position against which Dune reacted. We have at least two options. Firstly, there are what we might call the extinctionists, who express the modest view that human extinction is inevitable. I'm sure that proposition is correct, but I join with the late Mary Midgley in suggesting that this inevitability is not some deep revelation but rather the most trivial of tautologies, one that has nothing to offer in respect of the important questions about how we should live. The extinctionists are not the enemies of whatever we might build in our imagination, though; they are merely realism's doom-appeasing offspring. They are people like German film-maker Werner Herzog, who sees extinction as nearly unavoidable and somewhat imminent, without letting those beliefs undercut a desire to live for today.

Alternatively, we have the accelerationists. This term is often accused of crawling out of the dark corners of Marxist thought, and there certainly is a history to tell there. However, I prefer to anchor this tangentially back to Dune via Zelazny's 1967 Lord of Light, where the term 'Accelerationist' first appears. There are so many fragmentary movements bearing this name, and nobody seems to agree what they are asking for, so I will simply take Zelazny's revolutionaries as the template: an accelerationist seeks to raise the pace of development for new technologies. They may do this because they seek a bizarrely mystical future salvation (such as the Singularity), or because they have fallen into some other kind of sci-dolatry, or they could be that subset of Marxists who schizophrenically believe that capitalism will certainly fail if we keep going like this, and therefore we should accelerate the collapse of civilisation in order to foster revolution. I don't find any of these positions convincing or appealing, but again, even the most reckless fools under this banner are not the enemies of what we might try to imagine, so much as they are Foundation to our Dune. Their mistakes might yet inspire us.

I propose to try to conceive of a third movement, a movement that does not exist and that need never exist, but that might through our imaginings give us another way to think about these problems. Following on from the Dune-inspired image of a Tenth Millennium, a century of centuries, I propose to call this phantasmal movement ascenturians, combining the theme of a hundred from 'centurion' and the fantasy of 'ascent' in the satisfyingly ludicrous sense appropriated by Jacob Bronowski in The Ascent of Man. This is pure teleology, that is, the ascription of purposes rather than causes, which is forbidden today. But our fictional ascenturians are invited to embrace this mythology for the same reasons that Mary Midgley was forced to remark that the idea of purpose is not as dispensable as many have tried to insist. Arguing against Richard Dawkins' anti-teleological view of life, she wrote:

Dawkins’s claim that the universe contains “at bottom, no design, no purpose, no evil and no good” cannot be right. For it is obvious that our own planet - which is certainly part of the universe - is riddled with purpose. It is full of organisms, beings which all steadily pursue their own characteristic ways of life, beings that can only be understood by grasping the distinctive thing that each of them is trying to be and do.

The fictional credo of the ascenturians, these purely imaginary future people, is to seek to ensure that humanity - in all its diversity - will experience the Tenth Millennium together. What philosophical principles would such people want to consider? What attitudes to society and technology would they seek to foster? What images of equality or disparity might they evoke? These are the themes of ascenturian philosophy, utopian only in the sense that - quite unlike the extinctionists - they envision a future for humanity, and also - quite unlike the accelerationists - they do not place their bets upon provoking disaster in order to seek salvation.

This vision is dedicated to Frank Herbert's Dune, and is beautifully expressed in Tim O'Reilly's encapsulation of the science fiction author's guiding philosophy:

In the kind of universe Herbert sees, where there are no final answers, and no absolute security, adaptability in all its forms - from engineering improvisation to social mobility to genetic variability - is essential. Improvisation is the only security. It is not an absolute security, but relative. Life is always changing and demanding new adaptations.

Welcome to the limitless fantasy of the ascenturians, a joyous, optimistic, utterly unrealistic fiction of humanity arriving at the Tenth Millennium together.

Next week: Assembling the Future


Roger Moore’s Dangerous Teenager

A blog-letter to Jed Pressgrove of Film Quarantine as part of the Republic of Bloggers.

Roger Moore

Dear Jed,
A short while ago, whilst working through all the James Bond movies, you declared that you were coming to the conclusion that there was no such thing as a good Roger Moore Bond film. But I have quite a different take: there’s no such thing as a bad Roger Moore Bond movie - only different ways to appreciate the brilliance of Roger Moore Bond movies. Yes, they are sexist, but markedly less so than Sean Connery Bond movies. Yes, they have content that if filmed today would be outrageously racist, but they were not filmed today and the cringes of hindsight do not undo the gains for cultural inclusion these films may strangely have achieved. Indeed, so much do I rate the late Roger Moore’s stint as Bond that for our first family movie night experience, my wife and I chose these films for my three sons to share with us. Are we mad? Probably. But there is definitely method to our madness and I should like to share that with you without any attempt to persuade you that your perception of these films is mistaken. It is not. I rather suspect you just haven’t the prior experience required to enjoy these particular (very particular!) movies.

My wife is from Tennessee like you (unless I’m mistaken) and came to Bond at my suggestion, having really loved the first (and only the first) Austin Powers film. As such, the Sean Connery Bond movies were a Where’s Waldo? extravaganza for her! “It’s Doctor Evil!” she exclaimed upon seeing Blofeld for the first time because, well, of course it undeniably is. When we finished watching the first Roger Moore outing, Live and Let Die, she declared “I don’t know if that was the best movie I’ve ever seen or the worst.” That is the greatest description - and highest praise! - of Moore’s Bond films I can imagine. For you must be able to enjoy bad movies for what they are good at to love Moore as Bond. The 1981 Clash of the Titans is quite the same; it’s a masterpiece. It’s also a cinematic dumpster fire, with LA Law’s Harry Hamlin totally unable to anchor his own action movie and upstaged quite inevitably by Ray Harryhausen’s stop-motion menagerie.

This brings me to the first reason to love these films: Derek Meddings. A special effects genius at a time when such things required immense practical skill, Meddings is best known for his amazing work with Sylvia and Gerry Anderson on their incredible Supermarionation shows like Thunderbirds and Captain Scarlet. My boys and I are working through these on Saturday mornings (along with classic Doctor Who), and are currently enjoying Stingray. Meddings contributed model work to five of the seven Moore Bond films, and was Oscar-nominated for Moonraker. You can spot a Meddings model shot from a mile away, although I do wonder if you have to have watched those classic 1960s sci-fi puppet shows to truly appreciate the craft involved. Appreciation flows from our prior experience; I never appreciated shot composition until I watched Seven Samurai, still my favourite film of all time. But Kurosawa movies are brilliant in almost every way. That’s not what Moore’s tenure as Bond is about. Meddings’ work carries a lot of appeal for me, holding the same joy as a beautiful matte painting, which is so much more wonderful than anything you can do in CGI to my eyes. I’m so delighted Meddings won an Oscar for his work on the 1978 Superman film. He was to miniature shots what Harryhausen was to stop-motion: a legend.

Neither is Meddings the only such mythic cinematic contributor to these films. John Barry, perhaps the greatest and most influential orchestral film composer Britain has produced, does some of his best work during Moore’s run, although his work with Shirley Bassey is more striking in the earlier Bond films and his magnum opus is arguably Louis Armstrong’s "We Have All the Time in the World" from On Her Majesty’s Secret Service (which I believe we both rate highly as a Bond film). I think, on balance, his scores for that movie and for You Only Live Twice are head and shoulders above his work for Roger Moore, but the British Film Institute did pick up on the score for Moonraker as one of Barry's ten best. I personally think videogame orchestral scores almost always draw from Barry when they are not instead stealing from John Williams. But the significantly insignificant difference here is that John Barry is British.

This British connection is important. Unlike my wife, I’m British - quite the mongrel actually: half English, a quarter Scottish, with Italian and Belgian bloodlines also in my family history. Roger Moore is the most British of all the Bonds, and his movies are so intimately caught up in British culture that comedian Steve Coogan could write a comedy scene in which his most enduring character (Alan Partridge from The Day Today) verbally recites the entire opening sequence of The Spy Who Loved Me - including those lurid Maurice Binder titles - in an utterly hilarious irritable deadpan. It's worth noting, then, that Moore was the first English Bond. Connery? Scottish. Lazenby? Australian (not British). And afterwards: Dalton? Welsh. Brosnan? Irish (not British). It's only when we get to Craig that we get English again. And what a step down that is, from Moore to Craig - although presumably not for you!

Britain, of course, has an extremely chequered history from its time as a world power, which peaked in the nineteenth century, just as the United States' empire seems to be peaking in the twenty-first. In 1973, when Live and Let Die arrived, Britons (especially the English, but not only...) were rather struggling to get to grips with the reality that whiteness is not Britishness. This was especially the case with respect to the burgeoning West Indian population - half a million arrived between 1948 and 1970 seeking jobs, which they were expressly invited to emigrate for but whose welcome was not always (or indeed often) warm. But there were still vanishingly few black actors on TV in the 70s. Doctor Who is one of a rather short list of shows to have had multiple black actors in key roles by Moore's debut. Britons were simply not used to watching black people in 1973. And then here is Live and Let Die - a suave, black supervillain, multiple black henchmen all with great charm - and none more so than dancer Geoffrey Holder as the quite literally marvellous Baron Samedi. And black allies who are there for something more than just being killed! The message to spellbound Brits watching was that black people can be spies and criminal masterminds, just like white people. Yes, there’s massive influence from Blaxploitation films at work here. But the benefits for British cultural integration should not be underestimated.

So too with Vijay Amritraj and Kabir Bedi in Octopussy. Okay, we have to endure every cringe-inducing Indian cultural stereotype imaginable - but at a time when the Indian population of Great Britain was almost entirely invisible on recorded media, here is a film saying Hindus and Sikhs can be spies and superpowered villains too. The location shots from Udaipur are among the greatest in the entire Bond movie run, although as with the miniature shots I mentioned above it takes a certain kind of film appreciator to enjoy location shots independently of their role in the narrative. Still, watching Amritraj pal up with Moore sends a clear message that Indian people can be superspies too - and that counts for something. Please do not underestimate these gains because they are tied up with casual racism... acceptance that Britishness need not entail whiteness begins with films like these, and while I do not know what black and Asian people in the 1970s made of them, the predominantly white audience for the movies here in the UK was, I suggest, subtly and positively affected by the inclusion of heroes and villains of colour. Even if these actors were not themselves British, they opened doors in the media industries for black and Asian actors who were.

What of Moore himself? Here we cannot tell any story without first acknowledging the centrality of Sean Connery to the Bond mythos. He embodies the phrase that was ironically said (by film critic Raymond Mortimer) in connection with the first Eon Productions Bond movie without Connery: "James Bond is what every man would like to be, and what every woman would like between her sheets." This is of course a problematic claim unless it is preceded with the phrase “in the imagination of men...” Which men? Why, 1960s stereotype men of course who, on the basis of Connery’s Bond, fantasise about striking women across the face so that they will then want to have sex with them - something Connery’s Bond does with embarrassing frequency.

But not so Roger Moore’s Bond. Whilst still sexist by contemporary standards, his version of the iconic character is markedly more respectful of women in that his technique for attracting women isn't to physically abuse them. Clearly, Bond is still at heart an adolescent power fantasy - but what action hero is not? More than that, Moore’s Bond isn’t just a fantasy for teenage boys, he is emotionally a teenage boy - with his distinguishing feature being that, unlike any actual teenager, he is written with the skills, gadgets, and sheer luck to actually succeed at everything instead of merely falsely believing that he would. Moore’s Bond is an absurdly dangerous teenage boy in a man’s body, who is always inches away from death by misadventure but is repeatedly saved by script immunity or, more often than not, by the magical science provided by Q’s gadgets.

Moore’s casting was not any kind of accident. His quasi-predecessor, George Lazenby, had the fatal flaw of not being Sean Connery, while Moore had the immense benefit of not being George Lazenby. Moore was chosen precisely because he had already shown himself more than capable of playing a gentleman spy, having done so as Leslie Charteris' 1920s hero Simon Templar in the TV version of The Saint, which aired from 1962 to 1969. Templar is a thief rather than a secret agent as such, but he is still very much part of the spy thriller genre broadly construed. And like Moore’s Templar, Moore’s Bond is impossibly skilled, implausibly righteous (yet never quite good, per se), and bucks authority with a glint in his eye, an impish grin, and more than a few raised eyebrows. Transplanting Moore into the Albert R. Broccoli film series was a safety play - and boy, did it work! The movie series’ success grew substantially during Moore’s tenure - he even got to ‘win’ against Connery in the much publicised ‘Bond vs Bond’ box office duel of 1983, when Octopussy outgrossed Never Say Never Again.

What I love most about Moore’s dangerous teenager is that quite unlike the brutal, emotionally stunted Bond of Daniel Craig, or the woman-beating Bond of Connery, Moore’s Bond is always respectful to those serving in the military (but never entirely to the civil command, which Bernard Lee's and Robert Brown's M represent) and largely avoids being a murderer - except for two instances, which apparently Moore himself was vehemently opposed to. Yes, enemies are killed, but largely in self-defence. Moore’s Bond is a warrior with honour, something quite unthinkable in contemporary cinema without transplanting the story back in time more than a hundred years. In the twenty-first century, our spies and military are now permitted to murder even our own citizens with unquestioned yet utterly questionable impunity. But Moore’s Bond has an ethic to his spycraft that is as unrealistic as the magical science of his gadgets, yet that makes him far easier to love, because we somehow want to believe that spies could be this noble, even though we know they are not.

As I said at the outset, it’s not my intent to convert you to Moore, but rather to show how Moore’s Bond is tied up with British culture in a way that Connery’s Bond really isn’t (although some of his filthiest puns - penned by children's author Roald Dahl for You Only Live Twice - require a grounding in British schoolboy humour to appreciate). Connery (Scottish) and Brosnan (Irish) are the most Americanized Bonds - and very enjoyable for it! But Moore is quintessentially English, his Britishness rooted in Oxbridge, the Officers’ Training Corps, and London gentlemen’s clubs (by which I do not mean strip clubs!). As problematic as this may be in retrospect - the false equation of Britishness with Englishness being a papering over of the aforementioned whiteness problem - it has an inherent charm that is also part of the appeal of Sherlock Holmes, another quintessentially English hero with magical science at his disposal.

I love Moore’s Bond, and I’ve only just scratched the surface of why in this short missive - why, I haven't even mentioned how they let the always astonishing Grace Jones design her own wardrobe in 1985's A View to a Kill, which must surely rank among the greatest costumes ever seen in a franchise known for its outlandish clothing. There's so much to adore in these films once you let them beguile you, but I think appreciating Moore as Bond requires either an openness to archaic Englishness as an aspect of Britishness (which is also helpful for appreciating classic Doctor Who), or an ability to enjoy an action movie purely as a pulp romp and not as cinema, per se. The Moore Bond movies may indeed be bad films, but they are among the greatest bad films ever made. It has been a pleasure sharing them with my three young boys, and I hope in writing this letter that I can give you at least a glimpse of why that might be so.

Please continue to be the good and excellent person you are, and to write about films, games, and whatever else you choose to discuss. If you should find the time to reply, I would love to hear your thoughts on any of this, or indeed on the 1980 film The Blues Brothers, which I personally view in quite similar ways, as allowing a vast raft of phenomenal black musical talent a cinematic spotlight they could never have had at that time without teaming up with white comedians. 

With love and respect,

Chris.

Comments and further blog-letters are always welcome!


Magical Science

Arthur C. Clarke famously suggested that any sufficiently advanced technology would be indistinguishable from magic. This suggests another maxim: any insufficiently developed philosophy of science is incapable of distinguishing between science and magic.

We all have our own philosophy of science, our conceptual framework for understanding scientific topics. In the best case, our personal philosophy of science informs us of the limitations of scientific knowledge, allows us to put research into a wider context, and ensures we remember that the work of the sciences is still at heart an entirely human endeavour. Alas, few of us have such a clear view of the sciences. Far more widespread is a kind of pervasive mythos we might call ‘magical science’, which affords to the image of science unlimited future power, and to scientists an awesome capacity to divine the truth through singular experiments, like a Roman haruspex reading animal entrails to predict the future.

Magical science has the dubious honour of being the only superstition widely encouraged today. We are all too frequently adamant that science has all the answers, that science is the royal road to truth, that we can ‘trust in the science’... I notice that even the British Prime Minister has taken to invoking magical science in his speeches these days to validate his increasingly dubious actions. At heart, magical science may seem harmless, a mere rose-tinted vision of the work of scientists, one that tries to account for all the successes of our various research networks without any attempt at balance or insight. We typically overlook this kind of naive enthusiasm for scientific achievement on the basis that it's at least ‘supporting the right team’. Yet it becomes increasingly clear that blind support for science can manifest in ugly ways, even in ways that can prevent the sciences from working, plunging research into the debilitating condition of pseudoscience, as previously discussed.

The perceived infallibility of the sciences as truth-seeking procedures clashes worryingly with the necessity of scientists making mistakes, and thus magical science leads to anger at scientists when the actual scientific work is not as wondrous as it is imagined it should be (as with the ugly trial that followed the terrible 2009 L'Aquila earthquakes in Italy, where scientists were blamed for failing to predict the disaster), or when any scientist speaks out against a claim that has been proclaimed unshakably true by its advocates. It is precisely because magical science is incapable of distinguishing science from magic that it represents a far greater danger to scientific endeavours than other philosophies, perhaps even so-called ‘anti-science’ philosophies. What deceives us here, what elevates scientists to their misguided role as flawless augurs rather than researchers struggling with ambiguous data, are the bad habits we have learned from the manifestations of science in fiction, where magical science is the norm. If we wish to see the work of the sciences with clearer eyes, we may have to start by putting some of the most iconic characters in fiction on philosophical trial.

Sherlock Holmes and the Flawless Investigation

It is sometimes remarked that in creating Sherlock Holmes, Sir Arthur Conan Doyle produced the first hero of ‘the scientific age’. The Victorians were the ones who coined the term ‘scientist’ and it was their obsession with the sciences that set the scene for the unfolding technological transformation of the world over the next century and a half. We tend to treat the character of Holmes as significant mainly for crime fiction, as the archetype from which all whodunits descend - but Holmes, quite unlike a Raymond Chandler or Agatha Christie detective, is always a practitioner of magical science. Partly, this proceeds from the inherent parsimony of storytelling whereby all questions will eventually be answered because everything is there by the author’s design. Partly, however, it proceeds from Holmes’ essential power - which upon closer inspection is not deductive reasoning at all, but rather the infinite convenience possible solely in literature.

Doyle gives Holmes a quite impossible access to every conceivable fact as a starting point, such that a berry stain or the smell of a particular tobacco can certainly be identified, and then (to pile on the absurdity) Holmes by purest chance always encounters a set of circumstances that allow for only one viable interpretation. This particular brand of tobacco, for instance, is sold in exactly one place in London... We thus end up admiring Holmes’ purportedly scientific form of investigation, while what we ought to admire is the way Doyle effortlessly conceals the magical science entailed in this depiction by making it seem as if all of Sherlock’s deductions (and inductions) were strictly logical. Doyle has contrived a set of circumstances that Holmes, with his unlimited catalogue of facts, can be certain to solve. This makes Holmes a disastrous role model for scientists (or indeed, detectives!) since it is only through the meticulous construction of literary contrivance that he possesses any investigative power at all. This becomes clearest when Holmes relies upon facts we know are false - such as the ludicrous snake plot device in The Speckled Band, which entails behaviour no real reptile could plausibly be coaxed into. Holmes’ claims to be a man of science are rather fraudulent behind the scenes: he is simply the locus of a mythic depiction of magical science.

Neither is Holmes the only such character. Both Spock and Data in the worlds of Star Trek share this power of magical science - also manifested in these shows by the tricorder, which like Holmes spits out every required fact on demand and without error. Or consider Doctor Who from the third Doctor onwards: anything necessary is certainly known by the Time Lord, except when the story requires a convenient (and often temporary) amnesia for dramatic effect. That both Data and the Doctor had a spin at being Baker Street’s most eligible bachelor is not accidental, nor perhaps is Steven Moffat’s concurrent time as showrunner for both Doctor Who and Sherlock... Magical science heroes seem to reaffirm our faith in the power of scientific knowledge, while also playfully exposing the quirky personalities of scientists. House, The Big Bang Theory, and much more besides all participate in a literary tradition that stems from the Sherlock Holmes tales, and is now seemingly dominated by his science fiction protégés.

Yet these are not scientific heroes, but magical science heroes. They have exactly the facts and the circumstances to answer perfectly every time, without ever having to confront the ambiguity, indeterminacy, and incompleteness of an authentic scientific problem. They are to science what Superman is to police officers: naively idealized caricatures. They find the answers solely because they live in stories where uncovering the truth is possible by design. This is a wildly misleading template for scientific truth, and although we know these are ‘just’ stories, we somehow import our wilder beliefs about the sciences into our everyday thinking unless we are extremely careful. If we are to break this spell, we need a philosophy capable of distinguishing science and magic - and for this, we need a clearer understanding of ‘scientific truth’.

Desperately Seeking Truth

Even if we start with the acknowledgement that the sciences are capable of discovering or affirming truth, the question of what might qualify as a ‘scientific truth’ is far trickier than it seems. As the preceding discussion on pseudoscience made clear, we cannot simply append ‘scientific’ to known truths without distorting the essential ambiguities of the research process where we cannot in practice know if the apparent truth of a researched claim will hold in the future. In fact, we have a choice. We could align ‘scientific truth’ with the unshakeable deep truth of reality and thus admit that the claims asserted by scientists cannot be known as truth at all (effectively contracting the domain of scientific truth to concluded research programmes like optics). Or else we can align scientific truth with the body of beliefs held by scientists, with the inevitable consequence that such truths can be later revealed as false - or even abominable. We don’t even have to go back a century to find all manner of racist, sexist nonsense asserted as truth by those who identified as scientists.

Now those who buy into magical science have an easier job here, but only by being wildly dishonest about both truth and scientific methods. According to magical science, scientists uncover truth infallibly so all claims asserted by scientists are scientific truth. Thus if and when the circumstances shift we can ‘debunk’ or ‘discredit’ those responsible and say they were not really scientists at all, or even exclude their claims from consideration in the first place! This is where ‘pseudoscience’ has been used as a label, although as I have argued previously it is not a terribly viable way of using the term. Babette Babich has made even stronger - and oft misunderstood - claims about the way the discrediting associated with the term ‘pseudoscience’ serves as a dogmatic attempt to demarcate legitimate science, while all too frequently preventing any scientific enquiry from even beginning. Thus when this particular word comes out, it narrows scientific knowledge by declaring certain topics forbidden and out of bounds - and woe betide the researcher who goes on to try to report experimental results from such verboten fields...

The highly problematic implication of every attempt to discredit and thus demarcate ‘science’ from ‘pseudoscience’ must be that we cannot know when scientists assert a claim whether it will later need to be ‘debunked’. Thus faith in magical science is inevitably a distortion of the truth - for things we will say are scientific truths on this philosophy may later be ‘discredited’, or even discredited before they are considered at all. The alleged truths of magical science are thus only defended by ignoring the inevitable consequences of the inherent revisionism of scientific practice and pretending that the current consensus among researchers is ‘more true’ than it was yesterday and thus that now (and by implication, only now) we can trust everything scientists say as long as we are standing guard for those pernicious pseudoscientists who ruin it for everyone. To say that this is dangerous nonsense is easy; to replace it with a more sound philosophy of science will be much harder.

There might be a way out of this maze, but it would require us to think differently about the relationship between truth and the sciences. Part of what deceives us here is our desire to understand the truth in terms of a set of valid statements. Since we can point to scientific concepts we abandoned, like phlogiston (which was a hypothetical substance that made combustion possible), we want to assert a gradual improvement in the accuracy or scope of our ‘book of facts’. “We would not be fooled by phlogiston today,” we might think. Yet phlogiston was an important - and arguably entirely scientific - proposal that was merely discarded when our understanding of chemistry shifted such that combustion could be thought of in terms of a chemical reaction with oxygen.

The brutal truth of the ‘book of facts’ is that such a collection of statements today would theoretically contain far more ultimately false claims than it would in the 1770s, simply because the number of scientists and the diversity of research fields have increased dramatically: we are now paradoxically more wrong than researchers in the 18th century (in terms of sheer numbers of errors made) - the inescapable consequence of asking both more and more difficult questions. What makes it feel as if we are now more right is knowing that phlogiston was to be replaced by a new understanding of chemical reactions and thus combustion and so forth. But this is largely an illusion caused by examining successful research programmes in hindsight.

Similarly, when I say phlogiston was ‘scientific’, I am projecting with hindsight since the term ‘scientist’ was not coined until 1834... researchers in the 1770s would not have described anything they were doing as ‘scientific’ - it is our desire to paint the sciences as something with a history of more than two centuries that makes us ‘claim’ both phlogiston and oxygen (not to mention Copernicus, Galileo, Newton and so forth) as part of the story of ‘science’, rather than the natural philosophy that those involved would have stated they were pursuing. Thus our ‘book of facts’ not only contains more errors than our predecessors two and a half centuries ago, it is not even entirely honest about its relationship with its own past. Add to this the unavoidable truth that this imagined ‘book of facts’ does not exist (for all that encyclopedias and their successors have wished to fulfil this role) and it begins to feel uncomfortably like we are deceiving ourselves - as if we have all fallen for the seductive confusions of magical science.

Legitimate Practices

We want to defend our intuitive impression of the sciences as truth-seeking, and also (in some nebulous sense) successful at doing so. How do we do it?

One option we can consider is that which I proposed in Wikipedia Knows Nothing: to switch our focus from facts (true statements) to practices (skills and equipment). To know how to use something - a polymerase chain reaction, an interferometer, a fractional distillation column - is more a matter of knowing what to do than it is a ‘book of facts’, even though that knowledge also produces facts related to the equipment used (and any theories deployed to give a context to the reading of the instruments). Thus an astronomer armed with geometric theorems can use an interferometer to measure the diameter of stars, while an engineer can use an interferometer and the wave theories of light to measure very small objects precisely. The practices associated with both the equipment (the interferometer) and the theories associated with each specific usage give rise to facts - in this case, distances. The difference lies in what legitimizes the activity in question. On the usual conception of knowledge, you have legitimate knowledge if your facts are true and your reasons for justifying them are correct - which actually provides no means of knowing what is or is not legitimate, since this criterion for legitimacy requires an appeal to something beyond the situation (the truth) that we cannot access directly. Conversely, when we view knowledge as a practice, what makes the facts legitimate is that we are using the tools correctly. In this context, we have recourse to everyone with the relevant knowledge of the tools entailed to verify the legitimacy of the practices used and hence the facts reported.

On this understanding of knowledge, unlike an appeal to the truth, we can construct a viable understanding of ‘scientific truth’, since certain equipment, certain theories can be uncontroversially attributed to the sciences, and their correct usage can be judged by anyone else with access to the same knowledge practices. On this path we can therefore distinguish between scientific truth (facts emerging from legitimate research practices) and errors, provided we allow the disagreements to be properly explored in any given research community. However, as Babich warns, this cannot happen if we rush in with a dogmatic cry of ‘pseudoscience’, since every attempt to discredit something a priori entails an outright refusal to think about a given topic at all. Ironically, such attempts to discredit effectively cause an outbreak of the condition of pseudoscience, in my sense (a state of disrupted communication where scientific work can no longer be pursued), since whomsoever speaks this word with the intent to discredit (and thus ignore something) signals the very breakdown of legitimate scientific disagreement required to understand whatever is (not) being discussed.

The deeper problem we encounter when we look more clearly at how scientists discover or verify truths is that the claims that are asserted soon exceed simple assertions of facts. Once they do, it requires another set of knowledge practices to disentangle the relationships between facts and conclusions - and these are not strictly scientific at all, for all that scientists engage (unknowingly) in these kinds of interpretative philosophical practices every time they assert anything but the most trivial of claims. Indeed, the crisis of the contemporary sciences is precisely that their application is not a scientific practice, but a philosophical one - and Einstein’s generation may have been the last where scientists spanned these disciplines rather than retreating behind specializations that narrow, rather than widen, the scope of our collective understanding.

It is small wonder that we seem to have arrived in a “post-truth” world: the attempt to make the only acceptable truths those that flow from scientific endeavours renders a great many of the truths that matter impossible to adequately discuss, precisely because the important truths (those that pertain to what we ought to do, for instance) could never be scientific and thus cannot be established solely by an appeal to the facts. Yet we keep looking to scientists to give us a certainty that is not in any way available through scientific methods - and as the L'Aquila trial in Italy demonstrated, we will turn upon those who do not live up to our insanely unrealistic expectations and even accuse them of committing crimes when they, inevitably, make mistakes. But it is we that have failed, by falling for such an impoverished understanding of the complexity of scientific research as that of magical science.

Breaking the Spell

The needs of a narrative require magical science for the very same role as arcane magic - as a plot device limited solely by our imagination - and the two are (in more ways than we tend to acknowledge) equivalent, exactly as Clarke foreshadowed. The problem is, the actual work of the sciences, the global cybernetic collaboration of scientists that began under that name in the 1800s and continues today, is magical solely in its lustre and not in its details. Yes, the collective technological achievements facilitated by the work of countless scientists are now already indistinguishable from magic in a great many situations. But the work of scientists is not magic, and is certainly nothing like the magical science of a Sherlock Holmes fable. When we mistake the two, when we treat a human who conducts scientific work as someone wielding all the sorcery of magical science to know, automatically, everything that needs to be known, we are not supporting scientific truth-finding at all, but making it far, far harder, and in the worst cases, rendering it entirely impossible.

I will not say we must stop enjoying the fantasy of magical science in our stories - escapism is mostly harmless, after all, even if it is not entirely blameless - but is it not perhaps about time we stopped pretending that our scientists are superheroes with magical powers to determine truth? Scientific truths are extremely specific, and much narrower than we want them to be - they are at their most precise precisely when their claims are most limited. The heroism of actual researchers is of a patient, humble kind, that requires time and substantial disagreements to bring about. It is neither as spell-binding as Holmes’ contrived deductions, nor as charmingly detached from human fallibility as Data’s or Spock’s inhuman resourcefulness suggests. Neither has any living scientist access to the unquenchable moral certainty of the later incarnations of the iconic Time Lord to guide them. These role models all imply a role that is impossible to bring to life: we should be careful not to buy too deeply into such implausible exemplars, without dismissing entirely the hopes and ideals that they embody.

Actual scientific practice is amazing, but it is neither miraculous nor supernatural. It is rather mundane in its details, which never entail perfectly prophetic experiments, and always require a great deal more arguing about the possible interpretations of the facts than literature has ever depicted. When we cannot distinguish science from magic, we obscure scientific truth and the immense and heroic efforts required to produce and understand it. We do all our scientists a disservice when we mistake them for sorceresses and wizards, and we entirely dishonour the scientific traditions when we censor or revile researchers for not living up to our hopelessly elevated expectations of their truth-discovering powers.

If we cannot distinguish science from magic, we need to either improve our philosophy of science or else remain silent on scientific topics. As Holmes remarks: the grand gift of silence makes Watson quite invaluable as a companion, for scientists, much like Holmes, often need us to pay close attention to their work and their disagreements, so that together we can eventually reveal true claims about our world. When we work to silence and discredit others we disagree with, rather than remaining silent so we might hear those disagreements we are denying, we have destroyed the very conditions for any kind of legitimate scientific investigation to occur. If we truly wish to be friends of the sciences, perhaps we too ought to know how to hold our tongue and try to listen to the quiet whispers of the truth when the game is afoot.

Comments always welcome, especially the polite ones!


Doctor Who and the Cancellation Monster

Is Doctor Who in danger of being cancelled? Well, first let's make the obvious point that it was already cancelled, back in 1989. So the question probably ought to be: is it in danger of being cancelled again?

To find out, join me on a fascinating journey through the last decade and a half of New Who...

 

The Ratings of Doom

If you've been keeping your ears to the Doctor Who rumour machine recently, you will doubtless have stumbled upon something declaring that current showrunner Chris Chibnall has "killed the show", and that doom is imminent. These rumours of impending disaster aren't actually that new – British newspaper The Sun has been reporting "lowest viewing figures ever" for Doctor Who for about a decade now… but that's a publication that loves to make titillating scandal out of anything and everything, and it's wise to take its stories with a grain of salt.

The truth is: Doctor Who's ratings have fallen steadily over the last decade or so. As indeed have the ratings for all television shows. A key reason for this is that broadcast television is now facing increasing competition from streaming services such as Netflix, and the battle for eyeballs has never been fiercer. Take a look at the following chart, which shows the number of viewers (in millions) for another British television institution, the gritty soap Eastenders, versus the numbers for Doctor Who. Both decline in both their highest and lowest viewing figures in each year from 2005 to 2019, with a number of spikes in the Who ratings corresponding to special events like the 50th Anniversary in 2013 or the debut of the first female Doctor in 2018. If we look at the ratings, it's fairly clear that Doctor Who is actually holding off the rot at least as well as (if not better than) Eastenders.

[Chart: Eastenders vs Doctor Who viewing figures, 2005-2019]

But viewing figures are not the most interesting metric when it comes to judging the 'health' of a BBC show with respect to cancellation. That's because Auntie Beeb, as a public broadcaster, is willing to concede that not all of its programmes need to be as successful as Doctor Who at attracting viewers. For the last forty years, the BBC has commissioned reports on Audience Appreciation, presented as a number out of 100 known as the Appreciation Index (AI), which is calculated by getting people to rate shows out of ten, averaging the scores, and then multiplying the mean value by 10. Take a look at a graph of New Who's AI scores from 2005 to present:

[Chart: New Who AI scores, 2005 to present]

The BBC doesn't like to share these figures for individual shows… but they do for Doctor Who because, well, the AI scores are consistently good. If you look at the image above, it looks like a steady decline, which indeed it has been (more on this below), but that impression is amplified because I truncated the vertical axis on the chart so we could see the changes clearly. Here's the same data shown with a zero point of origin on the vertical axis.

[Chart: New Who AI scores, with a zero origin on the vertical axis]

It's almost completely flat.
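
If you fancy reproducing this contrast yourself, here's a minimal matplotlib sketch - using made-up placeholder scores rather than the actual BBC data - showing how the choice of vertical axis changes the impression the very same numbers give:

```python
import matplotlib.pyplot as plt

# Hypothetical per-episode AI scores, standing in for the real BBC data.
scores = [76, 85, 89, 83, 86, 76, 84, 88, 91, 85, 83, 81, 79, 83, 77, 83]

fig, (ax_zoom, ax_full) = plt.subplots(1, 2, figsize=(10, 4), sharex=True)

# Truncated vertical axis: small changes look like a dramatic decline.
ax_zoom.plot(scores)
ax_zoom.set_ylim(75, 92)
ax_zoom.set_title("Truncated axis")

# Zero origin: the same data looks almost completely flat.
ax_full.plot(scores)
ax_full.set_ylim(0, 100)
ax_full.set_title("Zero origin")

plt.show()
```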

The reason the chart is so flat is that New Who has never scored below 75 on its Audience Appreciation - even Love and Monsters managed to score a 76 (indicating an average audience review score of 7.6), which is below BBC One's target average of 81 but is still a very respectable score for any TV show to get.
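
To make that arithmetic concrete, here's a minimal Python sketch of the AI calculation as described above (the function name and the sample ratings are mine, purely for illustration):

```python
def appreciation_index(ratings):
    """Average a set of 0-10 viewer ratings and scale to a score out of 100."""
    return 10 * sum(ratings) / len(ratings)

# Five hypothetical viewers scoring an episode out of ten...
print(appreciation_index([8, 7, 9, 6, 8]))  # 76.0 - a Love and Monsters!
```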

Some quick disclaimers… the BBC changed data supplier in 2012, switching to online surveys and increasing the sample size from 1,000 to 20,000. This presumably means the later data is more accurate than the earlier data (in so much as any thermometer for aesthetic judgement can be accurate!), but there's no discernible impact from this particular change. Also, these ratings are based on people who watched the shows as they were broadcast and were exposed to all the surrounding media fuss, they were not taken in a controlled environment (neither would there be any point in doing so). This means, as we'll see shortly, there's some fascinating hype effects on the AI ratings for Doctor Who.
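
As a rough back-of-envelope check on that accuracy point, here's a quick sketch of how the sampling error on a mean rating shrinks with sample size - assuming, hypothetically, that individual ratings out of ten have a standard deviation of about 1.5 (i.e. 15 AI points once scaled):

```python
import math

SD_AI = 15.0  # assumed standard deviation of individual ratings, in AI points

for n in (1_000, 20_000):
    standard_error = SD_AI / math.sqrt(n)
    print(f"n = {n:>6}: AI standard error ~ {standard_error:.2f} points")

# n =   1000: ~0.47 points; n =  20000: ~0.11 points. Either way, the
# sampling noise is tiny next to the 75-91 spread discussed here.
```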

How low would AI have to drop before the show would be at risk of cancellation? Well, it's widely discussed that at the start of Sylvester McCoy's time as the seventh Doctor in 1987 the classic show pulled in Appreciation Index scores of 60, with Bonnie Langford's character of Mel being singled out for particular dislike. As the BBC report stated:

56% of respondents who answered a questionnaire on the Paradise Towers story wished – as seemed likely at one point during the course of this adventure – that she had been eaten.

As a start, then, we can eliminate any risk of the BBC cancelling a programme that is still pulling in average AIs of 80+, especially one that is (in the wake of what happened to Top Gear) their only remaining flagship brand for export. The scores would have to drop by 20 points or so for this to be a risk, and that's not even remotely close to what's happening.

But there's a lot more we can glean from trawling through the AI scores with a curious eye…

Arc of Entertainment

The annotated version of the blue chart (below) gives us an intriguing peek into what the audience for this show has thought about individual episodes, and allows us to take a closer look at what we might call the Arc of Entertainment for New Who. After some digging, I have several hypotheses as to the behaviour of the AI scores that might be interesting to mull.

[Chart: New Who AI scores, annotated]

Let's start with the lowest points. These are all during Russell T. Davies and Julie Gardner's run. One is the oft-unpopular Love and Monsters, with its played-for-laughs monster, the Abzorbaloff. Honestly, while it may seem quite amazing that this one still pulled in a 76, this is a hugely inventive episode, and the genesis of the 'Doctor-lite' episodes that soon after gave us the outstanding Blink. In many respects, this is a perfect example of Davies' strengths and weaknesses – it's wildly creative (giving us an entirely new format for a Doctor Who episode), while also being hugely indulgent (it's sometimes rumoured that the monster is modelled on real-life Doctor Who superfan Ian Levine...). It divides fans, and the score of 76 reflects that divide – scores of 8 and 9 are being averaged with much more dismal values.

But then we get the other two episodes to score 76: Rose and The End of the World. Now among New Who fans, these are simply not stories that provoke obvious ire, and it seems to stand in some need of explanation as to why Rose in particular could rival Love and Monsters as allegedly the worst New Who episode according to Appreciation Index scores. However, there is a highly likely explanation for this. When New Who began to air, its audience included a great many classic Doctor Who fans, along with a (larger) number of newcomers with little or no experience of the franchise. Dedicated Whovians did not, on the whole, like Rose… it felt like a weak rehash of Spearhead from Space. Similarly, The End of the World took some flak for seemingly writing over some parts of the backstory (although, let's be honest, the Whoniverse has always been self-rewriting, as Moffat arguably parodies in The Big Bang). My suspicion is that these low ratings from (some) classic fans dragged these early episodes' AI scores down – but after that, the detractors simply stopped watching, and the AI scores began to rise.

Another thing the AI scores reveal about Russell T. Davies' stint as showrunner is that Davies mastered the build. I personally never liked his slightly slapdash way of building up to a climax at the end of each season, but I was weaned on Chris Claremont stories where the building up was far more textured (it's far easier to plan long-term stories in comics than in TV shows). The bottom line is, Davies' method worked, and it did so despite largely hermetically sealed episodes, which is an incredibly difficult balancing act. The Parting of the Ways (series 1 finale), Doomsday (series 2 finale) and The Stolen Earth/Journey's End (series 4 finale) all show huge spikes that speak of the immense satisfaction viewers felt as Davies' arc stories paid off. (The series 3 finale also peaked above the rest of its series, but only just.)

But hang about – The Stolen Earth/Journey's End aren't just an end of series spike, they're the highest rated episodes of New Who ever at 91 approval. How is that possible? Call me a humbug, but those stories are pretty weak (at least some other Whovians agree with me on this). Yet they have two enormous advantages. Firstly, it was highly publicised that David Tennant was leaving, and Tennant was (and is) so popular that this hype train left its impact (even though it would be a year and a half before Davies and Tennant would actually bow out). Plus, bringing back both Captain Jack and Sarah Jane was an honest-to-goodness crowd-pleaser, and even more so for fans of Davies and Gardner's spin-off shows. Crossovers often do well in the short term. Hindsight doesn't always look back upon them kindly, though.

State of Gradual Decay

And so to Steven Moffat, a much more consistent showrunner than Davies in many respects. His only AI dud is Mark Gatiss' Sleep No More at 78 (just above Davies' three barrel-scrapers with 76). It's another format-breaker like Love and Monsters, and it also divides fans. (I don't much like it, but I probably wouldn't single it out over other lacklustre moments in Capaldi's tenure). Moffat proved weaker with his series arc plotting than Davies, though, and his only end of series spike is his first one, The Big Bang. Look at the crash immediately afterwards with A Christmas Carol. Ouch. Loveable Matt Smith can't rescue this one from being a bummer. Moffat does, however, achieve something that Davies never quite managed – a spike for a series opener, Asylum of the Daleks. Another crowd-pleaser with a great story and tons of fan service (yes, you can buy my love with a Special Weapons Dalek), and the back door debut of Jenna Coleman to boot. Moffat's 50th Anniversary pairing of The Name of the Doctor/The Day of the Doctor is a huge spike at the end of series 7 - although we're 'only' talking 88 here, which is an utterly fantastic AI score that other shows would kill for, but amazingly not the 91 that Davies' fake-out exit achieved. And this is with David Tennant reprising his role as the Doctor, of course…

That failure to crest above the wave, despite frankly blowing all of Davies' finales out of the water in terms of writing and production quality, is a sign of something that is going to seriously afflict Moffat's time as showrunner… audience fatigue. The AI scores are in a trajectory of consistent decline from the moment Moffat takes over – don't be fooled by the story that it's Chris Chibnall and Jodie Whittaker that kicked off a downward turn, the slowdown has been a decade in the making, and it's at its most tangible with Moffat's second Doctor, Peter Capaldi. Deep Breath, Capaldi's debut, is a notable dip from where the show had been previously (admittedly the anniversary was a tough act to follow!), and Capaldi's only readily apparent peak is World Enough and Time (which incidentally, I love), and that wasn't the finale for series 10, but merely the set up. The final two Capaldi episodes drop 2 points apiece, bowing out at just below where he came in. It is also during this period that Doctor Who merchandise sales drop, although to be fair, they peaked in Tennant's run and never recovered, just like Dalek toy sales in the sixties.

[Chart: New Who AI scores, annotated (repeated for convenience)]

Finally, Chibnall and Whittaker. (I've duplicated the annotated chart here so you don't have to scroll so far to check it.) Here, the AI scores start to become seriously up-and-down, but the highs of 83 are directly in line with the fatigue effect that began with Matt Smith and made itself a comfortable rut during Capaldi's time in the TARDIS. Jodie Whittaker's debut in The Woman Who Fell to Earth is very healthy in AI terms, on par with stronger Capaldi/Moffat episodes certainly, and there's another bump with Spyfall (which personally I'm less fond of) – and then there's the gobsmacking Fugitive of the Judoon, which weirdly scores the same 83 approval as the first Whittaker story despite wildly greater love from the supportive parts of the fan-base. Something's definitely up here – what?

There are at least two factors that can help explain this pattern. Firstly, the contrasting down-strokes. The Tsuranga Conundrum, The Battle of Ranskoor Av Kolos ("Worst. Finale. Ever." - although it's not actually a bad episode), Orphan 55, Praxeus, and Can You Hear Me? all dip notably into the high 70s, although none go below Orphan 55's disappointing 77 (justified in my view). When a show can't consistently satisfy the audience there's none of the hype boost that Tennant/Davies secured, and it means that better episodes score lower than if they were surrounded by stronger work.

The other problem is the fan revolt. I don't know what proportion of the Whovians at large is involved - certainly not enough to make the AI scores do anything other than preserve the general trend that began with Moffat's ascension to the throne - but as with those 76 scores for the first two New Who episodes, it inevitably drags your AI scores down when you offend the fanbase. And whereas classic fans gave it up in two episodes, Whittaker/Chibnall haters apparently keep on watching, even though they're not enjoying themselves. But if this is a factor, it's a reminder that the disgruntled fans are in the minority, otherwise we really would be facing cancellation, instead of a continuation of the same general pattern since Tennant, Davies and Gardner left. Also, if we acknowledge this factor, we have to wonder what the scores would be if the rebels just stopped watching...

Who vs Who

Finally, I want to share this intriguing chart which shows all the episodes by each Doctor in order, even though taken out of context this will seem like priceless ammunition for the anti-fans. Frankly, we're all free to tell whatever story we wish, but what we see here offers a much more intriguing tale than just the long-running trend of decline.

[Chart: AI scores by Doctor, episode by episode]

Obviously, Whittaker is below Capaldi who is below Smith who is below Tennant. That's the audience fatigue effect I was talking about as much as anything (go back and check the blue charts above to confirm this). But we can look at this story very differently, not as a competition but as a means of drawing more general conclusions about the AI patterns of New Who.

  • Christopher Eccleston fights from 76 up to a whopping 89, and goes out with a genuine bang. He and Davies (and Julie Gardner – she really ought to get more credit than she does) literally save the show here with this fight from "should be on BBC Two" to "centrepiece of Saturday night telly".
  • Tennant is wildly variable at the start, dipping to his 76 low in his first year in the role, but then wins the battle for hearts and minds and climbs all the way to his triumphant exit.
  • Matt Smith comes in higher than Tennant (he's riding on the love for Tennant, and the expectations that Moffat's episodes under Davies/Gardner had justifiably created), but it's all gently downhill from here, the show understandably unable to get back the national moment of hype created by Tennant's departure, even though the 50th anniversary gives it a great shot.
  • Capaldi's pattern is very much like Smith's, but further deflated by the fatigue effect – he too gets his exit peak, but it's slightly premature, his final two appearances both sliding downwards.
  • Jodie Whittaker's trace is nowhere near as stable as Capaldi's, but it's about the same jaggedness as Tennant and Smith at the same point, taking the overall trend in AI decline into account. It's almost as if – and stop me if you've heard this before – the show just went through a major transition and is still feeling out the consequences of that change...

I know I won't convince the anti-fans with any of this analysis, but that isn't really why I undertook this investigation. I was merely curious as to what the Audience Appreciation Index values would reveal if I examined them with a little data visualisation. Having shown you the results, let me finish with three claims that neatly bookend everything discussed above:

  1. Doctor Who is in no danger of cancellation. If you're hoping for this out of spite, I'd suggest stopping watching is a better strategy, because then you'll be genuinely in control of your own experience of Doctor Who going forward. (Big Finish would love your support if you do ditch the TV show.)
  2. New Who audience scores have steadily fallen since David Tennant announced his departure. The slide in AI scores began with the 'Year of Specials' (2009) and continued throughout Moffat's run and beyond. However, the falls are merely a few points in size, and the overall pattern remains essentially flat.
  3. Whittaker/Chibnall have not killed the show, and their next series together could potentially outperform Capaldi/Moffat at the equivalent point in their run, but the heavy-handed moralising will need to be handled more elegantly (all the low points in series 12 featured prominent 'lessons').

This is all rather exciting to me – I would rather have a showrunner who is willing to take risks with new writers and suffers a few duds than someone who is consistent in their quality but whose shows are unambitious. The funny thing is, it seems like all the New Who showrunners fit that description, one way or another. Whatever happens next, I'm definitely not bored of the show… I'm ready for more.

What about you?

All data quoted belong to the BBC.


The Chibnall Gambit

[Image: Chris Chibnall and Verity Lambert, blended together]

When I last worked with Tom Baker, on the voice recordings for the recent Heretic Kingdoms games, he confessed to being rather disappointed with the scene Steven Moffat had written for him in the fiftieth anniversary episode, The Day of the Doctor. It seems he would have preferred to appear directly as the Doctor rather than as some sideways reference to his time in the role. Which just goes to show that no matter how hard you work on your Doctor Who script, how many fans you satisfy with your continuity kisses, and how much you can shock and delight with a ‘canon’ ball as huge as a secret incarnation, you still can’t please everyone. It’s something that the newest showrunner, Chris Chibnall, knows all too well.
 
Now that it’s all over, there’s a lot to take in from the aftermath of Chibnall’s second series at the helm - both on and off the screen. I have an enjoyably wide perspective in this regard since my kids and I have been simultaneously watching both the new episodes of the show and the classic Hinchcliffe and Holmes episodes - while my boys are also watching New Who from the beginning on the iPlayer, and I’m revisiting the black and white episodes and reconstructions in the cracks of my time. I’d love to do a tribute to the incredible Adventures with the Wife in Space with my children’s reactions - but it’s a lot more fun reading Sue Perryman’s bitchy honesty than it would be discovering that what my kids love more than anything is a really good monster. What I have gained, though, is a delightfully fractured perspective on three different yet closely related periods in the show’s history - the beginning, the revival, and now - and with it, something of a skeleton key to the mysterious world of Chibnall’s Who.
 
We were watching the last Hinchcliffe-Holmes story, The Talons of Weng-Chiang, when Spyfall, Part 1 aired. Knowing that Robert Holmes had originally intended the villain in his Sherlock Holmes tribute to be the Master has very slightly blunted my enjoyment of it... it seems so obvious in retrospect. But Hinchcliffe was surely right to avoid having a double Master reveal in one season - and Spyfall was something of a let down for me precisely because I didn’t need yet another Master reveal, even with such a great casting as Sacha Dhawan. I mean, we’ve now had three in the past five series alone! Besides, they couldn’t possibly reveal the late Roger Delgado, and every Master since is inevitably something of a disappointment for me, with the possible exception of Derek Jacobi. I’m afraid I was never much of a fan of John Simm’s manic Master, and while Michelle Gomez is fabulous in everything, Missy was not quite enough to make Peter Capaldi’s episodes with her anything more than watchable diversions. Speaking of which: did Moffat plan that regeneration just so he could have one Doctor with a side of Master and one without (half rice, half chips)...? I don’t know, but I do know the fanbase wasn’t half as up in arms about that casting as Jodie Whittaker’s, despite Moffat laboriously foreshadowing this inevitability from 2013’s The Night of the Doctor onwards.
 
The official story in the newspapers is wall-to-wall outrage, of course. But it’s quite hard to discern how much of the alleged controversy around this year’s series was inflated by the tabloids’ insatiable need for muck to rake, and how much of it comes from the ever-irritable Whovian fanbase having more public means to bitch than ever before. In this respect, Paul Kirkley’s The Diehards of Doom (written during series 12, just a few weeks ago) might be the most insightful newspaper article on the show ever written. Frankly, the social media hashtag #NotMyDoctor feels darkly amusing to me, because I thought it was common knowledge that “your Doctor” is the one you fell in love with as a child. It was Tom Baker for me, so I stopped watching the Peter Davison episodes in disgust because he was “not my Doctor”. But I still came to love these episodes when I returned to them, many years later. Besides, you don't have to like everything in a franchise to be a fan of it. I still like George Lucas' Star Wars movies (Caravan of Courage notwithstanding), even though I don't want to pay Disney to watch their big budget fan fiction.
 
So here I am, a fan of these adventures in space and time since hiding behind the sofa for Genesis of the Daleks back in 1975’s season (sic) 12, and for the first time since Russell Davies’ sophomore outing as showrunner back in 2006’s series 2, or possibly Moffat's first run in 2010, I’m thoroughly enjoying the show rather than patiently waiting for that one truly great episode a year that we might just be lucky enough to get. And it’s not because new showrunner Chris Chibnall has fixed all the flaws in the show - this will never happen, thankfully! - it’s because he’s chosen a different set of flaws to everyone before him, a template for his ‘era’ that pays greater tribute to the earlier episodes than ever before while also shuffling the deck of narrative possibilities rather more than anyone else has dared. It’s a gamble, to be sure, just as every major change to a beloved franchise has to be. But it’s a calculated risk - one rooted in an evident affection for parts of the show’s history that don’t always get the love they deserve.
 
Chibnall’s tenure on New Who has been by far the closest in form to the episodes made under the very first producer, Verity Lambert (pictured above), even if he didn’t quite have the brass to make one of those purely historical stories that were a staple of William Hartnell's Doctor. Russell Davies dreamt up his template for the revived show by taking influence from the Fourth, Seventh, and (to a lesser extent) Third Doctors and blending it inexplicably (yet successfully) with British soap opera Eastenders. Steven Moffat kept most of the foundation that Davies had laid (hell, he helped him lay it!), but mixed in a little of “his Doctor”, Peter Davison (Number 5), specifically a lot of high concept psychodrama. I’ve already compared these two showrunners once before, the key point being that Davies had Moffat as his one exceptional writer (like Barry Letts and Philip Hinchcliffe had Robert Holmes), but Moffat could not find a genius writer to apprentice under him as a future replacement, for all that he and Mark Gatiss had (and still have) a lot of fun working together. But I’m afraid no head writer on a sci-fi show ever managed a consistent flow of dynamite episodes without working with another writer - and before you suggest Dalek creator Terry Nation on Blake's 7, he had formidable assistance from fellow former Who writer Chris Boucher as script editor to bail him out... Chibnall, despite his unique strengths as showrunner, could still use some reliable back up behind him, for all that I am enjoying the writers he is choosing to work with, Vinay Patel in particular. (In earlier times, incidentally, the BBC would not let senior production crew also write episodes without special dispensation - I sometimes wish they’d go back to that policy.)
 
What aspects of Chibnall’s regime hark back to the show’s first (and only female) producer? Well, as the very first Chibnall-helmed episode hinted at, the dismantling of the hermetically sealed episode. Oh sure, Davies and Moffat made some two parters, but in the tenure of the first two Doctors, one serial flowed into the next as a continuous narrative. The Daleks ended on a cliffhanger that set up The Edge of Destruction; the resolution to The Enemy of the World was in the first few minutes of The Web of Fear. While The Woman Who Fell To Earth did have an unexpected cliffhanger ending that is perfectly in keeping with the structure of these early Who stories (The Mind Robber inevitably invites comparison!), series 12 has generally settled for a less ambitious compromise, mostly consisting of setting up the next story at the end of each episode, something not really done (regenerations notwithstanding) since the climax of The Hand of Fear teased The Deadly Assassin (which incidentally was perhaps the biggest single influence on Chibnall's series 12 finale, The Timeless Children).
 
The more significant echo of Lambert and those who immediately followed her is the return of the TARDIS family. This was integral to the show’s format right from the first episode, An Unearthly Child, which also featured three companions: Susan (the Doctor's granddaughter, swept under the continuity rug since 1983's The Five Doctors except in the semi-canonical Big Finish Eighth Doctor stories), and her school teachers at Coal Hill, Ian and Barbara. Multiple companions remained the norm until the Third Doctor appeared, when a single female companion became more-or-less standard. While the Fifth (under producer John Nathan-Turner) tried the larger gang again with Tegan, Nyssa, and Adric, New Who only came as far as Moffat’s two and a half Ponds until Chibnall brought us back to the full crew complement of four, counting the Doctor herself.
 
Certain newspaper critics have consistently railed against the three newest companions, Ryan, Graham, and Yaz, whinging that it’s too many people and there’s not enough for them all to do. This somewhat misses the point. Davies’ biggest change in the show’s DNA was to make the companions the actual focus of the wider storylines, which Moffat continued during his reign. Chibnall goes back to the narrative being more often focused upon the events and characters introduced within the current episode. More companions thus allow a larger cast of secondary characters (The Haunting of Villa Diodati would not have worked as well in the earlier New Who series with a single companion), more choices in how to pair characters, and slower burning character development. In other words, it doesn’t matter that we had to wait until Can You Hear Me? to get solid character backstory for Yaz, because committing to the TARDIS family format works precisely because you don’t need everyone to have an equal share of every story. TV shows used to know this; I’m at a loss to put my finger on when everyone apparently forgot that an ensemble cast inherently thrives on its permutations. But for Doctor Who, certainly, that happened when Davies (boldly) wanted his companions to also be the arc stories, which Moffat kept running with even though his arc plotting was quite often the weakest part of his otherwise excellent writing. Fun, yes; surprising, yes; logical... not very often.
 
Chibnall seems willing to allow the slow burn. This misfired in series 11, in part because the climax was such a damp squib, and we were so used to the fellow at the control column pulling out all the stops and turning the nonsense up to full blast in the final episode. For all the problems in that set of episodes, though, I blame the transition - Moffat, after all, was already on the shop floor when he was handed the keys. Chibnall hadn’t written for Who in six years. Moffat had to go to him to discuss the job offer, because unlike Davies he simply didn’t have a writer he could hand over to with any confidence (Gatiss always had too many irons in the fire for Auntie Beeb to consider him a contender). And it wasn’t so much Chibnall’s previous Doctor Who scripts that got him the gig, frankly, but the fact that he had successfully helmed Broadchurch and thus knew how to run a show - no small task!
 
Another thing Chibnall went ‘back to basics’ for, and another source of critical ire: the ‘lesson’. Chibnall does like the Doctor to indulge in a spot of “what did we learn this week”, alas. Yet Doctor Who was originally conceived as an educational show, with time travel intended as a facilitator for the sharing of historical knowledge - an amazing suggestion for 1963! However, you would be hard pressed during the Gothic monster golden age of Hinchcliffe and Holmes (1975 to 1977) to deduce an educational angle to any of those dark and utterly fabulous stories, and under Davies and Moffat the closest thing to a ‘lesson’ was sceptical rhetoric endlessly rammed down our throats - about which, I note, neither critic nor fan complained, myself notwithstanding (see the Religion in Science Fiction serial for a deep dive on this point). I can completely understand why some fans and critics think the lesson is out of place... but I cannot agree.
 
Chibnall has gone back to seeing the show as having an educational remit. Sometimes this works rather well (Rosa, Demons of the Punjab, The Witchfinders, Nikola Tesla’s Night of Terror, The Haunting of Villa Diodati). Sometimes we get the clunky preaching of early Star Trek: The Next Generation (Arachnids in the UK, Orphan 55, Praxeus). The failed lessons usually crash and burn around environmental issues, since plenty of people know that something is terribly wrong in this regard, but no-one has a handle on anything close to cogent advice for dealing with the growing catastrophe, making attempts to provide lessons on the topic rather empty. I would hardly call anything in the current New Who ‘woke’ though, as if the liberally-minded were actually engaged in something new and not merely cherry-picking which traditions to cherish - just like conservatives, actually! Besides, I don’t see that much difference between Chibnall's environmentalism and what Barry Letts green-lit in The Green Death back in 1973, or even Tom Baker’s Doctor initially refusing to help the Brigadier in 1975’s Terror of the Zygons because “it’s about time the people who run this planet of yours realised that to be dependent upon a mineral slime just doesn’t make sense.”
 
What Chibnall is willing to do, though, which neither Davies nor Moffat could stomach, is to genuinely offer respect to everyone, rather than undermining that intended message by singling out certain beliefs as worthy of contempt. No professional critic mentioned it, but I notice fantasy and sci-fi writer (and conservative Christian) Kyle Robert Schultz picked up on the same thing that leapt out at me from the astonishing Fugitive of the Judoon - and no, I don’t mean the rather spiffing ‘canon’ ball of inventing another new Doctor we didn’t know about! When Jodie Whittaker’s Doctor chastises the Judoon by saying “this is a place of worship, show some respect!” it is the first time the New Who Doctor has stood up for religious people rather than merely pitying them (The Satan Pit, Gridlock) or being outright venomous (The Parting of the Ways, Army of Ghosts, and so many more). You most likely don’t care about this issue, but it means a lot to those of us who have had to endure asinine ‘lessons’ under Moffat and Davies about what intelligent people ought to believe, as if tolerance should be bounded by acceptable belief and not by acceptable behaviour. The Doctor cannot love humanity and yet harbour prejudice against the majority of the species - it makes no sense at all. Chibnall has put the character back to respecting all cultures, and even let the Doctor invite a practising Muslim to join the TARDIS team. It’s about time indeed.
 
Doctor Who has been hugely influential in my writing. I borrowed from Genesis of the Daleks when writing the showdown between Lewton (Rob Brydon) and the villainous cult leader Satrap (Nigel Planer) in Discworld Noir, thinking that nobody would notice because I genuinely underestimated the depth of the geeky love for Tom Baker as the Doctor. Then, knowing for certain that Tom’s fans ran deep and wide among the audience for videogames, I cast him twice in Heretic Kingdoms games, along with other Doctor Who and Blake's 7 veterans like Robert Ashby, Stephen Greif, and Sally Knyvette, who I hope to work with again soon. In my philosophy books, Chaos Ethics begins with a direct reference to Doctor Who, and when I came to write The Virtuous Cyborg I was struck by the realisation that the only feasible way of understanding the Doctor's personal moral code is in terms of virtue. A recurring theme of the show is the idea that the right thing to do is never simply a matter of numbers. Jodie Whittaker's Doctor reaffirms this in The Haunting of Villa Diodati when she refuses to sacrifice Percy Shelley, even though doing so would prevent the Cybermen from rising again and save billions of lives. The same theme occurs in that pivotal moment of Genesis of the Daleks when the Doctor rejects the Time Lords' injunction to inflict pre-emptive genocide upon the Daleks. There is a continuity of character here that matters to the identity of the show.
 
Maybe I’m the only person on the planet who really doesn’t care one way or the other whether we have a female Doctor incarnation - or two, for that matter! All I care about is that the BBC keeps making the show, and that some of the new episodes will be worth watching. Chibnall’s already achieved this, and in bringing in influences from earlier in the show’s history, he’s also tickled my fan bone rather magnificently. When the delightfully malevolent Zellin name-checks not just the Eternals and the Guardians but the Celestial Toymaker as well, that’s not just a kiss with continuity, it’s a veritable snog with tongues! And even if you don’t like the new format, there’s surely no denying that the episodes have never looked or sounded so great. The production values are now so high that even the corridors everyone is running down are shot on location! And while Murray Gold’s scores were always up to the task at hand, Segun Akinola’s work is spectacular, and his new arrangement of the theme tune is closer to the incredible Delia Derbyshire original than anything since - fitting, then, that the title sequence also reverts from the 1970s vortex back to something more akin to the kaleidoscopic swirl of the original title sequence.
 
If you’re one of those younger Whovians who only knows New Who, you could be forgiven for thinking all this connectivity to the early days of the show is irrelevant. But everyone who has held the reins of this beast of a franchise - a show so resilient it survives all changes and has even regenerated itself from death! - does so because of their love for a mythos that goes back to its earliest incarnation in 1963, and must find ways to both inherit and reinvent its past and its lore in an endless (and often thankless) balancing act. Moffat knew it, Davies knew it, and Chibnall’s gambit is founded upon it. When he has Jo Martin’s Doctor (the zeroth Doctor? -1th?) say in the finale “Have you ever been limited by who you were before...?” it is not just part of the climax of the story, it is a gauntlet laid down to those truculent fans who deny the legitimacy of these episodes and these Doctors. Chibnall has put in the time as a fan, he knows his stuff, he has earned his time at the reins of the show - and he’s at least as good as his predecessors at keeping it going.
 
It's no good making the show just for the diehards; that’s what Big Finish does with the Doctor Who audio adventures, and it must necessarily do so on a much lower budget... The TV show has to bring in a new and younger audience, secure a regular adult audience, and somehow satisfy the existing fanbase - despite the utter impossibility of achieving all of this! Yet this is a universe where impossibility is a negotiable concept, thanks primarily to people like Verity Lambert, Chris Chibnall, and everyone in between who has been willing to take on the inestimable challenges of a television programme like no other. If you want a show that never changes, go watch boxed sets of something that’s already finished. That way, you can know for certain what you’re getting yourself into. Doctor Who, on the other hand, is - and always has been - joyously, chaotically, uncontrollably alive. I shall continue to relish it, from all its times and spaces, for as long as I am too.
 
Love technology but don't want to end up like the Cybermen? Check out The Virtuous Cyborg - paperback and ebook out now!

Top Ten Cybergs

Every purposeful network of beings and things forms a cyberg, where (like an iceberg) we only see a fraction of the entailed network and the rest lurks beyond our awareness most of the time. The complete inventory of beings and things entailed within each of these cybernetic networks would be challenging to enumerate, but we can approximate the scale of each cyberg by counting just the number of one kind of entity within it e.g. the number of humans, the number of computers.

To qualify as a cyberg at all, we must be dealing with a network that spans its entire breadth with some kind of active relation, even if merely one of potential. A nation is a good example: not every citizen knows every other citizen yet they are linked by a shared bureaucracy that integrates them all into one functional network. It is not enough for there to have been a common network of production – no matter how many people own a penknife, penknife-wielders do not have any ongoing relationship. Conversely, the exchange of media effectively links television stations and thus viewers such that while individual TV stations are modestly sized cybergs by contemporary standards, they aggregate into something far more substantial. (Religions are something of a borderline case in this regard, but I shall set these aside for now.)

In the list that follows, cybergs are listed in order of the size of a single indexed entity, either humans or devices. Everything listed is a gigacyberg, with no fewer than a billion entities embroiled in its network. This list is not intended to be definitive but merely indicative – it points to the general situation today from a perspective we would not normally consider.
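The counting method is simple enough to set down explicitly. Below is a minimal sketch in Python of the tabulation, assuming nothing beyond the approximate figures quoted in this list - the Cyberg class and the numbers are purely illustrative, not a real dataset or tool:

```python
# A minimal sketch of the ranking used below: each cyberg's scale is
# approximated by counting a single indexed entity (humans or devices).
# The figures are the rough approximations quoted in this list.
from dataclasses import dataclass

@dataclass
class Cyberg:
    name: str
    indexed_entity: str  # the one kind of entity we count
    count: int           # approximate number of that entity

GIGA = 1_000_000_000  # a gigacyberg entails no fewer than a billion entities

cybergs = [
    Cyberg("Money", "humans", 7_300_000_000),
    Cyberg("The Internet", "humans", 3_500_000_000),
    Cyberg("Facebook", "humans", 2_000_000_000),
    Cyberg("Television", "humans", 1_600_000_000),
    Cyberg("Microsoft", "humans", 1_500_000_000),
    Cyberg("China", "humans", 1_400_000_000),
    Cyberg("India", "humans", 1_300_000_000),
    Cyberg("Cars", "devices", 1_200_000_000),
    Cyberg("Tencent", "humans", 1_000_000_000),
    Cyberg("Google", "humans", 1_000_000_000),
]

for rank, c in enumerate(sorted(cybergs, key=lambda c: c.count, reverse=True), 1):
    tier = "gigacyberg" if c.count >= GIGA else "megacyberg"
    print(f"{rank:2d}. {c.name}: ~{c.count / GIGA:.1f} billion {c.indexed_entity} ({tier})")
```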

Runners Up

A number of megacybergs narrowly missed the top ten, including the European Union (743 million), movies (about 800 million), and guns (875 million). More than 360,000 people die each year as a result of the gun cyberg, but this is by no means the most fatal of our cybernetic networks. If this list included religions, Christianity would be the number three gigacyberg (2.3 billion), Islam would be ranked jointly with Microsoft (1.5 billion), and the Hindu traditions would be a close runner up (900 million).

Joint 9th: Tencent and Google (1 billion)

Chinese internet giant Tencent and search colossus Google both have about a billion humans in their cybergs. While Tencent does not lead Chinese search (that honour goes to Baidu), it has a tremendously diverse network of internet services, including the wildly successful competitive game League of Legends. Google dominates search globally – but even this only allows it to squeak into the world’s biggest cybergs if we take its quoted figures as accurately gauging its scale. Pragmatically, the reach of the Google cyberg is probably greater than this conservative estimate – but it feels somehow fitting to show this young upstart beginning its climb towards the top of the heap...

8th: Cars (1.2 billion)

It is possible to drive completely around the world, such is the extent to which the car-human cyborg has emerged as the dominant lifeform on our planet. We have completely changed the ecology of almost every biome by installing the infrastructure required to make cars a viable form of transportation. This is the world’s deadliest cyberg, taking more than 1.25 million human lives annually – and that figure does not include war deaths some would attribute to the oil industry that feeds this network.

7th and 6th: India and China (1.3 and 1.4 billion)

The only nations to qualify for this top ten list, India and China each have more than four times the population of the United States, and nearly twice the population of the European Union. China is the wealthier cyberg, with an economy four times the size of India’s, but both wield significant destructive power via their hundreds of nuclear weapons. Even so, between them they hold less than 2.5% of the world’s nuclear stockpile, since the US and the Russian Federation hold 45% and 48% of the world’s nuclear weapons respectively – a quantity far beyond any rational consideration.

5th: Microsoft (1.5 billion)

Despite no longer being the centre of attention in technology circles, Microsoft’s cyberg is 50% bigger than the certifiable size of Google’s, thanks to the continuing dominance of Windows, which has a 90% market share in desktops and laptops. That said, desktops and laptops now account for only 20% of the robot market, which is dominated by smartphones (where Google enjoys 87% of the market). Microsoft is a cyberg in decline, unable to adequately break into the pocket robot marketplace, but jealously guarding its hold over other industrial cybergs.

4th: Television (1.6 billion)

That television enjoys only a marginal numerical advantage over Microsoft is a sign of how completely the computer has positioned itself as the cybernetic successor to the notorious boob tube. Yet there is another lesson here: the television is not ubiquitous, being a cyberg that extends through only 20% of the planet’s population.

3rd: Facebook (2 billion)

Here again we get a sense of the power of the digital cybergs... it has taken a little over a decade for Facebook to become the first definitive 2 billion human cyberg owned by one corporate entity. By leveraging human social instincts – and largely by accident, for it was not originally designed to operate as a surrogate for relationships – Facebook has aggregated more humans into one walled garden than anything else.

2nd: The Internet (3.5 billion)

It is distributed, beyond outright control (but certainly open to influence), and is the largest electronic cyberg on our planet. The internet... so significant, most dictionaries think it deserves a capital letter, like a nation. But this is a cyberg on a scale beyond national bureaucracies, a network that links half the planet’s humans to almost all the planet’s computers. Cisco claims there were 8.7 billion devices connected to the internet in 2012. As cybergs go, this one is the most spectacular in scale and potential. Yet it is still arguably outstripped by at least one larger cyberg...

1st: Money (7.3 billion)

This was the first cybernetic network, the first technical system to spread around our planet as both practice and tacit relations. As humans have grown more populous, so too has money spread with us – including into the virtual spaces of the internet, where this cyberg now lives as much or more than it does in the pockets of its humans. It seems positively simplistic next to the other gigacybergs, yet it engulfs almost every human; I have estimated that only 1-2% of the population of our planet are not caught up in the commercial cybernetic system. The sheer ubiquity of money as a concept is so complete that politics hinges more on budgetary numbers than on questions of how to live. This is one of our first technologies, as old as civilisation – and it remains our most successful.

More cybervirtue next week.


Tip of the Cyberg

Does technology simply increase human capabilities? Or have we radically misjudged the extent and complexity of the ever-growing abundance of tools around us?

The astonishing advances in technological prowess over the past century or so give an impression of infinite capabilities – the closest secular thought gets to imagining omnipotence. Thus we have no difficulty envisioning (if we suspend critical judgment) techno-immortality, sentient robots, or interstellar travel. Indeed, science fiction is replete with these imaginary grails. This way of thinking about our tools – as personal enhancement – radically misleads us in several important ways (many of which I discuss in Chaos Ethics), but perhaps the most striking is the sense that, equipped with any technology, we act autonomously. This is always a subtle deceit.

Science fiction helps bring this confusion into focus. In Star Trek, the communicator, universal translator, phaser, transporter, and tricorder all do one thing perfectly (except when drama requires otherwise), to the extent that a Starfleet officer equipped with these things can appear anywhere, talk to anyone, scan anything to know what it is and what condition it is in, and – when push comes to shove – stun or kill on demand. All these capabilities occur literally at the push of a button. Where do these miracle tools come from? How do they work? It doesn’t matter; it’s high technology (hi-tech) – which is strikingly parallel to the magic-laden worlds of high fantasy. Arthur C. Clarke’s adage that any sufficiently advanced technology is indistinguishable from magic should raise more questions than it does... even in the context of sword and sorcery, we are entitled to ask: where does the magic come from? What is expended when it is used? What are the unseen limitations to its usage?

In the Terminator movie franchise, mankind in the future has been driven to the brink of extinction by robotic killing machines humanity made without thinking. That much of its setting is not hard to believe, particularly when you see the effortlessness with which the armed drone made battlefield honour obsolete. Yet against the backdrop of a total collapse of civilisation and killer robots prowling everywhere, the Resistance movement in the future depicted by Terminator: Salvation somehow maintains safe houses, feeds the survivors, and even operates fighter planes. The aeroplane sits in our mind like the tricorder and communicator – autonomous once paired with a human. But as Bruno Latour never tires of reminding us: airplanes do not fly, it is airlines that fly. In stark contradistinction with what we see in a Terminator movie, no plane takes to the air without logistical supply chains bringing fuel, air traffic control networks managing flight paths, and support personnel performing essential maintenance.

Technology is not magic, and even fictional portrayals of magic are not as autonomous as we imagine our tools make us. There is a stark difference between hammers, binoculars, and a wind-up torch on the one hand and computers, cars, and airplanes on the other. While both sets of examples are manufactured by complex meshes of people and things, the latter also require a complex network just to operate, a point brought into clear focus by the actor-network theory developed by Michel Callon and Bruno Latour. If a cyborg is what occurs when an organism is cybernetically enhanced by a tool like a hammer, we can call the network that produces and maintains the more complicated cyborgs, like cars-and-their-drivers or computers-and-their-users, a cyberg.

The iceberg famously has only 10% of its mass above the waterline, and thus only its top is visible to the casual observer. So it is with the cyberg – the cybernetic network required by the more convoluted of our technologies. We see only the cyborg – the car and its driver – and not the cyberg that makes it possible. When it comes to technology we are perpetually shallow-sighted: we see only the ‘surface’ of the network, so flat that it can be expressed as a one-dimensional array or list (car, driver, fuel, road). If we manage somehow to become more deep-sighted, we can recognise the relations and dependencies that give the cyberg its network-qualities (ore mines, smelting mills, factories, oil rigs, refineries and far more besides). These dependencies rapidly become tangential and obscure: an oil rig has scuba divers who repair the metal structure when it corrodes, using arc welders entirely unique to their profession – but who is deep-sighted enough to think of the factories making hyperbaric welding kits or compressed air tank regulators when looking at a car?

It is the cyberg that defines our technological situation, more so than the scientific research projects that we (somewhat naively) see as feeding directly into new tools, like the magician conjuring a new alchemical potion out of thin air, having expended nothing but time. What is more, we can measure our depth into cyberg existence by looking at the numbers of people and things involved in the cybernetic network. A hammer made a millennium ago involved a miner and a blacksmith, a mule and a horse, a mine, a furnace and trees; no more than about a hundred beings and things were entailed in this early cyberg example. A functionally identical hammer today would entail a network of ten thousand beings and things, or even a hundred thousand.
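To make that arithmetic explicit, here is a hypothetical back-of-envelope sketch in Python that treats cyberg ‘depth’ as the order of magnitude of entailed beings and things; the counts are the rough illustrations from the paragraph above, not measurements:

```python
import math

# Hypothetical 'depth' of a hammer's cyberg: the order of magnitude of the
# beings and things entailed in producing it. Counts are illustrative only,
# taken from the rough figures in the text.
entailed = {
    "hammer, a millennium ago": 100,     # miner, blacksmith, mule, mine, furnace, trees...
    "hammer, today (low end)": 10_000,   # plus mines, mills, factories, shipping...
    "hammer, today (high end)": 100_000,
}

for tool, count in entailed.items():
    print(f"{tool}: ~10^{round(math.log10(count))} beings and things")
```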

Our cybergs get bigger, deeper, wider, and as they do our autonomy recedes even while the imagined scope of our autonomy grows. This is part of the technological blindness I have previously called cyberfetish and am here marking as shallow-sightedness; our strange capacity to see only the powers and potentials of our new tools, but to overlook or underjudge their consequences. Precisely because we have moved from tools that could be made by individuals or villages to tools that require nations or corporations to build and maintain, we live at a time where the cyberg is the measure of both possibility and catastrophe.

Although I have introduced the idea of a cyberg through the extended frameworks behind a specific tool, the concept behind these cybernetic meshes applies whenever beings and things are linked together into extended networks. When Benedict Anderson observed that the printing press allowed the imagined communities we call nations to form, his argument can be read as saying that nations are cybergs. Every corporation is a cyberg, constituted slightly differently from nations, but in the last half century rivalling and exceeding them for power and influence. Every one of us is embroiled and imbricated in cybernetic networks of such scope and influence as to make a mockery of our mythos of technological empowerment. For when it comes to our tools, the enhancement of our personal agency is truly just the tip of the cyberg.

Next week: Top Ten Cybergs


A Study in Psylocke

A Study in Psylocke was a short two-part serial that ran here at Only a Game from July 5th to 12th 2016. Effectively a sequel to Corporate Megatexts, it examined the relationship between the different comic series that featured Betsy Braddock (who becomes Psylocke), the circumstances in the Marvel offices surrounding her transformations, and the challenges involved in bringing such a racially ambiguous character into the X-Men movies.

The two parts are:

  1. Betsy Braddock
  2. Olivia Munn

If you enjoyed this serial, please leave a comment. Thank you!