The original guide to writing for games returns! Every chapter has been revised and expanded, and there are new chapters covering storytelling for MMOs, urban narrative, interactive script formats, and the different kinds of relationship players can have with a game's story. Available from Bloomsbury now as a paperback, hardback, or ebook!
Fascists! Fascists everywhere! They're after your jobs, they're after your homes, they're after your unborn children! They want to take away your rights, they want to take away your healthcare, they want to take away your very lives! Oh the terrible things they do, these fascists, and the worse things they want to do - we must rise up and force the State to come crashing down with all the power of the law and its enforcers so that the fascists can be quelled and dispelled. In short, we must become fascists or else the fascists will win!
Many thanks for your blog-letter, Too Comfortable to Consider Politics, which puts me back in dialogue with a rather old version of myself, 2006's younger model, who was still willing to write about Temperament Theory. Why did I stop...? It wasn't that I thought this model had lost its heuristic value - it is still a great tool for the kind of cartoon thought experiments that go on in Considering Politics - and I certainly don't consider the Big 5 to have solved any of the methodological flaws that bedevil these kinds of personality inventories. But I came to realise that mainstream psychologists were very defensive of their territory - despite not really having worked out what that territory was, or what a 'mainstream' version of psychology might actually look like. I thought it best to pick other battles.
I began to write more and more about philosophy, because it satisfied my desire for more complex and subtle ways of thinking, and while I did not stop reading and writing about psychological issues, I did so mainly from the perspective of Leon Festinger's cognitive dissonance and Paul Ekman's emotion theories going forward. These were the most secure islands in the stormy seas of psychology, and terribly helpful for understanding how and why we play games too. But I still have conversations in terms of Temperament Theory with others who share the terminology, just as I can still talk about God with a Christian or a Hindu, or riff on materialist themes with a positivist... what we say often needs to reflect who we are speaking with. 2006-me did not need to consider this; he just wrote what he was thinking about. In the interim I have become more focused upon why I am writing, and that changes what I write about too.
I believe you are broadly correct in your analysis of the role of comfort in Western Liberal Democracy, which ever since hearing eclectic French musician Rubin Steiner's album Say Hello to the Dawn of Paradox, I have begun (in a somewhat impish fashion) to think of as 'Industrial Liberal Fascism'. But this 'F' word is one we cannot safely use to communicate, alas, because it inconveniently means different things to different people. Originally, of course, the Fascists were a specific political party in Italy, and the name descends from the Italian word 'fascio', meaning 'group' or 'bundle'. As a crude approximation, we might take Mussolini's doctrine for national government as consisting of three key elements:
1. A dictatorship…
2. …where violently repressive means…
3. …enforce an inescapable role for the state
Depending on who you talk to, you’ll hear Fascism talked about as a right-wing, ultranationalist movement, or you’ll hear how liberal political advocates in the 1920s secured the rise to power of Mussolini’s fascists (both correct, by the way). Liberals in the US identify ‘fascism’ with (1) and (2) in the definition above, and conservatives with (2) and (3), by substituting ‘ideologically repressive’ for ‘violently repressive’ or by associating ‘violence’ with different acts (abortion, for instance). As a result, ‘fascist’ is an insult that can be used against left or right with equipoise, with the inevitable result that everyone in the US can become hysterical about the rise of fascism in their nation without ever once noticing their own complicity in bringing this about.
Your allegation is that political disenfranchisement occurs because people get too comfortable, and engaging in politics is "a form of social warfare" that therefore only happens because people are forced out of their comfort zone by the loss of welfare (both in the sense of well-being, and in the sense of government programmes for supporting citizens). But this analysis, while broadly correct, perhaps misses two subtle distinctions about 'comfort' and 'politics' that I should like to tease out in reply.
Let us start with 'comfort'. We are an imaginative species - indeed, the most imaginative species we know. Imagination gives us almost everything worthwhile in human life, but it also inevitably causes enormous problems, because we can imaginatively project ourselves into other situations that we do not understand without ever once noticing our lack of understanding. Thus, for instance, the rush to provide computers to so-called 'Third World' countries. These computers have caused tremendous problems for us, but we don't like to think about that, and we prefer to see them as a source of comfort, which of course they are as well. Therefore, anyone without those computers has missed out. We ought to send computers to those poor people abroad. Or, to put it another way: having reorganised these geographic regions into vassal states of our seafaring empires, and having geared their economies solely towards exporting resources to the 'First World', we now want to sell them 'First World' technology, increasing the power and influence of profit-centred organisations like Google and Apple, which it is far from obvious can be trusted at home, let alone further afield.
Similarly, I am at a loss to understand why advocates for the Trans community in the US felt it necessary to try to wield influence in British politics. My trans friends in the UK were not, in fact, crying out for this 'assistance' (although I have no trans friends under the age of 30, so perhaps younger people were?). But as a result of this attempted political intervention, the trans community has lost a great deal of support on this side of the Atlantic, and in the past five years violence against trans people has skyrocketed (in the most extreme assessment, quadrupling in that period). Not to mention the verbal abuse that US trans advocates have piled upon British lesbians and their allies (and vice versa!)... a "form of social warfare" indeed. And a heartbreaking one; as a long-time supporter of the wonderfully eclectic rag-tag alliance that flies a rainbow as its flag, it has been devastating to watch the trans and lesbian communities go to political war against each other, bringing to a savage end a co-operation that may well have been the last gasp of the civil rights movements.
Yet this depressing turn of affairs has been dwarfed by the even more bleak and dispiriting events of 2020, when the worst respiratory infection pandemic in some fifty years was rendered far, far more destructive and damaging by the descent of the medical discourses into a state of pseudoscience. Thus, in strict contradistinction to the urging of both epidemiologists and the WHO, the UK government let loose its duplicitous war cry of "follow the science!" before initiating a string of draconian national lockdowns that have sacrificed an entire generation's mental health and prospects, and unleashed hardship disproportionately upon our poorer citizens - all against an infection that was arguably already endemic, and all without adequate scientific monitoring to determine the terrible effects of this brutal quasi-fascist experiment. And what do you know, the point of origin of the disruption of the very research networks that could have helped us make good decisions when they were desperately needed was once again the United States, where the political left and the political right argued between a conception of the pandemic that was wildly over-exaggerated and one that was utterly dismissive, with the net result that many people who would not have died last year did in fact do so, including those middle-aged people with heart disease or diabetes who died at home rather than risk going to hospital and catching an infection that was quite unlikely to have killed them.
The tragedy of SARS-CoV2 is not just what it has caused in each country, although this is devastatingly sad, but also what it has prevented happening between countries. While we do not yet have the WHO's estimated global mortality statistics for 2020, we have already had a warning from Dr Matshidiso Moeti, WHO Regional Director for Africa, about what the disruption of the support networks for malaria treatments in Africa last year will ultimately mean - namely between 10,000 and 100,000 additional deaths on top of the 400,000 who die from this disease every year, the vast majority of them babies and toddlers. I fear we will completely ignore these casualties, brushed under the carpet as merely another unfortunate consequence of the COVID-19 situation. Yet we might just as well link these heart-breaking deaths to a lack of support from their former colonial oppressors, who were too busy arguing about face masks to prove to the world, rather than to their neighbours, that black lives really do matter.
Make no mistake, this entire debacle represents the greatest collective failure of world citizens and their governments since World War II - which, to be fair, was several orders of magnitude more tragic as a global event. It is also the greatest failure of the scientific community in my lifetime, and I cannot escape the feeling that those two points are directly connected to one another. And just as in the case of US trans advocates inadvertently making the situation worse for trans people in the UK by trying to help them, the additional catastrophe that was the response to COVID-19 - the myriad harms of which will take years to fully understand - seems once again to have been caused at root by the political dysfunction of the United States, where hatred of fascism has led to a worsening of those disparate conditions claimed by either side as fascism.
I have acceded to your point about 'comfort' lessening political engagement, but my counterpoint is that comfort is a product of our imagined circumstances, not our actual circumstances. The very place where comfort was most readily available in terms of shelter, food, and entertainment was also the place where tremendous political capital was expended in the urgent battle against the double-headed coin of duofascism, which paints all our political 'enemies' as fascists while ignoring the resulting fascist tendencies in our own political demands. Thus it is fear, as it so often has been throughout history, rather than loss of comfort per se, that has been driving political crusades in the United States that have had devastating effects elsewhere in the world, whether we are talking in terms of the hatred cruelly directed at trans people, the British government's descent into quasi-fascism powered by the collapse of scientific discourse, or the soul-numbing losses of hundreds of thousands of black children whose lives, it seems, did not really matter after all.
If I leave our discussion there, it would be to fail to learn anything from the disaster of a year that was 2020, and that I could not bear. So let us turn to the other subtle point I want to discuss, that in connection to 'politics'. When you describe politics as "a form of social warfare", you are describing what currently happens under this name. Duofascism - the fascist tendencies of both antifascists and their rivals - lies behind this grotesque alternative to democracy we are currently pursuing in those parts of the world fortunate enough not to have far worse, far more oppressive, far more convincingly fascist regimes in charge. It is what I have called 'politics as war', where the purpose of political action is to defeat your enemies. And this is one of the worst conceptions of politics we could fall for, since there is almost no point at all in having democracy if you are not going to use it to negotiate a good life for everyone in our political community, which requires us to understand their visions for what a good life might be.
Democracy presumes a common political identity, a demos, as the Ancient Greeks put it. I think they had an easier job because, in the first place, these original democratic communities were only cities and therefore orders of magnitude smaller political bodies than those we wrestle with today, and in the second place, they didn't in fact offer political voice to everyone but solely to their elites. On this latter point, we are fast heading the same way, if we did not in fact already arrive there quite a while ago. When there is an authentic political community, when we belong to a demos, we can talk to one another about our needs, wants, and fears, and we can disagree productively and hence negotiate how we can each make ourselves a good life without demanding of others that which causes intolerable harm to their hopes of making a good life for themselves.
Duofascism, if we set aside the histrionic denunciation of those other fascists that are nothing like us, rests on the demand that the State must do certain things our way regardless of the harms this causes to our fellow citizens. As such, it is anti-democratic because it prevents any possibility of forming a demos. But oh, the things the United States has been able to achieve whenever it can form a united political community! Let us never forget that it was citizens of the United States like Eleanor Roosevelt who were the driving force behind the original human rights agreements during that hopeful time after the Second World War when, as Michael Moorcock reports of Britain after the wars, everyone seemed to be working together to build a better life for all. Just because that didn't last is no reason for us not to try again.
These are dark times, and not just because of this particularly nasty respiratory virus and the terror that scurrilous journalists have stoked about it. But the sole thing we need to get beyond the democratic impasse is a laying down of hostilities and a re-opening of the possibility of forming political communities together. We lose sight of this all too often because 'politics as war' is what we have become accustomed to, and so we are willing to become fascists to stop fascism. But it is not the only way, and it is not a good way, nor will any good come from continuing to pursue a politics based solely upon hatred of the other side to our disturbingly mirrored political coin.
Let us try something new, or rather, something old that we can make anew. Let us give democracy a try instead.
With unlimited love,
Only a Game will return later this year.
Contains ideas some people may find distressing.
The science is clear! Masks save lives/don't work! But which is it, and even more importantly how can we know? To answer this wildly contentious question - one which so many on either side are utterly convinced is entirely settled - we first have to understand why this topic has not yet even been adequately debated, much less resolved beyond dispute. Join me, if you dare, on a disturbing journey through a scientific story from the United Kingdom in 2020, a tale that centres upon the world's second oldest university, Oxford...
First, however, a polite warning. This is a hot button issue, and therefore one with a high risk of triggering cognitive dissonance in those who have committed to a specific side... But if we care about the sciences, we cannot simply consent to keeping our mouths shut rather than debating the ambiguities of a live research question, regardless of how much of a minefield it becomes. In so far as the truth about this topic is currently known, the only two certain claims I can ascertain are that there is not enough good quality evidence to settle the debate definitively, and there is no longer even anything that might be called a debate, since both sides are now intractably locked into their beliefs. This kind of situation is a paradigm case of what I have called pseudoscience, the collapse of even the possibility of productive scientific work occurring.
Our story begins relatively early last year, as thousands of armchair epidemiologists took to social media to declare what was or wasn’t true on a great many topics that were far more complicated than anyone seemed to realise. A great deal of that complexity comes from the fact explored last week, namely that the sciences are discourses, series of conversations via written texts. This has the unfortunate consequence that the act of interpreting the evidence is seldom as simple (as the armchair epidemiologists apparently believed) as sifting out the ‘good evidence’ and discounting the ‘bad evidence’ - and doubly so since the evidence that is rejected in such a procedure is very frequently cast out as a result of confirmation bias rather than for any sound reason.
Not long after the social media platforms began to descend even further into a verbal war zone, severe disagreements broke out in the United Kingdom between medical researchers and practitioners about a newly proposed medical intervention for COVID-19, namely community masking. It's important to make a distinction here: use of personal protective equipment in hospitals is radically different from asking the population as a whole to deploy face masks; there are disagreements about the former as well as the latter, but since our interest in this case study is not in resolving these disputes but rather in examining them, it will be helpful to recall that the question that was being debated in the UK was not 'are face masks ever effective?' but whether we should require the general population to wear face masks to help stop the spread of the SARS-CoV2 virus. It was over this discussion specifically that medical scientific practice almost entirely collapsed in the UK.
The crisis point can be traced to a pivotal moment in June. Two months earlier, Trisha Greenhalgh of Oxford University and half a dozen other medical professionals had argued in a piece for the British Medical Journal that while “direct, experimental evidence for benefit is not clear cut”, we should follow the precautionary principle and recommend face masks for the public all the same. Intriguingly (and this will be important later), they also made the following remarks:
...trials have shown that people are unlikely to wear them properly or consistently, which is important since prevention depends on people not repeatedly touching their mask, and on all or most people wearing them most of the time.... the trials cited above have also shown that wearing a mask might make people feel safe and hence disregard other important public health advice such as hand washing and social distancing...[these] arguments may have been internally valid in the trials that produced them, but we have no evidence that they are externally valid in the context of covid-19. “The public” here are not volunteers in someone else’s experiment in a flu outbreak—they are people the world over who are trying to stay alive in a deadly pandemic. They may be highly motivated to learn techniques for most effective mask use.
In June, Professor Greenhalgh and her colleagues returned to follow up on their original piece. There had been enormous swathes of comments in the meantime, and heated arguments about the risks that might potentially be involved, not to mention how this proposal could be justified in terms of the precautionary principle, which cautions doctors not to use unproven interventions about which there is a potential risk of harm. Surprisingly, in responding to their critics the authors did not engage with any of the concerns that had been raised. Rather, they declared the myriad objections colleagues had presented as “straw men” (misusing the term, incidentally) and announced that the UK ought to do what they had suggested anyway. A week later, the UK government mandated community masking by law, with escalating fines for non-compliance. This led the Centre for Evidence-based Medicine (like Greenhalgh, also based at Oxford University) to run an unprecedented opinion piece denouncing the decision as politically motivated and scientifically unsound. From that point on, the outbreak of pseudoscience corrupted the discourse and little productive discussion on this topic has yet re-emerged.
An interesting aspect of the CEBM’s rebuttal was that it was entirely couched in terms of how the research had been conducted up until the year before, and the lack of strong supporting evidence - including mentioning the calls that had been made for further research on the efficacy of different kinds of face mask after previous epidemics that had never been followed up. Even if the CEBM's response was marred by the kind of righteous outrage that also corrupted discussion on social media, it is clear that (at the very least) they understood the role of the discourse in validating scientific claims, and saw the risks involved in pretending there was no prior understanding on the topic that might have made certain advocates of community masking more cautious than they were. In the sciences, scepticism can be both a blessing and a curse, but the absence of adequate scepticism - or the refusal to listen to it - almost always heralds mistakes, and sometimes disastrous errors. It is why allowing disagreements is essential to the work of the sciences, and every attempt to prevent such arguments from taking place fosters pseudoscience.
It is worth pausing briefly to point out that when I claim the medical discourse in the UK devolved into pseudoscience over this issue (and a parallel argument can almost certainly be constructed for the US, but I have spent less time examining the discourse there) I am not making any kind of claim about the truth of the competing claims about community masking. From the UK perspective, one side came to the table with a hypothesis that this intervention would be effective at preventing the spread of a respiratory virus, acknowledged the evidence they had at the time was inconclusive, recognised some of the specific risks involved in pursuing this intervention but claimed that - as a precaution - we ought to adopt community masking anyway.
The positive argument made for the intervention was essentially ‘it might save lives and we might avoid the known harms so we must do it’. Yet as a purely logical matter, this is poor reasoning, and as a medical question the precautionary principle could not plausibly be applied on this basis (as some pointed out at the time, it cautions the exact opposite of what was done). Thus right from the outset, the necessary discussion on the topic was on dangerous ground. But this certainly does not exclude the possible benefits of community masking; rather, what was indicated was an urgent need for trials to establish the balance of benefits to risks. In ignoring the ambiguous state of knowledge regarding the potential harms, the discourse failed and we entered the condition of pseudoscience.
If we had remained in a state of productive scientific discourse, what should have happened next was commissioning studies to gather evidence to resolve the ambiguities. Yet this did not happen, and still has not happened, and it is incorrect, as British evidence-based medicine practitioner Margaret McCartney shrewdly observed, to claim that the evidence could not be gathered because it would be unethical to do so:
Another argument is that large scale trials, say of face mask use in schools, are impossible, because of the belief that every child would need a guardian to consent, making recruitment practically impossible. But this is deeply problematic. This suggests that the government can choose and implement any policy, without requiring any individual consent, as long as it is not called a trial. For as long as this double standard is allowed to persist, giving less powerful results and unnecessary uncertainty, people may come to avoidable harm. Nor does valuable information come only from randomised controlled trials. Complex interventions require multiple disciplines and types of research for assessment. But where are they? [Emphasis added]
Furthermore, it is rather strange that Greenhalgh and her colleagues specifically identified a key risk associated with mask use (touching an infected mask - see the quotation above), but set this aside by claiming that the public would be “highly motivated to learn techniques for most effective mask use.” Yet the British government provided negligible guidance on effective mask use to the public. Considerable expense was put towards promoting the idea on television and other media that the British public should wear masks, but almost none at all on what good mask technique ought to consist of. Notes on the government website, however, did provide numerous important warnings - about not re-wearing used masks, about storing used masks in plastic bags, etc. - none of which I have seen practised by anyone but myself in months and months of government-enforced mask wearing. Nor were any studies conducted to even check the quality of the mask technique that was occurring in the community! Once the law was passed to mandate face masks, even those concerns openly acknowledged by the medical professionals who had called for community masking in the UK were simply ignored.
If you had suggested to me in 2019 that the British government was going to mandate a medical intervention on weak evidence and then commission no studies to verify either the efficacy or the safety of that intervention I would have at the least raised an eyebrow, and at the worst asked what you were smoking. Yet this is precisely what happened. The entire affair has caused me quite considerable distress, not because I know the truth of the matter (community actions are far more complex research subjects than most people seem to realise), but because I would never have believed in 2019 that it would take just eight weeks to disrupt the capacity for the medical networks of the United Kingdom to act as scientists, nor that anyone would propose to use the force of law to compel everyone into a medical intervention the case for which had never even been adequately debated, let alone investigated. It is doubly amazing to me that anyone can use phrases like “following the science” or, worse, “the science is clear!” in a situation where the truth is that the required scientific work has not yet been adequately conducted.
The concern I am raising here is rather independent of what transpires to be the truth about community masking if and when scientific discourse is restored. Even if future evidence did eventually validate the hypothesis, it would not change the fact that the British government acted improperly by enforcing penalties by law for non-compliance with an intervention they apparently had no intention of confirming was effective, nor indeed of ruling out the possible health risks suggested by earlier mask studies - perhaps most significantly that cloth face masks, improperly used, might increase the rate of infection (as the CEBM commentary points out, and as Greenhalgh and colleagues acknowledged was a risk). There was more than enough evidence in April to formulate a hypothesis, but nowhere near enough to settle the issue unequivocally - as indicated by the fact evidence-based medical practitioners in both England and Scotland publicly spoke out against both the lack of good evidence and the abject failure of the British government to commission any new studies to gather it.
I can think of no better name for this depressing collapse of the medical discourse in the UK than pseudoscience. This condition destroys the ability of the sciences to operate by undermining our capacity to disagree, which is fundamental to the pursuit of scientific truth. What's more, once this situation occurs, the problem is no longer constrained to the topic that initiated it, and alas creates ample opportunities for unscrupulous people to manipulate the truth for personal profit while the scientific networks are effectively disabled. Thus in November 2020, the British Medical Journal's Executive Editor Kamran Abbasi issued an unprecedented editorial about the suppression of scientific research in the UK's most respected medical forum declaring:
Science is being suppressed for political and financial gain. Covid-19 has unleashed state corruption on a grand scale, and it is harmful to public health. Politicians and industry are responsible for this opportunistic embezzlement. So too are scientists and health experts. The pandemic has revealed how the medical-political complex can be manipulated in an emergency—a time when it is even more important to safeguard science.
This is not some off-the-cuff remark by an armchair epidemiologist on social media, this is the Executive Editor of a major British journal issuing an editorial for the express purpose of lambasting the British government for "state corruption on a grand scale" and "opportunistic embezzlement", this latter point relating to the news story (reported in October by the BMJ) that the government had handed out contracts without tender for face masks and other protective equipment, some of which was not even fit for purpose. (I note for context that Abbasi appears to have remained agnostic about community masking - although not about Facebook censorship over the issue). How curious that this serious breakdown in scientific discourse did not even warrant a mention in any British news source! But then, each of the channels, each of the newspapers had already picked a side on the face mask issue, so they simply ignored and discredited any and all contrary viewpoints... thus the journalists followed the scientists into pseudoscience too, if they did not in fact lead them into it.
Logically, if the US medical community had not descended onto this crooked path immediately beforehand, we would be hard pressed to explain how this could have happened in the UK at all (it is exceptionally unusual to argue to undertake a precautionary measure while admitting the evidence for it is still inconclusive, for obvious reasons). However, since I have not examined these earlier discussions in any great depth, I leave it open whether there might be some other explanation besides the most obvious one, namely that the UK's pseudoscience outbreak was caused by a metaphorical infection of human thought that spread from the other side of the Atlantic where political partisanship had already destroyed any possibility of clear scientific thinking at a time when it was most needed.
Hence the epidemic of armchair epidemiologists who dealt with every contrasting perspective by the expedient means of summarily discounting the views of anyone who disagreed with them. Yet for their chosen position to be in any way credible, these partisans still have to explain why they have needed to discredit so many people who are well-versed in the medical sciences. As this UK case study hopefully makes clear, whichever stance is taken in 'masks save lives/don't work', at least one senior academic at the prestigious Oxford University, plus hundreds more academics at other faculties around the world, will be on the other side. How far are you willing to go in your crusade of denouncements and discreditings just to uphold a specific interpretation of the still-ambiguous evidence as being both clear and irrefutable? Will you say that their political beliefs misled them, while yours miraculously had no effect on your truth-finding powers...?
Accepting this as an outbreak of pseudoscience, on the other hand, provides both an explanation for this otherwise incomprehensible lack of collective discernment, and a potential solution as well: restore debate over the key disagreements, and either conduct the required research or entirely withdraw the legal requirement for community masking in the UK (or wherever you happen to live). Without embracing dissent, there can be no legitimate scientific position on community masking at all, only the counter-productive war of bias-against-bias I have named pseudoscience. The sooner we accept this, the fewer lives we will lose to these two infections - the deadly SARS-CoV-2, and the even deadlier outbreak of pseudoscience it has fostered.
The longer we pretend that this issue is resolved beyond further dispute, rather than trapped in a limbo where such resolution is impossible to reach, the more people will die who did not need to. Not because some people wouldn't wear masks, but because we have collectively destroyed the ability of the sciences to do what they do best: to investigate ambiguous situations and explore all the possible explanations for the evidence gathered thus far. The science is clear? No, it almost never is. But our guilt in undermining the work of the sciences is all too clear, and for this I fear we should all feel greatly ashamed.
Comments welcome, but please don't comment angry! If this piece enrages you, please wait a short while before replying.
We celebrate Albert Einstein as the greatest scientific genius of the preceding century, yet we tend to focus solely upon his theories in physics when we do so. In the decades since his death, we have continuously taken steps to place greater importance upon science and mathematics and to downplay the importance of the humanities. Yet Einstein himself would have cautioned against taking this path. He remarked, in a piece for the New York Times in 1952 (and please forgive his exclusive use of male pronouns, which at the time was entirely usual in English):
It is not enough to teach a man a specialty. Through it he may become a kind of useful machine but not a harmoniously developed personality. It is essential that the student acquire an understanding of and a lively feeling for values. He must acquire a vivid sense of the beautiful and of the morally good. Otherwise he – with his specialized knowledge – more closely resembles a well-trained dog than a harmoniously developed person. He must learn to understand the motives of human beings, their illusions and their sufferings, in order to acquire a proper relationship to individual fellow men and to the community. These precious things are conveyed to the younger generation through personal contact with those who teach, not – or at least not in the main – through textbooks. It is this that primarily constitutes and preserves culture. This is what I have in mind when I recommend the ‘humanities’ as important, not just dry specialized knowledge in the fields of history and philosophy.
How fascinating that at the time he was writing, the danger Einstein saw was that only history and philosophy would be taught in the humanities! Today, neither subject is a priority at most universities, and the humanities as a whole have been relegated to a lesser status next to so-called STEM (Science, Technology, Engineering, Mathematics) subjects. Einstein, as this quote and others like it attest, was against this elevation of the sciences above the humanities, against the specialisation that has become the hallmark of contemporary higher education... he saw great danger on the path that we were already upon in the 1950s. We did not listen.
Today, even those of us who value both the humanities and the sciences for their unique contributions to human flourishing will tend to treat the former as worthy and the latter as useful. The impression is thus that the humanities are an optional extra, while the sciences are doing the real work in advancing human knowledge. Indeed, it sometimes seems that what distinguishes the humanities from the sciences is that humanities scholars merely ‘talk’ while scientists ‘do’. But this is an illusion brought about by the impoverished state of our philosophy of science. In actuality, every science is also a discourse. Not understanding this subtle point leads to a great many errors.
The story we like to tell about Einstein's scientific work, and the tales we tell of Galileo and Newton as well, have a nasty habit of valorising these theoreticians and natural philosophers as lone heroes fighting for truth against the Church or some other orthodoxy (e.g. the ether, in Einstein’s folk history). Almost always, these tales are mythically exaggerated - even to the extent of falling into magical science, as previously discussed. Regarding Galileo, Paul Feyerabend is not the only historically-inclined philosopher of science to observe that it was the Church at that time that was more “faithful to reason” in the famous dispute. As Charles Taylor puts the matter: “If we look at the period we’re examining, we see that the mantle of sober scientists was often seized by the defenders of orthodoxy.” In each and every case, looking at what scientists came to accept afterwards is an inadequate way of understanding how they reached these new understandings, which always entailed disagreements being worked through by a community.
What I find particularly fascinating about the relationship between the sciences and their discourses is that contemporary scientists - quite unlike Einstein and natural philosophers like Newton - typically do not understand themselves as being in a discourse at all. I would suggest this shortcoming happens precisely because scientists today are trained in blinkered specialist degrees and do not receive a university education in the sense that Galileo or Newton would have understood, and that Einstein championed. For the natural philosophers, to go to university was to be prepared to understand the world as a coherent whole - a universe, hence ‘university’ (both terms coming from the Latin, ‘universus’ - whole, complete). There was no concept of humanities vs sciences for these scholars, and although there was for Einstein, he urged us to pursue both and considered the humanities to be so important that a good education ought to revolve around them.
A university education in the classical sense required you to understand, for instance, that Newton’s laws of motion spring from Newton’s writings, which were part of a mathematical discourse with his predecessors and peers. Not without good reason did Newton famously claim to be “standing on the shoulders of giants.” Conversely, while I was studying physics at the world-class Schuster Laboratory in Manchester, every theory was presented to the undergraduates as if it had come from nowhere, just a magical free-standing edifice, a roof without walls to support it. Humanities scholars broadly understand their fields as sustained by their texts, while contemporary science students are taught misleading nonsense like ‘the scientific method’ instead (see the earlier discussion for why this is incoherent), although I note that, to their credit, no professor at the University of Manchester ever suggested any such thing to me. Alas, a great many people today seem to foolishly believe that ‘the science’ speaks for itself, yet that it does so through them, as indeed oracles claimed of the gods that spoke through them (another manifestation of magical science, perhaps...?).
Every scientist is part of a discourse - and they ignore this to their (and sometimes our) peril, most especially because training in one field does not automatically give you expertise in all fields. Newton is not the only one who stood on the shoulders of giants, every scientist (every scholar in every discipline, in fact) necessarily does so, and every mythic image that conceals this poses risks to scientific practice. As much as I have dabbled with being a polymath since graduating, I have only ever managed this by committing to learning new discourses and being willing to both listen and talk to practitioners in those other fields - as I had to do in 2011 with aesthetics and 2012 with the evolutionary sciences in order to write about them for my first two philosophy books. To conceive of the sciences as uncovering truth without borrowing those giant shoulders is to deceive yourself. The sciences are community practices, and have always been so.
Einstein's Hope for the Future
We take Einstein as a scientific hero with good cause, but like his natural philosopher predecessors he did not associate knowledge with intense specialisation, but rather with co-operation within and between disciplines. Remember that Einstein performed no experiments to verify his theories (although he designed one experimental instrument, his “little machine”, which does not appear to have worked) - he didn't need to conduct his own practical research; he could count on the physicist community to be curious enough to want to consider all the possibilities with care, because of their shared commitment to determining the truth of each situation.
As much as I admire the sheer elegance of his mathematical derivation of special relativity, which I studied in high school, there is an Einstein quote that for me sums up his genius more than anything else:
Perfection of means and confusion of goals seem – in my opinion – to characterize our age. If we desire sincerely and passionately the safety, the welfare, and the free development of the talents of all men, we shall not be in want of the means to approach such a state. Even if only a small part of mankind strives for such goals, their superiority will prove itself in the long run.
The message here may not be immediately clear: it is not enough for scientists - nor indeed anyone seeking to serve humanity as a whole - to be siloed away in a specialism ‘perfecting means’. Yet because we have become so good at doing this, because our means (our technology) have become so powerful, we could easily achieve a state of near-universal human flourishing if only that were the goal we chose to pursue. It was Einstein’s hope that we would. Yet we did not, and still do not, in part because Einstein’s generation of scientists was the last to learn its science as a discourse, and thus the last not to look down upon the humanities as somehow lesser - a disdain that requires the self-deceit that the sciences transcend human discourse to speak directly with the universe, or as Einstein would say, with God. Einstein would not have said that the moral truth was given by God, however, but discovered by us, through pursuing our disagreements in the humanities, which are at least as important as the sciences when properly understood.
The mission statement I take Einstein to be laying out here is not one I associate with spreading high technology indiscriminately around the world, thus bringing the community-rich ‘Third World’ down to the impoverished social state of our so-called ‘First World’, nor with dictating for all what a technological good life should (or worse, must) be. On the contrary, the safety, welfare, and the free development of the talents of all humanity will be quite seriously threatened by our technology if we do not change how we think about it, a topic I have explored in The Virtuous Cyborg. Rather, I take Einstein as participating in a prior discourse (a lowly humanities discourse...), that of the Enlightenment philosophers such as Immanuel Kant, whose major works Einstein had already read at age 16. I take it, therefore, that Einstein was proposing to work towards what Kant suggested was the “merely possible” future state where we can support everyone in pursuing their own chosen ends provided they do not prevent others from pursuing their own ends. Both pseudoscience and magical science disrupt our ability to do this, in part by obscuring the truth that both the humanities and the sciences are vital discourses we cannot afford to disrupt, a fact that has alas become obfuscated by this very division of human thought.
This schism in knowledge - a grenade whose pin was accidentally pulled by Kant in his rethinking of the university system - now threatens everything the Enlightenment strived towards. For me, the best reason to pursue philosophy of science - to take part in the discourse about the discourses of the sciences - is to help fulfil Einstein’s dream of ensuring the safety, the welfare, and the free development of the talents of all humanity, an ideal originally espoused by Kant, Mary Wollstonecraft, and others like them. In so doing, I join Einstein, Wollstonecraft, and Kant’s discourse, without of course ever speaking to them. It is my hope, vain though it might be, that more might still follow us - but I fear this will not happen without a seismic shift in our understanding of the contributions both the humanities and the sciences make towards our collective knowledge, and with it a vast and long overdue improvement to our philosophies of science.
Comments always welcome.
A short while ago, whilst working through all the James Bond movies, you declared that you were coming to the conclusion that there was no such thing as a good Roger Moore Bond film. But I have quite a different take: there’s no such thing as a bad Roger Moore Bond movie - only different ways to appreciate the brilliance of Roger Moore Bond movies. Yes, they are sexist, but markedly less so than Sean Connery Bond movies. Yes, they have content that if filmed today would be outrageously racist, but they were not filmed today and the cringes of hindsight do not undo the gains for cultural inclusion these films may strangely have achieved. Indeed, so much do I rate the late Roger Moore’s stint as Bond that for our first family movie night experience, my wife and I chose these films for my three sons to share with us. Are we mad? Probably. But there is definitely method to our madness and I should like to share that with you without any attempt to persuade you that your perception of these films is mistaken. It is not. I rather suspect you just haven’t the prior experience required to enjoy these particular (very particular!) movies.
My wife is from Tennessee like you (unless I’m mistaken) and comes to Bond on my suggestion having really loved the first (and only the first) Austin Powers film. As such, the Sean Connery Bond movies were a Where’s Waldo? extravaganza for her! “It’s Doctor Evil!” she exclaimed upon seeing Blofeld for the first time because, well, of course it undeniably is. When we finished watching the first Roger Moore outing, Live and Let Die, she declared “I don’t know if that was the best movie I’ve ever seen or the worst.” That is the greatest description - and highest praise! - of Moore’s Bond films I can imagine. For you must be able to enjoy bad movies for what they are good at to love Moore as Bond. The 1981 Clash of the Titans is quite the same; it’s a masterpiece. It’s also a cinematic dumpster fire with LA Law’s Harry Hamlin totally unable to anchor his own action movie and upstaged quite inevitably by Ray Harryhausen’s stop motion menagerie.
This brings me to the first reason to love these films: Derek Meddings. A special effects genius at a time when such things required immense practical skill, Meddings is best known for his amazing work with Sylvia and Gerry Anderson on their incredible Supermarionation shows like Thunderbirds and Captain Scarlet. My boys and I are working through these on Saturday mornings (along with classic Doctor Who), and are currently enjoying Stingray. Meddings contributed model work to five of the seven Moore Bond films, and was Oscar-nominated for Moonraker. You can spot a Meddings model shot from a mile away, although I do wonder if you have to have watched those classic 1960s sci-fi puppet shows to truly appreciate the craft involved. Appreciation flows from our prior experience; I never appreciated shot composition until I watched Seven Samurai, still my favourite film of all time. But Kurosawa movies are brilliant in almost every way. That’s not what Moore’s tenure as Bond is about. Meddings’ work carries a lot of appeal for me, holding the same joy as a beautiful matte painting, which is so much more wonderful than anything you can do in CGI to my eyes. I’m so delighted Meddings won an Oscar for his work on the 1978 Superman film. He was to miniature shots what Harryhausen was to stop-motion: a legend.
Neither is Meddings the only such mythic cinematic contributor to these films. John Barry, perhaps the greatest and most influential orchestral film composer Britain has produced, does some of his best work during Moore’s run, although his work with Shirley Bassey is more striking in the earlier Bond films and his magnum opus is arguably Louis Armstrong’s "We Have All the Time in the World" from On Her Majesty’s Secret Service (which I believe we both rate highly as a Bond film). I think, on balance, his scores for that movie and for You Only Live Twice are head and shoulders above his work for Roger Moore, but the British Film Institute did pick up on the score for Moonraker as one of Barry's ten best. I personally think videogame orchestral scores almost always draw from Barry when they are not instead stealing from John Williams. But the significantly insignificant difference here is that John Barry is British.
This British connection is important. Unlike my wife, I’m British, quite the mongrel actually - half English, quarter Scottish, with Italian and Belgian bloodlines in my family history too. Roger Moore is the most British of all the Bonds, and his movies are so intimately caught up in British culture that comedian Steve Coogan could write a comedy scene in which his most enduring character (Alan Partridge from The Day Today) recites the entire opening sequence to The Spy Who Loved Me - including those lurid Maurice Binder titles - in an utterly hilarious irritable deadpan. It's worth noting, then, that Moore was the first English Bond. Connery? Scottish. Lazenby? Australian (not British). And afterwards: Dalton? Welsh. Brosnan? Irish (not British). It's only when we get to Craig that we get English again. And what a step down that is, from Moore to Craig - although presumably not for you!
Britain, of course, has an extremely chequered history from its time as a world power, which peaked in the nineteenth century, just as the United States' empire seems to be peaking in the twenty-first. In 1973, when Live and Let Die arrived, Britons (especially the English, but not only...) were rather struggling to get to grips with the reality that whiteness is not Britishness. This was especially the case with respect to the burgeoning West Indian population - half a million arrived between 1948 and 1970 seeking jobs, which they were expressly invited to emigrate for but whose welcome was not always (or indeed often) warm. But there were still vanishingly few black actors on TV in the 70s. Doctor Who is one of a rather short list of shows to have had multiple black actors in key roles by Moore's debut. Britons were simply not used to watching black people in 1973. And then here is Live and Let Die - a suave, black supervillain, multiple black henchmen all with great charm - and none more so than dancer Geoffrey Holder as the quite literally marvellous Baron Samedi. And black allies who are there for something more than just being killed! The message to spellbound Brits watching was that black people can be spies and criminal masterminds, just like white people. Yes, there’s massive influence from Blaxploitation films at work here. But the benefits for British cultural integration should not be underestimated.
So too with Vijay Amritraj and Kabir Bedi in Octopussy. Okay, we have to endure every cringe-inducing Indian cultural stereotype imaginable - but at a time when the Indian population of Great Britain was almost entirely invisible on recorded media, here is a film saying Hindus and Sikhs can be spies and superpowered villains too. The location shots from Udaipur are among the greatest in the entire Bond movie run, although as with the miniatures shots I mentioned above it takes a certain kind of film appreciator to enjoy location shots independently of their role in the narrative. Still, watching Amritraj pal up with Moore sends a clear message that Indian people can be superspies too - and that counts for something. Please do not underestimate these gains because they are tied up with casual racism... acceptance that Britishness need not entail whiteness begins with films like these, and while I do not know what black and Asian people in the 1970s made of them, the predominantly white audience for the movies here in the UK was, I suggest, subtly and positively affected by the inclusion of heroes and villains of colour. Even if these actors were not themselves British, they opened doors in the media industries for black and Asian actors who were.
What of Moore himself? Here we cannot tell any story without first acknowledging the centrality of Sean Connery to the Bond mythos. He embodies the phrase that was ironically said (by film critic Raymond Mortimer) in connection with the first Eon Productions Bond movie without Connery: "James Bond is what every man would like to be, and what every woman would like between her sheets." This is of course a problematic claim unless it is preceded with the phrase “in the imagination of men...” Which men? Why, 1960s stereotype men of course who, on the basis of Connery’s Bond, fantasise about striking women across the face so that they will then want to have sex with them - something Connery’s Bond does with embarrassing frequency.
But not so Roger Moore’s Bond. Whilst still sexist by contemporary standards, his version of the iconic character is markedly more respectful of women in that his technique for attracting women isn't to physically abuse them. Clearly, Bond is still at heart an adolescent power fantasy - but what action hero is not? More than that, Moore’s Bond isn’t just a fantasy for teenage boys, he is emotionally a teenage boy - with his distinguishing feature being that unlike any actual teenager he is written with the skills, gadgets, and sheer luck to actually succeed at everything instead of merely falsely believing that he would do so. Moore’s Bond is an absurdly dangerous teenage boy in a man’s body, who is always inches away from death by misadventure but is repeatedly saved by script immunity or, as often as not, by the magical science provided by Q’s gadgets.
Moore’s casting was not any kind of accident. His quasi-predecessor, George Lazenby, had the fatal flaw of not being Sean Connery, while Moore had the immense benefit of not being George Lazenby. Moore was chosen precisely because he had already shown himself more than capable of playing a gentleman spy, having done so as Leslie Charteris' 1920s hero Simon Templar in the TV show of The Saint, which aired from 1962 to 1969. Templar is a thief not a secret agent as such, but he is still very much part of the spy thriller genre broadly construed. And like Moore’s Templar, Moore’s Bond is impossibly skilled, implausibly righteous (yet never quite good, per se), and bucks authority with a glint in his eye, an impish grin, and more than a few raised eyebrows. Transplanting Moore into the Albert R. Broccoli film series was a safety play - and boy, did it work! The movie series’ success grew substantially during Moore’s tenure - he even got to ‘win’ against Connery in the much-publicized ‘Bond vs Bond’ box office duel of 1983, when Octopussy outgrossed Never Say Never Again.
What I love most about Moore’s dangerous teenager is that quite unlike the brutal, emotionally stunted Bond of Daniel Craig, or the woman-beating Bond of Connery, Moore’s Bond is always respectful to those serving in the military (but never entirely to the civil command, which Bernard Lee's and Judi Dench's M represent) and largely avoids being a murderer - except for two instances, which apparently Moore himself was vehemently opposed to. Yes, enemies are killed, but largely in self-defence. Moore’s Bond is a warrior with honour, something quite unthinkable in contemporary cinema without transplanting the story back in time more than a hundred years. In the twenty first century, our spies and military are now permitted to murder even our own citizens with unquestioned yet utterly questionable impunity. But Moore’s Bond has an ethic to his spycraft that is as unrealistic as the magical science of his gadgets, but that makes him far easier to love because we somehow want to believe that spies could be this noble, even though we know they are not.
As I said at the outset, it’s not my intent to convert you to Moore, but rather to show how Moore’s Bond is tied up with British culture in a way that Connery’s Bond really isn’t (although some of his filthiest puns - penned by children's author Roald Dahl for You Only Live Twice - require a grounding in British schoolboy humour to appreciate). Connery (Scottish) and Brosnan (Irish) are the most Americanized Bonds - and very enjoyable for it! But Moore is quintessentially English, his Britishness rooted in Oxbridge, the Officers’ Training Corps, and London gentlemen’s clubs (by which I do not mean strip clubs!). As problematic as this may be in retrospect - the false equation of Britishness with Englishness being a papering over of the aforementioned whiteness problem - it has an inherent charm that is also part of the appeal of Sherlock Holmes, another quintessentially English hero with magical science at his disposal.
I love Moore’s Bond, and I’ve only just scratched the surface of why in this short missive - why, I haven't even mentioned how they let the always astonishing Grace Jones design her own wardrobe in 1985's A View to a Kill, which must surely be the greatest costumes ever seen in a franchise known for its outlandish clothing. There's so much to adore in these films once you let them beguile you, but I think appreciating Moore as Bond requires either an openness to archaic Englishness as an aspect of Britishness (which is also helpful for appreciating classic Doctor Who), or an ability to enjoy an action movie purely as a pulp romp and not as cinema, per se. The Moore Bond movies may indeed be bad films, but they are among the greatest bad films ever made. It has been a pleasure sharing them with my three young boys, and I hope in writing this letter that I can give you at least a glimpse of why that might be so.
Please continue to be the good and excellent person you are, and to write about films, games, and whatever else you choose to discuss. If you should find the time to reply, I would love to hear your thoughts on any of this, or indeed on the 1980 film The Blues Brothers, which I personally view in quite similar ways, as allowing a vast raft of phenomenal black musical talent a cinematic spotlight they could never have had at that time without teaming up with white comedians.
With love and respect,
Comments and further blog-letters are always welcome!
Arthur C. Clarke famously suggested that any sufficiently advanced technology would be indistinguishable from magic. This suggests another maxim: any insufficiently developed philosophy of science is incapable of distinguishing between science and magic.
We all have our own philosophy of science, our conceptual framework for understanding scientific topics. In the best case, our personal philosophy of science informs us of the limitations of scientific knowledge, allows us to put research into a wider context, and ensures we remember that the work of the sciences is still at heart an entirely human endeavour. Alas, few of us have such a clear view of the sciences. Far more widespread is a kind of pervasive mythos we might call ‘magical science’, which affords to the image of science unlimited future power, and to scientists an awesome capacity to divine the truth through singular experiments, like a Roman haruspex reading animal entrails to predict the future.
Magical science has the dubious honour of being the only superstition widely encouraged today. We are all too frequently adamant that science has all the answers, science is the royal road to truth, that we can trust in the science... I notice that even the British Prime Minister has taken to invoking magical science in his speeches these days to validate his increasingly dubious actions. At heart, magical science may seem harmless, a mere rose-tinted vision of the work of scientists, one that tries to account for all the successes of our various research networks without any attempt at balance or insight. We typically overlook this kind of naive enthusiasm for scientific achievement on the basis that it's at least ‘supporting the right team’. Yet it becomes increasingly clear that blind support for science can manifest in ugly ways, even in ways that can prevent the sciences from working, plunging research into the debilitating condition of pseudoscience, as previously discussed.
The perceived infallibility of the sciences as truth-seeking procedures clashes worryingly with the necessity of scientists making mistakes, and thus magical science leads to anger at scientists when the actual scientific work is not as wondrous as it is imagined it should be (as with the ugly L'Aquila trial, where scientists were blamed for failing to predict the terrible 2009 earthquakes in Italy), or when any scientist speaks out against a claim that has been proclaimed unshakably true by its advocates. It is precisely because magical science is incapable of distinguishing science from magic that it represents a far greater danger to scientific endeavours than other philosophies, perhaps even so-called ‘anti-science’ philosophies. What deceives us here, what elevates scientists to their misguided role as flawless augurs rather than researchers struggling with ambiguous data, are the bad habits we have learned from the manifestations of science in fiction, where magical science is the norm. If we wish to see the work of the sciences with clearer eyes, we may have to start by putting some of the most iconic characters in fiction on philosophical trial.
Sherlock Holmes and the Flawless Investigation
It is sometimes remarked that in creating Sherlock Holmes, Sir Arthur Conan Doyle produced the first hero of ‘the scientific age’. The Victorians were the ones who coined the term ‘scientist’ and it was their obsession with the sciences that set the scene for the unfolding technological transformation of the world over the next century and a half. We tend to treat the character of Holmes as significant mainly for crime fiction, as the archetype from which all whodunits descend - but Holmes, quite unlike a Raymond Chandler or Agatha Christie detective, is always a practitioner of magical science. Partly, this proceeds from the inherent parsimony of storytelling whereby all questions will eventually be answered because everything is there by the author’s design. Partly, however, it proceeds from Holmes’ essential power - which upon closer inspection is not deductive reasoning at all, but rather the infinite convenience possible solely in literature.
Doyle gives Holmes a quite impossible access to every conceivable fact as a starting point, such that a berry stain or the smell of a particular tobacco can certainly be identified, and then (to pile on the absurdity) Holmes by purest chance always encounters a set of circumstances that allow for only one viable interpretation. This particular brand of tobacco, for instance, is sold in exactly one place in London... We thus end up admiring Holmes’ purportedly scientific form of investigation, while what we ought to admire is the way Doyle effortlessly conceals the magical science entailed in this depiction by making it seem as if all of Sherlock’s deductions (and inductions) were strictly logical. Doyle has contrived a set of circumstances that Holmes, with his unlimited catalogue of facts, can be certain to solve. This makes Holmes a disastrous role model for scientists (or indeed, detectives!) since it is only through the meticulous construction of literary contrivance that he possesses any investigative power at all. This becomes clearest when Holmes relies upon facts we know are false - such as the ludicrous snake plot device in The Speckled Band, which entails behaviour that would be implausible to coax out of any reptile. Holmes’ claims to be a man of science are rather fraudulent behind the scenes: he is simply the locus of a mythic depiction of magical science.
Neither is Holmes the only such character. Both Spock and Data in the worlds of Star Trek share this power of magical science - also manifested in these shows by the tricorder, which like Holmes spits out every required fact on demand and without error. Or consider Doctor Who from the third Doctor onwards: anything necessary is certainly known by the Time Lord, except when the story requires a convenient (and often temporary) amnesia for dramatic effect. That both Data and the Doctor had a spin at being Baker Street’s most eligible bachelor is not accidental, nor perhaps is Steven Moffat’s concurrent time as showrunner for both Doctor Who and Sherlock... Magical science heroes seem to reaffirm our faith in the power of scientific knowledge, while also playfully exposing the quirky personalities of scientists. House, The Big Bang Theory, and much more besides all participate in a literary tradition that stems from the Sherlock Holmes tales, and is now seemingly dominated by his science fiction protégés.
Yet these are not scientific heroes, but magical science heroes. They have exactly the facts and the circumstances to answer perfectly every time, without ever having to confront the ambiguity, indeterminacy, and incompleteness of an authentic scientific problem. They are to science what Superman is to police officers: naively idealized caricatures. They find the answers solely because they live in stories where uncovering the truth is possible by design. This is a wildly misleading template for scientific truth, and although we know these are ‘just’ stories, we somehow import our wilder beliefs about the sciences into our everyday thinking unless we are extremely careful. If we are to break this spell, we need a philosophy capable of distinguishing science and magic - and for this, we need a clearer understanding of ‘scientific truth’.
Desperately Seeking Truth
Even if we start with the acknowledgement that the sciences are capable of discovering or affirming truth, the question of what might qualify as a ‘scientific truth’ is far trickier than it seems. As the preceding discussion on pseudoscience made clear, we cannot simply append ‘scientific’ to known truths without distorting the essential ambiguities of the research process, in which we cannot know in practice whether the apparent truth of a researched claim will hold in the future. In fact, we have a choice. We could align ‘scientific truth’ with the unshakeable deep truth of reality, and thus admit that the claims asserted by scientists cannot be known as truth at all (effectively contracting the domain of scientific truth to concluded research programmes like optics). Or else we can align scientific truth with the body of beliefs held by scientists, with the inevitable consequence that such truths can later be revealed as false - or even abominable. We don’t even have to go back a century to find all manner of racist, sexist nonsense asserted as truth by those who identified as scientists.
Now those who buy into magical science have an easier job here, but only by being wildly dishonest about both truth and scientific methods. According to magical science, scientists uncover truth infallibly so all claims asserted by scientists are scientific truth. Thus if and when the circumstances shift we can ‘debunk’ or ‘discredit’ those responsible and say they were not really scientists at all, or even exclude their claims from consideration in the first place! This is where ‘pseudoscience’ has been used as a label, although as I have argued previously it is not a terribly viable way of using the term. Babette Babich has made even stronger - and oft misunderstood - claims about the way the discrediting associated with the term ‘pseudoscience’ serves as a dogmatic attempt to demarcate legitimate science, while all too frequently preventing any scientific enquiry from even beginning. Thus when this particular word comes out, it narrows scientific knowledge by declaring certain topics forbidden and out of bounds - and woe betide the researcher who goes on to try to report experimental results from such verboten fields...
The highly problematic implication of every attempt to discredit and thus demarcate ‘science’ from ‘pseudoscience’ must be that we cannot know, when scientists assert a claim, whether it will later need to be ‘debunked’. Thus faith in magical science is inevitably a distortion of the truth - for things we call scientific truths on this philosophy may later be ‘discredited’, or even discredited before they are considered at all. The alleged truths of magical science can thus only be defended by ignoring the inherent revisionism of scientific practice - by pretending that the current consensus among researchers is ‘more true’ than yesterday’s, and thus that now (and by implication, only now) we can trust everything scientists say, so long as we stand guard against those pernicious pseudoscientists who ruin it for everyone. To say that this is dangerous nonsense is easy; to replace it with a sounder philosophy of science will be much harder.
There might be a way out of this maze, but it would require us to think differently about the relationship between truth and the sciences. Part of what deceives us here is our desire to understand the truth in terms of a set of valid statements. Since we can point to scientific concepts we abandoned, like phlogiston (a hypothetical substance once thought to make combustion possible), we want to assert a gradual improvement in the accuracy or scope of our ‘book of facts’. “We would not be fooled by phlogiston today,” we might think. Yet phlogiston was an important - and arguably entirely scientific - proposal that was merely discarded when our understanding of chemistry shifted such that combustion could be thought of in terms of a chemical reaction with oxygen.
The brutal truth of the ‘book of facts’ is that such a collection of statements today would contain far more ultimately false claims than it would have in the 1770s, simply because the number of scientists and the diversity of research fields have increased dramatically: we are now paradoxically more wrong than researchers in the 18th century (in terms of sheer numbers of errors made) - the inescapable consequence of asking both more and more difficult questions. What makes it feel as if we are now more right is knowing that phlogiston was to be replaced by a new understanding of chemical reactions, and thus of combustion and so forth. But this is largely an illusion caused by examining successful research programmes in hindsight.
Similarly, when I say phlogiston was ‘scientific’, I am projecting with hindsight since the term ‘scientist’ was not coined until 1834... researchers in the 1770s would not have described anything they were doing as ‘scientific’ - it is our desire to paint the sciences as something with a history of more than two centuries that makes us ‘claim’ both phlogiston and oxygen (not to mention Copernicus, Galileo, Newton and so forth) as part of the story of ‘science’, rather than the natural philosophy that those involved would have stated they were pursuing. Thus our ‘book of facts’ not only contains more errors than our predecessors two and a half centuries ago, it is not even entirely honest about its relationship with its own past. Add to this the unavoidable truth that this imagined ‘book of facts’ does not exist (for all that encyclopedias and their successors have wished to fulfil this role) and it begins to feel uncomfortably like we are deceiving ourselves - as if we have all fallen for the seductive confusions of magical science.
We want to defend our intuitive impression of the sciences as truth-seeking, and also (in some nebulous sense) successful at doing so. How do we do it?
One option we can consider is the one I proposed in Wikipedia Knows Nothing: to switch our focus from facts (true statements) to practices (skills and equipment). To know how to use something - a polymerase chain reaction, an interferometer, a fractional distillation column - is more a matter of knowing what to do than it is a ‘book of facts’, even though that knowledge also produces facts related to the equipment used (and any theories deployed to give a context to the reading of the instruments). Thus an astronomer armed with geometric theorems can use an interferometer to measure the diameter of stars, while an engineer can use an interferometer and the wave theories of light to measure very small objects precisely. The practices associated with both the equipment (the interferometer) and the theories associated with each specific usage give rise to facts - in this case, distances. The difference lies in what legitimizes the activity in question. On the usual conception of knowledge, facts counted as legitimate knowledge if they were true and the reasons justifying them were correct - which actually provides no means of knowing what is or is not legitimate, since our criterion for legitimacy requires an appeal to something beyond the situation (the truth) that we cannot access directly. Conversely, when we view knowledge as a practice, what makes the facts legitimate is that we are using the tools correctly. In this context, we have recourse to everyone with the relevant knowledge of the tools entailed to verify the legitimacy of the practices used, and hence the facts reported.
On this understanding of knowledge, unlike an appeal to the truth, we can construct a viable understanding of ‘scientific truth’, since certain equipment, certain theories can be uncontroversially attributed to the sciences, and their correct usage can be judged by anyone else with access to the same knowledge practices. On this path we can therefore distinguish between scientific truth (facts emerging from legitimate research practices) and errors, provided we allow the disagreements to be properly explored in any given research community. However, as Babich warns, this cannot happen if we rush in with a dogmatic cry of ‘pseudoscience’, since every attempt to discredit something a priori entails an outright refusal to think about a given topic at all. Ironically, such attempts to discredit effectively cause an outbreak of the condition of pseudoscience, in my sense (a state of disrupted communication where scientific work can no longer be pursued), since whomsoever speaks this word with the intent to discredit (and thus ignore something) signals the very breakdown of legitimate scientific disagreement required to understand whatever is (not) being discussed.
The deeper problem we encounter when we look more clearly at how scientists discover or verify truths is that the claims that are asserted soon exceed simple assertions of facts. Once they do, it requires another set of knowledge practices to disentangle the relationships between facts and conclusions - and these are not strictly scientific at all, for all that scientists engage (unknowingly) in these kinds of interpretative philosophical practices every time they assert anything but the most trivial of claims. Indeed, precisely the crisis of contemporary sciences is that their application is not a scientific practice, but a philosophical one - and Einstein’s generation may have been the last where scientists spanned these disciplines rather than retreating behind specializations that narrow, rather than widen, the scope of our collective understanding.
It is small wonder that we seem to have arrived in a “post-truth” world: the attempt to make the only acceptable truths those that flow from scientific endeavours renders a great many of the truths that matter impossible to adequately discuss, precisely because the important truths (those that pertain to what we ought to do, for instance) could never be scientific and thus cannot be established solely by an appeal to the facts. Yet we keep looking to scientists to give us a certainty that is not in any way available through scientific methods - and as the L'Aquila trial in Italy demonstrated, we will turn upon those who do not live up to our insanely unrealistic expectations and even accuse them of committing crimes when they, inevitably, make mistakes. But it is we that have failed, by falling for such an impoverished understanding of the complexity of scientific research as that of magical science.
Breaking the Spell
The needs of a narrative require magical science for the very same role as arcane magic - as a plot device limited solely by our imagination - and the two are (in more ways than we tend to acknowledge) equivalent, exactly as Clarke foreshadowed. The problem is, the actual work of the sciences, the global cybernetic collaboration of scientists that began under that name in the 1800s and continues today, is magical solely in its lustre and not in its details. Yes, the collective technological achievements facilitated by the work of countless scientists are now already indistinguishable from magic in a great many situations. But the work of scientists is not magic, and is certainly nothing like the magical science of a Sherlock Holmes fable. When we mistake the two, when we treat a human who conducts scientific work as someone wielding all the sorcery of magical science to know, automatically, everything that needs to be known, we are not supporting scientific truth-finding at all, but making it far, far harder, and in the worst cases, rendering it entirely impossible.
I will not say we must stop enjoying the fantasy of magical science in our stories - escapism is mostly harmless, after all, even if it is not entirely blameless - but is it not perhaps about time we stopped pretending that our scientists are superheroes with magical powers to determine truth? Scientific truths are extremely specific, and much narrower than we want them to be - they are at their most precise precisely when their claims are most limited. The heroism of actual researchers is of a patient, humble kind, one that requires time and substantial disagreements to bring about. It is neither as spell-binding as Holmes’ contrived deductions, nor as charmingly detached from human fallibility as Data’s or Spock’s inhuman resourcefulness suggests. Nor has any living scientist access to the unquenchable moral certainty of the later incarnations of the iconic Time Lord to guide them. These role models all imply a role that is impossible to bring to life: we should be careful not to buy too deeply into such implausible exemplars, without dismissing entirely the hopes and ideals that they embody.
Actual scientific practice is amazing, but it is neither miraculous nor supernatural. It is rather mundane in its details, which never entail perfectly prophetic experiments, and always require a great deal more arguing about the possible interpretations of the facts than literature has ever depicted. When we cannot distinguish science from magic, we obscure scientific truth and the immense and heroic efforts required to produce and understand it. We do all our scientists a disservice when we mistake them for sorceresses and wizards, and we entirely dishonour the scientific traditions when we censor or revile researchers for not living up to our hopelessly elevated expectations of their truth-discovering powers.
If we cannot distinguish science from magic, we need to either improve our philosophy of science or else remain silent on scientific topics. As Holmes remarks of Watson, the grand gift of silence makes him quite invaluable as a companion - and scientists, much like Holmes, often need us to pay close attention to their work and their disagreements, so that together we can eventually reveal true claims about our world. When we work to silence and discredit those we disagree with, rather than remaining silent so we might hear the disagreements we are denying, we have destroyed the very conditions for any kind of legitimate scientific investigation to occur. If we truly wish to be friends of the sciences, perhaps we too ought to know how to hold our tongue and try to listen to the quiet whispers of the truth when the game is afoot.
Comments always welcome, especially the polite ones!