Magical Science

Arthur C. Clarke famously suggested that any sufficiently advanced technology would be indistinguishable from magic. This suggests another maxim: any insufficiently developed philosophy of science is incapable of distinguishing between science and magic.

We all have our own philosophy of science, our conceptual framework for understanding scientific topics. In the best case, our personal philosophy of science informs us of the limitations of scientific knowledge, allows us to put research into a wider context, and ensures we remember that the work of the sciences is still at heart an entirely human endeavour. Alas, few of us have such a clear view of the sciences. Far more widespread is a kind of pervasive mythos we might call ‘magical science’, which affords to the image of science unlimited future power, and to scientists an awesome capacity to divine the truth through singular experiments, like a Roman haruspex reading animal entrails to predict the future.

Magical science has the dubious honour of being the only superstition widely encouraged today. We are all too frequently adamant that science has all the answers, that science is the royal road to truth, that we can trust in the science... I notice that even the British Prime Minister has taken to invoking magical science in his speeches these days to validate his increasingly dubious actions. At heart, magical science may seem harmless, a mere rose-tinted vision of the work of scientists, one that tries to account for all the successes of our various research networks without any attempt at balance or insight. We typically overlook this kind of naive enthusiasm for scientific achievement on the basis that it's at least ‘supporting the right team’. Yet it becomes increasingly clear that blind support for science can manifest in ugly ways, even in ways that can prevent the sciences from working, plunging research into the debilitating condition of pseudoscience, as previously discussed.

The perceived infallibility of the sciences as truth-seeking procedures clashes worryingly with the necessity of scientists making mistakes, and thus magical science leads to anger at scientists when the actual scientific work is not as wondrous as it is imagined to be (as with the ugly trial that followed the 2009 L'Aquila earthquake in Italy, where scientists were blamed for failing to predict the disaster), or when any scientist speaks out against a claim that has been proclaimed unshakably true by its advocates. It is precisely because magical science is incapable of distinguishing science from magic that it represents a far greater danger to scientific endeavours than other philosophies, perhaps even so-called ‘anti-science’ philosophies. What deceives us here, what elevates scientists to their misguided role as flawless augurs rather than researchers struggling with ambiguous data, are the bad habits we have learned from the manifestations of science in fiction, where magical science is the norm. If we wish to see the work of the sciences with clearer eyes, we may have to start by putting some of the most iconic characters in fiction on philosophical trial.

Sherlock Holmes and the Flawless Investigation

It is sometimes remarked that in creating Sherlock Holmes, Sir Arthur Conan Doyle produced the first hero of ‘the scientific age’. The Victorians were the ones who coined the term ‘scientist’ and it was their obsession with the sciences that set the scene for the unfolding technological transformation of the world over the next century and a half. We tend to treat the character of Holmes as significant mainly for crime fiction, as the archetype from which all whodunits descend - but Holmes, quite unlike a Raymond Chandler or Agatha Christie detective, is always a practitioner of magical science. Partly, this proceeds from the inherent parsimony of storytelling whereby all questions will eventually be answered because everything is there by the author’s design. Partly, however, it proceeds from Holmes’ essential power - which upon closer inspection is not deductive reasoning at all, but rather the infinite convenience possible solely in literature.

Doyle gives Holmes a quite impossible access to every conceivable fact as a starting point, such that a berry stain or the smell of a particular tobacco can certainly be identified, and then (to pile on the absurdity) Holmes by purest chance always encounters a set of circumstances that allow for only one viable interpretation. This particular brand of tobacco, for instance, is sold in exactly one place in London... We thus end up admiring Holmes’ purportedly scientific form of investigation while what we ought to admire is the way Doyle effortlessly conceals the magical science entailed in this depiction by making it seem as if all of Sherlock’s deductions (and inductions) were strictly logical. Doyle has contrived a set of circumstances that Holmes, with his unlimited catalogue of facts, can be certain to solve. This makes Holmes a disastrous role model for scientists (or indeed, detectives!) since it is only through the meticulous construction of literary contrivance that he possesses any investigative power at all. This becomes clearest when Holmes relies upon facts we know are false - such as the ludicrous snake plot device in The Speckled Band, which entails behaviour it would be implausible to coax out of any reptile. Holmes’ claims to be a man of science are rather fraudulent behind the scenes: he is simply the locus of a mythic depiction of magical science.

Neither is Holmes the only such character. Both Spock and Data in the worlds of Star Trek share this power of magical science - also manifested in these shows by the tricorder, which like Holmes spits out every required fact on demand and without error. Or consider Doctor Who from the third Doctor onwards: anything necessary is certainly known by the Time Lord, except when the story requires a convenient (and often temporary) amnesia for dramatic effect. That both Data and the Doctor had a spin at being Baker Street’s most eligible bachelor is not accidental, nor perhaps is Steven Moffat’s concurrent time as showrunner for both Doctor Who and Sherlock... Magical science heroes seem to reaffirm our faith in the power of scientific knowledge, while also playfully exposing the quirky personalities of scientists. House, The Big Bang Theory, and much more besides all participate in a literary tradition that stems from the Sherlock Holmes tales, and is now seemingly dominated by his science fiction protégés.

Yet these are not scientific heroes, but magical science heroes. They have exactly the facts and the circumstances to answer perfectly every time, without ever having to confront the ambiguity, indeterminacy, and incompleteness of an authentic scientific problem. They are to science what Superman is to police officers: naively idealized caricatures. They find the answers solely because they live in stories where uncovering the truth is possible by design. This is a wildly misleading template for scientific truth, and although we know these are ‘just’ stories, we somehow import our wilder beliefs about the sciences into our everyday thinking unless we are extremely careful. If we are to break this spell, we need a philosophy capable of distinguishing science and magic - and for this, we need a clearer understanding of ‘scientific truth’.

Desperately Seeking Truth

Even if we start with the acknowledgement that the sciences are capable of discovering or affirming truth, the question of what might qualify as a ‘scientific truth’ is far trickier than it seems. As the preceding discussion on pseudoscience made clear, we cannot simply append ‘scientific’ to known truths without distorting the essential ambiguities of the research process, whereby we cannot in practice know whether the apparent truth of a researched claim will hold in the future. In fact, we have a choice. We could align ‘scientific truth’ with the unshakeable deep truth of reality and thus admit that the claims asserted by scientists cannot be known as truth at all (effectively contracting the domain of scientific truth to concluded research programmes like optics). Or else we can align scientific truth with the body of beliefs held by scientists, with the inevitable consequence that such truths can be later revealed as false - or even abominable. We don’t even have to go back a century to find all manner of racist, sexist nonsense asserted as truth by those who identified as scientists.

Now those who buy into magical science have an easier job here, but only by being wildly dishonest about both truth and scientific methods. According to magical science, scientists uncover truth infallibly so all claims asserted by scientists are scientific truth. Thus if and when the circumstances shift we can ‘debunk’ or ‘discredit’ those responsible and say they were not really scientists at all, or even exclude their claims from consideration in the first place! This is where ‘pseudoscience’ has been used as a label, although as I have argued previously it is not a terribly viable way of using the term. Babette Babich has made even stronger - and oft misunderstood - claims about the way the discrediting associated with the term ‘pseudoscience’ serves as a dogmatic attempt to demarcate legitimate science, while all too frequently preventing any scientific enquiry from even beginning. Thus when this particular word comes out, it narrows scientific knowledge by declaring certain topics forbidden and out of bounds - and woe betide the researcher who goes on to try to report experimental results from such verboten fields...

The highly problematic implication of every attempt to discredit and thus demarcate ‘science’ from ‘pseudoscience’ must be that we cannot know, when scientists assert a claim, whether it will later need to be ‘debunked’. Thus faith in magical science is inevitably a distortion of the truth - for things we will call scientific truths on this philosophy may later be ‘discredited’, or even discredited before they are considered at all. The alleged truths of magical science can thus only be defended by ignoring the inevitable consequences of the inherent revisionism of scientific practice, and by pretending that the current consensus among researchers is ‘more true’ than it was yesterday - and thus that now (and by implication, only now) we can trust everything scientists say, as long as we are standing guard for those pernicious pseudoscientists who ruin it for everyone. To say that this is dangerous nonsense is easy; to replace it with a sounder philosophy of science will be much harder.

There might be a way out of this maze, but it would require us to think differently about the relationship between truth and the sciences. Part of what deceives us here is our desire to understand the truth in terms of a set of valid statements. Since we can point to scientific concepts we abandoned, like phlogiston (which was a hypothetical substance that made combustion possible), we want to assert a gradual improvement in the accuracy or scope of our ‘book of facts’. “We would not be fooled by phlogiston today,” we might think. Yet phlogiston was an important - and arguably entirely scientific - proposal that was merely discarded when our understanding of chemistry shifted such that combustion could be thought of in terms of a chemical reaction with oxygen.

The brutal truth of the ‘book of facts’ is that such a collection of statements today would theoretically contain far more ultimately false claims than it would in the 1770s, simply because the number of scientists and the diversity of research fields have increased dramatically. We are now paradoxically more wrong than researchers in the 18th century (in terms of sheer numbers of errors made) - the inescapable consequence of asking both more and more difficult questions. What makes it feel as if we are now more right is knowing that phlogiston would be replaced by a new understanding of chemical reactions and thus combustion and so forth. But this is largely an illusion caused by examining successful research programmes in hindsight.

Similarly, when I say phlogiston was ‘scientific’, I am projecting with hindsight since the term ‘scientist’ was not coined until 1834... researchers in the 1770s would not have described anything they were doing as ‘scientific’ - it is our desire to paint the sciences as something with a history of more than two centuries that makes us ‘claim’ both phlogiston and oxygen (not to mention Copernicus, Galileo, Newton and so forth) as part of the story of ‘science’, rather than the natural philosophy that those involved would have stated they were pursuing. Thus our ‘book of facts’ not only contains more errors than our predecessors two and a half centuries ago, it is not even entirely honest about its relationship with its own past. Add to this the unavoidable truth that this imagined ‘book of facts’ does not exist (for all that encyclopedias and their successors have wished to fulfil this role) and it begins to feel uncomfortably like we are deceiving ourselves - as if we have all fallen for the seductive confusions of magical science.

Legitimate Practices

We want to defend our intuitive impression of the sciences as truth-seeking, and also (in some nebulous sense) successful at doing so. How do we do it?

One option we can consider is that which I proposed in Wikipedia Knows Nothing: to switch our focus from facts (true statements) to practices (skills and equipment). To know how to use something - a polymerase chain reaction, an interferometer, a fractionating column - is more a matter of knowing what to do than it is a ‘book of facts’, even though that knowledge also produces facts related to the equipment used (and any theories deployed to give a context to the reading of the instruments). Thus an astronomer armed with geometric theorems can use an interferometer to measure the diameter of stars, while an engineer can use an interferometer and the wave theories of light to measure very small objects precisely. The practices associated with both the equipment (the interferometer) and the theories associated with each specific usage give rise to facts - in this case, distances. The difference lies in what legitimizes the activity in question. On the usual conception of knowledge, facts count as legitimate knowledge if they are true and the reasons justifying them are correct - which actually provides no means of knowing what is or is not legitimate, since this criterion for legitimacy appeals to something beyond the situation (the truth) that we cannot access directly. Conversely, when we view knowledge as a practice, what makes the facts legitimate is that we are using the tools correctly. In this context, we have recourse to everyone with the relevant knowledge of the tools entailed to verify the legitimacy of the practices used and hence the facts reported.

On this understanding of knowledge, unlike an appeal to the truth, we can construct a viable understanding of ‘scientific truth’, since certain equipment, certain theories can be uncontroversially attributed to the sciences, and their correct usage can be judged by anyone else with access to the same knowledge practices. On this path we can therefore distinguish between scientific truth (facts emerging from legitimate research practices) and errors, provided we allow the disagreements to be properly explored in any given research community. However, as Babich warns, this cannot happen if we rush in with a dogmatic cry of ‘pseudoscience’, since every attempt to discredit something a priori entails an outright refusal to think about a given topic at all. Ironically, such attempts to discredit effectively cause an outbreak of the condition of pseudoscience, in my sense (a state of disrupted communication where scientific work can no longer be pursued), since whomsoever speaks this word with the intent to discredit (and thus ignore something) signals the very breakdown of legitimate scientific disagreement required to understand whatever is (not) being discussed.

The deeper problem we encounter when we look more clearly at how scientists discover or verify truths is that the claims that are asserted soon exceed simple assertions of facts. Once they do, it requires another set of knowledge practices to disentangle the relationships between facts and conclusions - and these are not strictly scientific at all, for all that scientists engage (unknowingly) in these kinds of interpretative philosophical practices every time they assert anything but the most trivial of claims. Indeed, the crisis of the contemporary sciences is precisely that their application is not a scientific practice, but a philosophical one - and Einstein’s generation may have been the last where scientists spanned these disciplines rather than retreating behind specializations that narrow, rather than widen, the scope of our collective understanding.

It is small wonder that we seem to have arrived in a “post-truth” world: the attempt to make the only acceptable truths those that flow from scientific endeavours renders a great many of the truths that matter impossible to adequately discuss, precisely because the important truths (those that pertain to what we ought to do, for instance) could never be scientific and thus cannot be established solely by an appeal to the facts. Yet we keep looking to scientists to give us a certainty that is not in any way available through scientific methods - and as the L'Aquila trial in Italy demonstrated, we will turn upon those who do not live up to our insanely unrealistic expectations and even accuse them of committing crimes when they, inevitably, make mistakes. But it is we that have failed, by falling for such an impoverished understanding of the complexity of scientific research as that of magical science.

Breaking the Spell

The needs of a narrative require magical science for the very same role as arcane magic - as a plot device limited solely by our imagination - and the two are (in more ways than we tend to acknowledge) equivalent, exactly as Clarke foreshadowed. The problem is, the actual work of the sciences, the global cybernetic collaboration of scientists that began under that name in the 1800s and continues today, is magical solely in its lustre and not in its details. Yes, the collective technological achievements facilitated by the work of countless scientists are now already indistinguishable from magic in a great many situations. But the work of scientists is not magic, and is certainly nothing like the magical science of a Sherlock Holmes fable. When we mistake the two, when we treat a human who conducts scientific work as someone wielding all the sorcery of magical science to know, automatically, everything that needs to be known, we are not supporting scientific truth-finding at all, but making it far, far harder, and in the worst cases, rendering it entirely impossible.

I will not say we must stop enjoying the fantasy of magical science in our stories - escapism is mostly harmless, after all, even if it is not entirely blameless - but is it not perhaps about time we stopped pretending that our scientists are superheroes with magical powers to determine truth? Scientific truths are extremely specific, and much narrower than we want them to be - they are at their most precise precisely when their claims are most limited. The heroism of actual researchers is of a patient, humble kind, one that requires time and substantial disagreements to bring about. It is neither as spell-binding as Holmes’ contrived deductions, nor as charmingly detached from human fallibility as the inhuman resourcefulness of Data or Spock suggests. Neither has any living scientist access to the unquenchable moral certainty of the later incarnations of the iconic Time Lord to guide them. These role models all imply a role that is impossible to bring to life: we should be careful not to buy too deeply into such implausible exemplars, without dismissing entirely the hopes and ideals that they embody.

Actual scientific practice is amazing, but it is neither miraculous nor supernatural. It is rather mundane in its details, which never entail perfectly prophetic experiments, and always require a great deal more arguing about the possible interpretations of the facts than literature has ever depicted. When we cannot distinguish science from magic, we obscure scientific truth and the immense and heroic efforts required to produce and understand it. We do all our scientists a disservice when we mistake them for sorceresses and wizards, and we entirely dishonour the scientific traditions when we censor or revile researchers for not living up to our hopelessly elevated expectations of their truth-discovering powers.

If we cannot distinguish science from magic, we need to either improve our philosophy of science or else remain silent on scientific topics. As Holmes remarks, the grand gift of silence makes Watson quite invaluable as a companion. Scientists, much like Holmes, often need us to pay close attention to their work and their disagreements, so that together we can eventually reveal true claims about our world. When we work to silence and discredit others we disagree with, rather than remaining silent so we might hear those disagreements we are denying, we have destroyed the very conditions for any kind of legitimate scientific investigation to occur. If we truly wish to be friends of the sciences, perhaps we too ought to know how to hold our tongue and try to listen to the quiet whispers of the truth when the game is afoot.

Comments always welcome, especially the polite ones!

What is Pseudoscience?

When we talk about something being ‘pseudoscience’ what we tend to mean is that it’s ‘not true’, and we reach that conclusion because what we mean by pseudoscience is something that is ‘not scientific’, and we associate the sciences with truth. Yet the alternative to truth is not automatically falsehood; there is also ambiguity, indeterminacy, and incompleteness to consider. What’s more, if we call things scientific only if they are true, we are admitting that we don’t actually know what is or isn’t scientific until some future time when the arguments about some given topic are finally resolved. There is a confusion here worth examining closely.

Implausible Methods

Ask someone to explain how the sciences work and chances are they will tell you about the scientific method:

1. Observe a situation
2. Come up with a hypothesis (an untested theory) to explain a phenomenon
3. Devise an experiment to test whether the hypothesis is valid
4. If the experiment is successful, the hypothesis becomes a theory. Congratulations, you’ve discovered scientific truth!

This description is so far from adequate that it is a wonder that so many university students are taught it! Quite apart from the way it sets aside the most difficult aspect of scientific practice (the interrelationships of existing knowledge on any subject) it fancifully imagines that scientists determine truth simply by performing just one experiment, as if scientific truth were as simple as revealing a scratch card – three microscopes, we have a winner! Rather than an adequate description of how contemporary scientific processes operate, this is more akin to a catechism recited in order to bolster faith in the ability of the sciences to reveal truth – and as such, it obfuscates the complexity of the relationships between experiments, theories, and truth, and prescribes a method almost certain to lead to error every time.

If a hypothesis and experiments are indeed the necessary elements of a claim that a certain activity is ‘scientific’, then anthropology, economics, almost all of the evolutionary sciences, and a fair amount of biology and medicine are all doomed to be ‘unscientific’. These kinds of accusation are indeed sometimes advanced – a furore occurred in 2010 when the American Anthropological Association decided to remove the word ‘science’ from its mission statement, despite many of its members feeling this was a consequence of a narrow and reductionist description of the sciences. There are also questions here about concluded research programmes: no-one has needed to perform further experiments in optics, for instance... has it ceased to be scientific? Or did it earn its place in scientific heaven by being a good research field while it was still alive...?

Tied up with this confusion is the idea that the sciences are ‘value free’, i.e. that scientific research is inherently unbiased. This is a naive mistake, on two counts. Firstly, as Nietzsche warned back in 1882, we are “still pious” when it comes to scientific truth – all scientific research rests on a core moral value, a commitment to the pursuit of truth. Without this, the endeavours we undertake in the name of science make no sense; ‘valueless science’ is entirely implausible. Secondly, and even more importantly, scientists are still human, and as such they have their own values. The attempt to purge the sciences of values is nonsensical and indeed impossible! No matter how much you try to present scientific research as a purely rational, emotionless, valueless activity, scientists will continue to pursue research motivated by their own moral values (to save lives or to save the planet, to advance knowledge or technology, to win fame or wealth etc.). To treat having these values as somehow unscientific is to doom all the sciences to oblivion! The values and the facts are intimately related or, as Hilary Putnam described it, entangled. The idea of a science without values is pure nonsense.

At this point, you have a choice in how you respond to this critique of ‘scientific method’, and this in itself may be illuminating. On the one hand (and especially if you’ve spent any time at all thinking about philosophy of science), you can happily cast off this quite ridiculous dogma and still maintain a viable understanding of the sciences without it. That’s the easy way... but it still has some hard consequences. Or alternatively you can dig in your heels and try to cast out the demons of those that don’t follow ‘the method’, attempting to purify research of pseudoscience, meaning in this case ‘not following the scientific method’, but usually playing out by simply deriding counter-claims against whatever dogmatic position has been adopted on any given point. That path is so misguided it’s a wonder that plenty of otherwise intelligent people seem to fall for it.

As it happens, the sciences themselves show us why this purported ‘scientific method’ is unworkable. Psychology – which has been staunchly dedicated to ‘the method’ yet still gets cast out as ‘soft science’ – has provided a lot of neat labels for the various kinds of human bias. Defenders of ‘the method’ like to invoke hindsight bias to defend the need for hypotheses – “if you don’t make a hypothesis, you’ll just end up seeming to expect the result you get!” But these cognitive biases cut both ways: if you do make a hypothesis, you are now prone to confirmation bias – cherry-picking your data and references to support the position you have chosen. This is why medical sciences insist on good quality evidence from randomized trials where even the experimenters don’t know what’s going on until all the data is in. We know from bitter experience that when you set out to prove some specific claim, you are more likely to find (and report) the evidence that supports what you have chosen. In other words, not having a hypothesis condemns you to bias, and having a hypothesis condemns you to bias! What makes something legitimately scientific cannot be the elimination of bias, or else nothing could ever be sufficiently purified of values to qualify. There has to be another way of conceptualising the difference between ‘science’ and ‘pseudoscience’ if either is going to have any legitimate meaning.

Ghosts of Science Past

The celebrated historian of science, Thomas Kuhn, lays out the question of pseudoscience at the very outset of his project to understand the nature of scientific change. The problem as he presents it is that if we judge the historical precedents to our scientific practices as pseudoscientific (he talks of them being ‘myths’), then we have to acknowledge that pseudoscience can be pursued and justified by the same methods and reasons we now use to defend science against its alternatives. Yet if we call these artefacts of older research ‘science’, then we have to accept that the sciences were beset by wild beliefs that today we would find unthinkable (even abominable). He argues very persuasively that from a historical perspective we have no choice but to accept that “out-of-date theories are not in principle unscientific because they have been discarded.”

Kuhn’s position is widely accepted today – yet it runs directly contrary to the view of Sir Karl Popper that the boundary of legitimate science is falsification – the ability to have a theory proven false. Amazingly, this viewpoint is also widely accepted today, even though the two approaches are essentially incompatible, and indeed were the basis for an unresolved dispute between the two academics. Kuhn saw Popper’s falsification as applying solely to those rare periods of scientific upheaval (paradigm shifts) where one way of thinking replaces another. His view was that ‘normal science’ never dabbles in big theoretical changes at all, but is always about solving problems using the current theoretical apparatus. Again, these two viewpoints are entirely incompatible, yet both are widely supported views on the sciences.

Popper suggested that Kuhn’s approach committed him to saying that astrology is a science because it entails problem solving within its own paradigm. Kuhn denied this, and argued that in the context of astrology “particular failures did not give rise to research puzzles” and thus astrology was never a science. Both men died without resolving their disagreement; I think it clear, however, that both are wrong about astrology. We cannot – as Kuhn himself warns – back-project our current scientific judgements upon prior practices that were claimed as sciences at earlier times without distorting what we are trying to assert. To do so is to deny the very capacity for scientific revolutions that Kuhn’s account provides. The suffix ‘-ology’ by itself is a clue that the practices of astrology had at one point in its history a claim to knowledge, and the question of whether astrology was ever a science in Kuhn’s terms is a historical investigation requiring far more application to the task than either Popper or Kuhn was willing to commit. As such, this question is in fact still very much open to debate! But nobody wants to do so, because everybody with any skin in this game wants to show that astrology isn’t a science and never was – thus again preempting any possible research except that which will prove this one tenuous point.

If Kuhn’s historical theory (albeit not Kuhn himself) is able to defend against Popper’s attack, Popper’s falsification criterion has no equivalent defence against Kuhn’s criticisms. Indeed, Kuhn expressly doubted that falsifying experiences ever really happen. He did not need the psychologist’s label ‘confirmation bias’ to realise that giving up a scientific paradigm is a major conversion for anyone (comparison with religious conversion is quite justified here), made all the less likely by the problem that if every failure of a theory in the face of contradictory evidence were sufficient grounds for rejecting it, all theories ought to be rejected at all times! That’s because the very reason that Kuhn’s ‘normal science’ has problems to solve is precisely that no theory is capable of fitting all the observations it seeks to explain. As the French science studies scholar Bruno Latour puts it, the theories are all under-determined with respect to the evidence – and this conclusion is unavoidable if you spend time examining what scientists actually do rather than merely reciting the catechism.

But this does not mean there is no way of distinguishing science from pseudoscience, even though we have to accept a certain amount of historical contingency after Kuhn (or Foucault – he gets to the same place via a different route). What we might reasonably suggest as a provisional criterion for calling something ‘pseudoscience’ is a combination of Popper and Kuhn’s claims: when even the possibility of falsification is removed, or when the investigative practices cease to produce further enquiries in response to the questions the previous research implies, the claim to be scientific evaporates. As chemist-turned-philosopher Isabelle Stengers attests, successful experiments in the sciences give rise to new research questions. When they produce no more, it is because the field has managed a complete description of its subject matter (as with optics). The difference here is that such ‘completed’ fields have produced theories capable of making unfailing predictions. And such cases are vanishingly rare.

The Condition of Pseudoscience

What tied us up in conceptual knots here, and kept Popper and Kuhn from reaching an accord, is that we want to level the accusation ‘pseudoscience’ at fields like astrology or phrenology. But understanding the sciences as an ecology of practices, as Stengers has brilliantly discussed, shows that this is not the only way we might identify a breakdown of Kuhn’s ‘normal science’. We could (indeed must) give up the idea that ‘pseudoscience’ is a way of trashing any theory, research, or evidential claims we don’t agree with. On the contrary, I propose that the clearest way of understanding pseudoscience is as a condition within a scientific discourse that undermines or destroys its power to investigate.

Thus, to continue with phrenology’s original models of mental function after animal experiments began to show that its suggested brain regions did not hold up to scrutiny would have been to enter into a condition of pseudoscience, because its practices could not produce viable new research questions in the light of this new evidence. It would, however, be wildly unfair to suggest phrenology was always in this condition: it is from phrenology, after all, that the idea of the brain being the organ of the mind originated, and while most of its specific claims did not pan out, it remains an important part of the backstory of neuroscience. If phrenology had not spread as working-class ‘popular science’ (thus earning the enmity of Victorian cultural elites), we might well have kept the name ‘phrenology’ (science of the mind) rather than renaming brain research ‘neurobiology’. It’s not at all clear to me that phrenology was ever in the condition of pseudoscience, except perhaps at the very end – although anyone practising it today would be behaving very oddly indeed.

Pseudoscience is thus akin to an ailment afflicting scientific practices that have become shorn from the logic of legitimacy provided by their current paradigm. The sign that a field has fallen into pseudoscience is not the truth or falsehood of its claims as such. Indeed, these will frequently not be in any way settled, forcing us into highly suspect retrospective accusations, such as that levelled routinely at phrenology. Rather, you can see the condition of pseudoscience occurring whenever scientists give up the values that motivate their enquiry – when they purposefully falsify data, or conceal it ‘to defend the truth’, or give up experiments and data gathering entirely in order to maintain a status quo based upon whatever happens to have been previously claimed. And once we see this, we are forced into the realisation that we are currently in the condition of pseudoscience in several entirely legitimate research fields, and over the last year we have had the audacity to defend the breakdown in the medical discourses that has put us into a state of collective pseudoscience by calling it “following the science”!

The truth is, we cannot ‘follow the science’, it is the science that must follow us. For the values of science are those of discovery and verification, and this only has a purpose in so much as it serves to resolve those questions our other values compel us into exploring. Thus, while medicine commits to ‘first, do no harm’ as a supreme value governing its own practice, that particular principle sets no positive goal at all. The medical practitioners and the cybernetic networks supporting them take on the objectives that we have collectively given to them. If the circumstances that follow from that pursuit make falsification of a medical claim impossible, or provide no means to reliably answer the relevant medical questions, those medical practitioners affected (and anyone trusting their judgements) enter into the condition of pseudoscience, a (temporary) renunciation of the values of scientific practice, capable of precisely the great harm doctors are sworn to avoid. For the collective medical power we exercise cybernetically always causes some degree of harm along with the pursuit of its goals – requiring medical practitioners, on pain of becoming (temporary) pseudodoctors, to commit to studying the impact of any procedure or intervention attempted or else risk violating all the values of contemporary medical science. This is an extreme example, but it is also an extremely important one.

Now whether the values of discovery and verification have always conditioned the work of scientists, and whether they always will, isn’t the point, for they are our moral requirements for the sciences now, and on this point we quite miraculously do not disagree. In so much as pseudoscience is a phenomenon, it is merely a consequence of recognising that scientists are human, and what makes them seem otherwise is the remarkable power that they bring to bear when cybernetically linked into singular networks, working together – not just by co-operating but just as importantly by disagreeing, refining the research questions by honing the essential ambiguities into points sharp enough to penetrate our ignorance through further investigations and experiments. Pseudoscience prevents that dialogue from happening, and breaks up the network connections, making research harder or preventing it entirely, setting bias against bias and thus blocking the communication essential to verification, which is necessarily a distributed activity.

When verification stops, pseudoscience has begun... it goes away when we can go back to listening to those objections that our human bias prevented us from hearing. The ugly truth of it all is that fear, anger, and self-righteousness spread pseudoscience all too easily, yet banishing it is as easy – or as impossible – as going back and listening to the objections in order to work out where in the maze of ambiguity, indeterminacy, and incompleteness the truth of each disagreement can be found.

More philosophy of science soon.

Reliable Witnesses: A Retraction

In December, in a piece entitled Scientists Distorting Science? I suggested that in the context of Joshua Greene’s paper “The Secret Joke of Kant’s Soul”, Greene was not a reliable witness in the sense offered by Bruno Latour. This was an error, since for Latour a reliable witness is what a researcher talks about. The researcher themself is a spokesperson for the thing in question: in the context of special relativity, for example, Einstein was a spokesperson for light, and the Kennedy-Thorndike experiment was a reliable witness.

This being so, I should not have claimed, in Latour’s sense at least, that Greene was not a reliable witness. I should have said that Greene was a bad spokesperson for the human brain in this paper, or perhaps that Greene was a bad spokesperson for deontological ethics (probably both). The reason he was a bad spokesperson is that he did not have any reliable witnesses in the context of what he was claiming, and he himself could not be a reliable witness in Latour’s sense.

I’d also like to say that the title of this post was rubbish and I wish I’d thought of something better, particularly since Latour’s very point about science studies is that we shouldn’t think about scientists ‘distorting the science’, since a scientist is just a researcher, and ‘science’ in this sense is just an abstraction (Latour would write it Science and contrast it to the sciences). Scientists don’t distort the sciences, they just research, and if they do their research well they find reliable witnesses and thus become spokespeople for various things. It was precisely because Greene did not focus on the part of his work that was a reliable witness (for the neural systems involved in moral judgement) that he failed to be a good spokesperson for the brain in this paper.

The Robot Gatekeepers

Can you talk to anyone in a large company without first getting past their robot gatekeepers?

It has become standard practice for large organisations to erect a virtual barrier at the edge of their operations such that the first contact anyone outside has with them occurs with a robot. The classic instance is the automated phone system, a simplistic robot programmed to operate a number-keyed maze blocking access to any of the organisation’s human members, while simultaneously lying to the caller by insisting that their call “is very important” to the very institution that chose a robot gatekeeper to deal with it. (Calls which are genuinely important are not dealt with by robots.)

The phone mazebot is by no means the only robot defending institutional borders. Many companies only accept contact by email if it is submitted via an online form robot that sorts and files text for later review by human eyes. Even if an open email address is offered, a spambot must analyse messages, permitting only some to pass unhindered. A letter is one of the few ways to circumvent an institution’s robot gatekeepers, but even these are sorted by a postbot at some point in delivery. And while going in person to an organisation’s premises will likely put you in contact with a human – the security guard or front desk secretary – you are still likely to be blocked from entering by a lockbot until permission is granted for you to enter.

This ubiquitous yet largely unnoticed automation of institutional borders continues to expand as robot technology improves. The BBC, in a recent recruitment drive for their new offices at Media City, Salford, had all candidates initially assessed by a robot before considering anyone for an interview. The job-bot takes information about skills, administers a workplace assessment, an IQ test and a personality instrument, and presumably sorts applicants prior to any human intervention. Although job interviews are far from a perfect method for judging potential employees, I find the robot alternative a somewhat chilling indicator of the direction in which all large organisations are moving in deploying robots as gatekeepers.

Our Robots

Driving long distance, listening to a computerised voice warning me that “after one mile, turn left”, I suddenly realised that we aren’t waiting for our robots to finally arrive – they’re already here.

In the early twentieth century, science fiction was full of images of flying cars, nutrition pills and a robot in every house. Well, the flying cars turned out to be too expensive to run, we got diet pills instead of nutrition, and as for the robot in every house – we now have a robot in every pocket. In a subtle transformation we scarcely even noticed, we stopped having a phone in our pocket and started having a robot. That’s the success of the iPhone, and why Nokia can no longer compete with Apple: it’s the robot in your pocket, standing by to serve your every whim.

I mentioned before Donna Haraway’s idea that we were always already cyborgs, which builds on the idea (developed by Bernard Stiegler and others) that technology has been part of the human condition for as long as we have thought of ourselves as humans. This new ‘robot revelation’ is an extension of this theme in many ways, although not all technology qualifies as a robot, of course – the notebooks that transformed thinking in Athens for the philosophers of ancient Greece were mere tools, and the abacus may be the earliest computing device but it did nothing on its own. Today, we have many autonomous devices.

The reason we didn’t notice that we’re surrounded by robots is that the films and books prepared us for a different kind of robot. The much beloved Droids of Star Wars, for instance, emphasised the idea of the android – the human-like robot. These are – sorry Asimo – still a long way off as consumer devices. It’s not that we can’t build them, it’s just that the technology is expensive, and really not that advanced. But as soon as you start thinking of robots as autonomous devices that don’t need to be animal shaped, the perspective changes.

My alarm clock, for instance, is an autonomous device capable of just one task – triggering the radio at a certain time – and so my alarm clock is a robot, albeit a crude one. My wristwatch, on the other hand, is a device that does nothing on its own, and an old mechanical alarm clock similarly seems to fall just short of the status of robot. Why? Those old alarm clocks have the same function as my digital alarm clock, after all. The boundary is ultimately arbitrary, but I feel a strong difference between setting a mechanical trigger and communicating with a device. When I set my current alarm clock, using an analogue wheel, I tell it things about what I want and when. This is a very different interaction from turning a cog to position an automated trigger.

My iPhone is even more clearly a robot – indeed, I have taken to calling it “my robot”… What’s that song that’s playing? I’ll ask my robot. Don’t forget to call your wife! No problem, my robot will remind me. What’s five hundred euro in dollars? My robot has the answer. Of course, for many of these functions my robot interfaces with the internet to find a solution, because the internet is packed full of robots. Ask Jeeves had the metaphor but not the technology, while Google Search has it the other way around. When it answers my question directly (as it does with currency or temperature conversions) instead of showing me search results, it functions as a robot – a robot in this case that has no physical body, but can be “channelled” by any suitably equipped robot I own.

It was my NavBot that really convinced me that the robots are already here. A gift from a friend who had just upgraded his own GPS device, its capacity to autonomously plot a route, and then deliver directions (sometimes very bad directions!) in a human-like voice, is so deeply resonant of the kind of interactions with the ship’s computer on classic Star Trek that I found it impossible to deny that this box attached to my dashboard was indeed a robot. It’s not as versatile as my iPhone, of course, which can learn to do all sorts of nifty tricks (including bad navigation!), but with its slightly electronic verbal communication, the NavBot feels a lot more like a robot than my alarm clock.

We are surrounded by robots at every turn, from the docile cash machine to the feisty Roboraptor, the dumb traffic light to the smartphone, the ecology of the city is dominated by robots who exist in vaster numbers than the pigeons and other animals that have adapted to live in the concrete landscape alongside us (insects and bacteria notwithstanding). Having a robot is the most basic sign of contemporary urban life – yet for strange and largely historical reasons, we call our personal robots phones. But the computer in your pocket isn’t really a phone, it hasn’t been for quite a while. It’s your robot servant, waiting in your pocket for further orders from you. How long, I wonder, before it takes the initiative and starts leading the conversation instead of just listening, with infinite patience, for the next instruction…

Reiss Against Adaptation

John O. Reiss is Professor of Zoology & Department Chair for Evolutionary and Developmental Morphology at Humboldt State University. Since 2005, he has been writing against the dominant paradigm that views evolution through the lens of natural selection and adaptation, pushing instead for a perspective grounded in the existence of animals – an approach with remarkable connections to Kant’s perspective on life that I wrote about in Kant on Intelligent Design. His alternative paradigm is closely related to the conditions for existence discussed by Georges Cuvier before Darwin published his landmark book. We recently discussed his work, and the following interview serves as a bookend to this Summer’s discussions of the Myths of Evolution.

Chris Bateman: You waded into a whole host of philosophical discussions about evolution with your paper ‘Natural Selection and the Conditions for Existence: Representational vs. Conditional Teleology in Biological Explanation’, and your book Not by Design: Retiring Darwin’s Watchmaker.

John O. Reiss: Well yes, but let me start by reminding you that I am not a philosopher, but a scientist, and what concerns me is really making the science better.

Chris: Sure, and indeed that is why your stance on these issues is particularly useful and enlightening to me, coming at it from the philosophical fields. I always hope for greater osmosis between disciplines, and on this particular topic philosophy, science and even theology (in the case of certain specific issues...) all have a role to play in untangling the debate.

John: That is exactly what I think too – there was a time when philosophers and “natural philosophers” (scientists) were not so far apart, and it still seems to me that philosophy ought to be relevant to science, both in informing our perspective and keeping us from committing errors in logic.

Chris: I find your proposed 'overturning' of Darwinian metaphor and return to Cuvier's more Kantian approach to be one of a very small number of viable options for dealing with contemporary problems in this space.

John: I'm glad to hear you think it offers some possibility of escaping the problems of the design metaphor.  It is a strange historical anomaly that Darwin’s intellectual environment was formed by the British context of Paley et al., who never really assimilated the Kantian perspective.

Chris: It seems you recognize the merit of scientific metaphor in so much as you say that “metaphors may have heuristic value in science”, and your concern appears to be the dangers inherent in taking such metaphors literally. In this, we agree – our principal difference is that I view the metaphors as inescapable, while you presumably view them as something that can be eliminated.

John: I guess I would agree – up to a point.  Metaphor certainly helps the public understanding, and often plays a role in scientists' conceptual understanding – for example one might compare gravitational to emotional attraction.  In a way mathematical models are themselves a sort of metaphor for what is "really" going on.

Chris: Stephen Yablo says exactly this, in fact – that numbers are metaphors.

John: To me the problem is not just metaphor, it is metaphor that obscures or confuses features of the phenomenon one is studying.  As I detail in the book, I think the metaphor of design by natural selection has led people to some pretty strange views of the evolutionary process, and kept them from fully grasping the broad scope of evolution.

Chris: Yes, but your objection to the metaphor of design approach discussed by Michael Ruse seems to rest on a false dichotomy between “metaphor as real” and “only a metaphor” – but this kind of objection (which I do not subscribe to) is far from constrained to the metaphor of design. Darwin's metaphor of natural selection suffers the same problem, as for that matter does Cuvier's conditions for existence that you use. Talk of the conditions for existence of organisms as boundary conditions for evolution still makes heuristic use of a fiction, actually several fictions – “boundary conditions” and “conditions for existence” most pertinently. Is it that you see these metaphors as being “real” rather than just “heuristically useful”?

John: Well, while “boundary conditions for evolution” may also be a metaphor, it is one that (to me at least) does not have the same issues as the metaphor of design.  As Ed Ricketts said, people are by nature teleologists, and the design metaphor plays to the natural tendency of humans to see purpose in nature.

Chris: Even Dawkins admits this, saying that people may have “purpose on the brain”.

John: Sure. Evolution as a process in time lends itself to teleological metaphors, but they are particularly dangerous here because we frequently don't have the rigor to go along with the metaphor.

Chris: So the issue isn’t the use of metaphors, it’s whether the science can back them up with sufficient rigorousness?

John: Yes.

Chris: I wonder what the consequences would be of people accepting your viewpoint…

John: How so?

Chris: Well let's suppose that your approach proves popular as a solution to the excesses that we both agree occur in the context of the metaphor of design. Because it is the metaphor of design and that fitness-to-environment reading of natural selection that make these topics tractable for discussion with a wider audience, is it not an inevitable consequence of wide acceptance of your proposal that it will become difficult, perhaps impossible, to teach evolution as part of a high school curriculum?

John: No I don't think getting rid of the design metaphor makes evolution less accessible to the public.  In fact, there is a certain unreality and squishiness in the way evolution by natural selection is presented that I think makes evolution harder to understand – it is what I call in the book the separation of adaptedness from existence.

Chris: So you don’t think there’s a need for evolution to become solely a university topic – you believe there is some accessible way to present it without talking about the metaphor of design, selection, and adaptation?

John: Evolution needs to be presented based on evidence for its reality - fossils, homology, etc.  Once the fact of evolution has been established, then the basic mechanism of mutation and differential survival and reproduction can be brought in to explain it.  All of the talk of natural selection and adaptation just obscures.  Or so I think.

Chris: My suspicion is that most people will find your approach too “thin” to be appealing – but of course, it is “thick” where it counts – its empirical foundations are unimpeachable. The question is: are biologists willing to set aside the adaptation-of-features paradigm and admit their ignorance of the evolutionary history of biological features? And even if they are, can this perspective be ‘sold’ to a wider audience who are usually not interested in what we don't know, even though the limits of knowledge frequently have a great deal to teach us?

John: Well, the chance that I will prevail in this argument is rather small – too many are wedded to the current paradigm.

Chris: If the question is whether John Reiss’ account will become the new dominant paradigm, then like you I feel it is extremely unlikely! But if the question is whether you can influence the overturning of the old paradigm then I would be far more hopeful. My stance is that one cannot easily see beyond the boundaries of the current ways of thought (even when working against them!) but I would be surprised and disappointed if your contribution wasn't a part of the story going forward.

John O. Reiss’ book Not by Design: Retiring Darwin’s Watchmaker is published by University of California Press and is available from all good bookstores.

The Problem with Updates

A bit of a rant on ihobo today, in the form of my less-than-unbiased perspective on automatic software updates:

Nowadays, the role of the patch as an optional intervention has been largely replaced by automatic updating – software simply installs its own patches, sometimes with the approval of the user, often under its own auspices. This, I presume, is supposed to have made matters better for all concerned... yet I am at a loss to see where the benefit to me is coming from... While security updates for Windows are certainly a necessity given the number of malicious hackers always looking to take vengeance on the monopolistic media monolith, I am astonished at the number of other software suppliers who use up the resources of my computer and internet bandwidth to robotically check for updates – irrespective of any utility this process may hold for me.

Heidegger's Time vs Spacetime

Martin Heidegger, the most celebrated continental philosopher of the twentieth century, places at the centre of his thoughts the notion of “being in time” – indeed, his magnum opus is called Being and Time (Sein und Zeit). For Heidegger, time was more important to our existence than space. But how should Heidegger's idea be understood in connection with Einstein's theory of spacetime?

The contents of Being and Time are notoriously difficult to appreciate, and doubly so for non-German speakers, since Heidegger draws frequently upon grammatical features of his native language that don't always translate into other languages. The centrepiece of his ideas is Dasein (often translated as “being-there”, although Heidegger denied this was accurate). A shorthand for understanding Dasein is that it means the experience of being aware of time, of the consequences and the circumstances of being positioned in a history we can look back upon, and aware of our death ahead of us. We think of this kind of perspective as being uniquely human – while other animals are certainly beings, only humans are Dasein as far as we know. It is vital to Heidegger's idea that Dasein is always already embodied in time – there is no question of considering human consciousness as a separate subject for him. We humans are thrown into time as Dasein, and it is thoroughly fruitless to doubt external reality; we simply must attend to our 'thrownness' as it comes.

For the purpose of this discussion, the key point to raise is that for Heidegger time was more important to Dasein than space. Where we are born in the world can become quite irrelevant as we move through our lives, as we can relocate, learn from other cultures and so forth. But we cannot move through time in this way – when we are born is of critical importance both to who we are and who we can become. If we were born in ancient Greece we could not possibly become an astronaut, for instance, whereas if we were born into an Amazonian tribe we could still potentially become an astronaut (however unlikely), as long as we were born in this time.

A challenge the physicist may want to raise is that this separating out of time and space is misleading, since what we really exist within is spacetime, a four-dimensional continuum in which time and space cannot be meaningfully treated apart. But it is the physicist here who is misled – they have mistaken a mathematical model as having precedence over the actual terms of our existence (or our Dasein, in Heidegger's term), treating a scientific model as if it were necessarily of higher importance. It's the same kind of error a theist makes when they mistake their concept of God for truth – one may very well claim that God has access to universal truth, but that does not warrant any claim by the believer in God to any such access; to do so is tantamount to blasphemy in conventional religious terms. The physicist's prioritising of the spacetime model over humanity itself engages in a similar kind of immodesty.

Recall Hannah Arendt's commentary on Heisenberg's notes as to how quantum physics changed our understanding of science: the experiment remains a “question put before nature”, and as such the answers of science always remain questions asked by people. We are confused over what is “objective” when we assume that there can be answers without questions and a question-asking being – a Dasein, in Heidegger's terms. Dasein comes first and it alone is primordial to our experiences and knowledge. Science is a set of tools for thought and action that depend first and foremost upon Dasein, upon our being in time. There can be no science without thinking beings to conduct it.

Spacetime, then, is a tool that can be at hand for us to use as Dasein (in understanding the nature of the universe as we observe it, for instance) but it does not mean that the concept of spacetime is more fundamental than the notion of time in the context of being and existence. We all too frequently mistake our scientific tools for objective truth because the modern paradigms of science trick us into thinking that our experimental results, and the theories that organise these, are meaningful independent of humanity. But this is far from the case: they are a product of our being, our Dasein. Another kind of being would not necessarily derive the same theories. A hypothetical entity whose awareness was embodied within spacetime (rather than time) could not arrive at anything like Einstein's theory of general relativity, which has its sense precisely because space and time are separately measurable for us.

Yet Heidegger may have spoken too soon when he said that time was more important to Dasein than space, for this assumption rests securely on the idea that when we talk of spatial separation we are talking solely of distances upon our world. Heidegger never entertained the possibility of sentient beings elsewhere in the universe – of non-terrestrial Dasein. And if this is allowed, then spatial separation could be far more radical than temporality in changing the nature of how Dasein is thrown into the world...

But perhaps we would be wise to heed Daniel Dennett's caution to be wary of science fiction thought experiments, because the strong intuitions they form may be illusory (a warning also hinted at by Wittgenstein). Whether or not there is alien Dasein doesn't matter for us at the moment because there is no question or possibility of our being thrown into their world – we are always already thrown into this one. And for all of us, as Dasein here on Earth at this time, the question of when we live has truly become vastly more important than where.

I'm a PC, and I Hate It

You may have seen the Windows 7 advertisements that make rather spurious claims that Microsoft took guidance from their ordinary users in deciding what to put into the new version. Well, as a person forced to use Windows for work, what I would like to see in Windows is a resource-light, quick-starting, swiftly-running foundation of software that does little more than co-ordinate the hardware layer, allowing me to install what applications I choose over the top, and which is so robustly constructed that it remains in service for decades and not merely years. Computer scientists call it an "operating system". Any chance you could make Windows into one of those?

Does It Pay To Advertise?

Currently, none of my blogs run ads. But I quite often get good traffic through them, and my Google listings are good enough that I now receive spam asking to advertise on one of my blogs. Does anyone have any experience of running blog ads they can share, or an opinion on this topic they'd like to voice? I don't want to inconvenience my readers with adverts for just a few extra pennies, but on the other hand I don't want to turn down revenue that might be available.

Thoughts and opinions welcome!