Top Ten Incredible Philosophy Books No-one Recommends (1): Prelude

Which amazing books of philosophy are the ones that hardly anyone suggests people read? That idle thought set me down a path that culminates in the Top Ten list that begins tomorrow, and concludes next week.

The Top Ten is a cheap trick, a means of adding the vicarious thrill of competition to what would otherwise be just a list. But as it happens, the list itself is also an artifice, a simple game that lures us in by evoking our curiosity. Whatever the number of items in the list, and whether or not it is ordered, we become tempted by the list whenever we think we know something that might appear on it. This confession, that at root what I am doing here is something of a subterfuge, belies my motive for undertaking this exercise. For I am not interested in clickbait so much as I am motivated to find ways to render discussion of philosophy into forms with a modicum of popular appeal. Hence the Top Ten.

As for its subject matter, it emerges from the wonderful yet tragic predicament of the reader of books in our time: we have more books than anyone can possibly read, a situation I lamented in Crisis in the Infinite Library. This thought was fresh in my mind as I finished reading Alasdair MacIntyre’s Three Rival Versions of Moral Enquiry, and it struck me that this was a text no-one I knew of had ever recommended reading, despite its incredible reflections upon our contemporary academic situation. That’s because the book by MacIntyre everyone recommends is After Virtue (which, for context, would have been #9 in the Top Ten, if it had been nominated). I began to wonder: how many other incredible philosophy books are there that no-one is recommending?

I asked for nominations primarily on Twitter and Google+, and referred people to the blog post For a Library of Forgotten Philosophy for the terms and conditions, taking the title of the post from a remark that Babette Babich made in response to my enquiries. I never got a nomination from her, alas, but she made this remark in connection to the practical limitations I was imposing:

…a list of forgotten books cannot be limited. There are bookshelves full of overlooked studies. Each a world, each worth the attention required to read it. But there are topics we like, and topics we don't...

I could not agree more – but the task I had set myself was to produce a Top Ten, primarily because I have been somewhat neglecting my task of ‘popularising philosophy’ recently, and so wanted to do something that was at least ostensibly ‘popular’.

In a move that rather blindsided me, Terence Blake nominated Babich’s book The Hallelujah Effect... This one has been on my reading list for a while, but became even more interesting in the light of Blake’s capsule review that the book was an “excellent treatise on neuro-power, psycho-power, and noo-power.” However, incredible or not, it seemed cruel to me to include so recent a publication in a countdown of philosophical obscurity. If it had qualified for inclusion, it would have been #2 in the list, i.e. extremely obscure, but as I codified the rules I took the precaution of excluding books that weren’t at least twenty years old. A recent publication, I hope and trust, is still being mulled. What I was hoping to focus upon was texts that had already become somewhat lost...

Steadily, slowly, I acquired a set of 12 nominations (including my own nomination of MacIntyre), and to judge their obscurity – since there was no even remotely plausible way to parameterise their incredibleness! – I generated a range of search engine hit counts for each book, based on an exact-title match plus an author name field. I have taken the midpoint of this range of values as the ‘obscurity score’ of the book, so that #1 is the most obscure, and #10 is the least. The list has a number of peculiar qualities, but I feel these add to its charm. In particular, it contains things that some people will feel are not in any way obscure. This, if nothing else, effectively calibrates the exercise.
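For the curious, the scoring amounts to something like the minimal Python sketch below. The titles shown and their hit-count ranges are entirely invented for illustration (the real searches were done by hand, and the actual figures appear in the countdown itself); only the midpoint-and-sort logic reflects the procedure described above.

    # A sketch of the obscurity ranking described above.
    # The hit-count ranges here are invented purely for illustration.
    nominations = {
        "Three Rival Versions of Moral Enquiry": (9_000, 15_000),
        "Illusions": (300_000, 400_000),
        "Frankenstein": (420_000, 470_000),
    }

    def obscurity_score(hit_range):
        low, high = hit_range
        return (low + high) / 2  # midpoint of the search-hit range

    # Sort from fewest hits (most obscure, #1) to most hits (least obscure).
    ranked = sorted(nominations, key=lambda title: obscurity_score(nominations[title]))
    for position, title in enumerate(ranked, start=1):
        print(f"#{position}: {title} ({obscurity_score(nominations[title]):,.0f} hits)")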

Two nominations didn’t make the list, of course, since it was capped to ten. These were #11, Richard Bach’s Illusions (nominated by Brenda Holloway) and #12, Mary Shelley’s Frankenstein (nominated by Lee Douglas). These pulled in well over 346,000 and 444,000 hits respectively. Some of the books you see in this list might not seem to qualify as philosophy, but I was open in my criteria in this regard: as long as the nominator felt it was a work of philosophy, I allowed it. I have no interest in erecting a boundary fence around whatever ‘legitimate philosophy’ might be… what I wanted to do was explore obscurity in philosophical writing, and I have thoroughly enjoyed my adventures in doing so.

The first half of Top Ten Incredible Philosophy Books No-one Recommends, charting the nominations from least obscure to the most, begins tomorrow, with the concluding part next week. Hope you enjoy the ride!

Tomorrow: From 10 to 6

With especial thanks to Babette Babich, Terence Blake, Lee Douglas, Stefano Gualeni, Will H., Benjamin E. Hardisty, Brenda Holloway, J. Moesgaard, Michael Pereira, Greg Sadler, Bart Stewart, Judith Stout, and Adrian Voce.


For a Library of Forgotten Philosophy

I’m looking for nominations for “Incredible Philosophy Books No-one Recommends”, a future piece here at Only a Game exploring texts outside the mainstream corpus of philosophy. These could be great books by obscure philosophers, or they could simply be overshadowed masterpieces by well-known philosophers that get overlooked because other texts have become standard (for instance, Wittgenstein’s On Certainty is massively overshadowed by the Tractatus Logico-Philosophicus and Philosophical Investigations).

Please focus on books at least twenty years old, as younger philosophy books are still earning their stripes, and I’d be grateful if the books in question had an English translation too. Popular bestsellers are also rather beside the point, of course. Other than that, anything that sits outside the usual philosophy curriculum is a welcome nomination.

Suggestions by comment here, or by reply on Twitter, will be gratefully received.

With thanks to Babette Babich for the title of this post.

 

Update: nominations are now closed. The countdown begins Monday 25th July.


Prezi: Cyberfetish and the World of Tomorrow

For those of you who have brought a suitable device to the Red Gallery (or for interested souls not able to make it to the Futurism v Fatalism event), here is the Prezi for my presentation Cyberfetish and the World of Tomorrow so you can explore it with me:

Click the button to start the Prezi, then use the arrows (or arrow keys) to advance the slides, or you can explore the content freely by zooming in and out and dragging the canvas. I also recommend using the button in the bottom right to put it into full screen. You can also view it over at the Prezi website by following this link for the Cyberfetish and the World of Tomorrow Prezi.


Wherefore Philosophy? Whence Emotions?

An open letter to Chris Billows responding to his blog-letter Depths, Mirrors, and Mine Detectors at The Journals of Doc Surge as part of the Republic of Bloggers. Further replies welcome!

Dear Chris,

A particularly disturbing aspect of the era in which we live is the way some people will admit to not understanding something, yet confidently dismiss or condemn it anyway. We have taken to using a suffix derived from the Greek ‘phobos’ – fear – to describe such hatred... homophobia, Islamophobia and so forth. The arguments against such reactions, which it seems really do involve fear, tend to invoke our ideals of diversity, a move that cannot work in practice and tends to lead to what I have called intolerant tolerance – the hatred of haters.

What a muddle we have all made of things! So it is that you yourself can claim a commitment to diversity, then pour scorn on the practitioners of an entire discipline who are “missing the boat”, who engage in activities that are “complicated, prescriptive, and arcane”, “limited”, and that amount to “mental masturbation”. Wow – can’t help but wonder why you would want a discourse with me at all given such a terrible assessment! Joking aside, your phobosophy is not really your unique possession, but in fact a structural problem of our time – one well worth examining. (Doesn’t it seem like it should be ‘philophobia’? But that would be ‘fear of love’, which would be a very different problem!) If you genuinely want to make diversity your ideal, you need to understand how this rejection of philosophy undermines rather than supports your position, and for that I would first have to offer a different image of philosophy.

Hence the first question that heads this letter: ‘Wherefore philosophy?’, meaning ‘What is the purpose or reason for philosophy?’ And here we need to begin by clearing up the confusion we have created around the difference between a person’s philosophy and the discipline (or disciplines) of philosophy. For Brian Eno is correct that you can’t avoid doing philosophy, and thus everybody does it – although often badly. And unfortunately being a philosopher does not necessarily mean that you can help people with their philosophy, in part because of the insane specialisation that infects academia today, which has emerged from the formalisation of the sciences. It can indeed seem that academic philosophy is a waste of time – but that’s also true of a lot of scientific research, which oddly is rarely accused of this. And this points to the first of three crises of contemporary philosophy.

The first crisis is that ever since the sciences split from philosophy there has been a tendency to see this division as making philosophy redundant, instead of seeing that as well as continuing what was once called ‘natural philosophy’, scientists have begun doing other kinds of philosophy badly. Let’s call this the counter-philosophy revolt – the desire to tear down what philosophers do, and to fail to recognise what is replacing it. Secondly, in response to the revolt, philosophers have increasingly allied with those voices in positivism (i.e. the elevation of the sciences) most hostile to philosophy, perhaps thinking that aiding the sciences is now the only worthwhile task philosophy can perform. Call this collaboration. Lastly, and crucially, philosophy is assumed (as you say) to “make claims to provide a deeper understanding about life and its problems” – call this the authority on life problem.

I take your core complaint to be that philosophy is a poor guide for life if it ignores the emotions. My rebuttal has two elements. Firstly, why would you think philosophy ignores the emotions? In my experience, it is solely the collaborators who fall prey to this. Secondly, why would you think studying philosophical problems would grant authority at all? Perhaps the single greatest achievement of Modern Philosophy (a period, incidentally, that ended about a century ago) was the invention of autonomy, and thus our potential liberation from all centralised claims to authority, like that of the Christian church you criticised two letters ago, or the alleged authority of a rather nebulous thing called ‘Science’ invoked by counter-philosophy.

There may be no better place to start than looking at where contemporary ideas about emotions come from – namely Modern Philosophy. Hence the second part of my title: ‘Whence emotions?’

 

The Passions of Philosophers Past

Both Modern Philosophy and the word ‘emotion’ begin in the 17th century with Descartes. There is not a single philosopher in this era who views the emotions – or, as they are more commonly known at this time, ‘the passions’ – as anything less than an indispensable element of human life. It is Descartes’ 1649 Passions of the Soul that gives us the first systematic study of what we now call the emotions, although that particular word (which Descartes coined) meant little more than ‘motion’ (i.e. movement) at the time. Other terms in use in this century include ‘affect’ (particularly with Spinoza) and ‘sentiment’ (especially among British philosophers). As for the passions, this term was often reserved for those ‘violent’ feelings that were either particularly agitated or unresponsive to reason.

Questions about our emotional lives were the exclusive purview of philosophers at this time, since ‘science’ was just a synonym for ‘knowledge’, and (as I already noted) what we would call ‘science’ was known then as ‘natural philosophy’. A good half of Spinoza’s monumental Ethics in 1675 is concerned with defining and categorising the ‘affects’ and contemplating the possibility of freedom, discussions that obviously built upon Descartes. Spinoza, however, denied we could gain control over our passions – an argument that in many respects lives on today – and had a rather low opinion of every feeling more extreme than the kind of moderate joy that comes from being active. It is Spinoza who first puts reason and the passions into opposition, a tendency that many today – you included – have inherited.

In the middle of the eighteenth century, the Scottish philosopher (and incorrigible wag) David Hume offered an even more refined account of the passions, which he divided into ‘calm’ and ‘violent’ passions while noting that even calm passions can be strong and violent passions can still be weak. His most innovative idea in this respect may be to suggest that the passions are what motivates all our actions, and that reason would be impossible without them. His infamous remark that “reason is, and ought only to be the slave of the passions” is still widely discussed today. In addition to his own substantial contributions to philosophy, Hume has the distinction of inspiring Kant, who famously described this experience as being wakened from his “dogmatic slumber”.

At the close of the eighteenth century, Kant develops an anthropology that has three different terms for what we would now call emotions. Two of them – affects and passions – are judged as hindrances, entailing a lack of morality and a motivation towards ‘evil’ respectively. Yet the third, feelings, is viewed as an aid to moral thinking, and indeed as conducive to virtuous living. Despite the popular view of Kant as emotionally repressive, he continues the general tradition of Modern Philosophy in holding our emotional constitution essential to a life well-lived.

 

Logical Seduction

So if Modern Philosophy did not, as you erroneously alleged, exclude our emotional lives from importance – and, indeed, placed great value upon this side of being – where did it all go wrong? A full answer to this question goes far beyond what I can hope to cover here, but the shortest answer I can give is that excessive faith in the sciences messed everything up, and is still causing problems today. It is not that the sciences don’t do good work or aren’t useful, but rather there has been a kind of logical seduction that has affected primarily English-language philosophy, collectively termed Analytic Philosophy and contrasted (somewhat derisively) with ‘Continental’ Philosophy. Both Mary Midgley and I tend to point fingers at an early twentieth century movement known as the Logical Positivists, who seemed convinced it would be a simple matter to exorcise humanity of everything that was not the sure and certain revelations of empirical research. They were so very, very wrong about this, yet they still have conceptual descendants today.

The upshot of this is that treating philosophy as a monolithic enterprise and then making general pronouncements about it isn’t going to get anyone very far, since the range of different methods and perspectives on offer within the field is vast. Those suffering from logical seduction are quick to make the accusation that the variety of positions within philosophy shows it must be deeply flawed in some way. The assumption is that if there is only one true world, all valid investigations must converge. E.O. Wilson calls this consilience (although the term was originally coined by the Modern Philosopher William Whewell) and there is clearly some validity to the idea of evidence converging. However, I urge anyone truly committed to diversity to be cautious about such simplistic unifying principles, since there is an ever-present risk of claiming a god-like capacity to adjudicate all truth-claims in an absolute manner, in total denial of the plurality of human existence. Here is a context where your concerns about narrow cognitive perspectives can indeed be manifested, and while it is a philosophy, it is one that is primarily peddled by certain scientists and their collaborators.

So wherefore philosophy? What is the reason to persist with an activity that can’t even agree with itself? Well, for a start, nothing is going to make philosophy go away, so it would seem prudent for at least a few people to try to do it well. Also, lack of unequivocal consensus doesn’t seem to bother us in art, history, sport, literature and so forth, so why single out philosophy? The answer appears to be the aforementioned authority on life problem: people tend to think philosophy is claiming to have the ‘ultimate answers’, even though no philosopher I know ever makes this assertion. The philosopher has been confused with the prophet, to everyone’s loss. Philosophy is so much more about exploring questions than it is about providing unshakeable answers, and the importance of this skill is all too easily missed.

The eclecticism of philosophy stems from the near-infinite space of ideas: rejecting philosophy as a discipline because of that vastness may simplify what a person feels they ‘need to know’, but it can’t plausibly change the true dimensions of the realm of possible concepts. Furthermore, people should not feel – as I fear explains the tendency to phobosophy – that they must oppose philosophy or else be condemned to get involved with its horrendous intricacies, as if no-one could truly claim knowledge without either practicing or negating philosophy. We are happy to defer both empirical and historical research to experts in those fields; we should feel the same way about what might be called the technical problems of philosophy. You are not lacking something essential if you can’t explain how Modern Philosophy established talk about the emotions, any more than you are deficient in not being able to explain 14th century crop rotation – nor micro-crystallography for that matter! No-one – quite literally! – can know everything, and that truth does not require anybody to denigrate anything.

In her forthcoming book, What Is Philosophy For?, Mary Midgley provides the following explanation of our academic discipline:

...the philosophers’ business is not – as some people mistakenly think – merely to look inward. It is to organise what concerns everybody. Philosophy aims to bring together those aspects of life that have not yet been properly connected so as to make a more coherent, more workable world-picture. And that coherent world-picture is not a private luxury. It’s something we all need for our lives.

The point being, once again, that we all do philosophy, and the philosopher is merely someone who has dedicated more time to it, and has perhaps been drawn into working upon certain specific complexities. Few philosophers are certain that this habit makes them better at living life (like scientists, we tend to get awfully wrapped up in our abstruse problems!), but every philosopher hopes to clear up some persistent confusions, or to provide a better understanding of a certain problem. That’s why Isabelle Stengers and Philippe Pignarre talk about philosophers as ‘sounders of the depths’ – and isn’t this a form of what you are calling a mine detector?

Let me close with another apposite quote from one of my philosophical correspondents, Allen Wood:

Reason and emotion are not opposites: emotions – even irrational ones – always have some degree of rational content, and healthy emotions are indispensable vehicles of rationality.

For Wood, the ‘cognitive intellect’ probably does count as a primary tool in the human toolbox, as you say in your letter, but even he does not deny the importance of our emotions. I’m not sure who does... maybe the die-hard consequentialists who think morality can be calculated? Whoever it is, it’s certainly not me. I am acutely aware that curiosity, compassion, and satisfaction are core emotional components of my philosophical inclinations. Neither is it enough for me to pursue my work in isolation: if I cannot share it, there is no point in doing it at all. Which is precisely why the letters you and I exchange are so important to me.

With great love and respect,

Chris.

The opening image is an untitled oil painting by KwangHo Shin, which I found here, and which may have originated from his Behance site at www.behance.net/ShinKwangHo. As ever, no copyright infringement is intended and I will take the image down if asked.


Allen Wood on Free Will

The indefatigable Allen Wood recently sent me this reply to my piece Is Free Will Too Cheap? which I post here with his permission, and with its original US English spellings.

Dear Chris,

Very good post. Having just plowed through one tome of mine, this may not come to you as welcome news, but a new tome has just appeared [Fichte’s Ethical Thought].

The arguments to which you refer about Fichte on freedom are reprised in the first half of Ch. 3 of this book. More generally, I think Fichte was on to the kinds of views you're discussing. He called them ‘dogmatism’ and insisted that transcendental philosophy is the only way to avoid them. My book talks about this, especially in Ch. 2.

I have said – and still believe – that if there is a solution to the traditional problem of free will (“How does our freedom to choose fit into our objective conception of the natural world?”) then it would have to be a compatibilist one. Unfortunately, however, it does not follow from this that any form of compatibilism is a defensible position. The traditional problem of free will, so understood, may be insoluble. I would reject my colleague Tim O’Connor's views too, since they involve a supernaturalist way of solving the problem. They too are trying to fit free will into some conception of the objective world. It’s just that they include supernature as well as nature. I don't find supernaturalism a defensible position since there is no good evidence for it. The fact that we can't solve the free will problem is no evidence for anything except that we can’t solve the free will problem.

Hume is usually understood as a compatibilist, and in the Enquiry, he does describe his view about the causal determination of the will and the conditions of moral responsibility as a “reconciliation project.” But for reasons of literary popularity, Hume was trying to be audience-friendly in the Enquiry and to downplay the paradoxical side of his views. In the Treatise, he is more candid and shocking. His view is that we lack free will – our every action is causally determined by particular passions or other motives. But far from its being the case that this destroys moral responsibility, Hume argues that it is a necessary condition of moral responsibility. That shocking paradox – which can’t be described as compatibilism about free will and determinism, since it supports only determinism and denies free will – is his real view.

In short: Those who call Hume a compatibilist are whitewashing his views (and probably their own as well). In the Treatise, Hume is being more candid. He's not reconciling anything, or showing anything to be compatible. He is claiming baldly and bluntly that free will is incompatible with moral responsibility.

The remark you quote from Ramachandran, and the view ascribed in your post to Crick, put them, and those who agree with them, in the following position: Either (a) they, as “scientists,” are mysteriously exempt from what they say about the rest of us, or else (b) their own claims – that none of us exist, that none of us understand why we do what we do, that nothing of what we believe about ourselves is true – are self-discrediting. For if their views are true about themselves, then they are in no position to assert those views and can have no reasons for them. For they do not exist, and whatever they think about themselves – including the science that they believe in – is an elaborate post-hoc rationalization that bears no relation to the truth. The same would of course be true of us if we became convinced of their views, and so our being convinced of their views would involve the same illusion.

One has to suppose that they do not intend to exempt themselves from the human condition that their views describe – although sometimes one has to wonder about this. One of my favorite movie lines comes early in Ghostbusters. A lady has just seen a ghost, and Bill Murray, in cross-examining her, asks her insultingly if this is “her time of the month.” Another guy wonders if this is a proper question for him to ask. Bill Murray replies: “Back off, man. I’m a scientist!” A lot of scientist-philosophers seem to take the same attitude toward their audience (namely, us).

I think it has to be admitted that their views might be true, but if they are, then neither they nor we nor anybody else (except a God or pure intelligence who is exempt from the conditions of human cognition) could ever be in a position to know or justifiably to believe that their views are true. And if their views include (as they usually do) that disembodied cognition is impossible, then no such divine or pure intelligence could exist either.

Best regards,

Allen

PS: Relating to the quotation from Crick, I should also have quoted a remark from one of my favorite writers – Robert Benchley, a writer for the New Yorker for many years. In one of his articles, ‘Did You Know That...?’, he satirizes columns in magazines and newspapers that purport to inform you of little known and paradoxical truths. On his (absurd) list of these supposed truths is the following: “No one has ever actually seen the Brooklyn Bridge. It is merely the action of light waves on the retina of the eye.” Crick’s quoted statement reminded me of that.


Is Free Will Too Cheap?

Do we possess a genuine capacity to choose, or is our sense of agency always an illusion? Or to put it another way: is it free will or just a cheap trick?

“Your conscious life”, neurologist Vilayanur Ramachandran declares, “is nothing but an elaborate post-hoc rationalization of things you really do for other reasons”. This is the general position of a disparate group of researchers who insist that free will is merely an illusion, a self-deception we conduct upon ourselves. That we frequently deceive ourselves in these ways is hardly news – philosophers and clergy from the sixteenth century onwards were already discussing this oh-so-human capacity, and older references can also be found. What has made these ‘new illusionists’ into something newsworthy has been their willingness to inflate these claims into the broadest strokes: it is not merely that there is a risk of self-deceit, rather there is no free will at all, because what really motivates human action never occurs at a conscious level.

The historical context for the contemporary dismissal of our conscious lives has been concisely discussed by Mary Midgley in her book Are You An Illusion? She quotes the co-discoverer of the structure of DNA, Francis Crick, as giving one of the clearest examples of this fairly recent trend, when he wrote (in 1994): “You, your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behaviour of a vast assembly of nerve cells and their attendant molecules.” Midgley raises a proverbial eyebrow at Crick’s use of ‘in fact’, and justifiably questions whether what we are dealing with here really is a factual concern. This is clearly a case of scientists muscling in on philosophical turf – and one should always be careful when claiming authority over somewhere that’s already populated.

As it happens, the new illusionists are latecomers to a game that is as old as philosophy – and in this case, both in its Western and its Eastern traditions. The core of the philosophical conflict has primarily been over the question of how human freedom relates to the apparent causal nature of the physical world. The common sense perspective is philosophically defended by supporters of what are known as libertarian positions – we have a genuine capacity to choose. This was a particularly important argument in Medieval Europe since such freedom had both theological and juridical implications: you cannot blame someone unless they have responsibility for their own actions. But there are two other main camps opposed to this understanding.

The incompatibilist position is arguably the default position on free will at the moment. Since effect follows cause so reliably (the standard argument goes), free will would seem to be excluded, since in a deterministic universe there is no room for it. Free choice must mean the possibility of having chosen differently – which determinism presumably excludes. The new illusionists come at this position armed with experiments that purport to provide proof of a slightly different kind – namely that our conscious deliberations aren’t evidence of free decision making anyway, since we act unconsciously. The outcome of such lines of attack is still firmly incompatibilist, however: free will is excluded by arguments about (psychological) causation.

Back in 1738, the Scottish philosopher and intellectual rake David Hume turned the two established positions over free will on their head with a robust compatibilist argument. Libertarian claims could not be correct since to suggest a different decision could have been made amounted to breaking causality by bringing in a random element. Yet if chance was involved in our decision making, we could hardly claim to have ownership over our choices! Hume carefully defined necessity and liberty and demonstrated an absence of contradiction. The assumption that ‘we could have chosen differently’ has a rather suspicious meaning when it is examined closely, and we need determinism to make sense of our choices (Hume suggests), lest they unravel into randomness.

The argument advanced by Hume, if accepted, would destroy the credibility of every hokey science fiction tale that hangs on a branching timeline where a character makes different choices. Such stories no longer make a lick of sense: from where could this different outcome spring, exactly? If it comes from chance, there can be no coherent claim for the will to command our actions. But if it comes from elsewhere, are we committed to some extra-physical component of mind to explain how multiple outcomes could emerge from the same decision?

Free will cannot mean that in a parallel universe you chose differently: a different outcome would mark a different person. This has been my understanding of the problem of free will ever since I read Hume. We make an utterly metaphysical (i.e. untestable) assumption when we think ‘choosing freely’ must mean the possibility of different outcomes, since we only ever exist within time, and within just one sequence of events. Incompatibilist arguments are making untestable assumptions on a grand yet oft-unnoticed scale. But whilst I found Hume's arguments very compelling on this subject, something always felt out of place in his account. It has taken me some time to track it down.

In a fascinating book thriving in the interface between philosophy and empirical research, Nancey Murphy, George Ellis, and Timothy O’Connor collect a host of perspectives on what contemporary neurobiology means for the discussion of free will. Entitled Downward Causation and the Neurobiology of Free Will, the most liberating aspect of this volume’s eclectic discussions is its clear recognition that the traditional debate about free will has run aground over its assumptions about determinism. In a decidedly Humean move, the editors recognise that both libertarian and incompatibilist arguments rest upon a conception of causality that is essentially reductionistic – they presuppose a single dimension of causation, from the ‘bottom’ upwards.

Contemporary researchers give ample examples of emergent behaviours that contradict reductionism, and thus display what can be termed downward causation. In a 2006 paper, for instance, philosopher Robert C. Bishop points to Rayleigh-Bénard convection as a simple paradigm case: self-organising non-linear structures in heated fluids (convection cells) must be understood at a scale above that of individual molecules or the entire phenomenon is incomprehensible. Bishop correctly recognises the importance of this case for metaphysics and philosophy of mind, since it demonstrates (without any tortuous assumptions) that treating causal relations as merely one-directional isn't even sufficient for physical systems, let alone living organisms or complex minds.

For the traditional arguments regarding free will, the entire conceptual framework is thrown open by bringing into doubt the more simplistic conceptions of causality. It is no longer plausible to assign responsibility for cause and effect in a purely linear and reductionistic fashion, as if atoms were the sole foundational element of reality. Rather, there is a growing recognition in both philosophy and the sciences that downward causation is both plausible and indeed necessary to explain all manner of complex systems. In the case of human behaviour in particular, our symbolic faculties, such as language and mathematics, create spaces for downward causation whereby understanding what counts as a ‘cause’ has to mean much more than simply reducing our focus to the sub-atomic.

As the 2012 volume (which is based on a workshop from 2007) explores, these changes in empirical understanding massively reframe the free will argument, and undermine new illusionist claims. Two such viewpoints are discussed at length: Benjamin Libet’s neurobiological research that suggests our brains begin responding before we are consciously aware of willing an action, and Daniel Wegner’s psychological work separating the feeling of volition from the mental causes of action. Both Wegner and Libet are looking to deflate conventional views on free will. The philosophical push-back demonstrates once again a confusion of concepts, particularly in Timothy O’Connor's chapter, which simultaneously debugs both the new illusionists and conventional understandings of free will, sketching a new framework for understanding volitional behaviour.

In the broadest strokes, however, all the new illusionists are offering self-defeating accounts. This is a point mentioned by numerous authors in the aforementioned anthology, and also by Allen Wood in his discussion of Fichte’s notions of absolute freedom. The essential problem is that if, as Ramachandran and others assert, our entire conscious life is an illusion, there can be no scientific investigation of any credence – these too must be reduced to “elaborate post-hoc rationalization”. Fichte’s arguments, from the tail end of Hume's century but buffed-up by Wood’s contemporary scholarship, form a sharp point of rebuttal. If there is no free will, then there can be no concept whatsoever of understanding, at least as it is usually considered. To understand inherently implies a wavering between possibilities before settling upon one as the adequate explanation. All the sciences depend upon this mental phenomenon. Yet if determinism destroys the possibility of free will (or, in the contemporary argument, conscious thought is mere confabulation) this must also make the sciences impossible, since this wavering between possibilities is the essence of free decision-making.

Thus it transpires that it’s the new illusionist arguments that are far too cheap to be taken seriously. But by engaging with them, philosophers once again show the benefits of inter-disciplinary discourse, and the productive gains available when the sciences exchange ideas with philosophy. The classical free will problem is not resolved (nor can it be, because of its inherent metaphysical assumptions), but perhaps we have at long last begun to move beyond it into a new and productive understanding of the relationships between volition and action.

The opening image is Hilma af Klint’s Free Will (1907). No copyright infringement is entailed by displaying this image.


Ontology as Non-Theology

To speak of ontology is to speak of being, to say what exists, or how it exists, or how the things that exist are related, while to speak of gods or God is what is called theology. Every theology is necessarily a form of ontology – it takes a specific position on what exists – but not every ontology is a form of theology. However, every ontology is and must be, at the very least, a non-theology. Which is to say, you can’t talk about being or existence without at some point crossing into religious territory, however tangentially.

Let’s get some helpful maps for entering this rather contentious territory.

Either you have an image of God, or of gods, in your mind, and it matters to you, or you have an image of the impossibility of deities and that matters to you, or you do not find images of divine entities of any particular relevance for how you think about existence. These three positions correspond to the religious and non-religious stances of theism (e.g. Christianity, Islam), atheism (e.g. secular humanism, Marxism), and non-theism (e.g. Theravada Buddhism, Jainism). That makes it sound as if you cannot get by without positioning your view of existence (your ontology) in respect to images of God or gods, even though you quite obviously can – it is just that when you are confronted about theology, whatever stand you take must be positioned somewhere within the space of this particular game.

But it is not just theology that everyone is forced into a position upon; ontology is equally inescapable: everybody who speaks has an ontology – even if it is just all the names of all the things that they know about. Your ontology is the set of things you can say exist, and this process is well established in us long before language gives us names for them. Some philosophers develop quite intricate systems for describing how things are, or for setting the limits of what can be known about how things are, in what could be called technical ontologies. However, despite the care with which it is practiced, ontology is not a subject prone to widespread agreement: the number and kinds of ontology are limitless, and all of the more sophisticated ontologies come with a recognition of the limitations of this kind of thinking.

Kant’s Critique of Pure Reason (1781) sets the pattern here. While ontology in one form or another goes back at least three millennia to the Sanskrit scriptures known as the Vedas and to the ancient Greek philosophers, the way we think about technical ontologies remains coloured by the work of Kant during the Enlightenment. As Theodor Adorno has commented, Kant recognised that an ontology “exists only for others” and thus has no meaning outside of lived experience, while he simultaneously tried to conduct a “salvaging of ontology” as something beyond experience. The tension between these two elements of Kant’s ontological work has never gone away.

Today, philosophers can be broadly divided into two camps. Firstly, there are those who have continued to pursue Kant’s project of ontological rescue, and who are engaged in trying to construct ontologies that can be claimed to go beyond experience. My personal favourite of these is Alain Badiou, who identifies ontology with mathematics (set theory in particular), and then reasons about ontology by using maths as his foundation. On the other hand, there are those who have taken up Kant’s project of ontological critique, and who are primarily engaged in situating ontologies (including theologies) as elements of a plurality. Here I have a fondness for Paul Feyerabend, who found technical ontology less than useful, and was dismissive of what Terence Blake calls “the detour through ontology”.

What ontology and theology have in common, what binds them together as conceptual sisters, is that both are about how we imagine existence. If we did not live in a world so heavily conditioned by theistic traditions, it might not even be necessary to distinguish between these two practices – but our intellectual inheritance is inescapably coloured by the Judaic concept of history, the Islamic reinterpretation of Greek philosophy and mathematics, and the Christian faith in truth, which descends from the earlier monotheistic practices and has given rise to the tradition of scientific investigation. The core danger of trying to paint our time as one where religion and science ‘fight’ is that it obscures how the key battlegrounds are within the sciences and within religious (and non-religious) thought, as I drew out of the discussion within The Mythology of Evolution. Despite the ‘official story’, the majority of those who believe that the sciences uncover the truth about the world are Christian, and the most vociferously asserted theologies in the public sphere are atheologies that insist upon crossing out all gods.

Theology, including atheology, always possesses a moral element (or an aesthetic element – the distinction is not always important). Take any of the atheologies being deployed today and you will always find behind it a (moral) commitment to truth. Since gods clearly do not exist (the logic goes), we must commit ourselves to an atheology where gods are not an acceptable part of our thought. This position is undergirded by a prior commitment to the importance of truth. It is because gods are not true that we must reject them. The theological positions are generally more varied, and include those that are a direct inversion of the standard atheology (starting once more with the commitment to truth), as well as others in which God serves as a distant moral guarantor (which was broadly Kant’s position), or in which a moral order is otherwise given a divine foundation.

Now in the case of contemporary technical ontologies, the moral element may appear to be absent, and this could be taken as a justification for not linking these systems of thought with theology at all. However, this is not as straightforward as it might appear. Many non-theological ontologies begin with the same (moral) commitment to truth as other theologies/atheologies, even if that prior moral claim is sometimes obscured by a claim to nihilism, usually developed with (or perhaps against) Nietzsche. But nihilism is essentially a self-negating position for philosophers: if it were plausible to void all truth and meaning, there would be no viable topics for any philosophy to address. Only the rather limited claim that ‘the universe in itself is devoid of value or meaning’ is available, and this is a terribly uninteresting observation until it enters theology, where it becomes a rather straightforward atheological claim.

Even those technical ontologies that do not begin with the moral commitment to truth cannot avoid entering into moral territory later. Once you make a claim for how existence is organised or can be understood it is hard to avoid this becoming a demand to understand in this way (or something like it) on pain of error. If the ontologist did not have this kind of commitment to truth before theorising, and they manage to avoid acquiring it afterwards, then what motive would they have for sharing their ontology? There is always a moral value here, even if it is concealed behind epistemic justifications. We should expect this: no-one is going to pursue ontology or theology without a motive, and that motive will always contain a moral (or aesthetic) element.

Tolstoy claimed that it was impossible for a person to have no religion, in the sense that this word means a relationship to the universe. This statement no longer seems as self-evident as it did a century and a half ago because the meaning of ‘religion’ has become mangled through its deployment as a caricatured ‘enemy’ to be fought… those whose self-image is founded upon ‘fighting religion’ are effectively barred from considering how this practice might also seem like a religion when viewed from the outside. It was for this reason that I began to talk of non-religions, and for equivalent but opposite reasons that others talk of ‘worldviews’. Technical ontologies scrupulously avoid overt religious elements, but they cannot entirely avoid operating as non-religions, because you simply cannot talk about existence without taking some kind of moral (or aesthetic) stand upon it.

Thus ontology can be understood as non-theology, as a means of conducting the same kind of how-and-why-things-are-this-way discussions that occur within theology – the ‘Queen of the sciences’ as it was once known – without having to take any particular positive or negative view on the existence or otherwise of divine forces. Except, of course, they always do. How can they not! You can’t have a system for summing up existence and yet never be required to take a theological stand when the vast majority of the planet constrains their ontological concerns to those of theology. These two practices are twinned; they are distinct, but they can never be separated while theology is still being practiced. Accepting this proposition doesn’t mean that everyone has to be a theologian – but it does mean that you can’t practice ontology without at least brushing up against theology. And good fences, as they say, make for good neighbours.

The opening image is David Chidgey’s Music of the Spheres, which I found here on his website Art Glass Mosaics. As ever, no copyright infringement is implied and I will take the image down if asked.


Wikipedia Knows Nothing, the Book

It gives me great pleasure to announce that I have finished the draft manuscript of my latest philosophy book, Wikipedia Knows Nothing. Based upon content I’ve been writing here on the blog over the last year, but substantially revised and expanded, this is my first work of epistemology, building upon the idea of knowledge as a practice as well as the concept of a multiverse that I developed in Chaos Ethics. I have a publisher in mind, but I won’t announce that until the deal has been made.

For now, I’m asking any of the stalwarts here at Only a Game if they would like to be a pre-reader. The book is only 24,243 words long, and thus a much lighter undertaking than most of my epic tomes. If you’d like me to send you a copy of the unedited manuscript, contact me through the usual channels.

Many thanks to everyone who helped me get this far, and especially to Chris Billows whose interest in the topic was hugely influential in the decision to take this further, and translucy (where are you now?) for giving me the idea for the Republic of Bloggers, which features in the book. Also, thanks to the Wikipedia for giving this topic an accessible point of entry, and which might know more than the book’s title lets on…


The Subjective Knowledge of Squirrels

If you startle a grey squirrel who is foraging on the ground, they will immediately dash towards the nearest tree, run up and around it until they cannot be seen by you, then climb upwards as much as needed in order to feel somewhat safe. If you pursue the squirrel, they will shoot up the tree, along the branches, and head for the point in the foliage of the canopy where another tree overlaps, or is at least fairly close by. When the neighbouring trees do not touch, the squirrel who is avoiding your unwanted attention will leap from its original tree across to a branch on the next one – sometimes arriving on nothing more than a twig, which may bend worryingly downwards under its weight. The squirrel, while visibly perturbed by such an event, quickly clambers to the secure footing on the other tree and can be gone from your sight almost before you have noticed.

Now the squirrel’s capacity to negotiate trees not only exceeds our own, but their knowledge of trees and their surfaces exceeds even our ability to conceptualise an adequate parallel. Next to this arboreal labyrinth, our flat pathways and roads are positively dull exercises in trivial navigation, while the world of the tree squirrels is one of possibilities and intersections along fractal routes they can traverse with consummate skill. It is something that we can only barely imagine: like Thomas Nagel denying we could know what it is like to be a bat, the squirrels’ experience of trees is fundamentally barred to us. We would not, it is clear, want to describe the sure and certain agility of squirrels as being objective knowledge, no matter how assuredly they traverse the canopy. Yet if the capabilities of squirrels are to be understood as subjective knowledge, we are forced to admit that such knowledge can be every bit as reliable as what is produced by the sciences – and a damn sight more applicable to everyday life!

Suppose we understand objective knowledge in the manner proposed by Isabelle Stengers, as something produced through the painstaking creation of methods of investigation. When scientists manage to produce something that can resist all objections, it forms a reliable witness on some topic, albeit in a rather narrow fashion. Objective knowledge necessarily tends towards this kind of partial quality, and would not by itself be a plausible way of being part of any world: someone who solely understood gravity, quantum mechanics, organic chemistry, and cell mitosis would be thoroughly ill-equipped for life. Such a narrow field of vision inevitably follows from the complex compositions of things that must be coaxed into developing an objectivity for those who are working with them.

Accepting the myopic quality of our various and impressive scientific apparatus makes the contrast between objective and subjective knowledge feel rather claustrophobic… the vast majority of knowledge lacks this quality of objectivity, of belonging to networks of objects that give a voice to something that cannot speak without us. Yet these other, subjective knowledges, while lacking the robustness of their artificially created cousins, are still capable of being reliable witnesses too. We can trust a builder to know how to construct a house, a mechanic to know how to repair an engine, a chef to know how to bake a soufflé, an artist to know how to blend paint to make a certain hue, or a musician to know how to harmonise. Likewise, we can trust a squirrel to climb a tree.

We have all been inculcated into an understanding that subjective knowledge is unreliable, a situation that comes out of contrasting it with objective knowledge, which is deemed reliable, essentially by definition. We should not trust our own judgements as they are merely subjective, but we should trust what is reported by scientists as this is objective fact. But if reliability is our purpose, subjective knowledge is just as capable of producing it as its cousin, and furthermore the methods that produce objective knowledge are just as capable of error as anything else. To align objectivity with reliability, and subjectivity with error, is to misunderstand the core qualities of skilful practices while artificially canonising scientific techniques with an infallibility that is thoroughly undeserved.

This dichotomy between the subjective and the objective has been handed down to us over nearly four centuries from original ideas in Descartes’ Meditations on First Philosophy, first published in 1641, which imagines a thinking subject (“I”) and contrasts it with the world of extension and matter it is situated within. Mind is thus contrasted with matter in Descartes’ proposal, and the fingerprints of this Cartesian dualism are found everywhere today, as Hubert Dreyfus and Charles Taylor observe, even among people who claim to repudiate Descartes. For instance, those who turn against the dualism of mind and matter that was core to Descartes’ account often do so by deflating the significance of mind, thus raising matter to centre stage by suggesting mind simply emerges from the action of matter. Such materialist apples have not fallen far from their dualist tree.

Drawing upon the work of the two pivotal figures of twentieth century philosophy, Martin Heidegger and Ludwig Wittgenstein, Dreyfus and Taylor offer an alternative understanding of our relationship with the world – one that is not based upon our being trapped inside the disconnected mind implied by the famous image of a brain-in-a-vat. They propose instead that we should understand ourselves as in contact with the world we live within. Rather than our minds forming images of an ‘outside’ world that they are forever separated from (the Cartesian mythos of mind versus matter) they suggest that we have a direct encounter with reality that utterly straddles the supposed divide between subject and object.

This contact theory (as they term it) is strongly rooted in Heidegger’s idea of being-in-the-world, and leads to a sense that the experiences we usually term ‘subjective’ are at base a co-production between a being and the reality it lives within. It certainly counts in favour of this view that it aligns with work in numerous scientific fields, including cognitive science, neurobiology, and artificial intelligence, all of which are drawing away from Descartes and towards Heidegger’s perspective under umbrella terms such as ‘embodied cognition’ and ‘enactivism’. Our minds, in the contact view, are engaged in transactions with reality at the centre of which can be found not just our brains but our entire bodies. Hence the phrase ‘embodied cognition’: our minds are not just in our heads, they are in our worlds.

Accepting my understanding of knowledge as a practice, we can see that the kind of subjective knowledges I have suggested here – building, repairing, baking, painting, harmonising, not to mention tree climbing for squirrels – are genuine knowledge-practices. They have the reliability that is the sign of knowledge, they produce facts as a side-effect of this reliability, and they are sustained by networks of practitioners. Having come this far down this line of approach, it begins to seem as if the adjective ‘subjective’ has become empty and vacuous: the knowledge of beings is knowledge. Objective knowledge – the knowledge teased from objects – is just a special case of knowledge, not its paradigm case.

If this leads us to a conclusion that feels surprising, or even alien, this is only because we are unaccustomed to recognising knowledge as a practice, and even less comfortable with admitting that other animals can possess knowledge. Still, the squirrels’ adroitness with trees must be understood this way if we are honest about their capabilities. Dreyfus and Taylor suggest that blind spots like these come about because we treat subjective experience as if it were a picture generated internally of an external world – the Boxer picture of reality I rejected as incomplete on different but parallel grounds in How Many Real Worlds Are There? When we think this way, we are ignoring all the intricate perceptual practices of our living body that Maurice Merleau-Ponty drew attention to. Just seeing an object clearly is a knowledge-practice: we routinely underestimate the skill we possess in such matters, which requires us to pick out an object in a cluttered visual field, bring it into focus, and often to move our body to gain an optimal vantage point.

One objection may spring to mind: that rather than ‘knowledge’, we should understand the squirrel’s competence as instinct. But this is to make another of Descartes’ errors – to think of animals as mere machines, and thus ignore the way that beings other than humans also possess minds that influence their existence in the short term and, via the chain of inheritance and persistence of advantages, alter their biology in the long term. In this regard, I call to the witness stand the humble rock squirrel, a denizen of the desert plains of Mexico and the south-western United States, whose primary interest to us is their biological similarity to the grey squirrel I recruited as an example above. Despite the rock squirrel being about fifty percent bigger and heavier, the overall physiologies of the two species are notably parallel.

Rock squirrels are perfectly capable of climbing trees, but they seldom do so because they live in arid flatlands. Instead, their elongated claws (which are what help the grey squirrel with its arboreal escapades) are used for digging burrows. The same biological blessing – claws – supports two very different worlds: the climbing world of the tree squirrel and the burrowing world of the ground squirrel. It is not that the grey squirrel and the rock squirrel possess radically different instincts about how to use their claws – they are biologically similar in every key way, and are clearly close relatives. What differs between them are their knowledge-practices and the worlds that these lead to, or – equivalently – the worlds they live in and the knowledge-practices that these lead to. It is the grey squirrels’ life in and around trees that gives them their competences, just as it is a life around rocks that gives their cousins theirs. These lives and worlds are not fixed by biology, awaiting a chance mutation like a miracle from heaven; there is always a new world to be discovered when you leave the trees for the ground, or vice versa.

Knowledge-practices belong to the worlds they are embedded within – that they are embodied within – and to share a knowledge-practice is to share a world. Grey squirrels live with trees, and trees are as much a part of their world as cars, roads, and shoes are of ours. Rock squirrels live in yet another world, with different knowledge-practices that belong to that world. Dreyfus and Taylor, developing arguments that parallel mine but proceed along a thoroughly different line of attack, conclude their investigations by suggesting that realism must be connected to the worlds we are embodied within. Since there are multiple ways to describe nature, any or all of which could possess truth, the only viable realism available to us is what they call a plural realism. They took a different path, but one that ends in what I have called, following both Michael Moorcock and William James, a multiverse. We live in different worlds, we practise different knowledges, but all of us – including the squirrels – live in the same multiverse.

The opening image is by and copyright to Stephen Dalton, and is part of the ARKive project. As ever, no copyright infringement is intended and I will take the image down if asked.


How Many Real Worlds Are There?

A premise of modern thought is that there is only one real world. Against this are various forms of relativism that would claim that there are no worlds that could justifiably be called real. But there is a third option between the two: there could be many real worlds.

I need to be clear that these many real worlds are not the ones that physicists call a multiverse, having borrowed the term from the novelist Michael Moorcock. In the physicist’s multiverse there is one real world and many possible worlds. What I’m talking about here is what William James called a multiverse, half a century before Moorcock: the idea that, rather than a universe, our existence is better understood as composed of many real worlds.

Immediately we hit an impasse, because our very notion of ‘real’ gives us reasons to expect one and only one real world. We are, to use Wittgenstein’s phrase, held captive by a picture. In particular, if what we think of as ‘the real world’ is a gigantic, universe-sized box full of vacuum and atoms, it can be hard to see how there could be more than one real world. But even if there were only one real world, it could not possibly be like this image of space and matter: the physicists of the early twentieth century showed that extension is not absolute but relative, and that existence is a matter of which possibilities occur. Relativity and quantum mechanics call for a new understanding of the physical universe, since the model we have inherited from the era of Newton begins to seem implausible. We must look again at what we mean by ‘real’.

Consider, as a stepping stone, what happens if we replace the idea of a box of matter as the basis for what exists with the idea of a set of events and their relations. This is a concept that comes from Alfred North Whitehead’s wrestling with the philosophical consequences of early twentieth-century physics. What immediately comes into play is a capacity to distinguish relationships between specific events from relationships that apply to all events of a particular kind. To say what happens when we burn wood (a general claim) is different from talking about a particular tree that burns after a lightning strike (a specific claim). Our sense here is still of regularities: the general claims constrain the specific claims; the relationships between events behave in an orderly fashion. We can see immediately that we do not need the box-of-matter idea to conduct empirical research. The web of events can serve the same role.

Now imagine a confrontation between two alien races who have different images of existence along these lines. The Boxers look at the universe as a vast container of atoms, while the Eventers see it as a network of events. Both species find that their equivalents of scientists can, all other things being equal, make accurate predictions wherever their theoretical models are strong – yet the actual models might be quite different in their concepts and terms. We would still expect that, where a mathematical model could be produced, the numbers would come out the same for both species, but the kinds of things either is thinking about would still be radically different.

The conventional way of explaining this situation is that the subjective worlds of the Boxers and the Eventers ‘track the truth’ of the objective world. The objective world here is essentially a superset within which all subjective worlds are nested (as dependent upon the objective world for their existence). This objective world is a comfortable concept for the Boxers, because it equates to there being only one ‘true’ configuration of the atoms in the universe. You could focus on some smaller region in space and time, but the Laws of the box would remain the same.

However, what is the objective world to the Eventers? The superset of all events is not a single object like the universe-box, because each event is in itself a cluster of events, encountered by everything else according to its own unique conditions of existence. There are, in fact, many different ways of understanding the sets of events, and hence many true configurations of events. Indeed, it is possible to ‘stitch together’ a completely consistent tapestry of events that includes all the space and time within the Boxers’ universe and still have a vast reserve of consistent event-sets ‘left over’. Eventers are not so likely to talk about ‘an objective world’ at all: they are far more likely to recognise that there are ‘objective worlds’ for every conceivable kind of entity.

This shows just one way of understanding the idea of multiple real worlds. The key point is that what makes one singular real world seem plausible is the image of a box to be filled in; of knowledge as a jigsaw to be completed, piece by piece. The image of a network of events instead suggests different possible ‘stitches’ between sets of events, and patterns between events that depend upon the particular focus in each case. The view attributed here to the Boxers is how we talk about the work of the sciences, but the view attributed to the Eventers better describes what scientists actually do. This is a point made, in different ways, by Whitehead, Bruno Latour, and Isabelle Stengers, and its implications take some pondering.

What the Boxer-view gives us is a reason for faith in the processes of the sciences being capable of rendering reliable witnesses out of the different entities we encounter, to use a turn of phrase Latour suggests for what scientific researchers do (i.e. allow inanimate objects to ‘bear witness’). It reflects the truth of the idea that what happened can be definitively settled once it has occurred. There is a way to put everything together coherently – but we always have to bear in mind that we never actually have a grasp of this coherent totality, and could never know that we had it even if we did!

What the Eventer-view gives us is a better understanding of why scientists are able to produce reliable witnesses – it brings into focus the tremendous work not only of establishing what to examine, but of devising methods of translating the observed events such that they can secure this reliability in the eyes of anyone who cares to join the investigation. As Stengers notes, when we talk about what is ‘objective’, we are referring to the questions that can be answered reliably by a certain experimental apparatus, which then open up new questions. Understood this way, there is no need to invoke an ‘objective world’ to explain the work of the sciences.

However, the Eventer-view also hints that scientists might not possess the only skills capable of producing reliable witnesses. In many cases – electrons, distant galaxies, DNA – scientists and their tools have the best chance of producing an adequate translation. But when it comes to, say, living in the Amazon rainforest, methods of observation must share the stage with practices of living, which can bear upon a real world without that world having to be the world of the Boxer-view – and without necessarily contradicting the state of perfect knowledge that image implies. Even a monkey that lives in the rainforest knows more of the relationships between its events than can reasonably be dismissed as ‘not real’, even though its knowledge in that regard might be limited.

The picture that holds us captive becomes visible when we imagine what is real and comprehend it as a single consistent arrangement of matter, along with a single consistent set of propositions that accord with it. But relativity and quantum mechanics don’t suggest this kind of configuration for the physical elements of existence, and, perhaps more importantly, our intense focus on the real distracts us from the fact that this singular real world is imagined. It is precisely because it is imagined that a plurality of worlds need not contradict the inviolability of events: once we see that worlds are situated in their conditions, and cannot easily be totalised (and certainly not by imagining matter as a basis for doing so), it makes less sense to talk about a singular world as the locus of the real. The real always exceeds us, no matter who or what we are – a point made in a rather different fashion by Alain Badiou.

If there were only one real world, we would have to conclude that no-one could know it, although they might be inspired by the possibility of something eventually knowing it. If there were no real world, even the possibility of knowing would be extinguished. But from a perspective grounded upon events or processes, everyone (everything!) can know something of the real without anyone ever possessing the whole of it. This is why we will find multiple real worlds if we look closely at what actually happens, rather than focussing on the question of how all witnesses can be made to tally with a hypothetical ideal universe of knowledge. Of course, from any single perspective it can still be tricky to ascertain which aspects of your world are real and which are otherwise without seeking out reliable witnesses of one kind or another. But this ambiguity doesn’t detract from the reality of each world, nor should it. There is no universe, and never was. The fact of our living together in our different worlds should have made it clear that it was always a multiverse.

The opening image is Build Better Human Beings by Elijah Burgher, which I found at rivercrossings.org. As ever, no copyright infringement is implied and I will take the image down if asked.