Allen Wood on Free Will

The indefatigable Allen Wood recently sent me this reply to my piece Is Free Will Too Cheap? which I post here with his permission, and with its original US English spellings.

Dear Chris,

Very good post. Having just plowed through one tome of mine, this may not come to you as welcome news, but a new tome has just appeared [Fichte’s Ethical Thought].

The arguments to which you refer about Fichte on freedom are reprised in the first half of Ch. 3 of this book. More generally, I think Fichte was on to the kinds of views you're discussing. He called them ‘dogmatism’ and insisted that transcendental philosophy is the only way to avoid them. My book talks about this, especially in Ch. 2.

I have said – and still believe – that if there is a solution to the traditional problem of free will (“How does our freedom to choose fit into our objective conception of the natural world?”) then it would have to be a compatibilist one. Unfortunately, however, it does not follow from this that any form of compatibilism is a defensible position. The traditional problem of free will, so understood, may be insoluble. I would reject my colleague Tim O’Connor's views too, since they involve a supernaturalist way of solving the problem. They too are trying to fit free will into some conception of the objective world. It’s just that they include supernature as well as nature. I don't find supernaturalism a defensible position since there is no good evidence for it. The fact that we can't solve the free will problem is no evidence for anything except that we can’t solve the free will problem.

Hume is usually understood as a compatibilist, and in the Enquiry, he does describe his view about the causal determination of the will and the conditions of moral responsibility as a “reconciliation project.” But for reasons of literary popularity, Hume was trying to be audience-friendly in the Enquiry and to downplay the paradoxical side of his views. In the Treatise, he is more candid and shocking. His view is that we lack free will – our every action is causally determined by particular passions or other motives. But far from its being the case that this destroys moral responsibility, Hume argues that it is a necessary condition of moral responsibility. That shocking paradox – which can't be described as compatibilism about free will and determinism, since it supports only determinism and denies free will – is his real view.

In short: Those who call Hume a compatibilist are whitewashing his views (and probably their own as well). In the Treatise, Hume is being more candid. He's not reconciling anything, or showing anything to be compatible. He is claiming baldly and bluntly that free will is incompatible with moral responsibility.

The remark you quote from Ramachandran, and the view ascribed in your post to Crick, put them, and those who agree with them, in the following position: Either (a) they, as “scientists,” are mysteriously exempt from what they say about the rest of us, or else (b) their own claims – that none of us exist, that none of us understand why we do what we do, that nothing of what we believe about ourselves is true – are self-discrediting. For if their views are true about themselves, then they are in no position to assert those views and can have no reasons for them. For they do not exist, and whatever they think about themselves – including the science that they believe in – is an elaborate post-hoc rationalization that bears no relation to the truth. The same would of course be true of us if we became convinced of their views, and so our being convinced of their views would involve the same illusion.

One has to suppose that they do not intend to exempt themselves from the human condition that their views describe – although sometimes one has to wonder about this. One of my favorite movie lines comes early in Ghostbusters. A lady has just seen a ghost, and Bill Murray, in cross-examining her, asks her insultingly if this is “her time of the month.” Another guy wonders if this is a proper question for him to ask. Bill Murray replies: “Back off, man. I’m a scientist!” A lot of scientist-philosophers seem to take the same attitude toward their audience (namely, us).

I think it has to be admitted that their views might be true, but if they are, then neither they nor we nor anybody else (except a God or pure intelligence who is exempt from the conditions of human cognition) could ever be in a position to know or justifiably to believe that their views are true. And if their views include (as they usually do) that disembodied cognition is impossible, then no such divine or pure intelligence could exist either.

Best regards,

Allen

PS: Relating to the quotation from Crick, I should also have quoted a remark from one of my favorite writers – Robert Benchley, a writer for the New Yorker for many years. In one of his articles, ‘Did You Know That...?’, he satirizes columns in magazines and newspapers that purport to inform you of little-known and paradoxical truths. On his (absurd) list of these supposed truths is the following: “No one has ever actually seen the Brooklyn Bridge. It is merely the action of light waves on the retina of the eye.” Crick’s quoted statement reminded me of that.


Is Free Will Too Cheap?

Do we possess a genuine capacity to choose, or is our sense of agency always an illusion? Or to put it another way: is it free will or just a cheap trick?

“Your conscious life”, neurologist Vilayanur Ramachandran declares, “is nothing but an elaborate post-hoc rationalization of things you really do for other reasons”. This is the general position of a disparate group of researchers who insist that free will is merely an illusion, a self-deception we conduct upon ourselves. That we frequently deceive ourselves in these ways is hardly news – philosophers and clergy from the sixteenth century onwards were already discussing this oh-so-human capacity, and older references can also be found. What has made these ‘new illusionists’ into something newsworthy has been their willingness to inflate these claims into the broadest strokes: it is not merely that there is a risk of self-deceit, but that there is no free will at all, because what really motivates human action never occurs at a conscious level.

The historical context for the contemporary dismissal of our conscious lives has been concisely discussed by Mary Midgley in her book Are You An Illusion? She quotes the co-discoverer of the structure of DNA, Francis Crick, as giving one of the clearest examples of this fairly recent trend, when he wrote (in 1994): “You, your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behaviour of a vast assembly of nerve cells and their attendant molecules.” Midgley raises a proverbial eyebrow at Crick’s use of ‘in fact’, and justifiably questions whether what we are dealing with here really is a factual concern. This is clearly a case of scientists muscling in on philosophical turf – and one should always be careful when claiming authority over somewhere that’s already populated.

As it happens, the new illusionists are latecomers to a game that is as old as philosophy – and in this case, both in its Western and its Eastern traditions. The core of the philosophical conflict has primarily been over the question of how human freedom relates to the apparent causal nature of the physical world. The common sense perspective is philosophically defended by supporters of what are known as libertarian positions – we have a genuine capacity to choose. This was a particularly important argument in Medieval Europe since such freedom had both theological and juridical implications: you cannot blame someone unless they have responsibility for their own actions. But there are two other main camps opposed to this understanding.

The incompatibilist position is arguably the default view of free will at the moment. Since effect follows cause so reliably (the standard argument goes), free will would seem to be excluded: in a deterministic universe there is no room for it. Free choice must mean the possibility of having chosen differently – which determinism presumably excludes. The new illusionists come at this position armed with experiments that purport to provide proof of a slightly different kind – namely that our conscious deliberations aren’t evidence of free decision making anyway, since we act unconsciously. The outcome of such lines of attack is still firmly incompatibilist, however: free will is excluded by arguments about (psychological) causation.

Back in 1739, the Scottish philosopher and intellectual rake David Hume turned the two established positions on free will on their heads with a robust compatibilist argument. Libertarian claims could not be correct since to suggest a different decision could have been made amounted to breaking causality by bringing in a random element. Yet if chance were involved in our decision making, we could hardly claim to have ownership over our choices! Hume carefully defined necessity and liberty and demonstrated an absence of contradiction. The assumption that ‘we could have chosen differently’ has a rather suspicious meaning when it is examined closely, and we need determinism to make sense of our choices (Hume suggests), lest they unravel into randomness.

The argument advanced by Hume, if accepted, would destroy the credibility of every hokey science fiction tale that hangs on a branching timeline where a character makes different choices: from where could this different outcome spring, exactly? If it comes from chance, there can be no coherent claim for the will to command our actions. But if it comes from elsewhere, are we committed to some extra-physical component of mind to explain how multiple outcomes could emerge from the same decision?

Free will cannot mean that in a parallel universe you chose differently: a different outcome would mark a different person. This has been my understanding of the problem of free will ever since I read Hume. We make an utterly metaphysical (i.e. untestable) assumption when we think ‘choosing freely’ must mean the possibility of different outcomes, since we only ever exist within time, and within just one sequence of events. Incompatibilist arguments are making untestable assumptions on a grand yet oft-unnoticed scale. But whilst I found Hume's arguments very compelling on this subject, something always felt out of place in his account. It has taken me some time to track it down.

In a fascinating book thriving at the interface between philosophy and empirical research, Downward Causation and the Neurobiology of Free Will, Nancey Murphy, George Ellis, and Timothy O’Connor collect a host of perspectives on what contemporary neurobiology means for the discussion of free will. The most liberating aspect of this volume’s eclectic discussions is its clear recognition that the traditional debate about free will has run aground over its assumptions about determinism. In a decidedly Humean move, the editors recognise that both libertarian and incompatibilist arguments rest upon a perception of causality that is essentially reductionistic – they presuppose a single dimension of causation, from the ‘bottom’ upwards.

Contemporary researchers give ample examples of emergent behaviours that contradict reductionism, and thus display what can be termed downward causation. In a 2006 paper, for instance, philosopher Robert C. Bishop points to Rayleigh-Bénard convection as a simple paradigm case: self-organising non-linear structures in heated fluids (convection cells) must be understood at a scale above that of individual molecules or the entire phenomenon is incomprehensible. Bishop correctly recognises the importance of this case for metaphysics and philosophy of mind, since it demonstrates (without any tortuous assumptions) that treating causal relations as merely one-directional isn't even sufficient for physical systems, let alone living organisms, or complex minds.
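Simulating Rayleigh-Bénard convection takes serious fluid dynamics, but a far humbler toy can stand in for the structural point – this is my own illustrative sketch, not Bishop’s example: in Conway’s Game of Life, a ‘glider’ travels across the grid even though no individual cell ever moves, so the travelling entity and its trajectory can only be described at the level of the pattern, never at the level of any single cell.

```python
from collections import Counter

def step(live: set) -> set:
    """One generation of Life: birth on 3 neighbours, survival on 2 or 3."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The glider: five cells that, every four generations, reappear intact
# but shifted diagonally by one square.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
assert state == {(x + 1, y + 1) for (x, y) in glider}
```

Nothing in the update rule mentions gliders, yet once one exists, the fate of the cells in its path is most economically predicted from the pattern’s behaviour rather than from any cell-by-cell account – a whiff, at least, of what the downward causation theorists are pointing at.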

For the traditional arguments regarding free will, the entire conceptual framework is thrown open by bringing into doubt the more simplistic conceptions of causality. It is no longer plausible to assign responsibility for cause and effect in a purely linear and reductionistic fashion, as if atoms were the sole foundational element of reality. Rather, there is a growing recognition in both philosophy and the sciences that downward causation is both plausible and indeed necessary to explain all manner of complex systems. In the case of human behaviour in particular, our symbolic faculties, such as language and mathematics, create spaces for downward causation whereby understanding what counts as a ‘cause’ has to mean much more than simply reducing our focus to the sub-atomic.

As the 2012 volume (which is based on a workshop from 2007) explores, these changes in empirical understanding massively reframe the free will argument, and undermine new illusionist claims. Two such viewpoints are discussed at length: Benjamin Libet’s neurobiological research that suggests our brains begin responding before we are consciously aware of willing an action, and Daniel Wegner’s psychological work separating the feeling of volition from the mental causes of action. Both Wegner and Libet are looking to deflate conventional views on free will. The philosophical push-back demonstrates once again a confusion of concepts, particularly in Timothy O’Connor's chapter, which simultaneously debunks both the new illusionists and conventional understandings of free will, sketching a new framework for understanding volitional behaviour.

In the broadest strokes, however, all the new illusionists are offering self-defeating accounts. This is a point mentioned by numerous authors in the aforementioned anthology, and also by Allen Wood in his discussion of Fichte’s notions of absolute freedom. The essential problem is that if, as Ramachandran and others assert, our entire conscious life is an illusion, there can be no scientific investigation with any credibility – these too must be reduced to “elaborate post-hoc rationalization”. Fichte’s arguments, from the tail end of Hume's century but buffed-up by Wood’s contemporary scholarship, form a sharp point of rebuttal. If there is no free will, then there can be no concept whatsoever of understanding, at least as it is usually considered. To understand inherently implies a wavering between possibilities before settling upon one as the adequate explanation. All the sciences depend upon this mental phenomenon. Yet if determinism destroys the possibility of free will (or, in the contemporary argument, conscious thought is mere confabulation) this must also make the sciences impossible, since this wavering between possibilities is the essence of free decision-making.

Thus it transpires that it’s the new illusionist arguments that are far too cheap to be taken seriously. But by engaging with them, philosophers once again show the benefits of inter-disciplinary discourse, and the productive gains available when the sciences exchange ideas with philosophy. The classical free will problem is not resolved (nor can it be, because of its inherent metaphysical assumptions), but perhaps we have at long last begun to move beyond it into a new and productive understanding of the relationships between volition and action.

The opening image is Hilma af Klint’s Free Will (1907). No copyright infringement is entailed by displaying this image.


Ontology as Non-Theology

To speak of ontology is to speak of being, to say what exists, or how it exists, or how the things that exist are related, while to speak of gods or God is what is called theology. Every theology is necessarily a form of ontology – it takes a specific position on what exists – but not every ontology is a form of theology. However, every ontology is and must be, at the very least, a non-theology. Which is to say, you can’t talk about being or existence without at some point crossing into religious territory, however tangentially.

Let’s get some helpful maps for entering this rather contentious territory.

Either you have an image of God, or of gods, in your mind, and it matters to you, or you have an image of the impossibility of deities and that matters to you, or you do not find images of divine entities to be of any particular relevance for how you think about existence. These three positions are those of the religious and non-religious positions of theism (e.g. Christianity, Islam), atheism (e.g. secular humanism, Marxism), and non-theism (e.g. Theravada Buddhism, Jainism). That makes it sound as if you cannot get by without positioning your view of existence (your ontology) with respect to images of God or gods, even though you quite obviously can – it is just that when you are confronted about theology, whatever stand you take must be positioned somewhere within the space of this particular game.

But it is not just theology that everyone is forced into a position upon; ontology is equally inescapable: everybody who speaks has an ontology – even if it is just all the names of all the things that they know about. Your ontology is the set of things you can say exist, and this capacity is well established in us long before language gives us names for them. Some philosophers develop quite intricate systems for describing how things are, or for setting the limits of what can be known about how things are, in what could be called technical ontologies. However, despite the care with which it is practiced, ontology is not a subject prone to widespread agreement: the number and kinds of ontology are limitless, and all of the more sophisticated ontologies come with a recognition of the limitations of this kind of thinking.

Kant’s Critique of Pure Reason (1781) sets the pattern here. While ontology in one form or another goes back at least three millennia to the Sanskrit scriptures known as the Vedas and to the ancient Greek philosophers, the way we think about technical ontologies remains coloured by the work of Kant during the Enlightenment. As Theodor Adorno has commented, Kant recognised that an ontology “exists only for others” and thus has no meaning outside of lived experience, while he simultaneously tried to conduct a “salvaging of ontology” as something beyond experience. The tension between these two elements of Kant’s ontological work has never gone away.

Today, philosophers can be broadly divided into two camps. Firstly, there are those who continue Kant’s project of ontological rescue, engaged in trying to construct ontologies that can be claimed to go beyond experience. My personal favourite of these is Alain Badiou, who identifies ontology with mathematics (set theory in particular), and then reasons about ontology by using maths as his foundation. On the other hand there are those who are engaged in Kant’s project of ontological critique, who are primarily engaged in situating ontologies (including theologies) as elements of a plurality. Here I have a fondness for Paul Feyerabend, who found technical ontology less than useful, and was dismissive of what Terence Blake calls “the detour through ontology”.

What ontology and theology have in common, what binds them together as conceptual sisters, is that both are about how we imagine existence. If we did not live in a world so heavily conditioned by theistic traditions, it might not even be necessary to distinguish between these two practices – but our intellectual inheritance is inescapably coloured by the Judaic concept of history, the Islamic reinterpretation of Greek philosophy and mathematics, and the Christian faith in truth, which descends from the earlier monotheistic practices and has given rise to the tradition of scientific investigation. The core danger of trying to paint our time as one where religion and science ‘fight’ is that the key battlegrounds are within the sciences and within religious (and non-religious) thought, as I drew out of the discussion within The Mythology of Evolution. Despite the ‘official story’, the majority of those who believe that the sciences uncover the truth about the world are Christian, and the most vociferously asserted theologies in the public sphere are atheologies that insist upon crossing out all gods. 

Theology, including atheology, always possesses a moral element (or an aesthetic element – the distinction is not always important). Take any of the atheologies being deployed today and you will always find behind it a (moral) commitment to truth. Since gods clearly do not exist (the logic goes), we must commit ourselves to an atheology where gods are not an acceptable part of our thought. This position is undergirded by a prior commitment to the importance of truth. It is because gods are not true that we must reject them. The theological positions are generally more varied, and include those that are a direct inversion of the standard atheology (starting once more with the commitment to truth), as well as others in which God serves as a distant moral guarantor (which was broadly Kant’s position), or in which a moral order is otherwise given a divine foundation.

Now in the case of contemporary technical ontologies, the moral element may appear to be absent, and this could be taken as a justification for not linking these systems of thought with theology at all. However, this is not as straightforward as it might appear. Many non-theological ontologies begin with the same (moral) commitment to truth as other theologies/atheologies, even if that prior moral claim is sometimes obscured by a claim to nihilism, usually developed with (or perhaps against) Nietzsche. But nihilism is essentially a self-negating position for philosophers: if it were plausible to void all truth and meaning, there would be no viable topics for any philosophy to address. Only the rather limited claim that ‘the universe in itself is devoid of value or meaning’ is available, and this is a terribly uninteresting observation until it enters theology, where it becomes a rather straightforward atheological claim.

Even those technical ontologies that do not begin with the moral commitment to truth cannot avoid entering into moral territory later. Once you make a claim for how existence is organised or can be understood it is hard to avoid this becoming a demand to understand in this way (or something like it) on pain of error. If the ontologist did not have this kind of commitment to truth before theorising, and they manage to avoid acquiring it afterwards, then what motive would they have for sharing their ontology? There is always a moral value here, even if it is concealed behind epistemic justifications. We should expect this: no-one is going to pursue ontology or theology without a motive, and that motive will always contain a moral (or aesthetic) element.

Tolstoy claimed that it was impossible for a person to have no religion, in the sense that this word means a relationship to the universe. This statement no longer seems as self-evident as it did a century and a half ago because the meaning of ‘religion’ has become mangled through its deployment as a caricatured ‘enemy’ to be fought… those whose self-image is founded upon ‘fighting religion’ are effectively barred from considering how this practice might also seem like a religion when viewed from the outside. It was for this reason that I began to talk of non-religions, and for equivalent but opposite reasons that others talk of ‘worldviews’. Technical ontologies scrupulously avoid overt religious elements, but they cannot entirely avoid operating as non-religions, because you simply cannot talk about existence without taking some kind of moral (or aesthetic) stand upon it.

Thus ontology can be understood as non-theology, as a means of conducting the same kind of how-and-why-things-are-this-way discussions that occur within theology – the ‘Queen of the sciences’ as it was once known – without having to take any particular positive or negative view on the existence or otherwise of divine forces. Except, of course, they always do. How can they not! You can’t have a system for summing up existence and yet never be required to take a theological stand when the vast majority of the planet constrain their ontological concerns to those of theology. These two practices are twinned; they are distinct, but they can never be separated while theology is still being practiced. Accepting this proposition doesn’t mean that everyone has to be a theologian – but it does mean that you can’t practice ontology without at least brushing up against theology. And good fences, as they say, make for good neighbours.

The opening image is David Chidgey’s Music of the Spheres, which I found here on his website Art Glass Mosaics. As ever, no copyright infringement is implied and I will take the image down if asked.


Wikipedia Knows Nothing, the Book

It gives me great pleasure to announce that I have finished the draft manuscript of my latest philosophy book, Wikipedia Knows Nothing. Based upon content I’ve been writing here on the blog over the last year, but substantially revised and expanded, this is my first work of epistemology, expanding upon the idea of knowledge as a practice as well as the concept of a multiverse that I developed in Chaos Ethics. I have a publisher in mind, but I won’t announce that until the deal has been made.

For now, I’m asking any of the stalwarts here at Only a Game if they would like to be a pre-reader. The book is only 24,243 words long, and thus a much lighter undertaking than most of my epic tomes. If you’d like me to send you a copy of the unedited manuscript, contact me through the usual channels.

Many thanks to everyone who helped me get this far, and especially to Chris Billows whose interest in the topic was hugely influential in the decision to take this further, and translucy (where are you now?) for giving me the idea for the Republic of Bloggers, which features in the book. Also, thanks to the Wikipedia for giving this topic an accessible point of entry, and which might know more than the book’s title lets on…


The Subjective Knowledge of Squirrels

If you startle a grey squirrel who is foraging on the ground, they will immediately dash towards the nearest tree, run up and around it until they cannot be seen by you, then climb upwards as much as needed in order to feel somewhat safe. If you pursue the squirrel, they will shoot up the tree, along the branches, and head for the point in the foliage of the canopy where another tree overlaps, or is at least fairly close by. When the neighbouring trees do not touch, the squirrel who is avoiding your unwanted attention will leap from its original tree across to a branch on the next one – sometimes arriving on nothing more than a twig, which may bend worryingly downwards under its weight. The squirrel, while visibly perturbed by such an event, quickly clambers to secure footing on the other tree and can be gone from your sight before you have even registered where it landed.

Now the squirrel’s capacity to negotiate trees not only exceeds our own, but their knowledge of trees and their surfaces exceeds even our ability to conceptualise an adequate parallel. Next to this arboreal labyrinth, our flat pathways and roads are positively dull exercises in trivial navigation, while the world of the tree squirrels is one of possibilities and intersections along fractal routes they can traverse with consummate skill. It is something that we can only barely imagine: like Thomas Nagel denying we could know what it is like to be a bat, the squirrels’ experience of trees is fundamentally barred to us. We would not, it is clear, want to describe the sure and certain agility of squirrels as being objective knowledge, no matter how assuredly they traverse the canopy. Yet if the capabilities of squirrels are to be understood as subjective knowledge, we are forced to admit that such knowledge can be every bit as reliable as what is produced by the sciences – and a damn sight more applicable to everyday life!

Suppose we understand objective knowledge in the manner proposed by Isabelle Stengers, as something produced through the painstaking creation of methods of investigation. When scientists manage to produce something that can resist all objections, it forms a reliable witness on some topic, albeit in a rather narrow fashion. Objective knowledge necessarily tends towards this kind of partial quality, and would not by itself be a plausible way of being part of any world: someone who solely understood gravity, quantum mechanics, organic chemistry, and cell mitosis would be thoroughly ill-equipped for life. Such a narrow field of vision inevitably follows from the complex compositions of things that must be coaxed into developing an objectivity for those who are working with them.

Accepting the myopic quality of our various and impressive scientific apparatus makes the contrast between objective and subjective knowledge feel rather claustrophobic… the vast majority of knowledge lacks this quality of objectivity, of belonging to networks of objects that give a voice to something that cannot speak without us. Yet these other, subjective knowledges, while lacking the robustness of their artificially created cousins, are still capable of being reliable witnesses too. We can trust a builder to know how to construct a house, a mechanic to know how to repair an engine, a chef to know how to bake a soufflé, an artist to know how to blend paint to make a certain hue, or a musician to know how to harmonise. Likewise, we can trust a squirrel to climb a tree.

We have all been inculcated into an understanding that subjective knowledge is unreliable, a situation that comes out of contrasting it with objective knowledge, which is deemed reliable, essentially by definition. We should not trust our own judgements as they are merely subjective, but we should trust what is reported by scientists as this is objective fact. But if reliability is our purpose, subjective knowledge is just as capable of producing it as its cousin, and furthermore the methods that produce objective knowledge are just as capable of error as anything else. To align objectivity with reliability, and subjectivity with error, is to simultaneously misunderstand the core qualities of skilful practices and to canonise scientific techniques with an infallibility that is thoroughly undeserved.

This dichotomy between the subjective and the objective has been handed down to us over nearly four centuries from original ideas in Descartes’ Meditations on First Philosophy, first published in 1641, which imagines a thinking subject (“I”) and contrasts it with the world of extension and matter it is situated within. Mind is thus contrasted with matter in Descartes’ proposal, and the fingerprints of this Cartesian dualism are found everywhere today, as Hubert Dreyfus and Charles Taylor observe, even among people who claim to repudiate Descartes. For instance, those who turn against the dualism of mind and matter that was core to Descartes’ account often do so by deflating the significance of mind, thus raising matter to centre stage by suggesting mind simply emerges from the action of matter. Such materialist apples have not fallen far from their dualist tree.

Drawing upon the work of the two pivotal figures of twentieth century philosophy, Martin Heidegger and Ludwig Wittgenstein, Dreyfus and Taylor offer an alternative understanding of our relationship with the world – one that is not based upon our being trapped inside the disconnected mind implied by the famous image of a brain-in-a-vat. They propose instead that we should understand ourselves as in contact with the world we live within. Rather than our minds forming images of an ‘outside’ world that they are forever separated from (the Cartesian mythos of mind versus matter) they suggest that we have a direct encounter with reality that utterly straddles the supposed divide between subject and object.

This contact theory (as they term it) is strongly rooted in Heidegger’s idea of being-in-the-world, and leads to a sense that at the base of the experiences we usually term ‘subjective’ lies a co-production between a being and the reality it lives within. It certainly counts in favour of this view that it aligns with work in numerous scientific fields, including cognitive science, neurobiology, and artificial intelligence, all of which are drawing away from Descartes and towards Heidegger’s perspective under umbrella terms such as ‘embodied cognition’ and ‘enactivism’. Our minds, in the contact view, are engaged in transactions with reality at the centre of which can be found not just our brains but our entire bodies. Hence the phrase ‘embodied cognition’: our minds are not just in our heads, they are in our worlds.

Accepting my understanding of knowledge as a practice, we can see that the kinds of subjective knowledge I have suggested here – building, repairing, baking, painting, harmonising, not to mention tree climbing for squirrels – are genuine knowledge practices. They have the reliability that is the sign of knowledge, they produce facts as a side-effect of this reliability, and they are sustained by networks of practitioners. Having come this far down this line of approach, it begins to seem as if the adjective ‘subjective’ has become empty and vacuous: the knowledge of beings is knowledge. Objective knowledge – the knowledge teased from objects – is just a special case of knowledge, not its paradigm case.

If this leads us to a conclusion that feels surprising, or even alien, this is only because we are unaccustomed to recognising knowledge as a practice, and even less comfortable with admitting that other animals can possess knowledge. Still, the squirrels’ adroitness with trees must be understood this way if we are honest about their capabilities. Dreyfus and Taylor suggest that blind spots like these come about because we treat subjective experience as if it were a picture generated internally of an external world – the Boxer picture of reality I rejected as incomplete on different but parallel grounds in How Many Real Worlds Are There? When we think this way, we are ignoring all the intricate perceptual practices of our living body that Maurice Merleau-Ponty drew attention to. Just seeing an object clearly is a knowledge-practice: we routinely underestimate the skill we possess in such matters, which requires us to pick out an object in a cluttered visual field, bring it into focus, and often to move our body to gain an optimal vantage point.

One objection may spring to mind: that rather than ‘knowledge’, we should understand the squirrel’s competence as instinct. But this is to make another of Descartes’ errors – to think of animals as mere machines, and thus ignore the way that beings other than humans also possess minds that influence their existence in the short term and, via the chain of inheritance and persistence of advantages, alter their biology in the long term. In this regard, I call to the witness stand the humble rock squirrel, a denizen of the desert plains of Mexico and the south-western United States, whose primary interest to us is their biological similarity to the grey squirrel I recruited as an example above. Despite the rock squirrel being about fifty percent bigger and heavier, the overall physiologies of the two species are notably parallel.

Rock squirrels are perfectly capable of climbing trees, but they seldom do so because they live in arid flatlands. Instead, their elongated claws (which are what help the grey squirrel with its arboreal escapades) are used for digging burrows. The same biological blessing – claws – supports two very different worlds, the climbing world of the tree squirrels and the burrowing world of the ground squirrel. It is not that the grey squirrel and the rock squirrel possess radically different instincts about how to use their claws – they are biologically similar in every key way, and are clearly close relatives. What differs between them are their knowledge-practices and the worlds that these lead to, or – equivalently – the worlds they live in and the knowledge-practices that these lead to. It is the grey squirrel’s life in and around trees that gives them their competences, just as it is a life around rocks that gives their cousins theirs. These lives and worlds are not fixed by biology, awaiting a chance mutation like a miracle from heaven; there is always a new world to be discovered when you leave the trees for the ground, or vice versa.

Knowledge-practices belong to the worlds they are embedded within – that they are embodied within – and to share a knowledge-practice is to share a world. Grey squirrels live with trees, and trees are as much a part of their world as cars, roads, and shoes are to us. Rock squirrels live in yet another world, with different knowledge-practices that belong to that world. Dreyfus and Taylor, developing arguments that parallel mine but proceed upon a thoroughly different line of attack, conclude their investigations by suggesting that realism must be connected to the worlds we are embodied within. Since there are multiple ways to describe nature, any or all of which could possess truth, the only viable realism available to us is what they call a plural realism. They took a different path, but one that ends in what I have called, following both Michael Moorcock and William James, a multiverse. We live in different worlds, we practice different knowledges, but all of us – including the squirrels – live in the same multiverse.

The opening image is by and copyright to Stephen Dalton, and is part of the ARKive project. As ever, no copyright infringement is intended and I will take the image down if asked.


How Many Real Worlds Are There?

A premise of modern thought is that there is only one real world. Against this are various forms of relativism that would claim that there are no worlds that could justifiably be called real. But there is a third option between the two: there could be many real worlds.

I need to be clear that these many real worlds are not the ones that physicists call a multiverse, having borrowed the term from novelist Michael Moorcock. In the physicist’s multiverse there is one real world and many possible worlds. What I’m talking about here is what William James called a multiverse, half a century before Moorcock: that rather than a universe, our existence is better understood as comprised of many real worlds.

Immediately we hit an impasse, because our very notion of ‘real’ gives us reasons to expect one and only one real world. We are, to use Wittgenstein’s phrase, held captive by a picture. In particular, if what we think of as ‘the real world’ is a gigantic, universe-sized box full of vacuum and atoms, it can be hard to see how there could be more than one real world. But even if there were only one real world, it could not possibly be like this image of space and matter; the physicists of the early twentieth century showed that extension was not absolute, but relative, and that existence was about which possibilities occur. Relativity and quantum mechanics call for a new understanding of the physical universe, since our current model descends from the era of Newton and begins to seem implausible. We must look again at what we mean by ‘real’.

Consider, as a stepping stone, what happens if we replace the idea of a box of matter as the basis for what exists with the idea of a set of events and their relations. This is a concept that comes from Alfred North Whitehead’s wrestling with the philosophical consequences of early twentieth century physics. What immediately comes into play is a capacity to distinguish relationships between specific events from relationships that will apply to all events of a particular kind. To say what happens when we burn wood (the general claim) is different from talking about a particular tree that burns after a lightning strike (a specific claim). Our sense here is still of regularities: the general claims constrain the specific claims; the relationships between events behave in an orderly fashion. We can see immediately that we do not need the box of matter idea to conduct empirical research. The web of events can serve the same role.
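To make this a little more tangible, here is a throwaway formalisation of my own (a sketch, not Whitehead’s actual apparatus): events carry a kind, general claims are constraints over kinds, and a specific web of events either satisfies those constraints or does not.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    name: str  # a specific occurrence, e.g. "the oak struck by lightning"
    kind: str  # the general kind it instantiates, e.g. "wood-burning"

# A general claim: every wood-burning event is followed by an ash-residue event.
def burning_yields_ash(history: list) -> bool:
    return all(
        any(later.kind == "ash-residue" for later in history[i + 1:])
        for i, e in enumerate(history)
        if e.kind == "wood-burning"
    )

# A specific claim: one particular web of related events.
history = [
    Event("the oak struck by lightning", "wood-burning"),
    Event("ash at the base of the oak", "ash-residue"),
]
assert burning_yields_ash(history)  # the general claim constrains the specific web
```

Nothing here requires a box of matter: regularity is expressed entirely as relations between events and kinds of events.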

Now imagine a confrontation between two alien races who have different images for existence along these lines. The Boxers look at the universe as a vast container of atoms, while the Eventers see it as a network of events. Both species find that their equivalent to scientists can, all other matters being equal, make accurate predictions wherever their theoretical models are strong – yet the actual models might be quite different in their concepts and terms. We would still expect that, where a mathematical model could be produced, the numbers would come out the same for both species, but the kinds of things either is thinking about would still be radically different.

The conventional way of explaining this situation is that the subjective worlds of the Boxers and Eventers ‘track the truth’ of the objective world. The objective world here is essentially a superset, within which all subjective worlds are nested (as dependent upon the objective world for their existence). This objective world is a comfortable concept for the Boxers, because it equates to there only being one ‘true’ configuration of the atoms in the universe. You could focus on some smaller region in space and time, but the Laws of the box would remain the same.

However, what is the objective world to the Eventers? The superset of all events is not a single object like the universe-box, because each event is in itself a cluster of events, as encountered by anything else according to its own unique conditions for its existence. There are, in fact, many different ways of understanding the sets of events, and hence many true configurations of events. Indeed, it is possible to ‘stitch together’ a completely consistent tapestry of events that includes all the space and time within the Boxer’s universe and still have a vast reserve of consistent event-sets ‘left over’. Eventers are not so likely to talk about ‘an objective world’ at all: they are far more likely to recognise that there are ‘objective worlds’ for every conceivable kind of entity.

This shows just one way of understanding the idea of multiple real worlds. The key point is that what makes one singular real world seem plausible is the image of a box to be filled in; of knowledge as a jigsaw to be completed, piece by piece. The image of a network of events instead suggests different possible ‘stitches’ between sets of events, and patterns between events that depend upon the particular focus in each case. The view attributed here to the Boxers is how we talk about the work of the sciences, but the view attributed to the Eventers better describes what scientists actually do. This is a point made, in different ways, by Whitehead, Bruno Latour, and Isabelle Stengers, and its implications take some pondering.

What the Boxer-view gives us is a reason for faith in the processes of the sciences being capable of rendering reliable witnesses out of the different entities we encounter, to use a turn of phrase Latour suggests for what scientific researchers do (i.e. allow inanimate objects to ‘bear witness’). It reflects the truth of the idea that what happened could be definitively settled once it has occurred. There is a way to put everything together coherently – but we always have to bear in mind that we don’t ever actually have a grasp of this coherent totality, and could never know we had even if we did!

What the Eventer-view gives us is a better understanding of why scientists are able to produce reliable witnesses – it brings into focus the tremendous work of not only establishing what to examine, but devising methods of translating the observed events such that they can secure this reliability in the eyes of anyone who cares to join the investigation. As Stengers notes, when we talk about what is ‘objective’, we are referring to the questions that can be answered reliably by a certain experimental apparatus, which then open up new questions. Understood this way, there is no need to invoke an ‘objective world’ to explain the work of the sciences. 

However, the Eventer-view also hints that scientists might not possess the only skills capable of producing reliable witnesses. In many cases – electrons, distant galaxies, DNA – scientists and their tools have the best chances of producing an adequate translation. But when it comes to, say, living in the Amazon rainforest, methods of observation must share the stage with the practices of living, which can bear upon a real world without that world having to be that of the Boxer-view – nor necessarily contradicting the state of perfect knowledge that image implies. Even a monkey who lives in the rainforest knows more of the relationships between its events than can reasonably be dismissed as ‘not real’, even though their knowledge in that regard might be limited.

The picture that holds us captive becomes visible when we imagine what is real and comprehend it as a single consistent arrangement of matter, and a single consistent set of propositions that accord with it. But relativity and quantum mechanics don’t suggest this kind of configuration for the physical elements of existence, and perhaps more importantly our intense focus on the real distracts us from the fact that this singular real world is imagined. It is precisely because it is imagined that a plurality of worlds need not contradict the inviolability of events: once we see that worlds are situated in their conditions, and cannot easily be totalised (and certainly not by imagining matter as a basis for doing so), it makes less sense to be talking about a singular world as the locus of the real. The real always exceeds us, no matter who or what we are, a point made in a rather different fashion by Alain Badiou.

If there were only one real world, we would have to conclude that no-one could know it, although they might be inspired by the possibility of something eventually knowing it. If there were no real world, even the possibility of knowing would be extinguished. But from a perspective grounded upon events or processes, everyone (everything!) can know something of the real without anyone ever possessing the whole of it. This is why we will find multiple real worlds if we look closely at what actually happens rather than focussing on the question of how all witnesses can be made to tally with a hypothetical ideal universe of knowledge. Of course, from any one single perspective, it can still be tricky to ascertain which aspects of your world are real and which are otherwise without seeking reliable witnesses of any kind. But this ambiguity doesn’t detract from the reality of each world, nor should it. There is no universe, and never was. The fact of our living together in our different worlds should have made it clear that it was always a multiverse.

The opening image is Build Better Human Beings by Elijah Burgher, which I found at rivercrossings.org. As ever, no copyright infringement is implied and I will take the image down if asked.


The Art Word

An open letter to Jeroen Stout responding to his blog-letter Discourses: Reflecting on the A Word with Chris Bateman at his Tumblr as part of the Republic of Bloggers. Further replies welcome from anyone!

Dear Jeroen,

If setting aside the word ‘art’ means opening up a treasure trove of aesthetic relations obscured by it, then ditching art would indeed seem to be desirable. After all, this is exactly what I argue in the case of ‘game’, another umbrella term where the confusion between aesthetics and ethics breeds discontent. But can’t we have the best of both worlds? Once we recognise that we are dealing with sets (in the mathematical sense) and not simple ‘black-or-white’ Boolean logic, do we need to give up ‘art’ or ‘game’? Besides, what if the words themselves gave us something that the more nuanced discussions you allude to could not provide?

Your missive arrives at a most excellent moment, for I have finished writing my new paper for the British Journal of Aesthetics but have not yet submitted it (although I will have by the time you read this). Thoughts about what might or might not qualify as art are its specific subject, for it asks “Can a rollercoaster be art?” To my charmed and mischievous delight, there is something within it that matches what you deride as “a little weasel-game” of dividing art into ‘high art’ and ‘low art’. The attempts to define art can be divided into those that propose a concept of art in a strong sense (which is expressly valorised) and those that instead offer a weak sense (which is only implicitly valorised).

The musician Brian Eno, for instance, in this year’s Peel Lecture, defined art in a weak sense as “everything you don't have to do”. He admitted this meant that boob jobs are art in this sense. I find in this a wonderful parallel to Noël Carroll’s criticism of Arthur Danto’s later theory of art, which Carroll suggests would support condom packets as art. Such is the normal business of a weak sense of art, which is certainly related to the concept of ‘low art’, although I think a clear distinction can be made. ‘Low art’ is expressly derogatory – it is intended to add magical lustre to ‘high art’ by contrast. Weak senses of art are not derogatory: Eno means no disrespect to boob jobs in choosing to cast a wide net. Indeed, he implicitly valorises them, even if it is only to a small degree.
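The set-theoretic point I gestured at above can be made embarrassingly concrete (a throwaway sketch of my own, with invented example sets that neither Eno nor Carroll would endorse): the strong sense nests inside the weak sense, so the bare Boolean question ‘is it art?’ is ill-posed until you say which set you mean.

```python
sculpture = {"statue", "installation"}
painting = {"portrait", "mural"}
everything_optional = {"boob job", "condom packet"}  # Eno's and Carroll's cases

art_strong = sculpture | painting             # expressly valorised
art_weak = art_strong | everything_optional   # implicitly valorised

for work in ("statue", "boob job"):
    print(work, work in art_strong, work in art_weak)
# statue True True
# boob job False True
```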

So I shall see your weasel and raise you a statue, for my only explicit disagreement with what you write here is the idea that terms such as ‘statue’ are “benign” since something can be a “poor statue and a statue still”. This entails the idea popularised by the logical positivists, and largely set in motion by G.E. Moore, of a distinction between facts and values. It is true that we may recognise something as a statue and still judge it poor. But consider the art teacher who, facing the formless splat of pottery her student has wrought, remarks “it’s only barely a statue” or “I thought I asked you to make a statue?”

Judgements never cleave perfectly into fact and value, an issue Kant accidentally put into play with the now universally deployed concepts of objective and subjective, and one I try to address in Chaos Ethics. You will not build a device for detecting what is or is not a statue, nor does a human judgement in this regard fall into pure subjectivity. The concepts of language are always practices, as Wittgenstein realised. What is or is not a statue is not a fact, nor is it a purely ‘subjective’ value. It is something that is explained solely by the practices of sculpture, and if we had no familiarity with these we could not use the term coherently.

One problem with ‘high art’ as a concept is that it has tended to accord with the notion of specific practices (such as sculpture) getting a free pass into strong senses of art. I certainly have no interest in this. This brings me to the question of why the term ‘art’ could matter at all. If you are correct that we would benefit from folding up the umbrella term and embracing “an open-ended series of ideas” – and on the latter point we concur – what exactly is the use of the art word? Especially since, much as with ‘game’, the umbrella not only blocks our view but all too quickly turns to a weapon. (I made the same allusion regarding gender last week: this is not a coincidence). Is there something the art word does that we can’t do without?

What makes ‘art’ an indispensable term is precisely its role as an umbrella that collects together disparate practices that cannot be collated into a coherent definition without substantial violence to the way it is used. Its usage, as Wittgenstein attests, is its meaning. Here we cross from the individualist pragmatism that animated your argument to the question of the institutions of art. These institutions are far from trivial elements of our cultures, and include a vast host of diverse organisations and practices that defend not one but many conflicting strong concepts of art. What one gallery (to pick upon the most physical of art institutions) deems art in the strong sense, and hence worthy of exhibition, will not match the judgements of another gallery. But no gallery can get to the decision to display anything without accepting some conceptual framework for art in the strong sense. Hence Rancière’s assertion that art is what reveals the arts to us, as mentioned in my previous letter to you.

Now as a sheer point of political actuality, we would all be greatly impoverished by the elimination of galleries – not least of all because the consequence of this would be that the great artworks (however conceived) would become solely the preserve of the wealthy few. (I find a worrying shadow of this in Wu-Tang Clan’s decision to make and sell an album to a single purchaser, which strikes me as fairly appalling, although understandable). But the word ‘art’ is not just the mythology animating the gallery system that ensures the poor can share in the work of artists, it is also a mythos that gives politicians reasons not to remove art from the curriculum, and that encourages the wealthy or politically connected to support those institutions that will pursue creative projects for reasons beyond profit or utility.

Returning to the question of the relationship between games and art, I can in this context give the clearest indication of why this matters. For Tale of Tales, who made this intersection their home for a marvellous decade, could not have pursued any of their works without the support of the Belgian arts council. Here is the corporeal consequence of the intangible spectre of the art word: there are no arts councils without it. The institutions that support the creation of public aesthetic works depend upon the notion of ‘art’ to justify their existence. And, if I am frank, the same is equally true of the mythology of ‘science’, which in many respects is far more problematic at this time than that of ‘art’ – and I say this without in any way contradicting my life-long love of the work of scientists. Institutions accrete around the broad terms that give them meaning. As such, we must be careful not to look behind the curtain if there is something for which we should like the great wizard to deploy its mystique.

The variegated tapestry of art institutions defend the myriad practices of artists from being crushed beneath systems of governing that will commit billions to bombing the poor abroad with ever more complex weaponry, and lock us all into uncritical acceptance of unjust infrastructures that ensure poverty and forms of death so prevalent our so-called ‘news’ services don’t even bother to report upon them. Art may be too weak a word to stop any of this from happening, but its institutions are one of the few places where this grotesquely destructive obsession with narrowly-conceived utility can be resisted. You may say all this could happen without ever speaking the name ‘art’. I am radically unconvinced of that.

With love and respect,

Chris.

Happy Winter Solstice! More nonsense in the Gregorian New Year.


Wisdom in Practice

An open letter to Chris Billows responding to his blog-letter Modern Philosophy and its Loss of Wisdom at The Journals of Doc Surge as part of the Republic of Bloggers. Further replies welcome!

Dear Chris,

It is a source of some consternation to me that philosophy has as its original meaning ‘love of wisdom’, whilst many philosophers today are more interested in being smart than in being wise. Indeed, some seem to doubt that there is any meaning to the term ‘wisdom’. But to reason from this to a complete absence of wisdom in philosophy would be manifestly unjust.

I must thank you once again for writing to me. At a time when blogging feels less like a community practice and more akin to cooking a feast only to flush it immediately down the toilet, I especially treasure the discourse I have with those who commit to actually communicating – with me, or with anyone! Your argument consists of two parts: a blanket indictment of philosophy, and a pep rally for psychology. Pragmatically, I must focus here on the defence of philosophy. The main thing I will say about psychology is that you have rather cherry-picked your examples and thus demonstrated what psychologists call selection bias. An examination of psychology’s merits and missteps will have to await another occasion.

Your most general complaint – that many philosophers have parted company with wisdom – is a perspective we share. But you wish to make this an accusation against philosophy as a discipline. This I cannot support, since it was philosophers who gave me my clearest awareness of the problems in question. The core of your argument is that philosophy is excessively cognitive, to which I impishly reply: “Aye, hallelujah! Finally a field by nerds and for nerds!” But then, the same is largely true of the sciences, most definitely including psychology. The sciences, however, have largely lost the reflective qualities of philosophy because, sadly, Einstein’s generation of scientists were the last to accept that philosophy was an essential part of their job requirements. Ever since, the desire has been (as with Jonathan Haidt) to demolish any platform for philosophy as a discipline, which is an incoherent objective since we all must either conduct philosophy or be bound unknowingly to the philosophy of others.

To mount a complaint against philosophy upon the basis of it being ‘too cognitive’ seems like a misdiagnosis. Philosophy is a cognitive practice; its excellences are of the mind. You would hardly complain of sport that it was ‘too physical’! The real question here isn’t the core nature of the practices being exercised but the way those practices are integrated into lives and societies. Here, I feel, is the root of a genuine problem, and it is one that you accurately link to modern philosophy, i.e. philosophy since Descartes. But Descartes’ philosophy didn’t come from nowhere. This problem goes back in one form or another to the ancient Greeks. It is not by accident that Alfred North Whitehead characterised the history of European philosophy as “a series of footnotes to Plato.”

It is important at this point to separate influence from blame. Plato and Aristotle could not have remotely guessed at the consequences of their philosophy millennia later, and were far more interested in the more immediate fate of the Hellenic city states of which they were citizens. Still, it is Plato that gives us the roots of the rigid objectivity that you mistakenly attribute to all philosophy (and seem to overlook in the context of the sciences). His allegory of the cave presents clearly this idea that the philosopher is able to get outside of the local conditions of life, see ‘the truth’, then return to local life with the truths in question safely packed away like a sandwich. This metaphysical view gets taken up into Christianity (Plato’s followers’ fingerprints are all over the Gospel of John), and from there gives birth to the entire range of modern sciences, including psychology. Bruno Latour points out that today it is the scientist who is credited with this power to magically collect the truth, which loses none of its problems after being exported from philosophy.

However, this account is an incomplete story since it ignores the fact that for the majority of Christianity’s time on our planet it was Aristotle, not Plato, that was its major philosophical influence. It is only the rise of the sciences, and hence of positivism, that put Plato in ascendance and Aristotle in decline. I will not say that what was taken from Aristotle in the Middle Ages was as good as it could be, but I will say that in contemporary ethics (at the very least) the philosophical descendants of Aristotle – primarily the virtue ethicists – are the most embodied, situated, and generally un-Platonic of the voices within that particular discipline.

When you paint a picture of breaking away from monolithic truth into pluralistic happiness, you seem to make two questionable judgements. Firstly, in equating the former with the church you obscure the fact that Christians – including philosophers like Kant and Kierkegaard – were the driving force in these cosmopolitan movements, at least until the twentieth century. It is precisely because the kind of pluralism we currently have can trace its lineage to Plato through Christianity that plurality has become more of a moral problem than we usually recognise (a topic I pursue in Chaos Ethics). But still, let’s not forget that Christians were part of the solution to absolutism, as well as part of the original problem.

Secondly, you equate pluralism with happiness. Yet pluralism is not in itself a source of happiness, and its only plausible merit is in defending a diversity of paths towards happiness. But we are not happy today, and greater pluralism is not able to make us happy without a substantial overhaul of what happiness is taken to mean. This, once again, is a task for philosophy, which is not to suggest philosophers have anything like exclusive authority over it. We certainly don’t! But some of us like to think we could help, at the very least.

You also raise the question of emotion in philosophy. This is very much a live issue in the discipline, as a glance at any summary of this topic will show. Neither is this situation new: up until the Victorian mobilisation of the sciences, talk of “the passions” (as the emotions were then known) had a central role in the work of almost everyone in philosophy. Then we outsourced this work to psychology in ‘the divorce’ (when philosophy and psychology became separate fields), with decidedly mixed results.

Nonetheless, the majority of the philosophers who inspire me are those whose work intersects with their lived experiences and could not be mistaken for coldly objective – people like Alasdair MacIntyre, Mary Midgley, Alain Badiou, Jacques Rancière, or even your countryman Charles Taylor, who very honourably attempted a career in politics so that he might have a chance to put his philosophy into practice. All of these philosophers, along with Ivan Illich (who inspired Taylor), argue against narrowly conceived notions of expertise and the unquestioned authority of experts, and many (particularly Rancière) are keen to resist the idea that distant intellectualism is what will help solve social problems. I might add that Badiou and Midgley are influenced by Plato, whose work is far more complex and nuanced than even I give it credit for. As ever, the realities of the situation are never as simple as they seem from the outside.

Philosophy is intelligence in theory and wisdom in practice. It can seem to be a coldly cerebral activity, but only if you mistake its theoretical and conceptual skills for its application. Philosophers make many mistakes, because they are fundamentally human – much like scientists. Unlike scientists, however, philosophers are frequently keen that the mistakes they make as humans are explored, rather than hidden behind veils such as blind peer review and faith in objectivity. Wittgenstein in particular was fascinated by our deeply human foibles, and in his later work became concerned that his own ideas would be ignored because of the generally positivistic bent of the time. He was wrong, as it happened: his philosophy significantly influenced psychology.

When philosophers fail in practice, it is usually because they have misunderstood which practices they are embedded in. Many, for instance, are still trying to debug Kant’s astonishing attempt to bring about peace through international cosmopolitanism. I fear the attempt to deploy wisdom at the level of the State is now doomed because at that scale wisdom is fundamentally impossible. To escape this trap requires philosophical revolutions of the kind philosophers, artists, and those blessed by a certain madness undertake. Perhaps you will consider joining us?

With love and respect,

Chris.

Why not share your perspective? I’d love to hear from you, whomever you might be!


The Game of Art

This is a reply to Jeroen’s letter about giving up the A-word, published on Monday this week, as part of the Republic of Bloggers.

Dear Jeroen,

To give up the word ‘art’ is not, it is clear, to give up any of the benefits of enjoying artworks. So why defend a notion of ‘art’ at all?

Let me begin by thanking you profusely for your missive, which has helped to shake me out of recent doldrums regarding blogging. This alone almost makes Twitter, where most of our exchanges occur, seem a more justified element of my existence. Your game has great interest for me, but it is not quite a new one. Indeed, I must begin by quoting Jacques Rancière, who states: “The discontent with aesthetics is as old as aesthetics itself.” This will require further elaboration (about which more anon), for having cited this I must dovetail your game with a more earnest version of the same, published in New Literary History four years ago.

In “Doing Without Art”, literary scholar Steven Connor begins with the aforementioned quotation, before launching into a diatribe that I would describe as a far less playful predecessor to your game. He too compares the use of the concept of ‘art’ to magic (specifically, to magical thinking) and insists not only that we could do without it but that we really ought to do so. Playing your game is one thing; expecting it to have a force upon others is another – and not coincidentally, I think, this is precisely what aesthetic judgement does to us, as Kant brilliantly deduced when he inadvertently kicked off what Rancière would term ‘the aesthetic regime of art’. The connection between magic and moral (or moral-like) imperatives is a dead giveaway that you and Connor are both positivists, and thus propelled by different winds to those that fill my own sails.

Those whose faith in the sciences is stronger than mine generally feel morally impelled to give up magic and magical thinking. What can sometimes be overlooked in this regard is the way that all language is magical in its sheer metaphorical dependence, a point brilliantly brought home by philosopher of mathematics Stephen Yablo, upon whom I draw often, especially in Imaginary Games. So if one is compelled to give up ‘art’ because of its untestable, anti-positivistic nonsense, one ought to be prepared for just how much must be thrown from the stricken balloon. Farewell nations and cities, for a start; you are merely abstractions with no firm grounding. So long personal identity – surely just a narrative device, as Daniel Dennett has deftly argued (like a dozen religions before him!). Oh, and goodbye Science too – that most magical of words, the thing that unites an impossibly diverse collection of research practices into a coherent whole. It must go. But what positivist can make this final cut without a twinge of regret...?

This parallel between Art and Science is not coincidental, and also takes us back to Kant, whose philosophical analyses undergird an incomprehensibly wide array of contemporary ideas. In both cases, we are tying together a panoply of practices within a guiding principle of unity – and in both cases, what that principle might be is not actually that clear, frequently borders upon the circular, and yes, is often rather magical. Reading Foucault and appreciating his methods for tracing the histories of practices (as I wrote about earlier this year in Foucault’s Archaeology) has given me a stronger appreciation for what I had only sensed before, and Foucault is also Rancière’s guide when it comes to the question of art. I mentioned above Rancière’s observation that discontentment with the concept of ‘art’ is as old as aesthetics. He quotes the following:

It is time we got completely rid of that expression which, ever since Kant, is ever and always to be read in the writings of amateurs of philosophy, even though its absurdity has often been recognized.... Aesthetics has become a veritable qualitas occulta – hidden behind this incomprehensible word there are many nonsensical assertions and vicious circles in arguments that should have been exposed long ago.

This reads just like Steven Connor’s argument that I compared to your game, yet this prose was written around 1810 by August Wilhelm von Schlegel, two centuries earlier. The problem, of course, is that the practices within which ‘art’ or ‘aesthetics’ take their meaning are fluid precisely because they deal with human experiences that are neither entirely private nor truly open to rigorous dissection. If we want to get a handle upon this problem, we need to trace matters differently, using methods like Foucault’s archaeology.

In this vein, Rancière identifies three regimes that have defined our understanding of artworks throughout Western history. Firstly, the ethical regime of images, which I shall skip over for brevity, then the representative regime of arts – which links up with your comment about the Elizabethan arts – and finally the aforementioned aesthetic regime of art that Kant initiates with his marvellous Critique of Judgement. I cannot do this conception justice here, but the important point is that in moving from a list of representative arts to the question of aesthetic experiences, Kant starts a very new game – the one in which ‘art’ is the key term, and the one to which your counter-game reacts.

Rancière says it more concisely than I can: “Indeed, ‘art’ is not the common concept that unifies the different arts. It is the apparatus that renders them visible.” This is the very purpose of playing the game of ‘art’ – and indeed, as my own research has revealed, the parallel game of defending a conception of ‘game’. In both cases, aesthetic values are revealed by the ways that people include (and thus valorise) certain things in their category, and exclude (and thus slight or denigrate) other things.

These games are among the most wonderful that we humans play, and even though I have forsworn the question of ‘what is a game?’ in order to better understand games, I feel the need to play the game of ‘art’ in order to secure – to a greater degree than I have already attempted – the lauded status of ‘art’ for certain games. This game is as engaging for me as yours is to you, although perhaps the stakes are slightly higher. The future directions of an entire cluster of media might be open to influence through our participation, and to fail to act seems to tacitly endorse the endless pornification of play that currently dominates. I don’t know... maybe I am just under art’s spell, but for me the fact that ‘art’ is magic is precisely its appeal.

With great respect,

Chris.

The opening image is a detail from Seven Wonders Puzzle by Brandi Strickland, which I found here on her website, brandistrickland.com. As ever, no copyright infringement is intended and I will take the image down if asked.

Jeroen replied with Discourses: Reflecting on the A-word with Chris Bateman. Other replies and comments always welcome.


Knowing That We Know

An open letter replying to Chris at The Journals of Doc Surge as part of the Republic of Bloggers. Further replies welcome!

Dear Chris,

How do we know that we are right, or that something is true, or that what we think is actually knowledge? Or, to come at this issue from a different angle: what is the emotional component of knowing? Despite a gap of four centuries, our understanding of the world still owes a philosophical debt to the French philosopher René Descartes, who perhaps more than anyone else in the Western traditions of thought wrestled with questions of doubt and certainty. It is Descartes who first raises the sceptical challenge: if I cannot eliminate all doubts, how can I be certain?

Many thanks for your blog letter Validation, Conviction, and Doubt, from June this year, in which you pick up the themes of knowledge as a practice that I introduced in the Why the Wikipedia Knows Nothing serial, and run off in rather surprising directions. I have spent a considerable amount of time pondering the points you raise here, not because I disagree with your general position, but because I want to be clear in my own mind as to the relationship between our emotions and our knowledge. Part of the issue here is that the conventional view of knowledge that I critique in Why the Wikipedia Knows Nothing makes knowing into an entirely rational and testable affair. This is a disastrous way of understanding the subject, since it means we can never know anything, and our conception of what it means to know does not reflect the experience of knowledge at all.

You cite Jonathan Haidt, a psychologist I have a rather conflicted relationship with, in terms of the idea that our intuitions precede our strategic reasoning. Haidt has been a brilliant voice in the intersection between liberal and conservative politics in the United States, and this aspect of his work I adore. I am rather less impressed with his relationship to philosophy, and particularly moral philosophy. Haidt was a philosophy undergraduate, and quit this degree to pursue another direction. Reading between the lines, I suspect he had violent disagreements with one or more of his philosophy teachers and still bears a chip on his shoulder. His writing (particularly in the book you cite) has so lost perspective on the subject of philosophy that he reads as an almost vindictive critic, bending over backwards to try to justify why philosophy isn’t needed. So, inevitably, when he has to discuss philosophical topics he goes horribly awry – because he knows so little about the topic!

Haidt makes the claim that because our intuitions precede our strategic reasoning about moral judgements, rational philosophy cannot be trusted on ethical matters, since it does not describe how people actually engage with moral subjects. I am in agreement with him that the force of our moral intuitions is greater than that of our ethical reasoning, and also that rational philosophy is problematic, but this latter claim needs some qualification. Rationalism has its roots in the episteme (to borrow Foucault’s term), or conditions-of-knowing, that Descartes gives us. It begins with the conception of knowledge as a necessary entertaining of all doubts, in order to overcome them all and reach certainty – a kind of epistemic Battle Royale. But the principal purveyors of this kind of rationalism today are the sciences, and thus Haidt very quickly paints himself into a corner. The knowledge he would have us accept is indelibly connected to the rationalism he would have us question. How can we trust the testimony of someone who cannot reliably account for his own knowledge?

At root, Haidt is still operating in the epistemic space opened up by Descartes that involves splitting the world into mind and body or, as Kant refines this, into subject and object. We have gained a great deal from these perspectives, but they also risk misleading us. There is a constant danger of equating subject (mind) with doubt, and object (body) with knowledge – a perspective readily apparent in the insistence that we must think objectively in order to get at the truth, i.e. that we must think like something that cannot think in order to reach the facts. Thus we get to a distorted understanding of the sciences where the scientist is merely a priest-like conduit for knowledge that magically arrives from the objective world, obscuring the impressive work that scientists actually conduct to get the world to ‘speak’ to us (as the suffix ‘-ology’ implies).

When we look at knowledge as a practice, this whole scheme of understanding becomes inverted – because it is evident that objects cannot know, and that knowledge is something that only beings with minds can possess, and they can only acquire it through interacting with the body of the universe (with the sole exception of logic and mathematics). What made it seem plausible that knowledge could be equated with objects was the idea that knowledge was simply a matter of accurately reporting the status and relationships of objects. But scientists do not learn about the world this way: they develop practices that turn objects into reliable witnesses; they make the world speak. In doing so, they trust their emotions in order to judge when they have an effective practice – and when they don’t (as Haidt quite often doesn’t), their emotions deceive them as to the importance and reliability of what they are claiming.

The whole process of knowing – in all its many guises – is inseparable from its emotional element. Our inquisitiveness or desire compels us to begin to acquire knowledge, either from learning an existing practice (as in the case of most knowledge) or by trying to develop a new practice (as researchers in the sciences do). Our excitement, commitment, curiosity – or stubbornness! – helps us maintain the repetition required to establish the habit upon which our practices are built. And the experience of triumph or satisfaction when we are able to execute a practice correctly – when our knowledge is shown to work in the way we expect – is foundational to knowing that we finally know how to do something, even if we sometimes need someone else to confirm for us that we are doing it right. Even (especially!) in the sciences, knowledge and emotion are intimately entwined. As Pascal suggested: “We know the truth, not only by the reason, but also by the heart.”

Your discussion of conviction and doubt goes further than I am able to adequately address in this letter, and displays a quintessentially Buddhist appreciation for the wisdom of doubt. I think, perhaps, that conviction is not an aspect of knowledge, as such, and falls under Hannah Arendt’s warning that whenever we resolve to take action, we are committing to a course of events the outcomes of which we cannot actually know. She calls the capacity to take action “the most dangerous of all human abilities and possibilities” and suggests, in 1961:

The totalitarian systems tend to demonstrate that action can be based on any hypothesis and, in the course of consistently guided action, the particular hypothesis will become true, will become actual, factual reality. The assumption which underlies consistent action can be as mad as it pleases; it will always end in producing facts which are then “objectively” true. What was originally nothing but a hypothesis, to be proved or disproved by actual facts, will in the course of consistent action always turn into a fact, never to be disproved.

Here is the danger of believing that knowledge is passively produced by excluding mind from matter: when we divorce knowledge from the emotions that guide the process of beginning to know, we banish doubt and glorify certainty by simply ceasing to know and calling it truth. It is this risk – that of premature certainty, of manufacturing truths to support any action – that is an ever-present danger in all our lives.

With love and respect,

Chris.

Any and all replies welcome.