Wisdom in Practice

An open letter to Chris Billows responding to his blog-letter Modern Philosophy and its Loss of Wisdom at The Journals of Doc Surge as part of the Republic of Bloggers. Further replies welcome!

Dear Chris,

It is a source of some consternation to me that philosophy has as its original meaning ‘love of wisdom’, whilst many philosophers today are more interested in being smart than in being wise. Indeed, some seem to doubt that there is any meaning to the term ‘wisdom’. But to reason from this conclusion to a complete absence of wisdom in philosophy would be manifestly unjust.

I must thank you once again for writing to me. At a time when blogging feels less like a community practice and more akin to cooking a feast only to flush it immediately down the toilet, I especially treasure the discourse I have with those who commit to actually communicating – with me, or with anyone! Your argument consists of two parts: a blanket indictment of philosophy, and a pep rally for psychology. Pragmatically, I must focus here on the defence of philosophy. The main thing I will say about psychology is that you have rather cherry-picked your examples and thus demonstrated what psychologists call selection bias. An examination of psychology’s merits and missteps will have to await another occasion.

Your most general complaint – that many philosophers have parted company with wisdom – is a perspective we share. But you wish to make this an accusation against philosophy as a discipline. This I cannot support, since it was philosophers who gave me my clearest awareness of the problems in question. The core of your argument is that philosophy is excessively cognitive, to which I impishly reply: “Aye, hallelujah! Finally a field by nerds and for nerds!” But then, the same is largely true of the sciences, most definitely including psychology. The sciences, however, have largely lost the reflective qualities of philosophy because, sadly, Einstein’s generation of scientists were the last to accept that philosophy was an essential part of their job requirements. Ever since, the desire has been (as with Jonathan Haidt) to demolish any platform for philosophy as a discipline, which is an incoherent objective since we all must either conduct philosophy or be bound unknowingly to the philosophy of others.

To mount a complaint against philosophy upon the basis of it being ‘too cognitive’ seems like a misdiagnosis. Philosophy is a cognitive practice; its excellences are of the mind. You would hardly complain of sport that it was ‘too physical’! The real question here isn’t the core nature of the practices being exercised but the way those practices are integrated into lives and societies. Here, I feel, is the root of a genuine problem, and it is one that you accurately link to modern philosophy, i.e. philosophy since Descartes. But Descartes’ philosophy didn’t come from nowhere. This problem goes back in one form or another to the ancient Greeks. It is not by accident that Alfred North Whitehead characterised the history of European philosophy as “a series of footnotes to Plato.”

It is important at this point to separate influence from blame. Plato and Aristotle could not have remotely guessed at the consequences of their philosophy millennia later, and were far more interested in the more immediate fate of the Hellenic city states of which they were citizens. Still, it is Plato that gives us the roots of the rigid objectivity that you mistakenly attribute to all philosophy (and seem to overlook in the context of the sciences). His allegory of the cave presents clearly this idea that the philosopher is able to get outside of the local conditions of life, see ‘the truth’, then return to local life with the truths in question safely packed away like a sandwich. This metaphysical view gets taken up into Christianity (Plato’s followers’ fingerprints are all over the Gospel of John), and from there gives birth to the entire range of modern sciences, including psychology. Bruno Latour points out that today it is the scientist who is credited with this power to magically collect the truth, which loses none of its problems after being exported from philosophy.

However, this account is an incomplete story since it ignores the fact that for the majority of Christianity’s time on our planet it was Aristotle, not Plato, that was its major philosophical influence. It is only the rise of the sciences, and hence of positivism, that put Plato in ascendance and Aristotle in decline. I will not say that what was taken from Aristotle in the Middle Ages was as good as it could be, but I will say that in contemporary ethics (at the very least) the philosophical descendants of Aristotle – primarily the virtue ethicists – are the most embodied, situated, and generally un-Platonic of the voices within that particular discipline.

When you paint a picture of breaking away from monolithic truth into pluralistic happiness, you seem to make two questionable judgements. Firstly, in equating the former with the church you obscure the fact that Christians – including philosophers like Kant and Kierkegaard – were the driving force in these cosmopolitan movements, at least until the twentieth century. It is precisely because the kind of pluralism we currently have can trace its lineage to Plato through Christianity that plurality has become more of a moral problem than we usually recognise (a topic I pursue in Chaos Ethics). But still, let’s not forget that Christians were part of the solution to absolutism, as well as part of the original problem.

Secondly, you equate pluralism with happiness. Yet pluralism is not in itself a source of happiness, and its only plausible merit is in defending a diversity of paths towards happiness. But we are not happy today, and greater pluralism cannot make us happy without a substantial overhaul of what happiness is taken to mean. This, once again, is a task for philosophy, which is not to suggest philosophers have anything like exclusive authority over it. We certainly don't! But some of us like to think we could help, at the very least.

You also raise the question of emotion in philosophy. This is very much a live issue in the discipline, as a glance at any summary of this topic will show. Neither is this situation new: up until the Victorian mobilisation of the sciences, talk of “the passions” (as then known) had a central role in the work of almost everyone in philosophy. Then we outsourced this work to psychology in ‘the divorce’ (when philosophy and psychology became separate fields), with decidedly mixed results.

Nonetheless, the majority of the philosophers who inspire me are those whose work intersects with their lived experiences and could not be mistaken for coldly objective – people like Alasdair MacIntyre, Mary Midgley, Alain Badiou, Jacques Rancière, or even your countryman Charles Taylor, who very honourably attempted a career in politics so that he might have a chance to put his philosophy into practice. All of these philosophers, along with Ivan Illich (who inspired Taylor), argue against narrowly conceived notions of expertise and the unquestioned authority of experts, and many (particularly Rancière) are keen to resist the idea that distant intellectualism is what will help solve social problems. I might add that Badiou and Midgley are influenced by Plato, whose work is far more complex and nuanced than even I give it credit for. As ever, the realities of the situation are never as simple as they seem from the outside.

Philosophy is intelligence in theory and wisdom in practice. It can seem to be a coldly cerebral activity, but only if you mistake its theoretical and conceptual skills for its application. Philosophers make many mistakes, because they are fundamentally human – much like scientists. Unlike scientists, however, philosophers are frequently keen that the mistakes they make as humans are explored, rather than hidden behind veils such as blind peer review and faith in objectivity. Wittgenstein in particular was fascinated by our deeply human foibles, and in his later work became concerned that his own ideas would be ignored because of the generally positivistic bent of the time. He was wrong, as it happened: his philosophy significantly influenced psychology.

When philosophers fail in practice, it is usually because they have misunderstood which practices they are embedded in. Many, for instance, are still trying to debug Kant's astonishing attempt to bring about peace through international cosmopolitanism. I fear the attempt to deploy wisdom at the level of the State is now doomed because at that scale wisdom is fundamentally impossible. To escape this trap requires philosophical revolutions of the kind philosophers, artists, and those blessed by a certain madness undertake. Perhaps you will consider joining us?

With love and respect,


Why not share your perspective? I’d love to hear from you, whoever you might be!

The Game of Art

This is a reply to Jeroen’s letter about giving up the A-word, published on Monday this week, as part of the Republic of Bloggers.

Dear Jeroen,

To give up the word ‘art’ is not, it is clear, to give up any of the benefits of enjoying artworks. So why defend a notion of ‘art’ at all?

Let me begin by thanking you profusely for your missive, which has helped to shake me out of recent doldrums regarding blogging. This alone almost makes Twitter, where most of our exchanges occur, seem a more justified element of my existence. Your game has great interest for me but is not quite a new one. Indeed, I must begin by quoting Jacques Rancière, who states: “The discontent with aesthetics is as old as aesthetics itself.” This will require further elaboration, about which more anon, for having cited this I must dovetail your game with a more earnest version of the same that was published in New Literary History four years ago.

In “Doing Without Art”, literary scholar Steven Connor begins with the aforementioned quotation, before launching into a diatribe that I would describe as a far less playful predecessor to your game. He too compares the use of the concept of ‘art’ to magic (specifically, to magical thinking) and insists not only that we could do without it, but that we really ought to do so. Playing your game is one thing; expecting it to have a force upon others is another – and not coincidentally, I think, this is precisely what aesthetic judgement does to us, as Kant brilliantly deduced when he inadvertently kicked off what Rancière would term ‘the aesthetic regime of art’. The connection between magic and moral (or moral-like) imperatives is a dead giveaway that you and Connor are both positivists, and thus propelled by different winds to those that fill my own sails.

Those whose faith in the sciences is stronger than mine generally feel morally impelled to give up magic and magical thinking. What can sometimes be overlooked in this regard is the way that all language is magical in its sheer metaphorical dependence, a point brilliantly brought home by philosopher of mathematics Stephen Yablo, whom I draw upon often, especially in Imaginary Games. So if one is compelled to give up ‘art’ because of its untestable, anti-positivistic nonsense, one ought to be prepared for just how much must be thrown from the stricken balloon. Farewell nations and cities, for a start: you are merely abstractions with no firm grounding. So long personal identity – surely just a narrative device, as Daniel Dennett has deftly argued (like a dozen religions before him!). Oh, and goodbye Science too – that most magical of words, the thing that unites an impossibly diverse collection of research practices into a coherent whole. It must go. But what positivist can make this final cut without a twinge of regret...?

This parallel between Art and Science is not coincidental, and also takes us back to Kant, whose philosophical analyses undergird an incomprehensibly wide array of contemporary ideas. In both cases, we are tying together a panoply of practices within a guiding principle of unity – and in both cases, what that principle might be is not actually that clear, frequently borders upon the circular, and yes, is often rather magical. Reading Foucault and appreciating his methods for tracing the histories of practices (as I wrote about earlier this year in Foucault’s Archaeology) has given me stronger appreciation for what I had only sensed before, and Foucault is also Rancière’s guide when it comes to the question of art. I mentioned above Rancière’s observation that discontentment with the concept of ‘art’ is as old as aesthetics. He quotes the following:

It is time we got completely rid of that expression which, ever since Kant, is ever and always to be read in the writings of amateurs of philosophy, even though its absurdity has often been recognized.... Aesthetics has become a veritable qualitas occulta – hidden behind this incomprehensible word there are many nonsensical assertions and vicious circles in arguments that should have been exposed long ago.

This reads just like Steven Connor’s argument that I compared to your game, yet this prose was written around 1810 by August Wilhelm von Schlegel, two centuries earlier. The problem, of course, is that the practices within which ‘art’ or ‘aesthetics’ take their meaning are fluid precisely because they deal with human experiences that are neither entirely private nor truly open to rigorous dissection. If we want to get a handle upon this problem, we need to trace matters differently, using methods like Foucault’s archaeology.

In this vein, Rancière identifies three regimes that have defined our understanding of artworks throughout Western history. Firstly, the ethical regime of images, which I shall skip over for brevity, then the representative regime of arts – which links up with your comment about the Elizabethan arts – and finally the aforementioned aesthetic regime of art that Kant initiates with his marvellous Critique of Judgement. I cannot do this conception justice here, but the important point is that in moving from a list of representative arts to the question of aesthetic experiences, Kant starts a very new game – the one in which ‘art’ is the key term, and the one to which your counter-game reacts.

Rancière says it more concisely than I can: “Indeed, ‘art’ is not the common concept that unifies the different arts. It is the apparatus that renders them visible.” This is the very purpose of playing the game of ‘art’ – and indeed, as my own research has revealed, the parallel game of defending a conception of ‘game’. In both cases, aesthetic values are revealed by the ways that people include (and thus valorise) certain things in their category, and exclude (and thus slight or denigrate) other things.

These games are among the most wonderful that we humans play, and even though I have forsworn the question of ‘what is a game?’ in order to better understand games I feel the need to play the game of ‘art’ in order to secure – to a greater degree than I have already attempted – the lauded status of ‘art’ for certain games. This game is as engaging for me as yours is to you, although perhaps the stakes are slightly higher. The future directions of an entire cluster of media might be open to influence through our participation, and to fail to act seems to tacitly endorse the endless pornification of play that currently dominates. I don't know... maybe I am just under art’s spell, but for me the fact that ‘art’ is magic is precisely its appeal.

With great respect,


The opening image is a detail from Seven Wonders Puzzle by Brandi Strickland, which I found here on her website, brandistrickland.com. As ever, no copyright infringement is intended and I will take the image down if asked.

Jeroen replied with Discourses: Reflecting on the A-word with Chris Bateman. Other replies and comments always welcome.

The Love That Too Loudly Speaks Its Name

My good and excellent virtual friend Jeroen, a stalwart of my discourses, if not perhaps of this blog, sent me this email letter about 'art' (here referred to as "the A-word"), which I take as part of the Republic of Bloggers. I shall reply later this week, and other replies are always welcome too.

Dear Chris,

I do hope this e-mail finds you well. It has been rather a while since I e-mailed you, and I believe we were in a discussion that sadly I lost track of in some bout of work. I have been enjoying what tweets you sent and thought perhaps I could hook in on your current venture to start some new train of thought? You were interested in at some point making the e-mails public, so I will write with this in mind, if you might be interested again.

Of course the subject is: the A-Word. I have now sworn to never say it again.

Naturally it is merely a game I play "to not say it", but it is an interesting game. So far the only problem I have had not saying the A-Word is in discussing the A-Word, and even that has never come about without me trying to explain what I am doing. More interesting is people's general reaction of some bewilderment and lack of understanding, eerily frequently followed by them not using the word either, as-if that is what the game of conversation now demands. I feel this symbolises exactly what I find so unappealing and destructive about the word; and it is what made me hope to hear your thoughts: I think the word is almost entirely magic, which overshadows what small amount of use it has. I suspect the word is mostly used to debate the word itself; and to imply importance (or lack of it) in a tremendously abstract sense.

Is it possible that it is not so much a question of what the A-Word is, but rather why the A-Word is? I think I come to this from seeing the usage of the word in Shakespearean English, to mean a learned skill, which I will acknowledge is hard to replace and is an important concept (for instance, "the physician's A in healing people"). In a sense it describes something external to itself there; and the 19th century use of it as a general term for the products of such skills still did. Is it a 20th century invention to think of the word (hitherto-fore never needed through-out centuries of incredible craftsmanship) as meaningful in itself? To the point where works that are called so are referential to the thing itself? As-if it were a cult, with its own rites, behaviours and customs, which slightly maliciously has drawn into itself (for no required reason) past works, branded by its own name, so that it can never truly be obsolete?

I see this perhaps not so much from the angle of "what words ought to exist", and rather from the idea that some words (concepts, rites) are particularly successful at self-replication, regardless of importance. An actual meme, if you will.

My experiment (and joy) in playing the game of not saying it is seeing whether I use the word because it exists and I am tempted to use it, involuntarily thereby validating an archaic concept from the 20th century (that has no bearing on the history of ages)... or whether the word has some use I cannot anticipate and find myself unable to express my thoughts at some point.

I would be curious to hear about your own ventures of the word,

And wishing you well,


My reply will appear later this week.

Knowing That We Know

An open letter replying to Chris at The Journals of Doc Surge as part of the Republic of Bloggers. Further replies welcome!

Dear Chris,

How do we know that we are right, or that something is true, or that what we think is actually knowledge? Or, to come at this issue from a different conception: what is the emotional component of knowing? Despite a gap of four centuries, our understanding of the world still owes a philosophical debt to the French philosopher René Descartes, who perhaps more than anyone else in the Western traditions of thought wrestled with questions of doubt and certainty. It is Descartes who first raises the sceptical challenge: if I cannot eliminate all doubts, how can I be certain?

Many thanks for your blog letter Validation, Conviction, and Doubt, from June this year, in which you pick up the themes of knowledge as a practice that I introduced in the Why the Wikipedia Knows Nothing serial, and run off in rather surprising directions. I have spent a considerable amount of time pondering the points you raise here, not because I disagree with your general position, but because I want to be clear in my own mind as to the relationship between our emotions and our knowledge. Part of the issue here is that the conventional view of knowledge that I critique in Why the Wikipedia Knows Nothing makes knowing into an entirely rational and testable affair, and this is a disastrous way of understanding this subject since it means we can never know anything, and our conception of what it means to know does not reflect the experiences of knowledge at all.

You cite Jonathan Haidt, a psychologist I have a rather conflicted relationship with, in terms of the idea that our intuitions precede our strategic reasoning. Haidt has been a brilliant voice in the intersection between liberal and conservative politics in the United States, and this aspect of his work I adore. I am rather less impressed with his relationship to philosophy, and particularly moral philosophy. Haidt was a philosophy undergraduate, and quit this degree to pursue another direction. Reading between the lines, I suspect he had violent disagreements with one or more of his philosophy teachers and still bears a chip on his shoulder. His writing (particularly in the book you cite) has so lost perspective on the subject of philosophy that he reads as an almost vindictive critic, bending over backwards to justify why philosophy isn’t needed. So, inevitably, when he has to discuss philosophical topics he goes horribly awry – because he knows so little about the topic!

Haidt claims that because our intuitions precede our strategic reasoning about moral judgements, rational philosophy cannot be trusted on ethical matters, since it does not describe how people actually engage with moral subjects. I am in agreement with him that the force of our moral intuitions is greater than that of our ethical reasoning, and also that rational philosophy is problematic, but this latter claim needs some qualification. Rationalism has its roots in the episteme (to borrow Foucault’s term), or conditions-of-knowing, that Descartes gives us. It begins with the conception of knowledge as a necessary entertaining of all doubts, in order to overcome them all and reach certainty – a kind of epistemic Battle Royale. But the principal purveyors of this kind of rationalism today are the sciences, and thus Haidt very quickly paints himself into a corner. The knowledge he would have us accept is indelibly connected to the rationalism he would have us question. How can we trust the testimony of someone who cannot reliably account for his own knowledge?

At root, Haidt is still operating in the epistemic space opened up by Descartes that involves splitting the world into mind and body or, as Kant refines this, into subject and object. We have gained a great deal from these perspectives, but they also risk misleading us. There is a constant danger of equating subject (mind) with doubt, and object (body) with knowledge – a perspective readily apparent in the insistence that we must think objectively in order to get at the truth i.e. that we must think like something that cannot think in order to reach the facts. Thus we get to a distorted understanding of the sciences where the scientist is merely a priest-like conduit for knowledge that magically arrives from the objective world, distorting the impressive work that scientists actually conduct to get the world to ‘speak’ to us (as the suffix ‘-ology’ implies).

When we look at knowledge as a practice, this whole scheme of understanding becomes inverted – because it is evident that objects cannot know, and that knowledge is something that only beings with minds can possess, and they can only acquire it through interacting with the body of the universe (with the sole exception of logic and mathematics). What made it seem plausible that knowledge could be equated with objects was the idea that knowledge was simply a matter of accurately reporting the status and relationships of objects. But scientists do not learn about the world this way: they develop practices that turn objects into reliable witnesses; they make the world speak. In doing so, they trust their emotions in order to judge when they have an effective practice – and when they don’t (as Haidt quite often doesn’t), their emotions deceive them as to the importance and reliability of what they are claiming.

The whole process of knowing – in all its many guises – is inseparable from its emotional element. Our inquisitiveness or desire compels us to begin to acquire knowledge, either from learning an existing practice (as in the case of most knowledge) or by trying to develop a new practice (as researchers in the sciences do). Our excitement, commitment, curiosity – or stubbornness! – helps us maintain the repetition required to establish the habit upon which our practices are built. And the experience of triumph or satisfaction when we are able to execute a practice correctly – when our knowledge is shown to work in the way we expect – is foundational to knowing that we finally know how to do something, even if we sometimes need someone else to confirm for us that we are doing it right. Even (especially!) in the sciences, knowledge and emotion are intimately entwined. As Pascal suggested: “We know the truth, not only by the reason, but also by the heart.”

Your discussion of conviction and doubt goes further than I am able to adequately address in this letter, and displays a quintessentially Buddhist appreciation for the wisdom of doubt. I think, perhaps, that conviction is not an aspect of knowledge, as such, and falls under Hannah Arendt’s warning that whenever we resolve to take action, we are committing to a course of events the outcomes of which we cannot actually know. She calls the capacity to take action “the most dangerous of all human abilities and possibilities” and suggests, in 1961:

The totalitarian systems tend to demonstrate that action can be based on any hypothesis and, in the course of consistently guided action, the particular hypothesis will become true, will become actual, factual reality. The assumption which underlies consistent action can be as mad as it pleases; it will always end in producing facts which are then “objectively” true. What was originally nothing but a hypothesis, to be proved or disproved by actual facts, will in the course of consistent action always turn into a fact, never to be disproved.

Here is the danger of believing that knowledge is passively produced by excluding mind from matter: when we divorce knowledge from the emotions that guide the process of beginning to know, we banish doubt and glorify certainty by simply ceasing to know and calling it truth. It is this risk – that of premature certainty, of manufacturing truths to support any action – that is an ever-present danger in all our lives.

With love and respect,


Any and all replies welcome.

The Politeness of Tutorials

Over on ihobo today, my reply to two missives from Jed Pressgrove and Chris Billows examining the role of the tutorial in videogames in terms of different motivating ideals of politeness. Here’s an extract:

The challenge in creating an adequate tutorial is that we possess no knowledge of the actual people who will be learning to play our game. Pitch the level of detail too low, and there will be players confused by what is expected of them. Provide too much detail, and those players who are skilled in figuring things out will be irritated... What is particularly bemusing about creating tutorials is that if you watch a player learn to play a game from over their shoulder, you might not need to say more than a sentence or two in order to put them on the right track. But this is because we have the intelligence to interpret the problems a player encounters and provide appropriate guidance: there is no adequate way to transfer this skill to a computer!

You can read the entirety of The Politeness of Tutorials over on ihobo.com.

Blog Republic Round-up

Thrilled to report that the blog is not dead, it is just under pressure from conventional social media. I have recently been enjoying my greatest extent of cross-blog conversations since the previous decade – and I’m loving it! Here’s what’s been happening…

More nonsense next week.

Horror and Punishment

Over on ihobo today, my response to Jed Pressgrove’s recent criticism of 2001’s Silent Hill 2. While I concede many of his points, it is substantially to defend the game (and the team behind it) that I wrote this reply. Here’s an extract:

When my wife and I went on to play Silent Hill 2, it annoyed me with its obvious staged linearity and almost total absence of what I, at the time, considered characteristic of game design. Where was the open structure of the first game? The tightly constructed progression? Why am I so constrained in almost everything I do now? So it was with some surprise that, after a few more playthroughs and considerable reflection, Silent Hill 2 eventually came to stand out as an exceptional case of game narrative. Indeed, I am hard pressed to find any game prior to 2001 that fulfils its narrative ambitions to the extent of this game – which is not to say that it is an unqualified success on all fronts. But then, my general view of game narrative prior to 2001, when Silent Hill 2 was released, is rather negative. There are signs of what might be possible… but they are rare, and almost always dragged down by an overbearing emphasis on puzzles or combat.

You can read the entirety of Horror and Punishment over on ihobo.com.