
High Tech, Low Fidelity

Untitled, by Banksy

The virtue of fidelity is perhaps the hardest idea for anyone to appreciate in this age of robots. The concept of faithfulness and loyalty to a person, cause, or ideal feels like a relic from a time before our own, and attempts to espouse the merits of this otherwise simple concept flounder on the twin moral calamities we face. Understanding why fidelity is something valuable – indeed, invaluable – means defending against both of the catastrophic corruptions of ethical life that have afflicted our time, and showing how both are rooted in an abandonment of fidelity for something lesser. Yet before it can be argued that fidelity is worthwhile, we must recognise that we currently lack it, and appreciate how this came about.

A brief warning is required. When we enter the fault lines of our ways of thinking, it can be difficult to remain open to new perspectives. Moral horror, what psychologists call cognitive dissonance, makes it hard to even listen to a new perspective that strikes us as beyond belief. Thus to talk about fidelity and freedom in the paradoxical way I am about to is likely to set in motion all manner of resistances. Objections will loom large and make it easy to miss the point I am trying to make. By all means consider the arguments against what I am presenting; I would expect no less. But always leave open the possibility that no matter what I may have got wrong, there might be an essential truth at the heart of my argument worth grasping.

To understand the subtle problem I wish to outline, we must be able to appreciate the cybernetic networks (or cybergs) that we are enmeshed within, and how they relate to the moral catastrophes of our time. Consider first the disaster of individualism that tells us that we only have to be loyal to ourselves, and celebrates breaking free of tradition as a triumph of the individual will. There are indeed situations worth celebrating here, and movies like Footloose and Bend It Like Beckham rely upon this for their drama: parents enforcing religious standards bar their children from behaving in certain ways. Yet these stories serve to re-illuminate Christian and Sikh practices; Footloose is as much a lesson for Christians as it is a celebration of individual freedom, and while Sikh traditions are a smaller part of Bend It Like Beckham, the same general point remains.

Conversely, individualist doctrine serves to valorise any exit from religious traditions as a victory for freedom. Yet this escape is of the most fragmentary kind: those who make a clean break from their family and its locally-social network merely transfer from one small cyberg into the dispersed gigacybergs that absorb us all. Without ties to other practices, those founded upon people, the result is a chimerical existence as a free individual, one whose freedom consists solely in the choices of consumption being placed before us. This is the disaster of individualism at its core: we say we value individuality, yet foster a way of living that sacrifices any more substantial freedom in favour of merely prescribed market decisions.

We have misunderstood liberty as individual choice, and in so doing have lost any grasp over authentic freedom. This mistake is so subtly concealed from view that our typical understanding of our situation is the inverse of what a closer examination reveals. For we think that breaking free of the traditions and practices of our parents or extended family is the mark of freedom because we remove from our lives an influence that seems to impede our autonomy. Yet the core practices of both individual and group remain the same after this alleged ‘split’: we are still embedded within the cybernetic networks of money, digital media, medicine, cars, and transportation – the spaces where we are assured of our individuality through participation with a common culture of movies, games, scientific discovery, news and so forth that tells us we are free because we can change job and move to a new city – that we are free, in other words, because we are nomads, unbound from tradition.

Except, of course, for the traditions of money, media, medicine, technology, and transportation, which are (in the case of money) as old as the religious traditions that it is supposedly an expression of freedom to break with. The point being that we have equated our capacity for infidelity with freedom because we do not see any expression of liberty in the possibility of disentangling ourselves from the various cybergs that enclose us. Moreover, we do not even see that possibility at all. It is entirely beyond our thought.

Within the cybernetic networks that enfold us we possess only the minimum freedom of choice within a closed market, never the mark of authentic freedom that would come from the liberty to disengage from the cyberg itself. The car makes this clear: please, choose which motorized vehicle you wish to use to participate in transportation... you must do so if you are not to be radically disadvantaged by being unable to move around in the ways required by our planetary infrastructure. It is, paradoxically, only the human who lives outside the road network who possesses an authentic freedom of movement – the physical nomadism of the Tuareg shows a genuine personal freedom when measured against the cultural nomadism of the city-dweller, who claims their infidelity as freedom, while having no choice but to engage in transportation, money, and so forth.

Yet this is done willingly; it does not seem like a lack of choice because accepting all of these enmeshing networks is clearly desirable to us all, since the values by which we gauge our decisions to participate are taken to be beyond question. This is the moral disaster of consequentialism: we know unequivocally that outcomes matter (this is beyond doubt), and thus utility – the capacity to bring about desired outcomes – is the yardstick of all choices. Who could doubt that better technology is more useful than the tools remaining the same, or that living longer is better than dying young, or that getting between two points faster is better than doing so slower, or that superior entertainment beats boredom? Who could doubt that more money is better than less?

We weigh outcomes and think that this is the measure that cannot be doubted. But when this is the only way we evaluate things, we are being shallow-sighted. Yes, once we level transportation down to mere time between points, it’s a relatively simple question of what is better. Yet this hides the way roads replace other, more egalitarian spaces; how pedestrians and cyclists must choose between going further or gambling with their lives; how both the cost and the impact of living are raised by mandatory car ownership. Do we want to give up cars? Probably not. But we ought to look at them more closely than measuring them by expense, fuel consumption, and speed. You don’t just buy a car, you buy into the automotive infrastructure that the car requires to work. You buy into the car’s cyberg and all that entails, every submerged aspect that is buried beneath the imagined ideal of faster travel and the fantasy of driving conveyed by automobile commercials. A higher top speed won’t get you to your destination any sooner when you spend most of your journey stuck in traffic, as cities are learning far too slowly.

So too medicine: no-one can look at the frequency with which women died in childbirth just a century or so ago, nor the number of children that perished as mere babes, and not feel compelled to speak out against those who, for instance, opt out of vaccinations that benefit everyone. But again, we are shallow-sighted about medicine. The medical establishment causes almost as many problems as it cures, not least because this view of life – as permanently extensible as long as we have the right tools – sets us up for certain failure. Unlimited health care asks that we identify every problem as a disease, research a cure, and administer all cures to all people. Yet this is an ever-growing task, becoming more and more expensive and either distributing the best health care only to the wealthy, or gradually bankrupting nations that attempt to pay for everything for everyone. Do we want to give up medicine? Absolutely not. But still, we shall all eventually die, whatever we do, and medicine does not help us deal with that inevitability – it is rather our most effective way of hiding from it.

Money is the subtlest trap of them all, because the idea that more money is better is so effective at concealing the radical unimportance of money to well-being. That is not the same as saying poverty is great – by definition, those living in poverty don’t have enough. But what standard do we aim for? The moment you are living inside the car cyberg, your cost of living skyrockets, because transportation is now expected of each individual rather than being, as humans once took for granted, a community good. So you no longer need just food and shelter, now you need a car too. And the more money you have, the worse it becomes... second houses, sports cars, private jets – expense rises to match income, ensuring dissatisfaction. Worse still, the large accumulations of money become cybergs all of their own: capital chases its tail in a game of perpetual commercial deployment that has little benefit to anyone but the venture capitalists playing with the networks of extreme wealth.

Now the point here is not the critique of these specific cybergs – although there are dozens of unthought, undiscussed problems and crises hidden just beneath the surface. It is that our supposed expressions of individualism – breaking from tradition, leaving the family business, moving to another city or country – do not adequately represent personal freedom since whether we stay within or break away from our childhood culture, we remain inside all the gigacyberg networks of our time. This is not even a new critique: Ivan Illich already outlined these problems (albeit from a different perspective) in the 1970s: we are beset by radical monopolies, dominating systems that replace freedom with choices set upon their own rigid terms.

Christian, Hindu, atheist, Sikh, Buddhist, pagan, Muslim... all are taking part in the same practices – technology and all its messy fruits. The decision to break with your parents’ practices is at the same time the utter continuation of the medical, transportational, commercial practices that you acquired from your parents. So individualism is a crisis not because it is wrong to want to be free, but because we say we are free merely because we can vary the minutiae of our lives and beliefs and do not recognise how the common basis of evaluation remains unchanged. Individualism destroys fidelity because ultimately it is unbreakably allied to consequentialism in its narrow guise of utility.

High technology, low fidelity. We always go for the better, the new, the taste of the future over loyalty to the past. We do so because we are cultural nomads, and we have all chosen to live in essentially the same ways – not out of loyalty, but out of a bitter necessity borne of our dependence upon the cybergs towards which we cannot even claim fidelity. For how can we be faithful to something that we cannot even imagine the possibility of leaving?

The opening image is by Banksy. As ever, no copyright infringement is intended and I will take the image down if asked by Banksy, which seems unlikely.

Next week: Faith in What?


Babich and Bateman: Touching Robots

Last week, the tyrannosaur’s hands. This week, philosophers Babette Babich and Chris Bateman talk about hands upon computers and the illusions of interaction.

Babette Babich: The German name for cell phone is ‘Handy,’ which seems to be because having hands means we like to have things at hand, and we like to do things with our hands – pretty much all the time. And this is ubiquitous: here I include a photo of the Princeton philosopher Alexander Nehamas, taken in Athens at the last World Philosophy Congress, consulting with his cell phone.

Chris Bateman: I recall how uncomfortable I became thinking about whether my hands were or were not in my pockets... bringing this to my conscious attention as a late-teen made me ask questions about what I was doing with my hands, and why.

BB: Once upon a time, part of growing up was learning what to do with one’s hands: meaning nothing whatever. There were pockets but one wasn’t meant to have one’s hands in one’s pockets. Briefcases and handbags but one also was not meant to be rummaging around in them. And books, but then, in the company of others, except among strangers on a train of course, and so on, or in school, but in a meeting, during conversation, one was to use one’s hands to hold the book, without fidgeting and without reading it.

CB: The book, more than anything in my childhood, was the primary thing my hand was holding. Now, although I still read a great many books, the thing more often held in my hand is a smartphone, or as I like to call it, a ‘pocket robot’. And there’s a strong parallel here, because I used to read books as I walked down the street to work (a skill requiring considerable practice, and strong peripheral vision if you are not to step in something untoward!)... for some reason, walking with the smartphone in my hand bothers me more than the book did – and I don’t really know if this is because of differences in the experiences of book versus robot, or because I have changed in the almost twenty years between. In both cases, the item is an escape for me, out of perceptual reality and into the theatre of my mind – and in this, as Graeme Kirkpatrick pointed out in respect of videogame players and their controllers, my hands are out-of-mind. The smartphone, perhaps, makes it harder to ignore our hands than the book did.

BB: Pocket robot! I love this expression, because this personal robotic dimension is very surely part of the reason we are addicted to our phones! And the peripheral vision challenge is part of the allure of Pokémon Go. I think we will see more enhancements to come beyond Pokémon reality ‘skins’: imagine, and I know you are better at this than I could ever be as a game designer, but if one could outfit the world around one at will, say, dressing people on the streets in medieval garb?  What fun!  A personal holodeck to be configured at will. At the very least it could improve a blind date experience, maybe best without the other person knowing about it.  Augmented reality, Miniver Cheevy style! I defer to your judgment, but I can’t help wondering if the controller is not also a technological extension of the hand, just as we manage to use our keyboards that way, have for years… and I will bracket the comparison to Athanasius Kircher à la Siegfried Zielinski just to get back to analytic philosophy’s own preoccupation with hands, think Peter Godfrey-Smith and his octopus mind.

Controllers

CB: You are absolutely correct, the game controller functions as a prosthetic hand extension – indeed, I should say, game controllers, as these too went through something of a Cambrian Explosion in the early arcades – trackballs, the myriad buttons of Defender, toy guns, joysticks from lollipops to aircraft yokes – before steady commercial pressure stabilised the twin-stick controller that is the standard form for most game-literate players today. As VR comes in now, the pressure of the channel dug by this comfortable design becomes a problem, for players have learned that their right hand adjusts their view (and most players are not conscious of this while doing so because it is habitual), but with a headset dumping you into the visual field the neck must now be used instead. The result is both confusion, because the hand has learned and doesn’t rapidly unlearn, and disorientation, even nausea. I think of Wittgenstein’s ‘if a lion could speak, we would not understand him’, and think this might apply to our own hands.

BB: The Wittgenstein connection (although I also make the case with reference to Merleau-Ponty) is central – think of Wittgenstein’s On Certainty. Our hands are part of the way we communicate with others. The genius of the single, so-called gesture-enhanced or multi-touch modes, is that the desire to use our hands plays into this, and is part of what can count as ‘intuitive’ design. Think of dating apps, swipe left (or right). In each case, with or without the app, our smartphones and tablets let us touch, worse yet, they require us to do so, in order to use them optimally, not that all of us do. But, and this is the phenomenological takeaway, owing to our hand involvement, we cannot but interact with our devices: we are not merely using them to talk to others, we are talking to them, and they are talking back. One of my students at Juilliard (you will recall that they were all ‘kind enough’ to make some comments on the last blog, ‘required’ as you pointed out that these comments were), wrote a term paper on the phenomenon of autofill (and the point appears in one of the comments). This is the talking back that is autocorrect, the autofill completion, not necessarily qua response as much as an opting in to a replacement, whereby autofill speaks on our behalf. This is clearly the future and at the same time we have been opting in to allowing our machines to speak for us ever since we started using typewriters (this is a Kittleresque argument; Kittler in turn borrows from Nietzsche and other early adopters) and answering machines, allowing the machine to intercede for us, to take messages as a secretary would in our absence and not less in our presence, screening calls on our behalf.

CB: Absolutely, we are talking to our machines and we are simultaneously ignoring them, just as it was once (and not all that long ago) acceptable to instruct black servants without ever thinking of that as conversation, or even in many cases to think of them as people. This rightly shocks our sensibilities now that the issues of race are almost painfully and embarrassingly in focus, and while I am highly doubtful of a similar revolution in moral perspective happening with respect to computerised devices (which are nowhere near sentience, contra the late Justin Leiber), the time will come that they too will come into view. One of my principal philosophical interests at the moment is our relationships with our robots, and the ways that we dismiss the significance of this. Because, to give a simple example, when our robot summons us with a chime, we take it in hand and respond to its summons (Hegel’s Herrschaft und Knechtschaft [master and servant] again...). But this chimed summons will interrupt our engagement with others around us; it takes us out of our social space in a way that feels very different to the people who watch it happen. I have found, since first having this come to my attention, that it is an oddly shocking situation that those absent-and-distant people, when mediated by a robot, are perceived to deserve more respect than those present with us.

BB: Telephone obedience, quite Pavlovian, corresponds to the compulsion we feel to respond to a ringing phone, chime, or beep. The mischief is, the problem is thinking that we are really interacting. Thus if someone designs a really excellent sex robot, and it meets the Turing test that we have yet to devise, call this the Pinocchio test: a cartoon or a fairy tale that tells us that if there is a difference (more Leibniz) that makes no difference, we have attained godlike powers or what is just as good, an ideal companion, Galatea (and of course Galatea is Pygmalion’s male artist’s ideal of an ideal woman). But note that just that ideal would, for some folk, be an ideal friend: this would be, and now we can go back to Aristotle, someone who is everything we wish them to be, who responds as we would like, when we like, just as long as we would like.  This could be an ideal chess partner or, if we can multiply players, the perfect rugby match. Maybe we can get a real-life version of Monty Python’s Philosopher’s Football. Or, beyond a gaming partner, and this seems to get the lion’s share of attention, for obvious reasons, there was an entire conference at Goldsmith’s just before Christmas, just a month ago now, on robot sex – I wrote a small essay for the occasion on ‘teledildonics’ – or to be vanilla about it, and Hollywood has already gotten there first with the film Robot and Frank, just a companion for one’s elderly relatives in need of a caretaker.

Robot and Frank

CB: That we would happily mistake a robot for a person at least strikes me as a superior mistake to mistaking a person for a robot or, as sprang mischievously from Descartes’ view of animals as clockwork automata, failing to see an animal as a being. And here is an ironic end to a story that spans four centuries, because from mistaking animals as automata we now come to mistaking automata as people – a situation not entirely helped, in my estimation, by reducing everything to objects or (which amounts to much the same thing) telling people they are only an illusion. If I pretend that we can reduce all things to one kind – call it object, or unit, or body, or whatever – it only brings into clearer relief for me the differences between beings and things, which helps demonstrate how the idea of subjects and objects that descends from Descartes through Kant holds such force, even today. Yet the smartphone and other robots are the things that feel most like beings, because of their capacity for independent function. A robot, quite unlike a watch, is fun to be with.

The dialogue continues next week: Your Plastic Pal Who’s Fun To Be With


Should Your Laptop Say Please?

Please insert a disk into drive

Politeness is not merely an arcane code of conduct; it serves to smooth over the rough edges of human interaction by making requests more tactful, and thus less irritating. Yet as cyborgs we are not good at displaying tact towards one another, and if our robots could exhibit cyber-tact, does this mean your laptop should start saying ‘please’?

One of the unexpected side-effects of linking most of the human race together in a cybernetic communication and data retrieval network has been throwing everyone, regardless of background or circumstances, into random contact. Because the internet was conceived primarily as an institutional tool for combining computational capacities – that is, for networking robots rather than humans – the collision of people it has facilitated can only be considered unanticipated. We are still far from prepared for the consequences.

When a cyborg understands others and acts considerately towards them they display tact. This is a virtue that can mean many different things in many different situations, but the core habit behind them all is an attentiveness to the emotional impact of speech and behaviour. Politeness can be seen as an aspect of tactful behaviour – indeed, the easiest part of tact to master, since it is so formulaic. But politeness is a fairly narrow virtue while tact is broad and versatile, having the beneficial quality of helping both those who master it and those it is displayed towards. The corresponding debilities are bluntness, which marks a disregard for courtesy or an inflexible obsession with truth, and tactlessness, which manifests through a failure to correctly anticipate the interests of other cyborgs. Tact need not entail lying; honesty is not at issue here, but rather awareness of the effects of language and action upon others.

The internet has made tact far harder to master. When you deal solely with the people from your local culture you usually appreciate what you can or can’t get away with saying without causing offence. In our digital public spaces, however, someone from New York or Paris can collide with someone from rural Georgia or a remote part of Micronesia. This inherent culture clash is concealed by the indirectness of online connections (the vagueness of the digital other), and leads to substantially worse bluntness than happens in face-to-face interactions. The mask of anonymity here, as with kindness and respect, only makes the situation worse.

Tact manifests both in what is said and what remains unspoken or untyped. There is substantial overlap in this regard with respect and cyber-respect, but while respect is probably a requirement for tact, it is possible to respect another cyborg without displaying tact. Furthermore, attempts to enforce tact tend to end in a lack of respect. Thus while providing suitable warnings is a thoughtful expression of tact, it can never be entirely ethical to make such warnings mandatory. To do so is to demand respect by denying respect, a peculiar contemporary moral blindness that comes from practicing the rules-focussed ethics of ‘rights talk’ in a complete absence of appreciation for the ethical traditions that lead to rights claims (that is, to fall prey to the moral disaster of individualism).

Robots display personal cyber-tact when they act considerately towards their humans in terms of the triggering of information and do not pursue unwanted displays of media or information. Pop-ups are a classic example of cyber-tactlessness, as are embedded videos that play when accidentally touched while scrolling through text (the BBC news website is especially bad for this). Our robots are inherently cyber-blunt (although they needn’t be): when was the last time your laptop said ‘please’ when it wanted to download and install an update? Not that long ago, computers said ‘please’ when you had to insert a disc into a drive (see the image above): now, they just bully you into updating whether you want to or not.

Cyber-tact can also hypothetically manifest socially, when a robot encourages its human to behave with tact. It is far from clear that this ever happens in practice, and all the problems of maintaining respect against the mask of anonymity apply with tact. The root problem here is that concepts such as politeness, consideration, or toleration require a social imagination, something that beings of various kinds are capable of, but well beyond the programmatic capabilities of robots. This means any viable form of social cyber-tact must leverage human capabilities in order to work.

Designing robot systems to augment tact presents a significant challenge. Suppose a social network were to attempt to train its humans in tact by adding a policing system, such that tactless or blunt remarks were flagged by the community as such. The net result of this would rapidly devolve into carnage, since humans in digital public spaces will always abuse systems that are capable of causing harm. Of course, not everyone does so – but it only takes a small proportion of people to make a minor design flaw into a disaster.

A classic example occurred in the design of The Sims Online game. In the early version of this, players could declare other players ‘trustworthy’ or ‘untrustworthy’. However, a group of players calling themselves the ‘Sims Mafia’ realised they could use this feature to shake down new players – threatening to blackball them as ‘untrustworthy’ if they didn’t give them all their in-game money. The design of ‘public karma’ systems (as they are known) has avoided dealing with negative scores for precisely this reason, not to mention that humans will abandon tainted account credentials if necessary in what has been called ‘karma bankruptcy’.

Now it may seem that this is irrelevant to the question of cyber-tact: couldn’t you just have the robot provide a positive tact score? Yes, this would be the minimal case for cyber-tact. A positive tact system records when people report that others have been tactful, but necessarily such humans must already be capable of tact. The robot has displayed cybervirtue, but merely through tracking human virtue and thus encouraging the use of tact that a human already possessed. But our problem is precisely that the kind of tact we now need exceeds our prior experience. What is most needed in terms of cyber-tact is a way for a robot to teach its human how to act tactfully in the cultural collision of the internet. It is far from clear this design question is actually soluble.
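To make the minimal case concrete, here is a rough sketch of what such a positive-only tact system might look like. The class name, the daily reporting limit, and the flow are illustrative assumptions rather than any existing platform’s design – and, as argued above, a ledger of this kind can only record tact its humans already possess; it teaches nothing.

```python
from collections import defaultdict

class TactLedger:
    """A positive-only 'tact score' (hypothetical sketch).

    Humans can report that another account behaved tactfully, but there is
    no negative flag to weaponise - sidestepping the 'Sims Mafia' problem
    described above."""

    def __init__(self, daily_limit=5):
        self.daily_limit = daily_limit       # reports each account may file per day
        self.scores = defaultdict(int)       # account -> accumulated tact score
        self.filed_today = defaultdict(int)  # reporter -> reports filed today

    def report_tact(self, reporter, subject):
        """Record that `reporter` found `subject` tactful."""
        if reporter == subject:
            return False                     # no self-congratulation
        if self.filed_today[reporter] >= self.daily_limit:
            return False                     # rate-limit to blunt coordinated abuse
        self.filed_today[reporter] += 1
        self.scores[subject] += 1
        return True

    def end_of_day(self):
        self.filed_today.clear()             # reset everyone's daily allowance

ledger = TactLedger()
ledger.report_tact("alice", "bob")
print(ledger.scores["bob"])                  # -> 1
```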

Whereas designing for social cyber-respect may be a matter of giving up the mask of anonymity, social cyber-tact seems to be more challenging. In both cases, however, the design of robots can at least aim at personal cybervirtue, by (for example) affording their humans adequate control over what they see or read, defending against unwanted displays of media, and supplicating when requesting an inconvenience (instead of demanding, as is far more common). If we think of our robots as ‘neutral tools’, the idea that virtue could be applied to their function is lost on us. Yet we do not use a computer like a hammer (except when we are especially irate!) and we are more entitled than it may seem to expect it to say ‘please’ when it wants to do something that we do not.
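As a footnote to the personal case, here is a minimal sketch of what ‘supplicating when requesting an inconvenience’ could look like in practice. The function names and the ‘not now’ flow are hypothetical, not any real operating system’s update API; the point is only that the robot asks, accepts refusal, and backs off.

```python
def request_update(install_update, ask_user, defer_hours=4):
    """Ask permission for an inconvenience rather than demanding it."""
    answer = ask_user(
        "An update is available. May I install it now, please? (yes / not now) "
    )
    if answer.strip().lower() == "yes":
        install_update()
        return "installed"
    # The human declined: back off quietly rather than nagging or forcing a restart.
    return "deferred for {} hours".format(defer_hours)

# Example wiring with stand-in callables:
result = request_update(
    install_update=lambda: print("...installing..."),
    ask_user=lambda prompt: "not now",   # simulate a human declining
)
print(result)                            # -> deferred for 4 hours
```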

More cybervirtues next week.


Babich and Bateman: The Tyrannosaur's Hands

Last week, the self-satisfying qualities of social media. This week, philosophers Babette Babich and Chris Bateman talk about dinosaur hands.

Babette Babich: To say just one thing about this bodying forth [introduced last week] along with slow ways to pour coffee, it is worth pointing out that we can, indeed, point things out. We can do that in rather a good many ways: nod with our chins or noses, raise eyebrows (do let us think of the late Alan Rickman, because of what he could do with an eyebrow, and he himself gave the palm to Dame Maggie Smith in the same regard), or nudge something with an elbow (to be Gilbert and Sullivan about it) or, for a Manchester reference, with a knee, and so on, but usually we point a digit, a finger, or sometimes, in the Facebook iconography that somebody should someday think of tracing back to its patently imperialist association with the Roman Empire, a thumb.

I recently tweeted about Randall Jarrell’s The Bat Poet, “with wings,” as Jarrell wrote about his little misfit bat poet (and here I develop a response nascent in the reply offered by one member of the twitterati to my tweet), suggesting that there might be an answer to Thomas Nagel avant la lettre. Where bats have wings, these are their hands, so the comparative anatomy would have it, and it makes a difference to think here of the sheer having of hands.

Chris Bateman: The pointing out interests me as a capacity because, for instance, dogs are perfectly capable of understanding us when we point, yet they lack hands. When they need to gesture in a direction, they must use their whole body – a faculty that gives the Pointer breed its name. Our capacity to point with our hands goes beyond a simple compass reading; gesture is a whole other language of its own (and sign language thus essentially a development of that). Yet it strikes me that neither pointing nor gesturing actually require fingers...

BB: What is at issue is not the number of digits and such, not as in pop anthropology and physiology, the having of opposable thumbs and all that that is meant to have brought us, to wit various and sundry connections with comprehension and apprehension and the having of things in the palm of one’s hand, as it were.

When I was studying biology at university, way back in the last three decades of the last century, in the mid-1970s, professors teaching earth science still insisted to us in lectures that plate tectonics was an unproven theory, dismissing Wegener’s continental drift as had been done to his frustration throughout his life.  In courses in Comparative Anatomy and in Ornithology I read beyond classroom lectures to discover the then-speculative connection between dinosaurs and birds – including the economic arguments that larger dinosaurs could not have been, simply given that they moved at all, poikilothermic, cold-blooded. What convinced me concerning warm-blooded dinosaurs was the fossil record, not at all of the well-known archaeopteryx but rather of a find in Russia (as Russians like to name things), Sordes pilosus (hairy filth, hairy devil as it was then translated), the Latin gives us the Rickmanian resonance once again. But if a pterodactyl has fur or fur-like feathers that will serve, as in the comparative and cognate case of a bat's wings to be sure, to protect core body heat that can be lost in the surface area of wingspan, much else follows. I used to run around campus tweeting the way one tweeted before there was Twitter, imitating Tyrannosaurus rex, tweet, tweet, tweet, in a very deep voice: I did this with friends as part of a game, my boyfriend, who was much taller, was better at tweeting like T. rex. What follows for science is all about everything we cannot know as we have no trace of it, nothing of integument, little of feathers, little of fur, no reptilian scales, nothing of colouring, all things lost to the fossil record, apart from sheerly, literally glorious finds (like the recent amber discovery of a tail, complete, to be sure with fur, or as most reports describe it, with feathers, and other more recondite surface finds like Sordes pilosus).

CB: This image of you and your friends playing at tweeting tyrannosaur is not going to leave me very quickly! As an avid junior palaeontologist myself (admittedly, my ‘field work’ as a nine-year-old merely accumulated a veritable treasury of ammonites…), what struck me was the rapid manner in which the status of fossils changed. I remember, for instance, a brief period in the 1980s when archaeopteryx was claimed to be a hoax owing, I think in part, to the excellent British astronomer Fred Hoyle. Stephen Jay Gould, at the end of that decade, put the Burgess Shale into the spotlight – probably the only time a rock stratum has been famous! – as a panoply of oddities, and then Simon Conway Morris (who I spoke to briefly for The Mythology of Evolution) disputed this interpretation. Soon after, the book was closed on the bird-dinosaur connection you refer to, which seemed to go from heresy to orthodoxy in record time! Every dinosaur instantly went from crocodile-kin to bird-ancestor almost overnight (although, of course, those two are not mutually exclusive…)

BB: Brilliant! And we are probably still ensconced in that orthodox trend! But there are other questions: how did T. rex actually eat? After solving the energetic problems of getting up from sleep and having the energy to run at all, never mind the tweeting, T. rex – and paleoanatomists debated this at some length in the literature – would have had trouble putting anything in its mouth, and its feet don't seem, like a raptor's feet, to be for grasping... meaning that it would have had to use its hands. But how did it consume its prey? What else are we missing? I am thus fond of imagining that T. rex did not merely have little hands but that perhaps the little hands are just what remains of a variation on wings, like the baleen of a whale’s jaws, or, as a bird is a better analogue than a cetacean, a complement to jaws that would allow them to function like a pelican's beak. But it could also involve other anatomical extensions, like the cockscomb of a rooster or the flaring ruff of a desert lizard; there could well have been, so I thought, extra bits. All we see are the bones for little vestigial hands, as we suppose the appendix to be vestigial (what we are learning about the gut and its associated flora is likely to make that attribution as wrong-headed as our views on continental drift), and still, where would Japanese monster movies be without Godzilla’s little hands? But these ‘hands’ could also be differently articulated, and might be quite enough as a basis for cartilage and other extensions, or some other adaptation related to the thermodynamic eating demands of being a large land animal, from which could grow what were the effective ‘wings’ of the thunder lizard, not used for flight but for gathering prey. In addition to his bass tweet, tyrannosaurus might have run through the forest canopy or along the veldt, sweeping everything in its path into a great drag net of feathered, curved wings: gathered and scooped into those huge jaws.

Think Pac-Man with feet.

For human beings, our having hands as we do probably gets in the way of imagining T. rex at all (we find it hard to understand that a bat’s wings are, to a great extent, the bat’s ‘hands’). But above all, beyond flights of fanciful palaeontology, the German name for cell phone is ‘Handy,’ which seems to be because having hands means we like to have things at hand, and we like to do things with our hands – pretty much all the time.

The dialogue continues next week: Touching Robots


Top Ten Cybergs

Purple Cybernetic Flight

Every purposeful network of beings and things forms a cyberg, where (like an iceberg) we only see a fraction of the entailed network and the rest lurks beyond our awareness most of the time. The complete inventory of beings and things entailed within each of these cybernetic networks would be challenging to enumerate, but we can approximate the scale of each cyberg by counting just the number of one kind of entity within it, e.g. the number of humans, the number of computers.

To qualify as a cyberg at all, we must be dealing with a network that spans its entire breadth with some kind of active relation, even if merely one of potential. A nation is a good example: not every citizen knows every other citizen yet they are linked by a shared bureaucracy that integrates them all into one functional network. It is not enough for there to have been a common network of production – no matter how many people own a penknife, penknife-wielders do not have any ongoing relationship. Conversely, the exchange of media effectively links television stations and thus viewers such that while individual TV stations are modestly sized cybergs by contemporary standards, they aggregate into something far more substantial. (Religions are something of a borderline case in this regard, but I shall set these aside for now.)

In the list that follows, cybergs are listed in order of the size of a single indexed entity, either humans or devices. Everything listed is a gigacyberg, with no fewer than a billion entities embroiled in its network. This list is not intended to be definitive but merely indicative – it points to the general situation today from a perspective we would not normally consider.

Runners Up

A number of megacybergs narrowly missed the top ten, including the European Union (743 million), movies (about 800 million), and guns (875 million). More than 360,000 people die each year as a result of the gun cyberg, but this is by no means the most fatal of our cybernetic networks. If this list included religions, Christianity would be the number three gigacyberg (2.3 billion), Islam would be ranked jointly with Microsoft (1.5 billion), and the Hindu traditions would be a close runner up (900 million).

Joint 9th: Tencent and Google (1 billion)

Chinese internet giant Tencent and search colossus Google both have about a billion humans in their cyberg. While Tencent does not lead Chinese search (that honour goes to Baidu), it has a tremendously diverse network of internet services, including the wildly successful competitive game service League of Legends. Google dominates search globally – but even this only allows it to squeak into the world’s biggest cybergs if we take its quoted figures as accurately gauging its scale. Pragmatically, the reach of the Google cyberg is probably greater than this conservative estimate – but it feels somehow fitting to show this young upstart beginning its climb towards the top of the heap...

8th: Cars (1.2 billion)

It is possible to drive completely around the world thanks to the extent that the car-human cyborg has emerged as the dominant lifeform on our planet. We have completely changed the ecology of almost every biome by installing the infrastructure required to make cars a viable form of transportation. This is the world’s deadliest cyberg, taking more than 1.25 million human lives annually, and that figure does not include war deaths some would attribute to the oil industry that feeds this network.

7th and 6th: India and China (1.3 and 1.4 billion)

The only nations to qualify for this top ten list, India and China each have more than four times the population of the United States, and nearly twice the population of the European Union. China is the wealthier cyberg, with an economy four times the size of India’s, but both wield significant destructive power via their hundreds of nuclear weapons. However, they have less than 2.5% of the world’s nuclear stockpile, since the US and the Russian Federation hold 45% and 48% of the world’s nuclear weapons, a quantity far beyond any rational consideration.

5th: Microsoft (1.5 billion)

Despite no longer being the centre of attention in technology circles, Microsoft’s cyberg is 50% bigger than the certifiable size of Google’s, thanks to the continuing dominance of Windows, which has a 90% market share in desktops and laptops. That said, these are now only 20% of the robot market, which is dominated by smartphones (where Google enjoys 87% of the market). Microsoft is a cyberg in decline, unable to adequately break into the pocket robot marketplace, but jealously guarding its hold over other industrial cybergs.

4th: Television (1.6 billion)

That television enjoys only a marginal numerical advantage over Microsoft is a sign of how completely the computer has positioned itself as the cybernetic successor to the notorious boob tube. Yet there is another lesson here: the television is not ubiquitous, being a cyberg that extends through only 20% of the planet’s population.

3rd: Facebook (2 billion)

Here again we get a sense of the power of the digital cybergs... it has taken a little over a decade for Facebook to become the first definitive 2 billion human cyberg owned by one corporate entity. By leveraging human social instincts – and largely by accident, for it was not originally designed to operate as a surrogate for relationships – Facebook has aggregated more humans into one walled garden than anything else.

2nd: The Internet (3.5 billion)

It is distributed, beyond outright control (but certainly open to influence) and is the largest electronic cyberg on our planet. The internet... so significant, most dictionaries think it deserves a capital letter, like a nation. But this is a cyberg on a scale beyond national bureaucracies, a network that links half the planet’s humans to almost all the planet’s computers. Cisco claims there were 8.7 billion devices connected to the internet in 2012. As cybergs go, this one is the most spectacular in scale and potential. Yet it is still arguably outstripped by at least one larger cyberg...

1st: Money (7.3 billion)

This was the first cybernetic network, the first technical system to spread around our planet as both practice and tacit relations. As humans have grown more populous, so too has money spread with us – including into the virtual spaces of the internet, where this cyberg now lives as much or more than it does in the pockets of its humans. It seems positively simplistic next to the other gigacybergs, yet it engulfs almost every human; I have estimated that only 1-2% of the population of our planet are not caught up in the commercial cybernetic system. The sheer ubiquity of money as a concept is so complete that politics hinges more around budgetary numbers than about questions of how to live. This is one of our first technologies, as old as civilisation – and it remains our most successful.

More cybervirtue next week.


Babich and Bateman: Mediaddiction

Last week, the discussion about corporate venality passed sideways into a diagnosis of US politics and the commercial system propping it up. This week, philosophers Babette Babich and Chris Bateman turn to the moral ambiguity of social media.

Chris Bateman: You also suggest social media is rooted in a kind of masturbatory (if you’ll forgive the allusion) self-satisfaction, self-enclosure. Like dogs begging for treats, we become self-conditioned to seek the strokes of trivial recognition that social media hands out – and there’s nothing genuinely social about this. Yet you and I remain on Twitter despite our awareness of this problematic situation. Are we trapped? Naïve? Self-deceived? Or is there a counterpoint to this problem that justifies colluding with mediated culture? What, if anything, is the alternative?

Babette Babich: This is a grand question, it is, as I do suggest and I do mean this, perhaps also a certain benefit of social media that it has this self-satisfying character, though I also spoke more neutrally of a kind of social media autism. Sherry Turkle looks at this issue as she has written several books on the matter as her own thought has evolved and she thinks, and a lot of cognitive psychologists concur, that it might be better, in a word, simplistic as it can be, as she suggests this, if we were to put down our phones.

CB: Which we have made impossible by becoming habituated to our cyborg existence as robot-with-human, since the smartphone is nothing but a robot slave which, in line with Hegel’s famous critique, we as masters are dependent upon.

BB: We hack the imprecations of modern digital culture on our psyche in our lives by means of these little objects and the cybernetic reaches, as it were, its full conclusion, its ultimate consequences with this little device. We have hands.

CB: Not to mention eyes. The eye and the hand are our passage between worlds, worlds sustained by imagination (such as the worlds of videogames, or for that matter movies or paintings or novels) or worlds sustained by corporeal practices – including the kind of practical world that has been endangered by the systemic dependency on production we all accept and cannot question. I have never forgotten meeting a blind girl who played the videogames that the company I worked for at the time made, games that had been designed without any thought that someone without vision might play them. Yet she did. She essentially substituted patience for seeing. Which is ironic, because the prevalence of social media today is the substitution of seeing for patience, of which nobody retains any vestige – yet spectacle, pre-generated visions, video distractions... for these, we have an unquenchable appetite.

BB: What a beautiful analogy, especially the fast short-circuit to immediate gratification and its demands. This is the way addiction works. The dark problem with the dream of Facebook founder Mark Zuckerberg and his Oculus Rift and thus and indirectly that of whoever feeds him tidbits from whatever military-corporate government security arm there may be (the surveillance arm may be utterly ‘invisible’ but we know it is omnipresent and we know, or can suppose, its interests), is the built in, downloadable chip, or the very acoustic signal alone — I talk about this in The Hallelujah Effect, it is what effects the effect — hacked into our minds, our consciousness.

CB: The cyberpunk novelists were deeply into exploring this theme, with varying degrees of success, but the lesson of the early 21st century has been that you don’t need the cybernetics actually inside your flesh, you don’t need a neural splice or a data port, because hands and eyes are already a sufficient interface to enter into another world, a robot-mediated world, the ‘consensual hallucination’ William Gibson foreshadowed. Sterling’s imaginative future of conflict between those who favour genetic enhancement and those who favour software enhancement overreached the mark: we needed much less than expected to fall prey to the ‘near future’.

BB: Hands, eyes, and ears! This is the reason The Hallelujah Effect focuses on the acoustic – and if I were writing this book today rather than four years ago now, in addition to all the things you are mentioning, I would probably try to integrate a review of the ASMR augment [Autonomous Sensory Meridian Response], for those who can ‘feel’ it, and this too is a kind of ‘masturbatory’ thing. (There are various versions of the acronym floating around on the internet, and one refers to a non-ASMR Magic Fades album, Augmented Sapiosexual Misanthropic Relationships.) To my mind, the ASMR YouTube phenomenon, specifically the work of video performance artists – and there are several I feel worth noting [for example, here, here and here] but to name those names is worth its own blog discussion (probably in another and expressly acoustic context) – might in fact be further connected in the spirit of the anthropology and sociology and psychology of social media with the Asexual movement. Thus ASMR has nothing to do with the erotic despite the popular press’s habit (BBC’s Nick Higham has been trying to explore this [e.g. here, and again here]) of invoking ‘brain orgasms.’ Such journalistic prose gets a lot of attention but misses the point of ‘entrainment’ as this concerns me, as does the first response to the phenomenon suggested by Liverpool neuroscientist, Frances McGlone, as Higham quotes him. But to miss the connection with entrainment means missing utterly the interface point you very importantly emphasized above, Chris. Acoustic brain entrainment has been a research topic in the military for years (happens to be the disturbing but valuable take-away from the Scots philosopher – and performance artist! AKA Kode9 – Steve Goodman in his book, Sonic Warfare). But the miss may also be no accident inasmuch as official cognitive science refuses to recognize ASMR, so much so that current research on it is done by teams of grad students (nary a supervisor in sight) – and not too many teams of grad students at that. At the same time – this is a “normal science” phenomenon in Kuhn’s sense of the term – one can wonder whether a failure to recognize a phenomenon counts as proof against ontological standing or as an indication of a failure of scientific currency? Here the problem is that not everyone has an ASMR response, just as not everyone is colorblind. Thus there is a partial parallel with the debate on synaesthesia which was also for a long time roundly denied as a phenomenon for similar reasons. And, in addition to the non-universality of the phenomenon, there is also the general trouble we have with nuance and complexity in complex physical systems. Take the example of nutrition science. When I was young, and oddly this conviction remains in force and no amount of research seems able to shake it, nutritionists argued that a calorie was a calorie was a calorie in order to deny that table sugar was as such, that is: qua disaccharide (where glucose, which is what the body uses for energy, is a monosaccharide), a particularly bad thing, which even sugar lovers, and I am one, know it to be by direct experience: one lives the phenomenon, captivating high (or nervous absorption, however it works for you) followed by an almost predictable crash, and then there is dental health as well as the tendency to gain weight, adiposity, attested to a century ago by Brillat-Savarin.

CB: There is sometimes a pressure in research communities to find the simplest explanation – a calorie is a calorie – as if this was the highest goal of the sciences. It’s Occam’s Razor gone wild, throwing out every relevant circumstance in the pursuit of the elegance of simplicity. So I would counter the original suggestion that non sunt multiplicanda entia sine necessitate, “entities must not be multiplied without necessity” (which apparently Occam got from John Punch’s commentary on Duns Scotus) with what I like to call Occam’s Nemesis: necesse est ponere plures, “it is necessary to posit plurality”. In my estimation you are not demonstrating much in the way of expertise if you can only represent complex situations inadequately.

Flat White

BB: And so the question: does expert denial constitute idiocy or does it simply reflect the durability of the received view? The hermeneutic phenomenological approach that matters to me as a continental philosopher can be useful here. One can philosophize about apricot cocktails with it; the great beauty of the thing is that this includes other cocktails, and beer in addition to sucrose, as well as different kinds of coffee, as I am a great one for thinking about the virtues of coffee and philosophy, but not less travel, as coffee is a multifarious thing, a culture that cannot simply be translated into Starbucks’ parlance or indeed its lamentable prevalence, but is sedimented into the variations of a worldview. Took me years, and I am still not sure I am right about it, and I do live part-time in Winchester, to parse what a flat white might be.

CB: It is striking how coffee practices are automatically amalgamated into the commercial system. The flat white from 1980s Australia; the cortado from Spanish and Portuguese culture... If a new way of drinking coffee was discovered in the Amazonian rain forest tomorrow, I would expect to be drinking it in a coffee chain by the end of the month (and for someone somewhere to be complaining that they didn’t have it yet).

BB: You see, the things an American can learn! But experience tells us that it might not catch on: Starbucks tried to do the slow pour, but as a New Yorker I can attest that Starbucks is slow enough as it is, and the Japanese take on coffee requires a little more time than we tend to have (still: I love it because of its metonymic – this is a joke, like my mention of the title alone in Heidegger’s Analytic as if that alone would suffice – association with the supposed Coriolis effect, as if the entire earth were somehow involved in the brewing of your coffee, or in the pouring of water over one’s tea). As a hermeneutic phenomenologist, one attends to the lived world but not less to the embodiment of living what is lived in that lifeworld and for the sake of that. Thus someone like Heidegger could remind us not only of the life of the lifeworld but of the very vortices of the world as such (Heidegger speaks of worldhood and with-world) and the bodying forth of that life in the living of it.

CB: Phenomenology always leads me back to the imagination, and your remarks on Heidegger here remind me of Theseus’ famous speech in A Midsummer Night’s Dream, which also has some bearing on the social media phenomena:

And as imagination bodies forth
The forms of things unknown, the poet’s pen
Turns them to shapes and gives to airy nothing
A local habitation and a name.
Such tricks hath strong imagination,
That if it would but apprehend some joy,
It comprehends some bringer of that joy.
Or in the night, imagining some fear,
How easy is a bush supposed a bear!

The dialogue continues next week: Touching Robots


Tip of the Cyberg

Does technology simply increase human capabilities? Or have we radically misjudged the extent and complexity of the ever-growing abundance of tools around us?

The astonishing advances in technological prowess in the prior century or so give an impression of infinite capabilities – the closest secular thought gets to imagining omnipotence. Thus we have no difficulty envisioning (if we suspend critical judgment) techno-immortality, sentient robots, or interstellar travel. Indeed, science fiction is replete with these imaginary grails. This way of thinking about our tools – as personal enhancement – radically misleads us in several important ways (many of which I discuss in Chaos Ethics), but perhaps the most striking is the sense that equipped with any technology we act autonomously. This is always a subtle deceit.

Science fiction helps bring this confusion into focus. In Star Trek, the communicator, universal translator, phaser, transporter, and tricorder all do one thing perfectly (except when drama requires otherwise), to the extent that a Starfleet officer equipped with these things can appear anywhere, talk to anyone, scan anything to know what it is and what condition it is in, and – when push comes to shove – stun or kill on demand. All these capabilities occur literally at the push of a button. Where do these miracle tools come from? How do they work? It doesn’t matter; it’s high technology (hi-tech) – which is strikingly parallel to the magic-laden worlds of high fantasy. Arthur C. Clarke’s adage that any sufficiently advanced technology is indistinguishable from magic should raise more questions than it does... even in the context of sword and sorcery, we are entitled to ask: where does the magic come from? What is expended when it is used? What are the unseen limitations to its usage?

In the Terminator movie franchise, mankind in the future has been driven to the brink of extinction by robotic killing machines humanity made without thinking. That much of its setting is not hard to believe, particularly when you see the effortlessness with which the armed drone made battlefield honour obsolete. Yet against the backdrop of a total collapse of civilisation and killer robots prowling everywhere, the Resistance movement in the future depicted by Terminator: Salvation somehow maintains safe houses, feeds the survivors, even operates fighter planes. The aeroplane sits in our mind like the tricorder and communicator – autonomous once paired with a human. But as Bruno Latour never tires of reminding us: airplanes do not fly, it is airlines that fly. In stark contradistinction to what we see in a Terminator movie, no plane takes to the air without a logistical supply chain bringing fuel, an air traffic control network managing flight paths, and support personnel performing essential maintenance.

Technology is not magic, and even fictional portrayals of magic are not as autonomous as we imagine our tools make us. There is a stark difference between hammers, binoculars, and a wind-up torch on the one hand and computers, cars, and airplanes on the other. While both sets of examples are manufactured by complex meshes of people and things, the latter list also requires a complex network just to operate – a point brought into clear focus by the actor-network theory developed by Michel Callon and Bruno Latour. If a cyborg is what occurs when an organism is cybernetically enhanced by a tool like a hammer, we can call the network that produces and maintains the more complicated cyborgs – cars-and-their-drivers, or computers-and-their-users – a cyberg.

The iceberg famously has only 10% of its mass above the waterline, and thus only its top is visible to the casual observer. So it is with the cyberg – the cybernetic network required by the more convoluted of our technologies. We see only the cyborg – the car and its driver – and not the cyberg that makes it possible. When it comes to technology we are perpetually shallow-sighted: we see only the ‘surface’ of the network, so flat that it can be expressed as a one-dimensional array or list (car, driver, fuel, road). If we manage somehow to become more deep-sighted, we can recognise the relations and dependencies that give the cyberg its network-qualities (ore mines, smelting mills, factories, oil rigs, refineries and far more besides). These dependencies rapidly become tangential and obscure: an oil rig relies on scuba divers who repair its corroding metal structure with arc-welding equipment unique to their profession, but who is deep-sighted enough to think of the factories making hyperbaric welding kits or compressed air tank regulators when looking at a car?

It is the cyberg that defines our technological situation, more so than the scientific research projects that we (somewhat naively) see as feeding directly into new tools, like the magician conjuring a new alchemical potion out of thin air, having expended nothing but time. What is more, we can measure our depth into cyberg existence by looking at the numbers of people and things involved in the cybernetic network. A hammer made a millennium ago involved a miner and a blacksmith, a mule and a horse, a mine, a furnace and trees; no more than about a hundred beings and things were entailed in this early cyberg example. A functionally identical hammer today would entail a network of ten thousand beings and things, or even a hundred thousand.
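
A toy sketch may make this counting idea concrete. The following Python fragment is purely illustrative – the dependency graph, its node names, and the visible_network function are invented for the example, not drawn from any real inventory or from the essay itself – but it shows how the tally of beings and things explodes as we look one, two, or three steps beyond the shallow list of car, driver, fuel, and road:

```python
# Illustrative sketch only: counting the beings and things visible in a
# cybernetic network ("cyberg") as we look progressively deeper.
# The graph below is invented for the example; any real inventory would be
# vastly larger and messier.

from collections import deque

# Each entry maps a thing (or person) to what it visibly depends upon.
DEPENDS_ON = {
    "car-and-driver": ["car", "driver", "fuel", "road"],   # the shallow 'list' view
    "car": ["factory", "steel", "tyres"],
    "fuel": ["refinery", "oil rig"],
    "oil rig": ["scuba divers", "hyperbaric welding kit"],
    "hyperbaric welding kit": ["welding-kit factory"],
    "road": ["asphalt plant", "road crew"],
}


def visible_network(root, depth):
    """Return everything visible within `depth` steps of the root cyborg."""
    seen = {root}
    frontier = deque([(root, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == depth:
            continue
        for dep in DEPENDS_ON.get(node, []):
            if dep not in seen:
                seen.add(dep)
                frontier.append((dep, d + 1))
    return seen


if __name__ == "__main__":
    for depth in (1, 2, 3):
        members = visible_network("car-and-driver", depth)
        # Subtract one so the root cyborg itself is not counted.
        print(f"depth {depth}: {len(members) - 1} beings and things visible")
```

Even in this tiny made-up graph the count climbs from four entries at the shallowest depth to more than a dozen three steps out; the point is only that the growth is relentless, which is what the comparison between the medieval and the modern hammer gestures at.
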

Our cybergs get bigger, deeper, wider, and as they do our autonomy recedes even while the imagined scope of our autonomy grows. This is part of the technological blindness I have previously called cyberfetish and am here marking as shallow-sightedness: our strange capacity to see only the powers and potentials of our new tools, while overlooking or underestimating their consequences. Precisely because we have moved from tools that could be made by individuals or villages to tools that require nations or corporations to build and maintain, we live at a time when the cyberg is the measure of both possibility and catastrophe.

Although I have introduced the idea of a cyberg through the extended frameworks behind a specific tool, the concept behind these cybernetic meshes applies whenever beings and things are linked together into extended networks. Benedict Anderson’s observation that the printing press allowed the imagined communities we call nations to form can be read as saying that nations are cybergs. Every corporation is a cyberg, constituted slightly differently from nations, but in the last half century rivalling and exceeding them for power and influence. Every one of us is embroiled and imbricated in cybernetic networks of such scope and influence as to make a mockery of our mythos of technological empowerment. For when it comes to our tools, the enhancement of our personal agency is truly just the tip of the cyberg.

Next week: Top Ten Cybergs


Kawaii Hyper Capitalism

Over at the O Creative Studio website from wonderful Barcelona, Víctor Navarro Remesal becomes the first writer to join me in mulling over cybervirtue outside of Only a Game. Víctor’s piece, entitled The Rebellion of Robot Mates, discusses the charming image-flashing robot Peeqo, and the implications of this kind of design. It’s a great read, and full of animated gifs that demonstrate how Peeqo’s gif face expresses itself.

Here’s an extract:

Its design thins the thickness of everyday life by convincing us that everything should be festive, and I don’t know if I want a Dr. Who GIF in an ATM machine. Besides, with its chutzpah, Peeqo has managed to place another camera and another microphone inside our house, and keeps on compiling our data, a little bit more sugar for the Big Data pill and the Always Online motto. I think of Meitu hiding spy code and the sweetness of Peeqo acquires a new hue: that of kawaii hyper capitalism.

You can read the entirety of The Rebellion of Robot Mates over at the O Creative Studio website. Long live the republic of bloggers!


Babich and Bateman: Monopoly and Other Games

Last week, a discussion about corporate venality and Ivan Illich’s ‘machine’. This week, philosophers Babette Babich and Chris Bateman turn to the problems of US politics.

BB: Our relation to industrial or corporate capitalism seems, at least in certain of its elements, to be a version of the faith one might have in the Irish Lottery, that or a kind of contact fetishism: we seem to think we must be beholden to millionaires all the way down (as if contact with or enthusiastic support of the wealthy might be the secret ingredient to waking up wealthy some fine day). I like your feudal vision of this indenture [discussed last week]; perhaps there could be game design thus inspired – if the great Jane McGonigal’s game reflections draw out the advantages of games for life and even for measurable pain management, maybe we can have a game for the economy, the opposite of Ge Jin’s ‘gold farmers’ (like Jane, he had also been briefly at Fordham, though I met him at UCSD), a game that might help us understand real-life economics. Of course, as you know, such games exist, like Bertell Ollman’s board game for Marxism, Class Struggle (which was actually released as a board game – and I am grateful to Tracy Strong for tipping me off about this, although, and this is also how metonymy works, Tracy remembered Ollman’s game under the name of the more memorable, because rather higher profile, board game Anti-Monopoly by the San Francisco economist Ralph Anspach).

CB: Since you have invoked Monopoly, I am honour bound to comment on the rather sordid history of this game. For it originates, as people these days are increasingly aware, as a modification of Elizabeth Magie’s 1904 The Landlord’s Game. Magie, struck by how children had an innate sense of fairness, thought that if a game made it clear how unjust property ownership was, it would allow a future generation to produce a fairer world. The game rules, in fact (linked to above), contain some remarkable clauses such as: “The Landlord’s Game is based on present prevailing business methods. This the players can prove for themselves; and they can also prove what must be the logical outcome of such a system, i.e., that the land monopolist... is monarch of the world.” There is also a rule allowing players to vote in a Single Tax, so that revenue from land ownership is used for everyone’s benefit. It’s a remarkable design – and an even more remarkable story, for of course the ending is that it did not enjoy success in that form, but instead became the design template for the hymn to capitalism that is Charles Darrow et al.’s 1935 game Monopoly. It’s a game, frankly, that game designers hate because it is shockingly unbalanced – yet it has its fans, if for no other reason than that its millionaire power fantasy has a near-universal appeal.

BB: My point is less about riffs upon or reinventions of Monopoly than the critically phenomenological observation that we do not buck the system. Thus it is worth remembering that the same system one worries about under the reign of the crass and still crasser regime of Trump-style crony capitalism was unchallenged in eight years under Obama, whose main virtue now seems to be that he was classier about it. Obama bailed Wall Street out effectively to the very same dollar amount that Bush had done just prior to his own assumption of office, as if to send a signal. And as power is passed from one administrative regime to the next, the most critical voices (though not heard on mainstream news channels) raise questions about Obama’s drone habit…

CB: It continues to shock me, both that it happens, and that people do not appear to be horrified about it.

BB: Sloterdijk’s Terror from the Air could use an update on the bombs dropped by the US under Obama – a fairly silent war conducted without report, behind the scenes, unwitnessed – to which we can only add current anxieties about the very same legacy of the military-industrial complex that will continue under Trump, whose only promise appears to be to bring us more military projects for less investment.

CB: This is a problem that can be out in the open or hidden away. In the US, the demand for the best equipment for the troops (who are now permanently deployed, in stark denial of the intentions of the Founding Fathers) pours shocking sums of money into programmes that culminate in planet-killing bombs or murderous robots. The UK, with its smaller size, does not engage so directly in industrial weapons research and settles instead for a thriving business selling weapons abroad. We rival China for arms exports, although the two of us together sell less than a third of what the US manages to send abroad. In this regard it doesn’t matter who you vote for: the CIA and similar agencies keep pushing their projects and agendas along more or less regardless of who sits in the White House.

BB: Note that we didn’t quite have a choice in the last election, which was less a matter of Trump or Clinton (and the very fact that Sanders was closed out of the election is part of this non-choice), not only because of the workings of the ‘Hallelujah Effect’ but also because the popular vote has never been what decides an American presidential election: we did not invent the electoral college at Trump’s behest. And despite this, professors of political science who teach the rules for American elections in their classes by day moonlight on Facebook and Twitter saying the same things one can hear on Fox News and CNN regarding the dangers of Trump, the virtues of the popular vote, and the importance of blaming Jill Stein for Clinton’s defeat. After the election the same debates about the popular vote continued, shifting blame still to Jill Stein or else to Russia. But hacking is an issue that has nothing to do with Russia; it is an old question in a digital age (thank you, Diebold), so clichéd by 2016 that ten years earlier there was an HBO documentary on hacking American elections, Hacking Democracy. At issue in the documentary – it’s worth seeing – is less that hacking happens than that, and this is digitally and philosophically very intriguing, when it does happen it is undetectable: a difference that literally makes no difference. Unless you know it is being done, you can neither detect it nor rule it out.

CB: As a friend of hackers, I might dispute that claim – a good hacker can uncover a trail of breadcrumbs in situations that otherwise seem immaculate. But please go on!

BB: Still, what may be more disturbing is that stealing elections, the very idea, is so entrenched in US politics that political scientists take it for granted and factor it into their discussions of the popular vote, as if there were no other way about it. Thus if Bush steals an election from Gore, we shrug, or at least we let the Supreme Court do the shrugging for us. When Clinton’s campaign does related things contra Sanders, we shrug: backroom politics, what are ya gonna do? In the face of all that past shrugging, the most surprising thing is perhaps the mainstream media complaints regarding a putative Putin hack, post Trump. Hacks to one side, what is evident is that we work ourselves into a frenzy over the supposed choice between two non-choices while ridiculing anyone who claims that the two come down to the same thing. We do this even after eight years of explaining, on social media and in academic conference lectures and corridors, that Obama could not keep his campaign promises because of Republican opposition and corporate and lobbying forces in Washington – the same Republican concerns Obama sought to work with, the same corporate interests Obama bailed out.

CB: We are so focussed on the person who sits in a particular office that we lose sight of the larger system they are incorporated into, and which they cannot change. It always comes back to the same thing for me: how was Obama unable to stop drone assassinations? Attacks that killed vast numbers of innocents and that were by no means the ‘precision strikes’ they were intended to be. Indeed, these shameful practices – which in my view dishonour the very troops that US citizens, and indeed I myself, hold in such respect – flourished under Obama’s watch. And not, I suspect, because he was entirely in support of them, although sometimes I wonder...

BB: Note that we have already said all the trigger words needed to set off the automatic associations that drive these debates and inspire counter-claims: everyone knows where to come down on these and related issues.

CB: This is the problem with moral horror, as I call it in Chaos Ethics – the cognitive dissonance of politics and ethics: the moment you are triggered, as it has become popular to say, all possibility of discussion has already ended. We fight over these flashpoint situations without any possibility of that conflict doing anything but entrenching our ‘enemies’, making productive dialogue impossible. It is all too easy to simply give in to cynicism and conclude that there is no point making any kind of effort at all. At which point, Illich’s machine has most certainly won out. Corporate venality, as you have eloquently put it, is all that is left.

BB: I take very seriously your question as to what one might then do.  I don’t know. It seems to me that a great deal might be attained if one might finally come to see that there was a problem to begin with, in all its complexity and not less in its persistence. As Nietzsche once reflected – and his formulation is more salient than standard reflections on akrasia [ἀκρασία] – knowing better does not remove the conundrum: it does not mean one will do things otherwise, and it is far from the beginning of liberation: there is necessity, ananke [ἀνάγκη], all the way down.

The dialogue continues next week: Mediaddiction