Babich and Bateman: Touching Robots

Last week, the tyrannosaur’s hands. This week, philosophers Babette Babich and Chris Bateman talk about hands upon computers and the illusions of interaction.

Babette Babich: The German name for cell phone is ‘Handy,’ which seems to be because having hands means we like to have things at hand, and we like to do things with our hands – pretty much all the time.  And this is ubiquitous: here I include a photo of the Princeton philosopher Alexander Nehamas, taken in Athens at the last World Philosophy Congress, consulting with his cell phone.

Chris Bateman: I recall how uncomfortable I became thinking about whether my hands were or were not in my pockets... bringing this to my conscious attention as a late-teen made me ask questions about what I was doing with my hands, and why.

BB: Once upon a time, part of growing up was learning what to do with one’s hands: meaning nothing whatever. There were pockets but one wasn’t meant to have one’s hands in one’s pockets. Briefcases and handbags but one also was not meant to be rummaging around in them. And books, but then, in the company of others, except among strangers on a train of course, and so on, or in school, but in a meeting, during conversation, one was to use one’s hands to hold the book, without fidgeting and without reading it.

CB: The book, more than anything in my childhood, was the primary thing my hand was holding. Now, although I still read a great many books, the thing more often being held in my hand is a smartphone, or as I like to call it a ‘pocket robot’. And there’s a strong parallel here, because I used to read books as I walked down the street to work (a skill requiring considerable practice, and requiring strong peripheral vision if you are not to step in something untoward!)... for some reason, walking with the smartphone in my hand bothers me more than the book did – and I don’t really know if this is because of differences in the experiences of book versus robot, or because I have changed in the almost twenty years between. In both cases, the item is an escape for me, out of perceptual reality and into the theatre of my mind – and in this, as Graeme Kirkpatrick pointed out in respect of videogame players and their controllers, my hands are out-of-mind. The smartphone, perhaps, makes it harder to ignore our hands than the book did.

BB: Pocket robot! I love this expression, because this personal robotic dimension is very surely part of the reason we are addicted to our phones! And the peripheral vision challenge is part of the allure of Pokémon Go, I think. We will see more enhancements to come beyond Pokémon reality ‘skins’: imagine, and I know you are better at this than I could ever be as a game designer, but if one could outfit the world around one at will, say, dressing people on the streets in medieval garb?  What fun!  A personal holodeck to be configured at will. At the very least it could improve a blind date experience, maybe best without the other person knowing about it.  Augmented reality, Miniver Cheevy style! I defer to your judgment, but I can’t help wondering if the controller is not also a technological extension of the hand, just as we manage to use our keyboards that way, have for years… and I will bracket the comparison to Athanasius Kircher à la Siegfried Zielinski just to get back to analytic philosophy’s own preoccupation with hands, think Peter Godfrey-Smith and his octopus mind.

CB: You are absolutely correct, the game controller functions as a prosthetic hand extension, indeed, I should say, game controllers, as these too went through something of a Cambrian Explosion in the early arcade – trackballs, the myriad buttons of Defender, toy guns, joysticks from lollipops to aircraft yokes – before steady commercial pressure stabilised the twin stick controller that is the standard form for most game-literate players today. As VR comes in now, the pressure of the channel dug by this comfortable design becomes a problem, for players have learned that their right hand adjusts their view (and most players are not conscious of this while doing so because it is habitual), but with a headset dumping you into the visual field, the neck must be used instead. The result is both confusion, because the hand has learned and doesn’t rapidly unlearn, and disorientation, even nausea. I think of Wittgenstein’s ‘if a lion could speak, we could not understand him’, and think this might apply to our own hands.

BB: The Wittgenstein connection (although I also make the case with reference to Merleau-Ponty) is central – think of Wittgenstein’s On Certainty. Our hands are part of the way we communicate to others.  The genius of the single, so-called gesture-enhanced or multi-touch modes, is that the desire to use our hands plays into this, and is part of what can count as ‘intuitive’ design. Think of dating apps, swipe left (or right). In each case, with or without the app, our smartphones and tablets let us touch, worse yet, they require us to do so, in order to use them optimally, not that all of us do. But, and this is the phenomenological takeaway, owing to our hand involvement, we cannot but interact with our devices: we are not merely using them to talk to others, we are talking to them, and they are talking back. One of my students at Juilliard (you will recall that they were all ‘kind enough’ to make some comments on the last blog, ‘required’ as you pointed out that these comments were), wrote a term paper on the phenomenon of autofill (and the point appears in one of the comments). This is the talking back that is autocorrect, the autofill completion, not necessarily qua response as much as an opting in to a replacement, whereby autofill speaks on our behalf. This is clearly the future and at the same time we have been opting in to allowing our machines to speak for us ever since we started using typewriters (this is a Kittleresque argument; Kittler in turn borrows from Nietzsche and other early adopters) and answering machines, allowing the machine to intercede for us, to take messages as a secretary would in our absence and not less in our presence, screening calls on our behalf.

CB: Absolutely, we are talking to our machines and we are simultaneously ignoring them, just as it was once (and not all that long ago) acceptable to instruct black servants without ever thinking of that as conversation, or even in many cases to think of them as people. This rightly shocks our sensibilities now that the issues of race are almost painfully and embarrassingly in focus, and while I am highly doubtful of a similar revolution in moral perspective happening with respect to computerised devices (which are nowhere near sentience, contra the late Justin Leiber), the time will come that they too will come into view. One of my principal philosophical interests at the moment is our relationships with our robots, and the ways that we dismiss the significance of this. Because, to give a simple example, when our robot summons us with a chime, we take it in hand and respond to its summons (Hegel’s Herrschaft und Knechtschaft [master and servant] again...). But this chimed summons will interrupt our engagement with others around us; it takes us out of our social space in a way that feels very different to the people who watch it happen. I have found, since first having this come to my attention, that it is an oddly shocking situation that those absent-and-distant people, when mediated by a robot, are perceived to deserve more respect than those present with us.

BB: Telephone obedience, quite Pavlovian, corresponds to the compulsion we feel to respond to a ringing phone, chime, or beep. The mischief is thinking that we are really interacting. Thus if someone designs a really excellent sex robot, and it meets the Turing test that we have yet to devise, call this the Pinocchio test: a cartoon or a fairy tale that tells us that if there is a difference (more Leibniz) that makes no difference, we have attained godlike powers or what is just as good, an ideal companion, Galatea (and of course Galatea is Pygmalion’s male artist’s ideal of an ideal woman). But note that just that ideal would, for some folk, be an ideal friend: this would be, and now we can go back to Aristotle, someone who is everything we wish them to be, who responds as we would like, when we like, just as long as we would like.  This could be an ideal chess partner or, if we can multiply players, the perfect rugby match. Maybe we can get a real-life version of Monty Python’s Philosopher’s Football. Or, beyond a gaming partner, and this seems to get the lion’s share of attention, for obvious reasons, there was an entire conference at Goldsmiths just before Christmas, just a month ago now, on robot sex – I wrote a small essay for the occasion on ‘teledildonics’ – or to be vanilla about it, and Hollywood has already gotten there first with the film Robot and Frank, just a companion for one’s elderly relatives in need of a caretaker.

Robot and Frank

CB: That we would happily mistake a robot for a person at least strikes me as a superior mistake to mistaking a person for a robot or, as sprang mischievously from Descartes’ view of animals as clockwork automata, to failing to see an animal as a being. And here is an ironic end to a story that spans four centuries, because from mistaking animals as automata we now come to mistaking automata as people – a situation not entirely helped, in my estimation, by reducing everything to objects or (which amounts to much the same thing) telling people they are only an illusion. If I pretend that we can reduce all things to one kind – call it object, or unit, or body, or whatever – it only brings into clearer relief for me the differences between beings and things, which helps demonstrate how the idea of subjects and objects that descends from Descartes through Kant holds such force, even today. Yet the smartphone and other robots are the things that feel most like beings, because of their capacity for independent function. A robot, quite unlike a watch, is fun to be with.

The dialogue continues next week: Your Plastic Pal Who’s Fun To Be With


Should Your Laptop Say Please?

Please insert a disk into drive

Politeness is not merely an arcane code of conduct; it serves to smooth over the rough edges of human interaction by making requests more tactful, and thus less irritating. Yet as cyborgs we are not good at displaying tact towards one another, and if our robots could exhibit cyber-tact, does this mean your laptop should start saying ‘please’?

One of the unexpected side-effects of linking most of the human race together in a cybernetic communication and data retrieval network has been throwing everyone, regardless of background or circumstances, into random contact. Because the internet was conceived primarily as an institutional tool for combining computational capacities – i.e. for networking robots, not humans – the collision of people it has facilitated can only be considered unanticipated. We are still far from prepared for the consequences.

When a cyborg understands others and acts considerately towards them they display tact. This is a virtue that can mean many different things in many different situations, but the core habit behind them all is an attentiveness to the emotional impact of speech and behaviour. Politeness can be seen as an aspect of tactful behaviour – indeed, the easiest part of tact to master, since it is so formulaic. But politeness is a fairly narrow virtue while tact is broad and versatile, having the beneficial quality of helping both those who master it and those it is displayed towards. The corresponding debilities are bluntness, which marks a disregard for courtesy or an inflexible obsession with truth, and tactlessness, which manifests through a failure to correctly anticipate the interests of other cyborgs. Tact need not entail lying; honesty is not at stake here, but rather awareness of the effects of language and action upon others.

The internet has made tact far harder to master. When you deal solely with the people from your local culture you usually appreciate what you can or can’t get away with saying without causing offence. In our digital public spaces, however, someone from New York or Paris can collide with someone from rural Georgia or a remote part of Micronesia. This inherent culture clash is concealed by the indirectness of online connections (the vagueness of the digital other), and leads to substantially worse bluntness than happens in face-to-face interactions. The mask of anonymity here, as with kindness and respect, only makes the situation worse.

Tact manifests both in what is said and what remains unspoken or untyped. There is substantial overlap in this regard with respect and cyber-respect, but while respect is probably a requirement for tact, it is possible to respect another cyborg without displaying tact. Furthermore, attempts to enforce tact tend to end in a lack of respect. Thus while providing suitable warnings is a thoughtful expression of tact, it can never be entirely ethical to make such warnings mandatory. To do so is to demand respect by denying respect, a peculiar contemporary moral blindness that comes from practicing the rules-focussed ethics of ‘rights talk’ in a complete absence of appreciation for the ethical traditions that lead to rights claims (that is, to fall prey to the moral disaster of individualism).

Robots display personal cyber-tact when they act considerately towards their humans in terms of the triggering of information and do not pursue unwanted displays of media or information. Pop-ups are a classic example of cyber-tactlessness, as are embedded videos that play when accidentally touched while scrolling through text (the BBC news website is especially bad for this). Our robots are inherently cyber-blunt (although they needn’t be): when was the last time your laptop said ‘please’ when it wanted to download and install an update? Not that long ago, computers said ‘please’ when you had to insert a disc into a drive (see the image above): now, they just bully you into updating whether you want to or not.
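The contrast between supplicating and demanding can be made concrete. Here is a minimal sketch of what a cyber-tactful update request might look like; the function name, wording, and injectable `prompt` parameter are all my own invention for illustration, not any real operating system’s API:

```python
def request_update(prompt=input):
    """Sketch of a cyber-tactful update request: the machine asks
    politely and treats refusal as final (no nagging, no forced
    restart). `prompt` is injectable so the dialogue can be
    exercised without a live user at the keyboard."""
    answer = prompt("Please may I download and install an update? "
                    "If you refuse, I will not ask again today. [y/n] ")
    # Anything other than an explicit yes counts as a refusal.
    return answer.strip().lower() in ("y", "yes")
```

The design choice doing the work is that refusal is the default and is respected, which is precisely what the bullying update dialogues described above do not do.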

Cyber-tact can also hypothetically manifest socially, when a robot encourages its human to behave with tact. It is far from clear that this ever happens in practice, and all the problems of maintaining respect against the mask of anonymity apply with tact. The root problem here is that concepts such as politeness, consideration, or toleration require a social imagination, something that beings of various kinds are capable of, but well beyond the programmatic capabilities of robots. This means any viable form of social cyber-tact must leverage human capabilities in order to work.

Designing robot systems to augment tact presents a significant challenge. Suppose a social network were to attempt to train its humans in tact by adding a policing system, such that tactless or blunt remarks were flagged by the community as such. The net result of this would rapidly devolve into carnage, since humans in digital public spaces will always abuse systems that are capable of causing harm. Of course, not everyone does so – but it only takes a small proportion of people to make a minor design flaw into a disaster.

A classic example occurred in the design of The Sims Online game. In the early version of this, players could declare other players ‘trustworthy’ or ‘untrustworthy’. However, a group of players calling themselves the ‘Sims Mafia’ realised they could use this feature to shake down new players – threatening to blackball them as ‘untrustworthy’ if they didn’t give them all their in-game money. The design of ‘public karma’ systems (as they are known) has avoided dealing with negative scores for precisely this reason, not to mention that humans will abandon tainted account credentials if necessary in what has been called ‘karma bankruptcy’.

Now it may seem that this is irrelevant to the question of cyber-tact: couldn’t you just have the robot provide a positive tact score? Yes, this would be the minimal case for cyber-tact. A positive tact system records when people report that others have been tactful, but necessarily such humans must be already capable of tact. The robot has displayed cybervirtue, but merely through tracking human virtue and thus encouraging the use of tact that a human already possessed. But precisely our problem is that the kind of tact we now need exceeds our prior experience. What is most needed in terms of cyber-tact is a way for a robot to teach its human how to act tactfully in the cultural collision of the internet. It is far from clear this design question is actually soluble.
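The minimal positive-only design described above can be sketched in code. This is a hypothetical illustration (the `TactLedger` class and its method names are invented, not drawn from any real karma system); the two key choices are that only commendations are recorded, so there is nothing to weaponise, and that each reporter counts at most once per subject, limiting ballot-stuffing:

```python
from collections import defaultdict


class TactLedger:
    """Sketch of a positive-only 'public karma' system for tact."""

    def __init__(self):
        self._scores = defaultdict(int)     # subject -> commendation count
        self._commended = defaultdict(set)  # reporter -> subjects already commended

    def commend(self, reporter, subject):
        """Record that `reporter` found `subject` tactful.

        Self-commendation and duplicates are ignored; there is no
        negative counterpart, so the feature cannot be used to
        blackball anyone (the Sims Mafia problem)."""
        if reporter == subject or subject in self._commended[reporter]:
            return False
        self._commended[reporter].add(subject)
        self._scores[subject] += 1
        return True

    def score(self, subject):
        # Unknown users simply score zero: no negative score exists
        # for anyone to escape via 'karma bankruptcy'.
        return self._scores[subject]
```

Note how the sketch confirms the limitation argued above: the robot can only track tact that humans already recognise and report; nothing in the data structure teaches anyone how to be tactful.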

Whereas designing for social cyber-respect may be a matter of giving up the mask of anonymity, social cyber-tact seems to be more challenging. In both cases, however, the design of robots can at least aim at personal cybervirtue, by (for example) affording their humans adequate control over what they see or read, defending against unwanted displays of media, and supplicating when requesting an inconvenience (instead of demanding, as is far more common). If we think of our robots as ‘neutral tools’, the idea that virtue could be applied to their function is lost on us. Yet we do not use a computer like a hammer (except when we are especially irate!) and we are more entitled than it may seem to expect it to say ‘please’ when it wants to do something that we do not.

More cybervirtues next week.


Babich and Bateman: The Tyrannosaur's Hands

Last week, the self-satisfying qualities of social media. This week, philosophers Babette Babich and Chris Bateman talk about dinosaur hands.

Babette Babich: To say just one thing about this bodying forth [introduced last week] along with slow ways to pour coffee, it is worth pointing out that we can, indeed, point things out. We can do that in rather a good many ways, nod with our chins or noses, raise eyebrows (do let us think of the late Alan Rickman, because of what he could do with an eyebrow, and he himself gave the palm to Dame Maggie Smith in the same regard), or nudge something with an elbow (to be Gilbert and Sullivan about it) or for a Manchester reference, with a knee and so on, but usually we point a digit, a finger, or sometimes, in the Facebook iconography, the digit that somebody should someday think of tracing back to its patently imperialist association with the Roman Empire: a thumb.

I recently tweeted about Randall Jarrell’s The Bat Poet, “with wings,” as Jarrell wrote about his little misfit bat poet (and here I develop a response nascent in the reply offered by one member of the twitterati to my tweet), suggesting that there might be an answer to Thomas Nagel avant la lettre. Where bats have wings, these are their hands, so the comparative anatomy would have it, and it makes a difference to think here of the sheer having of hands.

Chris Bateman: The pointing out interests me as a capacity because, for instance, dogs are perfectly capable of understanding us when we point, yet they lack hands. When they need to gesture in a direction, they must use their whole body – a faculty that gives the Pointer breed its name. Our capacity to point with our hands goes beyond a simple compass reading; gesture is a whole other language of its own (and sign language thus essentially a development of that). Yet it strikes me that neither pointing nor gesturing actually require fingers...

BB: What is at issue is not the number of digits and such, not as in pop anthropology and physiology, the having of opposable thumbs and all that that is meant to have brought us, to wit various and sundry connections with comprehension and apprehension and the having of things in the palm of one’s hand, as it were.

When I was studying biology at university, way back in the last three decades of the last century, in the mid-1970s, professors teaching earth science still insisted to us in lectures that plate tectonics was an unproven theory, dismissing Wegener’s continental drift as had been done to his frustration throughout his life.  In courses in Comparative Anatomy and in Ornithology I read beyond classroom lectures to discover the then-speculative connection between dinosaurs and birds – including the economic arguments that larger dinosaurs, simply given that they moved at all, could not have been poikilothermic, cold-blooded. What convinced me concerning warm-blooded dinosaurs was the fossil record, not at all of the well-known archaeopteryx but rather of a find in Russia (as Russians like to name things), Sordes pilosus (hairy filth, hairy devil as it was then translated); the Latin gives us the Rickmanian resonance once again. But if a pterodactyl has fur or fur-like feathers that will serve, as in the comparative and cognate case of a bat's wings to be sure, to protect core body heat that can be lost in the surface area of wingspan, much else follows. I used to run around campus tweeting the way one tweeted before there was Twitter, imitating Tyrannosaurus rex, tweet, tweet, tweet, in a very deep voice: I did this with friends as part of a game, my boyfriend, who was much taller, was better at tweeting like T. rex. What follows for science is all about everything we cannot know as we have no trace of it, nothing of integument, little of feathers, little of fur, no reptilian scales, nothing of colouring, all things lost to the fossil record, apart from sheerly, literally glorious finds (like the recent amber discovery of a tail, complete, to be sure with fur, or as most reports describe it, with feathers, and other more recondite surface finds like Sordes pilosus).

CB: This image of you and your friends playing at tweeting tyrannosaur is not going to leave me very quickly! As an avid junior palaeontologist myself (admittedly, my ‘field work’ as a nine year old merely accumulated a veritable treasury of ammonites…), what struck me was the rapid manner in which the status of fossils changed. I remember, for instance, a brief period in the 1980s when archaeopteryx was alleged to be a hoax, owing in part, I think, to the excellent British astronomer Fred Hoyle. Stephen Jay Gould, at the end of that decade, put the Burgess Shale into the spotlight – probably the only time a rock stratum has been famous! – as a panoply of oddities, and then Simon Conway Morris (whom I spoke to briefly for The Mythology of Evolution) disputed this interpretation. Soon after, the book was closed on the bird-dinosaur connection you refer to, which seemed to go from heresy to orthodoxy in record time! Every dinosaur instantly went from crocodile-kin to bird-ancestor almost overnight (although, of course, those two are not mutually exclusive…)

BB: Brilliant! And we are probably still ensconced in that orthodox trend! But there are other questions: how did T. rex actually eat? After solving the energetic problems of getting up from sleep, and having the energy to run at all, never mind the tweeting, T. rex – and palaeoanatomists debated this at some length in the literature – would have had trouble putting anything in its mouth, and its feet don't seem, like a raptor's feet, to be for grasping... meaning that it would have had to use its hands. But how did it consume its prey? What else are we missing? I am thus fond of imagining that T. rex did not merely have little hands but perhaps the little hands are just what remains of a variation on wings, like the baleen of a whale’s jaws, or, as a bird is a better analogue than a cetacean, as a complement to jaws that would allow them to function like a pelican's beak. But it could also involve other anatomical extensions, like the cockscomb of a rooster or the flaring ruff of a desert lizard: there could well have been, so I thought, extra bits. All we see are the bones for little vestigial hands, as we suppose the appendix to be vestigial (what we are learning about the gut and its associated flora is likely to make that attribution as wrong-headed as our views on continental drift), but still, where would Japanese monster movies be without Godzilla’s little hands? But these ‘hands’ could also be differently articulated, and might be quite enough as a basis for cartilage and other extensions, or some other adaptation related to the thermodynamic eating demands of being a large land animal, from which could grow what were the effective ‘wings’ of the thunder lizard, not used for flight but for gathering prey. In addition to his bass tweet, tyrannosaurus might have run through the forest canopy or along the veldt, sweeping everything in its path into a great drag net of feathered, curved wings: gathered and scooped into those huge jaws.

Think Pac-Man with feet.

For human beings, our having hands as we do probably gets in the way of imagining T. rex at all (we find it hard to understand that a bat’s wings are, to a great extent, the bat’s ‘hands’). But above all, beyond flights of fanciful palaeontology, the German name for cell phone is ‘Handy,’ which seems to be because having hands means we like to have things at hand, and we like to do things with our hands – pretty much all the time.

The dialogue continues next week: Touching Robots


Top Ten Cybergs

Every purposeful network of beings and things forms a cyberg, where (like an iceberg) we only see a fraction of the entailed network and the rest lurks beyond our awareness most of the time. The complete inventory of beings and things entailed within each of these cybernetic networks would be challenging to enumerate, but we can approximate the scale of each cyberg by counting just the number of one kind of entity within it e.g. the number of humans, the number of computers.

To qualify as a cyberg at all, we must be dealing with a network that spans its entire breadth with some kind of active relation, even if merely one of potential. A nation is a good example: not every citizen knows every other citizen yet they are linked by a shared bureaucracy that integrates them all into one functional network. It is not enough for there to have been a common network of production – no matter how many people own a penknife, penknife-wielders do not have any ongoing relationship. Conversely, the exchange of media effectively links television stations and thus viewers such that while individual TV stations are modestly sized cybergs by contemporary standards, they aggregate into something far more substantial. (Religions are something of a borderline case in this regard, but I shall set these aside for now.)

In the list that follows, cybergs are listed in order of the size of a single indexed entity, either humans or devices. Everything listed is a gigacyberg, with no fewer than a billion entities embroiled in its network. This list is not intended to be definitive but merely indicative – it points to the general situation today from a perspective we would not normally consider.

Runners Up

A number of megacybergs narrowly missed the top ten, including the European Union (743 million), movies (about 800 million), and guns (875 million). More than 360,000 people die each year as a result of the gun cyberg, but this is by no means the most fatal of our cybernetic networks. If this list included religions, Christianity would be the number three gigacyberg (2.3 billion), Islam would be ranked jointly with Microsoft (1.5 billion), and the Hindu traditions would be a close runner up (900 million).

Joint 9th: Tencent and Google (1 billion)

Chinese internet giant Tencent and search colossus Google both have about a billion humans in their cybergs. While Tencent does not lead Chinese search (that honour goes to Baidu), it has a tremendously diverse network of internet services, including the wildly successful competitive game League of Legends. Google dominates search globally – but even this only allows it to squeak into the world’s biggest cybergs if we take its quoted figures as accurately gauging its scale. Pragmatically, the reach of the Google cyberg is probably greater than this conservative estimate – but it feels somehow fitting to show this young upstart beginning its climb towards the top of the heap...

8th: Cars (1.2 billion)

It is possible to drive completely around the world thanks to the extent that the car-human cyborg has emerged as the dominant lifeform on our planet. We have completely changed the ecology of almost every biome by installing the infrastructure required to make cars a viable form of transportation. This is the world’s deadliest cyberg, taking more than 1.25 million human lives annually, and that figure does not include war deaths some would attribute to the oil industry that feeds this network.

7th and 6th: India and China (1.3 and 1.4 billion)

The only nations to qualify for this top ten list, India and China each have more than four times the population of the United States, and nearly twice the population of the European Union. China is the wealthier cyberg, with an economy four times the size of India’s, but both wield significant destructive power via their hundreds of nuclear weapons. However, they have less than 2.5% of the world’s nuclear stockpile, since the US and the Russian Federation hold 45% and 48% of the world’s nuclear weapons, a quantity far beyond any rational consideration.

5th: Microsoft (1.5 billion)

Despite no longer being the centre of attention in technology circles, Microsoft’s cyberg is 50% bigger than the certifiable size of Google’s, thanks to the continuing dominance of Windows, which has a 90% market share in desktops and laptops. That said, these are now only 20% of the robot market, which is dominated by smartphones (where Google enjoys 87% of the market). Microsoft is a cyberg in decline, unable to adequately break into the pocket robot marketplace, but jealously guarding its hold over other industrial cybergs.

4th: Television (1.6 billion)

That television enjoys only a marginal numerical advantage over Microsoft is a sign of how completely the computer has positioned itself as the cybernetic successor to the notorious boob tube. Yet there is another lesson here: the television is not ubiquitous, being a cyberg that extends through only 20% of the planet’s population.

3rd: Facebook (2 billion)

Here again we get a sense of the power of the digital cybergs... it has taken a little over a decade for Facebook to become the first definitive 2 billion human cyberg owned by one corporate entity. By leveraging human social instincts – and largely by accident, for it was not originally designed to operate as a surrogate for relationships – Facebook has aggregated more humans into one walled garden than anything else.

2nd: The Internet (3.5 billion)

It is distributed, beyond outright control (but certainly open to influence) and is the largest electronic cyberg on our planet. The internet... so significant, most dictionaries think it deserves a capital letter, like a nation. But this is a cyberg on a scale beyond national bureaucracies, a network that links half the planet’s humans to almost all the planet’s computers. Cisco claims there were 8.7 billion devices connected to the internet in 2012. As cybergs go, this one is the most spectacular in scale and potential. Yet it is still arguably outstripped by at least one larger cyberg...

1st: Money (7.3 billion)

This was the first cybernetic network, the first technical system to spread around our planet as both practice and tacit relations. As humans have grown more populous, so too has money spread with us – including into the virtual spaces of the internet, where this cyberg now lives as much or more than it does in the pockets of its humans. It seems positively simplistic next to the other gigacybergs, yet it engulfs almost every human; I have estimated that only 1-2% of the population of our planet are not caught up in the commercial cybernetic system. The sheer ubiquity of money as a concept is so complete that politics hinges more around budgetary numbers than about questions of how to live. This is one of our first technologies, as old as civilisation – and it remains our most successful.

More cybervirtue next week.


Tip of the Cyberg

Does technology simply increase human capabilities? Or have we radically misjudged the extent and complexity of the ever-growing abundance of tools around us?

The astonishing advances in technological prowess over the past century or so give an impression of infinite capabilities – the closest secular thought gets to imagining omnipotence. Thus we have no difficulty envisioning (if we suspend critical judgment) techno-immortality, sentient robots, or interstellar travel. Indeed, science fiction is replete with these imaginary grails. This way of thinking about our tools – as personal enhancement – radically misleads us in several important ways (many of which I discuss in Chaos Ethics), but perhaps the most striking is the sense that equipped with any technology we act autonomously. This is always a subtle deceit.

Science fiction helps bring this confusion into focus. In Star Trek, the communicator, universal translator, phaser, transporter, and tricorder all do one thing perfectly (except when drama requires otherwise), to the extent that a Starfleet officer equipped with these things can appear anywhere, talk to anyone, scan anything to know what it is and what condition it is in, and – when push comes to shove – stun or kill on demand. All these capabilities occur literally at the push of a button. Where do these miracle tools come from? How do they work? It doesn’t matter; it’s high technology (hi-tech) – which is strikingly parallel to the magic-laden worlds of high fantasy. Arthur C. Clarke’s adage that any sufficiently advanced technology is indistinguishable from magic should raise more questions than it does... even in the context of sword and sorcery, we are entitled to ask: where does the magic come from? What is expended when it is used? What are the unseen limitations to its usage?

In the Terminator movie franchise, humanity in the future has been driven to the brink of extinction by robotic killing machines it made without thinking. That much of its setting is not hard to believe, particularly when you see the effortlessness with which the armed drone made battlefield honour obsolete. Yet against the backdrop of a total collapse of civilisation and killer robots prowling everywhere, the Resistance movement in the future depicted by Terminator: Salvation somehow maintains safe houses, feeds the survivors, even operates fighter planes. The aeroplane sits in our mind like the tricorder and communicator – autonomous once paired with a human. But as Bruno Latour never tires of reminding us: airplanes do not fly, it is airlines that fly. In stark contradistinction with what we see in a Terminator movie, no plane takes to the air without its logistical supply chains bringing fuel, its air traffic control networks managing flight paths, its support personnel performing essential maintenance.

Technology is not magic, and even fictional portrayals of magic are not as autonomous as we imagine our tools make us. There is a stark difference between hammers, binoculars, and a wind-up torch on the one hand and computers, cars, and airplanes on the other. While both sets of examples are manufactured by complex meshes of people and things, the latter list also requires a complex network just to operate, a point brought into clear focus by the actor-network theory developed by Michel Callon and Bruno Latour. If a cyborg is what occurs when an organism is cybernetically enhanced by a tool like a hammer, we can call the network that produces and maintains the more complicated cyborgs like cars-and-their-drivers or computers-and-their-users, a cyberg.

The iceberg famously has only 10% of its mass above the waterline, and thus only its top is visible to the casual observer. So it is with the cyberg – the cybernetic network required by the more convoluted of our technologies. We see only the cyborg – the car and its driver – and not the cyberg that makes it possible. When it comes to technology we are perpetually shallow-sighted: we see only the ‘surface’ of the network, so flat that it can be expressed as a one-dimensional array or list (car, driver, fuel, road). If we manage somehow to become more deep-sighted, we can recognise the relations and dependencies that give the cyberg its network-qualities (ore mines, smelting mills, factories, oil rigs, refineries and far more besides). These dependencies rapidly become tangential and obscure: an oil rig has scuba divers who repair the corroding metal structure with arc welders unique to their profession, but who is deep-sighted enough to think of the factories making hyperbaric welding kits or compressed air tank regulators when looking at a car?

It is the cyberg that defines our technological situation, more so than the scientific research projects that we (somewhat naively) see as feeding directly into new tools, like the magician conjuring a new alchemical potion out of thin air, having expended nothing but time. What is more, we can measure our depth into cyberg existence by looking at the numbers of people and things involved in the cybernetic network. A hammer made a millennium ago involved a miner and a blacksmith, a mule and a horse, a mine, a furnace and trees; no more than about a hundred beings and things were entailed in this early cyberg example. A functionally identical hammer today would entail a network of ten thousand beings and things, or even a hundred thousand.

Our cybergs get bigger, deeper, wider, and as they do our autonomy recedes even while the imagined scope of our autonomy grows. This is part of the technological blindness I have previously called cyberfetish and am here marking as shallow-sightedness; our strange capacity to see only the powers and potentials of our new tools, but to overlook or underjudge their consequences. Precisely because we have moved from tools that could be made by individuals or villages to tools that require nations or corporations to build and maintain, we live at a time where the cyberg is the measure of both possibility and catastrophe.

Although I have introduced the idea of a cyberg through the extended frameworks behind a specific tool, the concept behind these cybernetic meshes applies whenever beings and things are linked together into extended networks. Benedict Anderson observed that the printing press allowed the imagined communities we call nations to form; his argument can be read as saying that nations are cybergs. Every corporation is a cyberg, constituted slightly differently from nations, but in the last half century rivalling and exceeding them for power and influence. Every one of us is embroiled and imbricated in cybernetic networks of such scope and influence as to make a mockery of our mythos of technological empowerment. For when it comes to our tools, the enhancement of our personal agency is truly just the tip of the cyberg.

Next week: Top Ten Cybergs


Kawaii Hyper Capitalism

Over at the O Creative Studio website from wonderful Barcelona, Víctor Navarro Remesal becomes the first writer to join me in mulling over cybervirtue outside of Only a Game. Víctor’s piece, entitled The Rebellion of Robot Mates, discusses the charming image-flashing robot Peeqo, and the implications of this kind of design. It’s a great read, and full of animated gifs that demonstrate how Peeqo’s gif face expresses itself.

Here’s an extract:

Its design thins the thickness of everyday life by convincing us that everything should be festive, and I don’t know if I want a Dr. Who GIF in an ATM machine. Besides, with its chutzpah, Peeqo has managed to place another camera and another microphone inside our house, and keeps on compiling our data, a little bit more sugar for the Big Data pill and the Always Online motto. I think of Meitu hiding spy code and the sweetness of Peeqo acquires a new hue: that of kawaii hyper capitalism.

You can read the entirety of The Rebellion of Robot Mates over at the O Creative Studio website. Vive la república de los bloggers!


Cybervirtues: The Three Treasures

Ni Zan (1306-1374), Bamboo Branches

Exploring cybervirtue involves taking traditional concepts of virtue and considering how they relate to our relationship with our robots and with the other cyborgs we live with. I wish to begin this process by experimenting with certain Chinese concepts. In the Taoist traditions, three particular virtues are highlighted by the ‘Old Master’ Lao Tzu (or Laozi) as the greatest and most important. Writing in chapter 67 of the Tao Te Ching, he states, in Arthur Waley’s popular translation:

Here are my three treasures. Guard and keep them! The first is pity; the second, frugality; the third, refusal to be ‘foremost of all things under heaven’.
For only he that pities is truly able to be brave;
Only he that is frugal is able to be profuse.
Only he that refuses to be foremost of all things
Is truly able to become chief of all Ministers.

While pity (or compassion) is something that many people today still recognise as a virtue, frugality (or simplicity) receives at best lip service as a desirable trait, and humility (refusal to be foremost) seems to have disappeared in a time of self-aggrandisement and digital vanity. Nonetheless, I set myself the task of working these Taoist virtues into cybervirtues, not as an attempt to embody Taoist virtue, as such – all translation is betrayal, as the Italian saying goes – but rather to explore cybervirtue by taking Lao Tzu as a stepping-off point.

The idea of simplicity or frugality strikes a chord with me as the virtue of restraint, that habit of being which is required to attain such a state. In a time of compulsive software and hardware dependency, the question of cyber-restraint is an important one. For humility, I substitute the related virtue of respect, which as descendants of the Enlightenment we are perhaps more open to valuing. To be humble is to avoid placing oneself above others, after all, and Kant’s mutual respect entails this – although what cyber-respect would entail requires some thought. Lastly, for compassion I favour kindness, since simply feeling pity lacks any active principle, and asking how we would elicit cyber-kindness strikes me as an important question for all of us cyborgs.

Restraint

To refrain from acting for good reasons, when there is a standing desire to act, is to show restraint. A lack of restraint in the context of our robots means putting the robot in charge and settling for being a mere digital slave. We do not like to admit our debility here, although we will spot it more easily in others. We always have a reason for burying our heads in our phones, even to the exclusion of those around us whom we are supposedly present with. (Commuters, who are physically co-present but not expected to be mentally co-present, are perhaps a different case, although even here a moral argument might be made for restraint.)

Correspondingly, a robot displays cyber-restraint when its functioning encourages restraint in its cyborg. This cybervirtue is almost exclusively internal; a robot's influence on its human either encourages restraint towards it or it does not. Almost without fail, however, contemporary robots are designed to do the exact opposite of this: to make their humans pay more attention to them, to return again and again to the habitual activities that have been designed to get and keep attention. This may seem more a matter of software and content than hardware, yet when an iPhone offers to squeak and plink to get its human’s attention, it is certainly not displaying cyber-restraint.
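For readers who build software, one way cyber-restraint could be embodied in design is by batching non-urgent alerts into an occasional digest rather than interrupting for each one. The sketch below is purely illustrative – the class, its names, and its thresholds are my own hypothetical invention, not any real platform’s notification API:

```python
from datetime import datetime, timedelta

class RestrainedNotifier:
    """Hypothetical notification policy: queue non-urgent alerts
    and deliver them as an occasional digest, interrupting the
    human immediately only when genuinely warranted."""

    def __init__(self, digest_interval=timedelta(hours=4)):
        self.digest_interval = digest_interval
        self.pending = []              # queued non-urgent notifications
        self.last_digest = datetime.min

    def notify(self, message, urgent=False):
        if urgent:
            return [message]           # deliver at once only when warranted
        self.pending.append(message)
        return self.flush_if_due()

    def flush_if_due(self, now=None):
        # One batched interruption per interval, otherwise stay quiet.
        now = now or datetime.now()
        if now - self.last_digest >= self.digest_interval and self.pending:
            digest, self.pending = self.pending, []
            self.last_digest = now
            return digest
        return []
```

The design choice here is the inversion of the usual attention economy: the default is silence, and each interruption must earn its place.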

A robot is engaged in cyber-itch when it encourages impulsive engagement and thus makes its human merely semi-present in their own world, as all ‘smart phones’ are designed to do. This divided state will not usually trouble the human, who is often delighted to be distracted from their situation. (This is, as I have said elsewhere, the age of distraction.) Yet to yield to cyber-itch constitutes a debility when this semi-present state draws a human away from their life in an inopportune fashion, for instance, a mother not listening to her son because she is thinking about Words With Friends, or a teenager substituting the gratification of Facebook likes for the artful task of cultivating friendships.

Software-makers engender cyber-itch because it is the path to money; insofar as we participate in these commercial practices, we endorse their actions. Yet who among us can do otherwise? We are perpetually drawn into semi-presence, and defend our digital slavery with post-hoc justifications as to why it doesn’t matter or, even more absurdly, how it is positive and beneficial. In suggesting a cybervirtue of restraint, I invite us all to reassess our dependency upon our robots, and the partial engagement in our worlds this engenders.

Respect

To treat other cyborgs as equally worthwhile beings is to display respect. It is not enough to think or say that you respect others: respect is shown by actions (including speech), it is not some subterranean quality to be measured in secret. This virtue facilitates other virtues; to possess respect for others is to open up to other kinds of honourable behaviour. While it is (in principle) not hard to respect cyborgs that you like, maintaining respect for those you do not like is an especial moral challenge every one of us faces, and few of us master – or even desire to do so.

A robot shows cyber-respect when it helps its cyborg act respectfully (which is rare), or acts with respect towards its human (which is merely uncommon). A key sign of external cyber-respect, i.e. respect encouraged between cyborgs, is facilitating consent and thus allowing each cyborg to make up its own mind about what it engages with. Software for sending spam, for instance, is a de facto case of the exact opposite since it expressly sets out to circumvent any attempt at consent. I will not claim this is unacceptable behaviour, but it is clearly not virtuous.

The corresponding debility, cyber-disdain, is extremely common. Robots frequently display cyber-disdain for their humans by forcing unwanted behaviour upon them. Mandatory downloads of operating systems, for example, are a notable violation of cyber-respect, especially when declining once has no effect upon the continuation of this behaviour (as with the iPhone’s remorseless downloading of iOS updates). This should not be confused with situations where the human’s expectations are not met, resulting in anger – this is a debility of the human, and an all too common one at that, not a debilitating quality of the robot.

Cyber-disdain in the internal sense occurs solely when the network of cyborgs that crafted the robot curtails the opportunity for consent to its actions, as with the example of mandatory, space-eating downloads. In the external sense, disrespectful behaviours are encouraged between cyborgs. This is so shockingly common that it must now count as a debility in software-makers that they have not better defended against this possibility. Indeed, it has recently become apparent that search engine algorithms dip into cyber-disdain by indiscriminately aggregating content by links, as Carole Cadwalladr’s article attests. Here, the debility might appear tangential, but responsibility still belongs with the robots and their makers.

Indeed, our robots all too frequently encourage disdain in their humans – especially when anonymity is provided in digital public spaces. The mask provided invites actions unthinkable in a face-to-face interaction (as critiqued in Wikipedia Knows Nothing). This goes far beyond mere circumventing of consent, and into outright aggression and abuse. This is perhaps the most serious case of cyber-disdain facing us today, and moves our discussion into the dark territory of cyber-cruelty.

Kindness

Kindness shows in actions that provide support for another cyborg, including strangers. Forgiveness, generosity, helpfulness, and peace-making are all acts of kindness, and can be effortlessly manifested in cyborgs that express this virtue. To act with kindness is not the same as acting with respect – you can respect someone and decline to treat them kindly, or you can be kind towards someone you don’t actually respect (although this, perhaps, is rarer). Through kindness, we show compassion, and this is separate from allowing for consent and free choice, which are hallmarks of respect.

Cyber-kindness manifests in robots that either encourage compassionate thinking in their cyborg, or that offer kindness towards their human. The former is relatively rare, while the latter often backfires – as when robotic ‘helpfulness’ is anything but. The most familiar kind of cyber-kindness is when a robot confirms that its human genuinely wanted to delete something. As irritating as confirmation boxes might be to some of us, in defending against accidental loss they manifest this cybervirtue. Similarly, the recycling bin that appears in the interface for many operating systems is a display of cyber-kindness, protecting against potentially distressing mistakes.
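The recycling-bin pattern amounts to two-phase deletion: ‘delete’ only moves an item somewhere recoverable, and loss becomes permanent in a separate, explicit step. A minimal sketch of the idea – an illustration of the pattern, not any particular operating system’s implementation – might look like this:

```python
class RecyclingBin:
    """Minimal sketch of two-phase deletion: discarding an item
    and making the loss permanent are separate decisions."""

    def __init__(self):
        self.files = {}   # name -> contents still in use
        self.bin = {}     # name -> contents awaiting permanent removal

    def delete(self, name):
        # Phase one: discard, but keep the item recoverable.
        self.bin[name] = self.files.pop(name)

    def restore(self, name):
        # The kindness: an accidental deletion can be undone.
        self.files[name] = self.bin.pop(name)

    def empty_bin(self):
        # Phase two: the human explicitly makes the loss permanent.
        self.bin.clear()
```

The cyber-kindness lies in the asymmetry: the frequent, easy action (delete) is reversible, while the irreversible action (empty the bin) requires a deliberate second decision.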

The opposite, cyber-cruelty, is markedly more common, and as with cyber-disdain is particularly likely when the robot facilitates the anonymity of its human in digital public spaces. Here, the very decision to permit anonymous presence could be judged an act of cyber-cruelty on behalf of the network creating the robots in question. Thus Twitter is a cyber-cruel digital public space, a virtual world where abuse proliferates under the mask of anonymity. Yet even public identities do not appear to avoid this cyber-debility, since Facebook also frequently descends into abusive behaviour. In distancing presence from human relations, our robots are paradigmatically cyber-cruel.

This, then, reinforces the concerns of cyber-itch outlined before: our robots lure us into semi-presence, and invite disdain for others by cloaking us in anonymity. Once removed from human relations in this way, the temptation to cruelty lurks, resisted by many, perhaps, but ever-present in the shadows of our digital public spaces, where we shall find so little cybervirtue and blame this, dishonestly, on everyone else. But we have made this dark half-world together – and we can, if we only care to, replace it with something better.

The opening image is Bamboo Branches, by the 14th century poet Ni Zan, part of the collection at the Palace Museum in Beijing.


Beyond Futile Outrage

Why does our world suddenly seem to be filled with outrage, yet nothing changes? When our moral intuitions provoke anger, we voice our hatred or cynicism online and somehow feel that is enough. Nothing changes, since we have lost a common ethical backdrop against which we can even adequately discuss our ethics, let alone bring about any kind of change. Of the three ways of conducting moral thought, we have abandoned one of them and corrupted the other two; thus our moral intuitions have lost their force, since the context that gave them meaning has broken down. The result is anger against other people who do not share our values, with no possibility of a productive dialogue that can bring about a new state of affairs. This is the paralysis brought on by outrage culture.

To bring about changes requires a common standard, and the problem with contemporary ethical thought is that we do not understand our moral mythos well enough to maintain a shared basis for judgement. As moral psychologists have reported, the tendency is for us to have an emotional response to a situation (e.g. outrage), then afterwards to dress it up in justifications (e.g. demonisation of a particular identity) – what Jonathan Haidt waggishly dubbed ‘the emotional dog and his rational tail’. However, I break with the psychologists who advance this theory at the conclusions they draw from it, which amount to the perilous assumption that moral philosophy has no role, as well as questioning their research methods, which are inadequate to most of the conclusions being drawn.

Haidt’s idea that our moral intuitions are embedded in our social connections is substantially correct, but it is not significantly contrary to the views of any philosopher, as J.W. Gray and others have argued. What’s missing in Haidt’s social intuitionist model is how the social intuitions became set up: he has forgotten the role of history in establishing common standards of moral judgement, and once this is taken into account it becomes clear that contemporary morality is intimately connected to moral philosophy. Indeed, as Alasdair MacIntyre outlined in After Virtue thirty-five years ago, the nature of this moral catastrophe is the lack of connection between our habits of judgement and their philosophical roots, which can be traced across the span of European history. Attempting to remove philosophy from consideration, as key moral psychologists such as Haidt argue, is to deepen the crisis, not resolve it, and Haidt ultimately ends up advancing an impotent argument for the status quo. We will get nowhere if we fail to situate the ethical crisis within its cultural history.

The three key approaches to morality are concerned with the qualities of agents (virtue), the nature of actions (duty), and the consequences that result (outcomes). I contend that all three forms of moral thinking are vital, but it is important to remember that for the majority of the recorded history of our species, the concept of virtue has been the primary vehicle of morality. Whatever grasp individuals may or may not have had of their wider situation, the idea that it is a good thing to be brave, polite, patient, generous, or kind was easy to grasp – even if it was not always so easy to put into practice. MacIntyre’s After Virtue traces the history of virtues up to their contemporary near-demise, supplanted by two new moral systems devised in the Enlightenment. Firstly, Kant’s duty-based philosophy that leads to human rights, but then inexplicably devolves into posturing about ‘having a right’ in situations where nothing of the kind applies. (Here, Haidt is right: moral philosophy is not entailed in people’s moral judgements – but in this case that is precisely the problem.) Secondly, John Stuart Mill’s outcome-based utilitarianism that begins by aiming at ‘maximising the good’ yet leads to contemporary corporate consequentialism that merely maximises profit. What’s more, these kinds of consequentialism, which judge purely by outcomes, are systems that no longer resemble morality at all, as the late Derek Parfit accused.

Thus we are beset by moral disasters, as we have all but lost one key way of thinking about ethics and broken the others such that otherwise laudable moral systems have become corrupted distortions of themselves. This is the nature of the two major disasters of contemporary ethics – the moral disaster of individualism, which confuses selfishness or paternalism for collective responsibility, and the moral disaster of consequentialism, which boils down complex situations to the point that decisions are easy to make, and in the process destroys the essential context of every ethical challenge. In terms of the disaster of individualism, there is an urgent need to repair our broken concepts of rights now that nations such as the United States and the United Kingdom have abandoned them while individuals still angrily invoke ‘their rights’ without any understanding of what that claim implies. There is an even more vital requirement to reconfigure the kind of consequentialist thinking that leads both nations and corporations to act in appalling ways because their definition of what is good is reduced to the merely calculable, substituting expediency for any concept of ethics. Neither of these recovery projects has much hope of success without a substantial reboot of moral thinking, and the academic community cannot achieve this – not without engaging with the wider populace it has been regrettably isolated from.

Reawakening an interest in the qualities of moral agents might be the best chance of reconfiguring our devastated moral mythologies, because we can have productive discussions concerning virtues without requiring much in the way of theoretical meanderings. What’s more, virtues form a set of qualities that no-one would expect everyone to possess, making it easier to foster virtues in practice, since the moral standard they set is attainable by anyone, at least in the minimal case of expressing a single virtue. Rules and consequences suggest an absolute, all-or-nothing approach to morality that seems to require (when not approached with care) superhuman powers. Yet virtues sit firmly within human experience, and the key objection against virtue ethics is its failure to be absolute, which only serves to show how hypnotised by the Enlightenment’s moral systems we have become. Besides, if we can rescue virtues, we can rescue rights and outcome-focussed ethics too. One step at a time.

Outrage culture is the product of people who project morality onto others, and rarely use it to judge themselves. There is a natural tendency to do this, one that has been well known to philosophers and indeed religious teachers for many centuries. What is conspicuously absent today is what in Chaos Ethics I call moral representation, which is to say, shared reflection on our ethical values and judgements, something also suggested as necessary by Haidt’s social intuitionist theory. We need to represent our moral positions to each other in part because once our anger is engaged, productive discussion is blocked by moral horror (what psychologists term cognitive dissonance), and so the reflective aspect of morality – which is far more important than we usually recognise – never occurs. If there is a lesson we should take from moral psychology it is not that we should abandon the study of our historically-embedded moral systems but that we have failed to adequately maintain the social basis of moral judgement. We are no longer conducting any viable form of morality, and we really have become the caricature that Haidt imagines, dressing up our emotional reactions in convenient justifications.

To overcome this impasse requires new discourses, and it is my suggestion that we start by talking about virtues since we can recognise what is good in a person – or a cyborg – without it setting off our moral horror, which closes us off from difficult to accept ideas. Too much of our ethical practice has become a sustained outpouring of vitriol against what we perceive as evil, often judging as evil without ever stopping to consider the complexities of the situation. Whether we are talking about those who voted to leave the EU in the United Kingdom, or those who voted for a populist demagogue in the United States, it is not enough to angrily shout ‘racism! racism!’ and expect this to suffice for politics – or for ethics. Perhaps if we can recover some perspective on the good, we can stop being so ineffectually obsessed with raging at the evil we see everywhere around us. Outrage culture is either impotent or blindly vengeful – neither of which is terribly helpful. To get beyond this, we have to learn to talk about morality in ways that can cross easily between any religious or secular tradition, and virtues might just provide a way of doing this.

This post is edited from sections of the piece run last week, What is Cybervirtue?, which I have edited and re-run (as What is Cybervirtue? Version 2.0) without this discussion within it. The opening image is Rage by Sarah Goodyear, which I found here on her Artdoxa page. As ever, no copyright infringement is intended and I will take the image down if asked.


What is Cybervirtue? Version 2.0

If virtues are the positive qualities of beings, what are the positive qualities of cyborgs? We call the admirable habits of humans ‘virtues’, so we can call the exemplary properties of the systems they form with their robots cybervirtues.

What I mean by a cybervirtue are those desirable qualities that a cyborg might possess, and what I mean by cyborg is any combination of beings and things that acts with a greater range of possibilities than either can achieve alone. Of particular interest to me at this time is the cyborg each of us forms with a robot, such as a laptop, a smartphone, or a desktop computer. If you are reading these words, you are a cyborg in the relevant sense since you could not have encountered what I am writing here without participating directly or indirectly in a network of humans and robots. The qualities of these networks, whether with just a single human and robot or with a vast plurality of beings and things, are precisely what is at stake when we think about cybervirtues.

So if virtues are the desirable habits of humans and other beings, cybervirtues are the equivalent properties humans possess as cyborgs. There are at least two senses that we can identify such qualities, and the current open forum here at Only a Game is interested in both of them. Firstly, the personal side of cybervirtue concerns the relationship between a robot and its human; the way your smart phone is designed (both in terms of its hardware and its software) governs its moral relationship with you. A simple example of such a personal cybervirtue is the recycling bin, which offers the kindness of protecting against the permanent loss of digital material by separating the decision to discard from the decision to make such discards permanent. Personal cybervirtues offer an internal sense of the term, internal to any given human-robot pairing.

On the other hand, social cybervirtues concern how the human-robot cyborg relates to other cyborgs, the external sense of the term. Here it is perhaps easier to demonstrate situations that show a lack of virtue, such as when anonymity in digital public spaces such as Twitter encourages appalling behaviour, especially (for some sad reason) towards female cyborgs. Yet the very presence of these machine-invoked moral debilities points to the possibility of cybervirtue in this external sense – the design of hardware and software to encourage virtuous behaviour in the cyborgs that result from the intimate relationship between a robot and its human.

What of autonomous robots? The capacity for a robot to take independent action once launched into a designed program of action somewhat conceals the way these are also cyborgs, always involving a human element in their constitution and operation. A cyborg (a contraction of ‘cybernetic organism’) could be constituted entirely by robots, provided ‘organism’ is taken metaphorically, as is often the case. But the question of whether there might eventually be robots made by other robots and entailing no assistance, direction, or maintenance by humans draws us away from the problem at hand. If such imagined future robots were beings in the relevant sense, they could possess virtues – and if they did not or could not, they would not be beings in any important sense.

Yet we do not need science fiction’s beloved sentient AI for robots to intrude upon ethics – both in terms of how they are designed, and how they affect human behaviour, computers have a significant moral aspect. What’s more, this is true of all tools, for as Bruno Latour, Peter-Paul Verbeek, and Isabelle Stengers have all touched upon in various ways, things possess a moral agency too. It is not that our tools act without us, but through modifying our capabilities for action the things we use reconfigure the moral space we move within – the presence of a gun changes the moral potential of a situation; ultrasound introduces moral complexities to pregnancy that were previously absent; armed drones invite the leaders of nations to turn to assassination as a mere expediency. When we allow for the moral agency of things (even as just the modification of moral possibilities) the question of what is a virtue changes into something radically new and different, and that new perspective is precisely what I am seeking to explore.

Through the concept of cybervirtue I seek to draw attention both to the meaning of traditional virtues when considered against the backdrop of our vast networks of technology, and also to suggest ways in which the design of our robots’ hardware and software could be made to encourage virtue. Currently, this does not happen, but perhaps only because we are unaccustomed to thinking this way, and have never really thought about the design of computerised systems like this. Better design does not have to be about utility, which is a more problematic concept than we tend to think; it could also be about encouraging virtue in humans, and cybervirtue in robot-human systems. It is up to us to create better cyborgs – either by changing our robots, or by changing ourselves.

The opening image is Elements: Earth by Vitor, which I picked out of his dormant website, The Fractal Forest, and is used with permission. This post is an edited version of What is Cybervirtue?, which ran earlier this week.


What is Cybervirtue?

If virtues are the positive qualities of beings, what are the positive qualities of cyborgs? We call the admirable habits of humans ‘virtues’, so we can call the exemplary properties of the systems they form with their robots ‘cybervirtues’.

For the majority of the recorded history of our species, the concept of virtue has been the primary vehicle of morality. Whatever grasp individuals may or may not have had of their wider situation, the idea that it is a good thing to be brave, polite, patient, generous, or kind was easy to grasp – even if it was not always so easy to put into practice. In 1981, Alasdair MacIntyre’s After Virtue traced the history of virtues up to their contemporary near-demise, supplanted by two new moral systems devised in the Enlightenment: Kant’s moral philosophy that leads to human rights, and John Stuart Mill’s utilitarianism that leads to contemporary corporate consequentialism – a system that no longer resembles morality at all, as the late Derek Parfit charged.

We are beset by moral disasters, in particular where a laudable moral system has become corrupted into a distortion of itself. This is the nature of the two major disasters of contemporary ethics – the moral disaster of individualism, which mistakes selfishness or paternalism for collective responsibility, and the moral disaster of consequentialism, which boils down situations to the point that decisions are easy to make, and in the process destroys the essential context of every ethical challenge. In terms of the disaster of individualism, there is an urgent need to repair our broken concepts of rights now that nations such as the United States and the United Kingdom have abandoned them, while individuals still angrily invoke ‘their rights’ without any understanding of what that claim implies. There is an even more vital requirement to reconfigure the kind of consequentialist thinking that leads both nations and corporations to act in appalling ways because their definitions of what is good are reduced to the merely calculable. But neither of these projects has much hope of proceeding without a substantial reboot of moral thinking, and the academic community cannot achieve this – not without engaging with the wider populace it has been regrettably isolated from.

Reawakening an interest in the qualities of moral agents might be the best chance of reconfiguring our devastated moral mythologies, because we can have productive discussions concerning virtues without requiring much in the way of theoretical meanderings. What’s more, virtues form a set of qualities that no-one is expected to possess in its entirety, making it easier to foster virtues in practice, since the moral standard they set is attainable by everyone, if only in the minimal case of expressing a single virtue. Rules and consequences suggest an absolute, all-or-nothing approach to morality that seems to require (when not approached with care) superhuman powers. Yet virtues sit firmly within human experience, and the key objection against virtue ethics is its failure to be absolute, which only serves to show how hypnotised by the Enlightenment’s moral systems we have become. Besides, if we can rescue virtues, we can rescue rights and outcome-focussed ethics too. One step at a time.

However, there is a crucial new complexity that we must be able to take into account: moral agency can no longer be constrained to humans. On the one hand, we have thankfully abandoned the intellectual conviction that animals cannot behave ethically, a perspective that was dominant right up to the 1980s. Animals are moral beings too: they possess a form of morality that Marc Bekoff and Jessica Pierce call ‘wild justice’, and they cannot be treated as entirely excluded from moral consideration, as was the prevailing belief until quite recently. The embargo on accepting the moral value of animals was ended through the efforts of philosophers like Peter Singer, Mary Midgley, and the aforementioned Alasdair MacIntyre, who (with different methods and different motives) all undermined the assumption that animals did not matter.

But that has not been the only sea change in moral thought, for now we recognise that things have a moral agency too, as Bruno Latour, Peter-Paul Verbeek, and Isabelle Stengers have all touched upon in various ways. It is not that our tools act without us, but through modifying our capabilities for action the things we use reconfigure the moral space we move within – the presence of a gun changes the moral potential of a situation; ultrasound introduces moral complexities to pregnancy that were previously absent; armed drones invite the leaders of nations to turn to assassination as a mere expediency. Allowing for the moral agency of things (even if only as the modification of moral possibilities) changes the question of what counts as a virtue into something radically new and different.

What I mean by cybervirtues are those desirable qualities that a cyborg might possess, and what I mean by cyborg is any combination of beings and things that acts with a greater range of possibilities than either can achieve alone. Of particular interest to me at this time is the cyborg each of us forms with a robot, such as a laptop, a smartphone, or a desktop computer. If you are reading these words, you are a cyborg in the relevant sense, since you could not have encountered what I am writing here without participating directly or indirectly in a network of humans and robots. The qualities of these networks, whether with just a single human and robot, or with a vast plurality of beings and things, are precisely what is at stake when we think about cybervirtues.

So if virtues are the desirable habits of humans and other beings, cybervirtues are the equivalent properties humans possess as cyborgs. There are at least two senses in which we can identify such qualities, and the current Cybervirtue Campaign here at Only a Game is interested in both of them. Firstly, the personal side of cybervirtue concerns the relationship between a robot and its human; the way your smartphone is designed (both in terms of its hardware and its software) governs its moral relationship with you. A simple example of such a personal cybervirtue is the recycling bin, which offers the kindness of protecting against the permanent loss of digital material by separating the decision to discard from the decision to make such discards permanent. Personal cybervirtues offer an internal sense of the term, internal to any given human-robot pairing.
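The design principle behind the recycling bin – keeping the decision to discard separate from the decision to make the discard permanent – can be sketched in a few lines of code. This is a hypothetical illustration of the pattern only, not any actual operating system’s implementation; all the names here are invented for the example:

```python
class RecycleBin:
    """A soft-delete container: discarding and permanent removal are two distinct steps."""

    def __init__(self):
        self._items = {}  # live items, keyed by name
        self._bin = {}    # discarded items, still recoverable

    def add(self, name, item):
        self._items[name] = item

    def discard(self, name):
        # First decision: move the item to the bin. Nothing is lost yet.
        self._bin[name] = self._items.pop(name)

    def restore(self, name):
        # The kindness of the design: a discard can be undone.
        self._items[name] = self._bin.pop(name)

    def empty_bin(self):
        # Second, separate decision: only now does the loss become permanent.
        self._bin.clear()
```

The cybervirtue lies in the gap between `discard` and `empty_bin`: a single careless action can never, by itself, destroy anything.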

On the other hand, social cybervirtues concern how the human-robot cyborg relates to other cyborgs, the external sense of the term. Here it is perhaps easier to demonstrate situations that show a lack of virtue, such as when anonymity in digital public spaces like Twitter encourages appalling behaviour, especially (for some sad reason) towards female cyborgs. Yet the very presence of these machine-invoked moral debilities points to the possibility of cybervirtue in this external sense – the design of hardware and software to encourage virtuous behaviour in the cyborgs that result from the intimate relationship between a robot and its human. Here, we must be careful to avoid the moral disaster of individualism in its guise of paternalism: it is not cybervirtuous to forcibly censor online swearing (although it may be prudent to do so in some situations), because doing so does not encourage virtuous habits in humans.

What of autonomous robots? The capacity of a robot to take independent action, once launched into a designed program of action, somewhat conceals the way these too are cyborgs, always involving a human element in their constitution and operation. A cyborg (a contraction of ‘cybernetic organism’) could be constituted entirely by robots, provided ‘organism’ is taken metaphorically, as is often the case. But the question of whether there might eventually be robots made by other robots, requiring no assistance, direction, or maintenance by humans, draws us away from the problem at hand. If such imagined future robots were beings in the relevant sense, they could possess virtues – and if they did not or could not, they would not be beings in any important sense.

Through the concept of cybervirtue I seek to draw attention both to the meaning of traditional virtues when considered against the backdrop of our vast networks of technology, and also to suggest ways in which the design of our robots’ hardware and software could be made to encourage virtue. Currently, this does not happen: Enlightenment ideals have been twisted into a perverse mockery, deployed in precisely the opposite way to what was envisioned; instead of empowering individual autonomy and mutual respect, a shallow individualism devoid of context bankrolls a commercial consequentialism in which only profit matters. The corporations cannot be blamed for this; they act in precisely the way they have been designed. It is up to us to create better cyborgs – either by changing our robots, or by changing ourselves.

The opening image is What Lies Between Skin and Flesh by Rashid Rana, which I found here on his artnet page. As ever, no copyright infringement is intended and I will take the image down if asked.