
Cybervirtues: The Three Treasures

Ni Zan (1306–1374), Bamboo Branches

Exploring cybervirtue involves taking traditional concepts of virtue and considering how they relate to our relationship with our robots and with the other cyborgs we live with. I wish to begin this process by experimenting with certain Chinese concepts. In the Taoist traditions, three particular virtues are highlighted by the ‘Old Master’ Lao Tzu (or Laozi) as the greatest and most important. Writing in the Tao Te Ching, chapter 67, he states in Arthur Waley’s popular translation:

Here are my three treasures. Guard and keep them! The first is pity; the second, frugality; the third, refusal to be ‘foremost of all things under heaven’.
For only he that pities is truly able to be brave;
Only he that is frugal is able to be profuse.
Only he that refuses to be foremost of all things
Is truly able to become chief of all Ministers.

While pity (or compassion) is something that many people today still recognise as a virtue, frugality (or simplicity) receives at best lip service as a desirable trait, and humility (refusal to be foremost) seems to have disappeared in a time of self-aggrandisement and digital vanity. Nonetheless, I set myself the task of working these Taoist virtues into cybervirtues, not as an attempt to embody Taoist virtue, as such – all translation is betrayal, as the Italian saying goes – but rather to explore cybervirtue by taking Lao Tzu as a starting point.

The idea of simplicity or frugality strikes a chord with me as the virtue of restraint, that habit of being which is required to attain such a state. In a time of compulsive software and hardware dependency, the question of cyber-restraint is an important one. For humility, I substitute the related virtue of respect, which as descendants of the Enlightenment we are perhaps more open to valuing. To be humble is to avoid placing oneself above others, after all, and Kant’s mutual respect entails this – although what cyber-respect would entail requires some thought. Lastly, for compassion I favour kindness, since simply feeling pity lacks any active principle, and asking how we would elicit cyber-kindness strikes me as an important question for all of us cyborgs.

Restraint

To refrain from acting for good reasons, when there is a standing desire to act, is to show restraint. A lack of restraint in the context of our robots means putting the robot in charge and settling for being a mere digital slave. We do not like to admit our debility here, although we will spot it more easily in others. We always have a reason for burying our heads in our phones, even to the exclusion of those around us whom we are supposedly present with. (Commuters, who are physically co-present but not expected to be mentally co-present, are perhaps a different case, although even here a moral argument might be made for restraint.)

Correspondingly, a robot displays cyber-restraint when its functioning encourages restraint in its cyborg. This cybervirtue is almost exclusively internal; a robot's influence on its human either encourages restraint towards it or otherwise. Almost without fail, however, contemporary robots are designed to do the exact opposite of this: to make their humans pay more attention to them, to return again and again to the habitual activities that have been designed to get and keep attention. This may seem more a matter of software and content than hardware, yet when an iPhone offers to squeak and plink to get its human’s attention, it is certainly not displaying cyber-restraint.

A robot is engaged in cyber-itch when it encourages impulsive engagement and thus makes its human merely semi-present in their own world, as all ‘smart phones’ are designed to do. This divided state will not usually trouble the human, who is often delighted to be distracted from their situation. (This is, as I have said elsewhere, the age of distraction.) Yet to yield to cyber-itch constitutes a debility when this semi-present state draws a human away from their life in an inopportune fashion, for instance, a mother not listening to her son because she is thinking about Words With Friends, or a teenager substituting the gratification of Facebook likes for the artful task of cultivating friendships.

Software-makers engender cyber-itch because it is the path to money; insofar as we participate in these commercial practices, we endorse their actions. Yet who among us can do otherwise? We are perpetually drawn into semi-presence, and defend our digital slavery with post-hoc justifications as to why it doesn’t matter or, even more absurdly, how it is positive and beneficial. In suggesting a cybervirtue of restraint, I invite us all to reassess our dependency upon our robots, and the partial engagement in our worlds this engenders.

Respect

To treat other cyborgs as equally worthwhile beings is to display respect. It is not enough to think or say that you respect others: respect is shown by actions (including speech); it is not some subterranean quality to be measured in secret. This virtue facilitates other virtues; to possess respect for others is to open up to other kinds of honourable behaviour. While it is (in principle) not hard to respect cyborgs that you like, maintaining respect for those you do not like is an especial moral challenge every one of us faces, and few of us master – or even desire to do so.

A robot shows cyber-respect when it helps its cyborg act respectfully (which is rare), or acts with respect towards its human (which is merely uncommon). A key sign of external cyber-respect, i.e. respect encouraged between cyborgs, is facilitating consent and thus allowing each cyborg to make up its own mind about what it engages with. Software for sending spam, for instance, is a de facto case of the exact opposite since it expressly sets out to circumvent any attempt at consent. I will not claim this is unacceptable behaviour, but it is clearly not virtuous.

The corresponding debility, cyber-disdain, is extremely common. Robots frequently display cyber-disdain for their humans by forcing unwanted behaviour upon them. Mandatory downloads of operating systems, for example, are a notable violation of cyber-respect, especially when declining once has no effect upon the continuation of this behaviour (as with the iPhone’s remorseless downloading of iOS updates). This should not be confused with situations where the human’s expectations are not met, resulting in anger – this is a debility of the human, and an all too common one at that, not a debilitating quality of the robot.

Cyber-disdain in the internal sense occurs solely when the network of cyborgs that crafted the robot curtails the opportunity for consent to its actions, as with the example of mandatory, space-eating downloads. In the external sense, disrespectful behaviours are encouraged between cyborgs. This is so shockingly common that it must now count as a debility in software-makers that they have not better defended against this possibility. Indeed, it has recently become apparent that search engine algorithms dip into cyber-disdain by indiscriminately aggregating content by links, as Carole Cadwalladr’s article attests. Here, the debility might appear tangential, but responsibility still belongs with the robots and their makers.

Indeed, our robots all too frequently encourage disdain in their humans - especially when anonymity is provided in digital public spaces. The mask provided invites actions unthinkable in a face-to-face interaction (as critiqued in Wikipedia Knows Nothing). This goes far beyond mere circumventing of consent, and into outright aggression and abuse. This is perhaps the most serious case of cyber-disdain facing us today, and moves our discussion into the dark territory of cyber-cruelty.

Kindness

Kindness shows in actions that provide support for another cyborg, including strangers. Forgiveness, generosity, helpfulness, and peace-making are all acts of kindness, and can be effortlessly manifested in cyborgs that express this virtue. To act with kindness is not the same as acting with respect – you can respect someone and decline to treat them kindly, or you can be kind towards someone you don’t actually respect (although this, perhaps, is rarer). Through kindness, we show compassion, and this is separate from allowing for consent and free choice, which are hallmarks of respect.

Cyber-kindness manifests in robots that either encourage compassionate thinking in their cyborg, or that offer kindness towards their human. The former is relatively rare, while the latter often backfires – as when robotic ‘helpfulness’ is nothing of the kind. The most familiar kind of cyber-kindness is when a robot confirms that its human genuinely wanted to delete something. As irritating as confirmation boxes might be to some of us, in defending against accidental loss they manifest this cybervirtue. Similarly, the recycling bin that appears in the interface for many operating systems is a display of cyber-kindness, protecting against potentially distressing mistakes.

The opposite, cyber-cruelty, is markedly more common, and as with cyber-disdain is particularly likely when the robot facilitates the anonymity of its human in digital public spaces. Here, the very decision to permit anonymous presence could be judged an act of cyber-cruelty on the part of the network creating the robots in question. Thus Twitter is a cyber-cruel digital public space, a virtual world where abuse proliferates under the mask of anonymity. Yet even public identities do not appear to avoid this cyber-debility, since Facebook also frequently descends into abusive behaviour. In distancing presence from human relations, our robots are paradigmatically cyber-cruel.

This, then, reinforces the concerns of cyber-itch outlined before: our robots lure us into semi-presence, and invite disdain for others by cloaking us in anonymity. Once removed from human relations this way, the temptation to cruelty lurks, resisted by many, perhaps, but ever-present in the shadows of our digital public spaces, where we shall find so little cybervirtue and blame this, dishonestly, on everyone else. But we have made this dark half-world together – and we can, if we only care to, replace it with something better.

The opening image is Bamboo Branches, by the 14th century poet Ni Zan, part of the collection at the Palace Museum in Beijing.


Babich and Bateman: Corporate Venality

In this latest dialogue between philosopher and Nietzsche scholar Babette Babich and ‘outsider philosopher’ Chris Bateman, we discuss our relationship to corporate power and influence, the possibility of virtuous behaviour against a backdrop of pervasive technology, and living with robots.

Chris Bateman: In your book The Hallelujah Effect, you draw out some of the ways that our apparently freely motivated actions in social media tend to devolve into propping up the corporate ownership of popular culture. Web 2.0’s much vaunted appointment of the masses to the role of ‘content creator’ becomes a kind of ‘free range egg’ alternative to ‘battery farmed’ corporate content creation – the content we ‘create’ is an unpaid corporate promotional service. You suggest this is the new venality, a corporate venality where we are effectively being psychologically ‘bribed’ to prop up corporate interests. And we all end up participating – even those of us who question this arrangement. Is this inevitable? How would someone find a virtuous path in this situation?

Babette Babich: This is a great question.

This is such a great question that I would be up for anyone who might suggest such a path out of the fly bottle.

I don’t think such a path exists, or at least surely not without critique, which is why there is a need to focus on Adorno (even if Adorno’s tone can annoy a lot of readers, and Horkheimer does not make it any better) or Günther Anders (part of Anders’ 1956 The Obsolescence of Humanity, “On Promethean Shame”, is now available in English as part of Christopher Müller’s Prometheanism) or Marcuse or indeed Baudrillard. The problem is that, as happy denizens of the capitalist world view, that is, as happy consumers dedicated to feeding the corporate machine all the gigantic profits it can eat, the dynamic is just as self-detonating as Marx suggested long ago and as we have quite empirically found, to our great distress, bubble after bubble, in the UK under Thatcher and since then, as if no lessons had been learnt – and they weren’t – and on-going again with Brexit and its subsidiary consequences.

CB: On Brexit, I shall have to defer comment, if only for brevity, although it is something I spend quite some time thinking about, and remain quite conflicted.

BB: Let me also note – although I am mindful of the complexity deferred – that academic authors are perhaps more insulated from the same profit question though no less involved. I was just interviewed for a UK job (patently I was not offered the position or I would not be talking about it, and that complicity is worth a blog of its own) where I was asked point blank about just how I planned to bring corporate money to the university: I have no such plans or techniques for the same, and I said as much. What I did not say was that I thought that the very idea violated academic integrity. And yet there is no way to be an academic and not be involved with corporate money, corporate enhancement, corporate exchange, whether one intends this or not. Thus, to limit this complexity to something as seemingly innocuous as research – academics routinely write (and edit: recruit and collate for journals and book collections, read copy: one’s own, that of others) for no compensation, before, during, or after publication. Yet most academics who write for the sake of communication and engagement, the furthering of research, also vigorously protest file-sharing – a complex issue on which you have written and which remains very resistant to analysis at any level. Maybe this is because the digital ontology of our lives is happening, or unfolding, all around us.

CB: You say ‘corporate machine’, and ‘machine’ was specifically the metaphor Ivan Illich used to describe the way that Western society – the democratic, free market economy, rights-claiming ideological system – functioned, and he tried to warn countries that had not yet been pulled into it what they stood to lose. Because it seems, as you suggest here, that once inside there is no way out, no way back, because once you join an economy of specialisation and dependence upon manufacture and commerce you can never be self-sufficient in any tangible sense. The moment you are not growing your own food, the moment you lose those skills that once were the mainstay of existence, you are entirely dependent upon the system to sustain you – and the system now is the corporations just as in feudal times it was the aristocracy. Feudal capitalism would be another way to see it; millionaires swearing fealty to billionaires, and the peasants in thrall to their employers who – whether by bank or by capital investment – all depend upon millionaires or billionaires somewhere down the line.

BB: It was Illich’s metaphor and it still bears reflection, as I often connect it to the Frankfurt School theorists and their culture industry. I write occasionally about Illich in connection with education and health care, but I think his comments even more vital in connection with what he called conviviality, and this complex feudality is, I think, part of that, as we live semi-blind to the system that sustains us, as you say, and corporate feudality is a matrix all its own. Illich pointed out that what is at issue is more than the preservation of exotic cultures, Carmen Sandiego style (where in the world can one be free of corporate control?), if only because, as he also argued, there is no place, to one side or the other of Brexit complexity or Trump wall-building insanity, where one is or could be free of corporate control. Illich’s writing and sensibility have, if one opens oneself to reading him, the capacity to raise the tone, to remind us of conviviality, that we are very much in this together – even if we need the spirit of an almost-saint, as I believe Illich was, we certainly need the patience of one, even to pose such questions at all. Hence toward the end of his life, talking about his decision to shut down the centre he himself had founded in Cuernavaca, Illich spoke of the “cultivation of conspiracy,” meaning, as he gave a historical as well as etymological discussion, being close enough to others that we might be able to share their breath, to breathe with them. “Con-spiratio.”

CB: Finding Illich was transformational for me, because here were precisely the questions not being asked today, and (soberingly) being asked around the time that I was born. Having taken these questions to heart, though, I have felt remarkably isolated – because to think with Illich is to exile yourself from almost everyone else today, to challenge everything that most people unquestioningly take to be the best aspect of what we have (whether education, medicine, or transport). Worst of all for me is precisely that I do not feel close enough to others to share breath – far from the delight in my nomadic existence that gave me my company name (International Hobo Limited), now I feel an acute sense of the costs of that nomadic existence, but the habit is there now, and it is terribly difficult to break… I had not thought, before you mentioned it, of this other meaning of ‘conspiracy’, shared breath, but yes, I suppose I yearn for a conspiracy I feel able to belong to!

BB: Historical philology is captivating and one learns from Illich’s account, but you mentioned him to touch on the question less of spirit (and Illich’s conspiracy) than speculation and capital, specifically our unwitting involvement with it, an unwitting involvement that tends to become quite witting. Most of us at university will seek corporate sponsorship if we can. Hence a nearly universal response to Illich on breathing the breath of others – he was speaking of the original meaning of the kiss of peace – can be the same as a fairly universal response to Žižek in the time of Occupy Wall Street (I include here some of the photos I took of the movement, during a day of organized protests, with union support from all over the New York seaboard, and mindful that Occupy has in the space of a few years passed into a cliché for a thoroughly defeated movement) as Žižek suggested then that we might, say, dismantle capitalism – that would be the ultimate Brexit. As is his wont, Žižek articulated our own answer for us in an effort to get us to think about what we consider – this is a Lacanian conundrum – what we regard as or name the impossible.

The dialogue continues next week: Monopoly and Other Games


Beyond Futile Outrage

Sarah Goodyear, Rage

Why does our world suddenly seem to be filled with outrage, yet nothing changes? When our moral intuitions provoke anger, we voice our hatred or cynicism online and somehow feel that is enough. Nothing changes, since we have lost a common ethical backdrop against which we can adequately even discuss our ethics, let alone bring about any kind of change. Of the three ways of conducting moral thought, we have abandoned one of them, and corrupted the other two; thus our moral intuitions have lost their force, since the context that gave them meaning has broken down. The result is anger against other people who do not share our values, with no possibility of a productive dialogue that can bring about a new state of affairs. This is the paralysis brought on by outrage culture.

To bring about changes requires a common standard, and the problem with contemporary ethical thought is that we do not understand our moral mythos well enough to maintain a shared basis for judgement. As moral psychologists have reported, the tendency is for us to have an emotional response to a situation (e.g. outrage), then afterwards to dress it up in justifications (e.g. demonisation of a particular identity) – what Jonathan Haidt waggishly dubbed ‘the emotional dog and its rational tail’. However, I break with the psychologists who advance this theory at the conclusions they draw from it, which amount to the perilous assumption that moral philosophy has no role, as well as questioning their research methods, which are inadequate to most of the conclusions being drawn.

Haidt’s idea that our moral intuitions are embedded in our social connections is substantially correct, but it is not significantly contrary to the views of any philosopher, as J.W. Gray and others have argued. What’s missing in Haidt’s social intuitionist model is how the social intuitions became set up: he has forgotten the role of history in establishing common standards of moral judgement, and once this is taken into account it becomes clear that contemporary morality is intimately connected to moral philosophy. Indeed, as Alasdair MacIntyre outlined in After Virtue thirty-five years ago, the nature of this moral catastrophe is the lack of connection between our habits of judgement and their philosophical roots, which can be traced across the span of European history. Attempting to remove philosophy from consideration, as key moral psychologists such as Haidt argue, is to deepen the crisis, not resolve it, and Haidt ultimately ends up advancing an impotent argument for the status quo. We will get nowhere if we fail to situate the ethical crisis within its cultural history.

The three key approaches to morality are concerned with the qualities of agents (virtue), the nature of actions (duty), and the consequences that result (outcomes). I contend that all three forms of moral thinking are vital, but it is important to remember that for the majority of the recorded history of our species, the concept of virtue has been the primary vehicle of morality. Whatever grasp individuals may or may not have had of their wider situation, the idea that it is a good thing to be brave, polite, patient, generous, or kind was easy to grasp – even if it was not always so easy to put into practice. MacIntyre’s After Virtue traces the history of virtues up to their contemporary near-demise, supplanted by two new moral systems devised in the Enlightenment. Firstly, Kant’s duty-based philosophy that leads to human rights, but then inexplicably devolves into posturing about ‘having a right’ in situations where nothing of the kind applies. (Here, Haidt is right: moral philosophy is not entailed in people’s moral judgements – but in this case that is precisely the problem.) Secondly, John Stuart Mill’s outcome-based utilitarianism that begins by aiming at ‘maximising the good’ yet leads to contemporary corporate consequentialism that merely maximises profit. What’s more, this kind of consequentialism, which judges purely by outcome, is a system that no longer resembles morality at all, as the late Derek Parfit accused.

Thus we are beset by moral disasters, as we have all but lost one key way of thinking about ethics and broken the others such that otherwise laudable moral systems have become corrupted distortions of themselves. This is the nature of the two major disasters of contemporary ethics – the moral disaster of individualism, which confuses selfishness or paternalism for collective responsibility, and the moral disaster of consequentialism, which boils down complex situations to the point that decisions are easy to make, and in the process destroys the essential context of every ethical challenge. In terms of the disaster of individualism, there is an urgent need to repair our broken concepts of rights now that nations such as the United States and the United Kingdom have abandoned them while individuals still angrily invoke ‘their rights’ without any understanding of what that claim implies. There is an even more vital requirement to reconfigure the kind of consequentialist thinking that leads both nations and corporations to act in appalling ways because their definitions of what is good are reduced to the merely calculable, substituting expediency for any concept of ethics. Neither of these recovery projects has much hope of success without a substantial reboot of moral thinking, and the academic community cannot achieve this – not without engaging with the wider populace it has been regrettably isolated from.

Reawakening an interest in the qualities of moral agents might be the best chance of reconfiguring our devastated moral mythologies, because we can have productive discussions concerning virtues without requiring much in the way of theoretical meanderings. What’s more, virtues form a set of qualities that no-one would expect everyone to possess, making it easier to foster virtues in practice, since the moral standard they set is attainable by everyone, at least in the minimal case of expressing a single virtue. Rules and consequences suggest an absolute, all-or-nothing approach to morality that seems to require (when not approached with care) superhuman powers. Yet virtues sit firmly within human experience, and the key objection against virtue ethics – its failure to be absolute – only serves to show how hypnotised by the Enlightenment’s moral systems we have become. Besides, if we can rescue virtues, we can rescue rights and outcome-focussed ethics too. One step at a time.

Outrage culture is the product of people who project morality against others, and rarely use it to judge themselves. There is a natural tendency to do this, one that has been well known to philosophers and indeed religious teachers for many centuries. What is conspicuously absent today is what in Chaos Ethics I call moral representation, which is to say, shared reflection on our ethical values and judgements, something also suggested as necessary by Haidt’s social intuitionist theory. We need to represent our moral positions to each other in part because once our anger is engaged, productive discussion is blocked by moral horror (what psychologists term cognitive dissonance), and so the reflective aspect of morality – which is far more important than we usually recognise – never occurs. If there is a lesson we should take from moral psychology it is not that we should abandon the study of our historically-embedded moral systems but that we have failed to adequately maintain the social basis of moral judgement. We are no longer conducting any viable form of morality, and we really have become the caricature that Haidt imagines, dressing up our emotional reactions in convenient justifications.

To overcome this impasse requires new discourses, and it is my suggestion that we start by talking about virtues, since we can recognise what is good in a person – or a cyborg – without it setting off our moral horror, which closes us off from difficult-to-accept ideas. Too much of our ethical practice has become a sustained outpouring of vitriol against what we perceive as evil; often judging as evil without ever stopping to consider the complexities of the situation. Whether we are talking about those who voted to leave the EU in the United Kingdom, or those who voted for a populist demagogue in the United States, it is not enough to angrily shout ‘racism! racism!’ and expect this to suffice for politics – or for ethics. Perhaps if we can recover some perspective on the good, we can stop being so ineffectually obsessed with raging at the evil we see everywhere around us. Outrage culture is either impotent or blindly vengeful – neither of which is terribly helpful. To get beyond this, we have to learn to talk about morality in ways that can cross easily between any religious or secular tradition, and virtues might just provide a way of doing this.

This post is edited from sections of the piece run last week, What is Cybervirtue?, which has been re-run (as What is Cybervirtue? Version 2.0) without this discussion within it. The opening image is Rage by Sarah Goodyear, which I found here on her Artdoxa page. As ever, no copyright infringement is intended and I will take the image down if asked.


Starting this week: Babich and Bateman, Dialogue II

As well as the continuation of the Cybervirtue Campaign (which has its first player now – the awesome Ari, who I’m thrilled to reconnect with), this week also sees the start of a new Babich and Bateman Dialogue, provisionally entitled Living with Machines. Starts on Thursday – hold on to your seats, as it gets off to a turbulent start! (There should be four of these dialogues in all… the fruits of what started as an interview with Babette but which has spiralled out of control into something much more interesting.)

Remember last year when I suggested I might go down to fortnightly posts? Seems instead I have stepped the pace back up to twice-weekly posts. How typical of me!

See you in the comments!


What is Cybervirtue? Version 2.0

Vitor, Elements: Earth

If virtues are the positive qualities of beings, what are the positive qualities of cyborgs? We call the admirable habits of humans ‘virtues’, so we can call the exemplary properties of the systems they form with their robots cybervirtues.

What I mean by cybervirtues are those desirable qualities that a cyborg might possess, and what I mean by cyborg is any combination of beings and things that acts with a greater range of possibilities than either can achieve alone. Of particular interest to me at this time is the cyborg each of us forms with a robot, such as a laptop, a smartphone, or a desktop computer. If you are reading these words, you are a cyborg in the relevant sense, since you could not have encountered what I am writing here without participating directly or indirectly in a network of humans and robots. The qualities of these networks, whether with just a single human and robot, or with a vast plurality of beings and things, are precisely what is at stake when we think about cybervirtues.

So if virtues are the desirable habits of humans and other beings, cybervirtues are the equivalent properties humans possess as cyborgs. There are at least two senses in which we can identify such qualities, and the current open forum here at Only a Game is interested in both of them. Firstly, the personal side of cybervirtue concerns the relationship between a robot and its human; the way your smart phone is designed (both in terms of its hardware and its software) governs its moral relationship with you. A simple example of such a personal cybervirtue is the recycling bin, which offers the kindness of protecting against the permanent loss of digital material by separating the decision to discard from the decision to make such discards permanent. Personal cybervirtues offer an internal sense of the term, internal to any given human-robot pairing.

On the other hand, social cybervirtues concern how the human-robot cyborg relates to other cyborgs, the external sense of the term. Here it is perhaps easier to demonstrate situations that show a lack of virtue, such as when anonymity in digital public spaces such as Twitter encourages appalling behaviour, especially (for some sad reason) towards female cyborgs. Yet the very presence of these machine-invoked moral debilities points to the possibility of cybervirtue in this external sense – the design of hardware and software to encourage virtuous behaviour in the cyborgs that result from the intimate relationship between a robot and its human.

What of autonomous robots? The capacity for a robot to take independent action once launched into a designed program of action somewhat conceals the way these are also cyborgs, always involving a human element in their constitution and operation. A cyborg (a contraction of ‘cybernetic organism’) could be constituted entirely by robots, provided ‘organism’ is taken metaphorically, as is often the case. But the question of whether there might eventually be robots made by other robots, entailing no assistance, direction, or maintenance by humans, draws us away from the problem at hand. If such imagined future robots were beings in the relevant sense, they could possess virtues – and if they did not or could not, they would not be beings in any important sense.

Yet we do not need science fiction’s beloved sentient AI for robots to intrude upon ethics – both in terms of how they are designed and how they affect human behaviour, computers have a significant moral aspect. What’s more, this is true of all tools, for as Bruno Latour, Peter-Paul Verbeek, and Isabelle Stengers have all touched upon in various ways, things possess a moral agency too. It is not that our tools act without us, but through modifying our capabilities for action the things we use reconfigure the moral space we move within – the presence of a gun changes the moral potential of a situation; ultrasound introduces moral complexities to pregnancy that were previously absent; armed drones invite the leaders of nations to turn to assassination as a mere expediency. When we allow for the moral agency of things (even as just the modification of moral possibilities), the question of what is a virtue changes into something radically new and different, and that new perspective is precisely what I am seeking to explore.

Through the concept of cybervirtue I seek to draw attention both to the meaning of traditional virtues when considered against the backdrop of our vast networks of technology, and also to suggest ways in which the design of our robots’ hardware and software could be made to encourage virtue. Currently, this does not happen, but perhaps only because we are unaccustomed to thinking about the design of computerised systems in this way. Better design does not have to be about utility, which is a more problematic concept than we think; it could also be about encouraging virtue in humans, and cybervirtue in robot-human systems. It is up to us to create better cyborgs – either by changing our robots, or by changing ourselves.

The opening image is Elements: Earth by Vitor, which I picked out of his dormant website, The Fractal Forest, and is used with permission. This post is an edited version of What is Cybervirtue?, which ran earlier this week.


Jon Cogburn's Commentary on Babich and Bateman, Dialogue I

Pleased to report that Jon Cogburn, who is one of the professional philosophers interested in games (rather than professional games designers embroiled in philosophy, such as Ian Bogost, Stefano Gualeni, and myself…), took an interest in the first Babich and Bateman dialogue, The Last of the Continental Philosophers. Over at the multi-author Philosophical Percolations blog, Jon provided some excellent commentary on our discussions under the title One more difference between analytic and continental philosophy. Here’s an extract:

I do have one quibble with Babich’s characterization of analytic and continental philosophy. I think that in characterizing continental philosophy she tends to characterize what the Mighty Dead of that tradition have done and in characterizing analytic philosophy she tends to characterize what standard academic philosophers get up to. But if you do this, then of course analytic philosophy ends up looking stupid when contrasted to continental philosophy. It’s dangerous too as we might lose sight of the fact that philosophy is egregiously difficult, so much so that most of it is going to be mediocre. The problem with analytic philosophy isn’t that the overwhelming majority of it is mediocre, but that the self appointed (though widely recognized) mandarins of analytic philosophy don’t have enough humility to recognize this. I would hate to see Babich unwittingly recapitulate this vice.

This makes the dialogue part of the Republic of Bloggers, and that is always good news. My thanks to Jon for his thoughtful contributions to the topic. And speaking of Babich and Bateman, Dialogue II is on its way – look out for that soon!

 


Sunset

Over on ihobo today, my critique of Tale of Tales’ 2015 artgame Sunset. Here’s an extract:

There are guns in Sunset, but you never see them. Indeed, this is a game that spectacularly eschews conventional spectacle. Throughout the game’s slowly-unfolding story, a civil war against a 1970s South American dictatorship is witnessed both from a distance – the sound of gunfire in the streets, an explosion at a neighbouring building – and from the intimate inside, since the player serves as maid to a key politician-turned-rebel. It is an ambitious, highly theatrical staging, and admirable when it works, which it does more often than not… Yet to treat Sunset purely as a narrative game is to rob it of its greatest achievement, and perhaps also to misunderstand one of the layers of meaning wrapped up in its name.

You can read the entirety of Sunset over on ihobo.com.


What is Cybervirtue?

Rashid Rana, What Lies Between Skin and Flesh

If virtues are the positive qualities of beings, what are the positive qualities of cyborgs? We call the admirable habits of humans ‘virtues’, so we can call the exemplary properties of the systems they form with their robots cybervirtues.

For the majority of the recorded history of our species, the concept of virtue has been the primary vehicle of morality. Whatever grasp individuals may or may not have had of their wider situation, the idea that it is a good thing to be brave, polite, patient, generous, or kind was easy to grasp – even if it was not always so easy to put into practice. In 1981, Alasdair MacIntyre’s After Virtue traced the history of virtues up to their contemporary near-demise, supplanted by two new moral systems devised in the Enlightenment: Kant’s moral philosophy that leads to human rights, and John Stuart Mill’s utilitarianism that leads to contemporary corporate consequentialism – a system that no longer resembles morality at all, as the late Derek Parfit accused.

We are beset by moral disasters, in particular where a laudable moral system has become corrupted into a distortion of itself. This is the nature of the two major disasters of contemporary ethics – the moral disaster of individualism, which mistakes selfishness or paternalism for collective responsibility, and the moral disaster of consequentialism, which boils down situations to the point that decisions are easy to make, and in the process destroys the essential context of every ethical challenge. In terms of the disaster of individualism, there is an urgent need to repair our broken concepts of rights now that nations such as the United States and the United Kingdom have abandoned them while individuals still angrily invoke ‘their rights’ without any understanding of what that claim implies. There is an even more vital requirement to reconfigure the kind of consequentialist thinking that leads both nations and corporations to act in appalling ways because their definitions of what is good are reduced to the merely calculable. But neither of these projects has much hope of proceeding without a substantial reboot of moral thinking, and the academic community cannot achieve this – not without engaging with the wider populace it has been regrettably isolated from.

Reawakening an interest in the qualities of moral agents might be the best chance of reconfiguring our devastated moral mythologies, because we can have productive discussions concerning virtues without requiring much in the way of theoretical meanderings. What’s more, virtues form a set that no-one is expected to possess in full, making it easier to foster virtues in practice since the moral standard they set is attainable by everyone, at least in the minimal case of expressing a single virtue. Rules and consequences suggest an absolute, all-or-nothing approach to morality that seems to require (when not approached with care) superhuman powers. Yet virtues sit firmly within human experience, and the key objection against virtue ethics is its failure to be absolute, which only serves to show how hypnotised by the Enlightenment’s moral systems we have become. Besides, if we can rescue virtues, we can rescue rights and outcome-focussed ethics too. One step at a time.

However, there is a crucial new complexity that we must be able to take into account: moral agency can no longer be constrained to humans. On the one hand, we have thankfully abandoned the intellectual conviction that animals cannot behave ethically, a perspective that was dominant right up to the 1980s. Animals are moral beings too: they possess a form of morality that Marc Bekoff and Jessica Pierce call ‘wild justice’, and they cannot be treated as entirely excluded from moral consideration, as was the prevailing belief until quite recently. The embargo on accepting the moral value of animals was ended through the efforts of philosophers like Peter Singer, Mary Midgley, and the aforementioned Alasdair MacIntyre, who (with different methods and different motives) all undermined the assumption that animals did not matter.

But that has not been the only sea change in moral thought, for now we recognise that things have a moral agency too, as Bruno Latour, Peter-Paul Verbeek, and Isabelle Stengers have all touched upon in various ways. It is not that our tools act without us, but through modifying our capabilities for action the things we use reconfigure the moral space we move within – the presence of a gun changes the moral potential of a situation; ultrasound introduces moral complexities to pregnancy that were previously absent; armed drones invite the leaders of nations to turn to assassination as a mere expediency. Allowing for the moral agency of things (even as just the modification of moral possibilities) changes the question of what is a virtue into something radically new and different.

What I mean by a cybervirtue are those desirable qualities that a cyborg might possess, and what I mean by cyborg is any combination of beings and things that acts with a greater range of possibilities than either can achieve alone. Of particular interest to me at this time is the cyborg each of us forms with a robot, such as a laptop, a smartphone, or a desktop computer. If you are reading these words, you are a cyborg in the relevant sense since you could not have encountered what I am writing here without participating directly or indirectly in a network of humans and robots. The qualities of these networks, whether with just a single human and robot or with a vast plurality of beings and things, are precisely what is at stake when we think about cybervirtues.

So if virtues are the desirable habits of humans and other beings, cybervirtues are the equivalent properties humans possess as cyborgs. There are at least two senses in which we can identify such qualities, and the current Cybervirtue Campaign here at Only a Game is interested in both of them. Firstly, the personal side of cybervirtue concerns the relationship between a robot and its human; the way your smart phone is designed (both in terms of its hardware and its software) governs its moral relationship with you. A simple example of such a personal cybervirtue is the recycling bin, which offers the kindness of protecting against the permanent loss of digital material by separating the decision to discard from the decision to make such discards permanent. Personal cybervirtues offer an internal sense of the term, internal to any given human-robot pairing.

On the other hand, social cybervirtues concern how the human-robot cyborg relates to other cyborgs, the external sense of the term. Here it is perhaps easier to demonstrate situations that show a lack of virtue, such as when anonymity in digital public spaces such as Twitter encourages appalling behaviour, especially (for some sad reason) towards female cyborgs. Yet the very presence of these machine-invoked moral debilities points to the possibility of cybervirtue in this external sense – the design of hardware and software to encourage virtuous behaviour in the cyborgs that result from the intimate relationship between a robot and its human. Here, we must be careful to avoid the moral disaster of individualism in its guise of paternalism: it is not cybervirtuous to forcibly censor online swearing (although it may be prudent to do so in some situations), because doing so does not encourage virtuous habits in humans.

What of autonomous robots? The capacity for a robot to take independent action once launched into a designed program of action somewhat conceals the way these are also cyborgs, always involving a human element in their constitution and operation. A cyborg (a contraction of ‘cybernetic organism’) could be constituted entirely by robots, provided ‘organism’ is taken metaphorically, as is often the case. But the question of whether there might eventually be robots made by other robots and entailing no assistance, direction, or maintenance by humans draws us away from the problem at hand. If such imagined future robots were beings in the relevant sense, they could possess virtues – and if they did not or could not, they would not be beings in any important sense.

Through the concept of cybervirtue I seek to draw attention both to the meaning of traditional virtues when considered against the backdrop of our vast networks of technology, and also to suggest ways in which the design of our robots’ hardware and software could be made to encourage virtue. Currently, this does not happen: we have made a perverse mockery of the Enlightenment’s ideals, deploying them in precisely the opposite way to that envisioned; instead of empowering individual autonomy and mutual respect, a shallow individualism devoid of context bankrolls a commercial consequentialism where only profit matters. The corporations cannot be blamed for this; they act in precisely the way they have been designed to. It is up to us to create better cyborgs – either by changing our robots, or by changing ourselves.

The opening image is What Lies Between Skin and Flesh by Rashid Rana, which I found here on his artnet page. As ever, no copyright infringement is intended and I will take the image down if asked.


Librarians on File-sharing

File Sharing Before It Was Cool

When pirates make copies of music, television shows, movies, or books, they don’t just keep them for themselves, and they don’t sell them on for money – they share their files freely with their (illegal) community. That means file-sharing pirates are operating as underground librarians. Yet despite this, the most noise about piracy has come from musicians (whose album sales have fallen sharply because of it) and especially the media corporations who market them. But the people I most want to hear from about the ethics of file-sharing are the librarians themselves.

What do the librarians think about file-sharing? What do they think are appropriate penalties for operating an unauthorised library? Are there any librarians who would consider allying with pirates for legal distribution of material (e.g. out of copyright books)? Do you think the future of libraries and the future of file-sharing are related in any way?

If you are a librarian, I would love to hear from you, either in the comments here, or in a reply to the tweet that pointed to this post. And if not, please promote the discussion by retweeting the original tweet.