What is Cybervirtue?
January 17, 2017
If virtues are the positive qualities of beings, what are the positive qualities of cyborgs? We call the admirable habits of humans ‘virtues’, so we can call the exemplary properties of the systems they form with their robots cybervirtues.
For the majority of the recorded history of our species, the concept of virtue has been the primary vehicle of morality. Whatever grasp individuals may or may not have had of their wider situation, the idea that it is a good thing to be brave, polite, patient, generous, or kind was easy to grasp – even if it was not always so easy to put into practice. In 1981, Alasdair MacIntyre’s After Virtue traced the history of virtues up to their contemporary near-demise, supplanted by two new moral systems devised in the Enlightenment: Kant’s moral philosophy that leads to human rights, and John Stuart Mill’s utilitarianism that leads to contemporary corporate consequentialism – a system that no longer resembles morality at all, as the late Derek Parfit accused.
We are beset by moral disasters, in particular where a laudable moral system has become corrupted into a distortion of itself. This is the nature of the two major disasters of contemporary ethics – the moral disaster of individualism, which confuses selfishness or paternalism for collective responsibility, and the moral disaster of consequentialism, which boils down situations to the point that decisions are easy to make, and in the process destroys the essential context of every ethical challenge. In terms of the disaster of individualism, there is an urgent need to repair our broken concepts of rights now that nations such as the United States and the United Kingdom have abandoned them, while individuals still angrily invoke ‘their rights’ without any understanding of what that claim implies. There is an even more vital requirement to reconfigure the kind of consequentialist thinking that leads both nations and corporations to act in appalling ways because their definitions of what is good are reduced to the merely calculable. But neither of these projects has much hope of proceeding without a substantial reboot of moral thinking, and the academic community cannot achieve this – not without engaging with the wider populace it has been regrettably isolated from.
Reawakening an interest in the qualities of moral agents might be the best chance of reconfiguring our devastated moral mythologies, because we can have productive discussions concerning virtues without requiring much in the way of theoretical meanderings. What’s more, virtues form a set of qualities that no-one would expect any single person to possess in full, making it easier to foster virtues in practice, since the moral standard they set is attainable by everyone, if only in the minimal case of expressing a single virtue. Rules and consequences suggest an absolute, all-or-nothing approach to morality that seems to require (when not approached with care) superhuman powers. Yet virtues sit firmly within human experience, and the key objection against virtue ethics is its failure to be absolute – which only serves to show how hypnotised by the Enlightenment’s moral systems we have become. Besides, if we can rescue virtues, we can rescue rights and outcome-focussed ethics too. One step at a time.
However, there is a crucial new complexity that we must be able to take into account: moral agency can no longer be constrained to humans. On the one hand, we have thankfully abandoned the intellectual conviction that animals cannot behave ethically, a perspective that was dominant right up to the 1980s. Animals are moral beings too: they possess a form of morality that Marc Bekoff and Jessica Pierce call ‘wild justice’, and can no longer be treated as entirely excluded from moral consideration. The embargo on accepting the moral value of animals was ended through the efforts of philosophers like Peter Singer, Mary Midgley, and the aforementioned Alasdair MacIntyre, who (with different methods and different motives) all undermined the assumption that animals did not matter.
But that has not been the only sea change in moral thought, for now we recognise that things have a moral agency too, as Bruno Latour, Peter-Paul Verbeek, and Isabelle Stengers have all touched upon in various ways. It is not that our tools act without us, but through modifying our capabilities for action the things we use reconfigure the moral space we move within – the presence of a gun changes the moral potential of a situation; ultrasound introduces moral complexities to pregnancy that were previously absent; armed drones invite the leaders of nations to turn to assassination as a mere expediency. Allowing for the moral agency of things (even as just the modification of moral possibilities) changes the question of what is a virtue into something radically new and different.
What I mean by a cybervirtue is any of those desirable qualities that a cyborg might possess, and what I mean by cyborg is any combination of beings and things that acts with a greater range of possibilities than either can achieve alone. Of particular interest to me at this time is the cyborg each of us forms with a robot, such as a laptop, a smartphone, or a desktop computer. If you are reading these words, you are a cyborg in the relevant sense, since you could not have encountered what I am writing here without participating directly or indirectly in a network of humans and robots. The qualities of these networks, whether with just a single human and robot, or with a vast plurality of beings and things, are precisely what is at stake when we think about cybervirtues.
So if virtues are the desirable habits of humans and other beings, cybervirtues are the equivalent properties humans possess as cyborgs. There are at least two senses in which we can identify such qualities, and the current Cybervirtue Campaign here at Only a Game is interested in both of them. Firstly, the personal side of cybervirtue concerns the relationship between a robot and its human; the way your smartphone is designed (both in terms of its hardware and its software) governs its moral relationship with you. A simple example of such a personal cybervirtue is the recycling bin, which offers the kindness of protecting against the permanent loss of digital material by separating the decision to discard from the decision to make such discards permanent. Personal cybervirtues offer an internal sense of the term, internal to any given human-robot pairing.
On the other hand, social cybervirtues concern how the human-robot cyborg relates to other cyborgs, the external sense of the term. Here it is perhaps easier to demonstrate situations that show a lack of virtue, such as when anonymity in digital public spaces such as Twitter encourages appalling behaviour, especially (for some sad reason) towards female cyborgs. Yet the very presence of these machine-invoked moral debilities points to the possibility of cybervirtue in this external sense – the design of hardware and software to encourage virtuous behaviour in the cyborgs that result from the intimate relationship between a robot and its human. Here, we must be careful to avoid the moral disaster of individualism in its guise of paternalism: it is not cybervirtuous to forcibly censor online swearing (although it may be prudent to do so in some situations), because doing so does not encourage virtuous habits in humans.
What of autonomous robots? The capacity for a robot to take independent action once launched into a designed program of action somewhat conceals the way these too are cyborgs, always involving a human element in their constitution and operation. A cyborg (a contraction of ‘cybernetic organism’) could be constituted entirely by robots, provided ‘organism’ is taken metaphorically, as is often the case. But the question of whether there might eventually be robots made by other robots and entailing no assistance, direction, or maintenance by humans draws us away from the problem at hand. If such imagined future robots were beings in the relevant sense, they could possess virtues – and if they did not or could not, they would not be beings in any important sense.
Through the concept of cybervirtue I seek to draw attention both to the meaning of traditional virtues when considered against the backdrop of our vast networks of technology, and also to suggest ways in which the design of our robots’ hardware and software could be made to encourage virtue. Currently, this does not happen: the Enlightenment’s ideals have been twisted into a perverse mockery, deployed in precisely the opposite way to what was envisioned; instead of empowering individual autonomy and mutual respect, a shallow individualism devoid of context bankrolls a commercial consequentialism where only profit matters. The corporations cannot be blamed for this; they act in precisely the way they have been designed. It is up to us to create better cyborgs – either by changing our robots, or by changing ourselves.
The opening image is What Lies Between Skin and Flesh by Rashid Rana, which I found here on his artnet page. As ever, no copyright infringement is intended and I will take the image down if asked.
Interestingly enough, one of the last papers I wrote as a philosophy major in college dealt with holding cyborgs morally responsible. The specific thought experiment was (roughly, this was a few years ago) about someone who, having done some evil/immoral acts with a "cyborg" helmet (one that increased their mental faculties enormously), was caught. If the helmet was removed, could the human then claim that the "cyborg" was not them, i.e. can we hold the human accountable for the cyborg's actions?
What I hit upon was that we have a framework we could perhaps refashion to apply to cyborgs, which is how we deal with corporations, which are basically cyborgs anyway. (Social cyborgs?) We have some sense of when we should hold the corporation as a whole responsible, or the management, or particular individuals within that system. We consider them legal persons, a legal fiction that allows us to hold the corporation as a whole responsible in the judicial system.
Of course, we're not very good at having corporations act virtuously either, but I also have ideas for that.
Posted by: Ari | January 19, 2017 at 03:17 PM
Hi Ari,
Your cyborg thought experiment strikes me as a very specific instantiation of the more general problem of connectivity between selves. I discuss this at the start of Chaos Ethics, and draw against Derek Parfit, who in turn draws against Thomas Nagel, with the concept of series-persons: seeing our lives not as one entity, but as a chain of entities. Parfit wants to use this to argue, effectively, for a lenient way of looking at crime - that the crimes of the earlier series-person should not be held against a later series-person, who is effectively a 'different' person. (You may already be familiar with this reference, of course).
What I end up doing is putting this into opposition with Alasdair MacIntyre's point that the unity of self - a consistent narrative across our lives - is actually essential for our moral selves. A quick and dirty way of getting to this point is that if we did follow Nagel/Parfit, it invites irresponsibility by basically saying 'whatever I do now, a future series-person won't be accountable for it'. That's not how I pursue this in the book, which is more about raising questions than answering them, but I do side with MacIntyre on this one: yes, we are different people at different times of our lives, but narrative unity is, as Daniel Dennett also argued, the basis of self. We should not be too keen to undermine that.
Now, you turn this discussion to corporations... I shall defer this for now! The posts at the beginning of February will hit this point dead on, and will be a good time to discuss your thoughts and ideas. But I do object to your suggested term 'social cyborg' - corporations are rather anti-social cyborgs... and I am definitely interested in how we might encourage virtue in them. As I say - we shall get to this!
All the best!
Posted by: Chris | January 20, 2017 at 06:06 AM
The social cybervirtues segment immediately made me think of video games, especially the kind of online competitive multiplayer videogames I generally play (as you know!).
Street Fighter V practically "forcibly censors" almost all forms of communication with your opponents (by barely allowing any, beyond a few meagre forms of self-expression) and this fact has been criticised recently for its lack of any community feeling.
On the other hand, maybe that's a good thing, as an acquaintance of mine recently wrote about the game-design elements that cause MOBAs (Lane-pushers to me, ;-) ) to become so frequently toxic https://stevesgamedesignblog.wordpress.com/2017/01/15/why-are-mobas-so-toxic/
My own recent thought about this too was about how these types of team competitive games could actually try to encourage you to play with your friendship group in actual pre-arranged teams, rather than relying on random stranger cyborgs on the internet to act in an agreeable manner (http://www.agoners.com/index.php/why-ranked-modes-in-team-based-videogames-are-doing-it-wrong/)
Hope you don't mind these tangential videogame related nuggets :-)
Posted by: Rik Newman | January 23, 2017 at 06:20 PM
Hey Rik,
I welcome tangents of all kind! It's not my current plan to put much time into the videogame angles up front, just because the book project won't deal with them, but I certainly intend to get there sometime down the line (probably around April-May).
Until then, it's still the case that everything counts for this topic as long as it relates to the topic - and I think the design of community games is an important aspect of cybervirtue.
The links are also extremely welcome, and shall be thrown into the hopper for future digestion.
Since one comment is enough to qualify, I shall count you among the players of the Cybervirtue Campaign! :)
All the best,
Chris.
Posted by: Chris | January 23, 2017 at 08:34 PM