Pause for Reflection

Having completed the first phase of the Cybervirtue Campaign, it is time for my Summer blogging break. I shall be working on incorporating the pre-reader feedback into the new book, The Virtuous Cyborg, prior to submission to the publisher – whom I hope to announce when I return in June. Then, in the second phase of the Campaign, we’ll be exploring contemporary issues as problems of cybervirtue and engaging in debate about living with robots. While I am ‘away’, all the previous posts here at Only a Game are eager for your comments, and I will reply to everyone when my hiatus concludes. Thanks to everyone who has participated in the Campaign so far – it has been extremely enlightening to share this with y’all!

Only a Game returns in the Summer.


Cyberg Applications

Over on the Journals of Doc Surge, Chris Billows has some additional thoughts about cybergs, the networks of technology and humans that are a definitive feature of our time. He suggests that literacy, toys, telephone networks, and vision correction technology also comprise cybergs.

While there are certainly technological networks around these four things, only two of them are strictly cybergs in my sense, since the term assumes “we must be dealing with a network that spans its entire breadth with some kind of active relation, even if merely one of potential.” I should note, however, that this isn’t really a criterion of exclusion: all technology forms a network of connections at some scale; the discussion of megacybergs and gigacybergs merely sets a criterion for what to count together.

Literacy meets this large-network criterion, because the exchange of written media through all the available channels flows in every direction. Indeed, the advent of writing methods was a significant turning point in human thought. This might even be the second largest cyberg after money.

Toys do not meet the criterion directly, which is to say, traditional toys (like hammers) do not form large networks, but only small networks of production. However, with most of the toy industry now intimately caught up with film and television (go to a toy superstore and see how many unbranded toys you can find!) this is all but subsumed in the movie and television cybergs.

Telephone networks meet the criterion, even if it is merely the potential for global communication that elevates this to the big leagues. As, for that matter, does the global mail service – which was the cyberg that made the original Republic of Letters possible!

Finally, vision correction technologies like glasses and contact lenses do not obviously meet the large-network criterion, since their networks are all relatively small (national scale). There is no connection to my knowledge (even of potential) between, say, French contact lenses and US contact lenses. That said, most contact lenses are manufactured by large multinational corporations such as Johnson & Johnson or Novartis – and in that sense they are part of megacybergs. But it would be the corporation, more than these specific tools, that would be the obvious network to point to.

What’s interesting about these four examples, and about the cyberg concept in general, is that thinking in this way about technology immediately draws us into a different point of view on tools. I had not really thought about toys as a technology before (despite defining them as a ‘tool for play’ in 21st Century Game Design), and putting them into this framework really does emphasise the way play is conditioned by existing media properties. There’s something faintly disturbing about that.

Many thanks to Chris for sharing his perspective on this! I’d also like to thank him for his helpful feedback on the manuscript for The Virtuous Cyborg, not to mention his continued friendship and support!


Is the Truth Still Out There?

Is it possible to be truthful without understanding the truth? When our sense of truth vanishes, what does it mean to be honest?

At a time when ‘fake news’ and ‘alternative facts’ have become everyday points of discussion, it may seem we have lost our moorings with respect to the truth. This growing crisis of knowledge has long been recognised by philosophers, and indeed many other academics, and what we are seeing now is less the breakdown of truth than the dawning public realisation of the extent of a problem that had previously been corralled within the ivory towers. Part of the problem we are facing is that understanding knowledge in terms of what is or is not true makes it seem as if knowing is just a matter of having the correct set of statements. The situation seems very different when the facts are understood as something created through the exercise of authentic knowledge.

You have knowledge whenever you are able to act reliably. You have practical knowledge of a car when you can repair it reliably; you have social knowledge of people when you can communicate reliably with them; you have scientific knowledge of an alpha particle when you can detect it reliably. Telling me how a car works, or where people go, or what an alpha particle is doesn’t necessarily show knowledge. It is often just repeating what those with authentic knowledge have said. The facts are true when the practices from which they emerge are reliable.

When we are not the person with the knowledge, what matters is whether we are being a reliable witness to those that do. This idea, drawn out in a novel way by chemist-turned-philosopher Isabelle Stengers, connects all situations where facts are being claimed. For Stengers, when a researcher acquires knowledge of an alpha particle, they are a reliable witness for the alpha particle, which makes a great deal of scientific research a matter of finding ways to make inanimate objects ‘speak’. To have the facts is to be connected to the knowledge that established them by a chain of reliable witnesses, where you are the final link in the chain. On this understanding, which I developed in Wikipedia Knows Nothing, the truth of the facts is peripheral to there being both an authentic knowledge at the root of any claim, and a chain of reliable witnesses from this to whomever is speaking.

When we take being truthful to be about stating true facts, it can obscure the moral qualities relevant to honesty. There is something to be said for the Christian conception of ‘bearing false witness’ as a description for the debility of deceitfulness: it stresses the breaking of the chain of reliable witnesses, and makes the standard of virtue hinge upon the act of witnessing for others rather than upon having true facts – as if honesty were a surrogate for forensic investigation or empirical experiment. To be honest or sincere is to be a reliable witness, whenever that is what is called for. When someone has a terrible haircut, for instance, remarking on how striking it looks is a very different response from telling them they should get that haircut again! Withholding comment is not necessarily a sign of dishonesty, and other virtues such as tact may make remaining silent or being selective with remarks appropriate choices in many situations. Only when we bear false witness have we acted with tangible deceitfulness.

As cyborgs, however, we are constantly tempted into bearing false witness. When a story comes to us through the digital public spaces, we sometimes retweet, reshare, or otherwise propagate it as unreliable witnesses – our sharing of these stories is acting as a witness, and as such there is a tacit duty to at least minimally confirm the veracity of what we are passing down the chain. Our robots are, in these cases, incapable of being cyber-deceitful: they reliably repeat what was said. Yet at the same time, cyborgs communicating in this way are cyber-gullible: the sheer ease with which whatever attracts our attention gets flagged for repeating discourages careful checking of what is being claimed – especially when the political and ideological leanings of the message align with the human’s view of the world. Thus ‘fake news’, which is the propagation of false witness, can proliferate easily.

Could we design the digital public spaces to be cyber-honest instead? Even the decision to work towards this would be a severe break from the tenor of the internet, where click-bait is carefully cultivated to misrepresent or to package vacuous fluff as if it were interesting, all in the relentless pursuit of our attention. Yet it does not seem inherently impossible to design for cyber-honesty, although any mechanism for drawing attention to gaps in the plausibility of claims being repeated runs severe risks of being abused for cyber-cruelty or exacerbating cyber-bluntness. A system for flagging errors or deceit would risk producing all the same problems as public karma systems that track negative behaviour – the potential for abuse could make anything like this untenable, unless the cost of blowing the whistle was high.

One possibility worth considering is a ‘warning flag’ that could be used to challenge a story as deceitful. To prevent abuse, these flags might have to be kept hidden from public eyes until a sufficient number had been raised, or until at least one flag had been raised by someone who had earned a trustworthy status in such matters. Trust in such a system could be staked: raising your flag unduly might result in the availability of that flag being withdrawn for a progressive period of time afterwards. In such an arrangement, deploying a flag frivolously would be to lose it for a while, whilst a vindicated objection would raise trust ratings and perhaps provide additional flags. This system, if it could be made to work, would offer those cyborgs willing to put in the time required to check the veracity of claims the chance to become akin to honesty sheriffs on the digital frontier, while most would be free to ignore flagging procedures yet still be informed when a duplicitous story was being shared. It would perhaps be best if no story was ever prevented from being shared, and if flag-raising only affected the trust rating of that individual story, since the alternative would be censorship.
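To make these mechanics concrete, here is a minimal sketch in Python of how such staked flagging might work. Everything in it – the class name, the thresholds, the lockout period – is hypothetical, a toy model of the arrangement just described rather than a design for any actual platform:

```python
import time

REVEAL_COUNT = 5          # flags needed before a story is publicly marked
TRUSTED_LEVEL = 10        # trust score at which a single flag suffices
LOCKOUT_SECONDS = 86400   # a frivolous flag costs a day of flagging rights

class FlagSystem:
    """Toy model of the staked-trust flagging mechanic sketched above."""

    def __init__(self):
        self.trust = {}         # user -> earned trust score
        self.locked_until = {}  # user -> time at which flagging is restored
        self.flags = {}         # story -> set of users who flagged it

    def raise_flag(self, user, story):
        # A frivolous flagger has temporarily lost the use of their flag.
        if time.time() < self.locked_until.get(user, 0):
            return False
        self.flags.setdefault(story, set()).add(user)
        return True

    def story_is_marked(self, story):
        # Flags stay hidden until enough are raised, or until one comes
        # from someone who has earned trusted status in such matters.
        flaggers = self.flags.get(story, set())
        return (len(flaggers) >= REVEAL_COUNT or
                any(self.trust.get(u, 0) >= TRUSTED_LEVEL for u in flaggers))

    def adjudicate(self, story, was_deceitful):
        # A vindicated objection raises trust ratings; a frivolous one
        # withdraws the flag for a period of time afterwards.
        for user in self.flags.pop(story, set()):
            if was_deceitful:
                self.trust[user] = self.trust.get(user, 0) + 1
            else:
                self.locked_until[user] = time.time() + LOCKOUT_SECONDS
```

The essential design choice here is that the cost of a frivolous flag is paid in future flagging rights rather than in public reputation, which sidesteps some of the abuse problems of public karma systems that track negative behaviour.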

Such a system would require moderators at first, but as cyborgs earned trust within the system they could eventually operate without direct oversight, helping to overcome the scale problem that afflicts networks like Twitter and Facebook. In some respects, this proposal is not dissimilar to the way spam reporting works in certain contexts. The blogging platform Typepad, for instance, allows blog owners to manually flag spam comments, immediately deleting them and then watching for the same messages elsewhere in their blogs. Such an approach would not work in the digital public spaces precisely because of the collective qualities of these networks, but with a blog (where ownership is clear) it is adequate to the task it was designed for. It’s not clear whether anyone has thought about expanding these kinds of nuisance defences to ‘fake news’, but this is something worth exploring now that the cyber-deceitfulness of ‘social’ media has become a significant problem.

An objection may occur: how to deal with messages like a Christian posting “Jesus is the saviour of the world” or a Discordian posting “Did you know god’s name is Eris and that he is a girl?” If we think in terms of the truth of a statement, flags might be raised. But these cases entail no false witness, any more than do claims like “The USA is the greatest nation on Earth” or “The US is the greatest threat to the planet”. It is not because these are ‘just opinions’ that they are not examples of deceit, but because they are sincerely-made statements entailing no claim to knowledge (understood as guiding reliable actions). Faith traditions do entail knowledge (of rituals, hymns, community life and so forth), but never metaphysical knowledge, as such – if they did, there would be no question of faith in connection with them, and unwavering blind faith is always bad faith.

Stemming the growing tide of cyber-deceit – the use of the internet as a self-selecting propaganda machine – will entail taking a stand on knowledge. This will involve improving our collective critical skills, or at least taking greater care to distinguish deceitful statements from sincere but untestable, accidentally mistaken, or reliably grounded ones. False witness depends upon someone falsifying a claim or repeating a falsified claim carelessly; these are not the same as bearing witness in a situation where no viable knowledge can be claimed, and we need to be clear about how we know things to understand the boundaries. Sharing the claims of someone who disagrees with ‘scientific consensus’ is not bearing false witness, no matter how much we disagree with what is said. Viewing knowledge as a bag of true statements deceives us in this regard, and we need to become smarter about tolerating dissensus, since as Jacques Rancière warns, democracy is impossible without disagreement. The key problem here is deceit, which is always a moral weakness even when applied in pursuit of a supposed greater good. The unintentional false witness of our digital public spaces is merely the amplifier that transforms ‘fake news’ from problem to crisis.

Deceit goes beyond the tactful presentation of your views – there are always infinite ways to present your perspective, after all, and bluntness should not be unduly conflated with honesty. Deceit is the intent to deny respect by purposefully bearing false witness. We have let ourselves become so accustomed to duplicity that we think there is nothing we can do about it. We can still change this, but only if we discern treacherous speech from its alternatives. The truth was never out there – reality is out there, and that was never true but merely, unwaveringly there. The truth is what is sometimes revealed by authentic knowledge, and what is always destroyed by false witness. If we aim to be reliable witnesses, and place our trust in those who know how to repeat honestly, sincerely, and, indeed, truthfully, we can get far closer to the truth than we have so far managed.

The opening image is Ladybug, a 1957 painting by Joan Mitchell, which I found here at Alain R. Truong’s site. As ever, no copyright infringement is intended and I will take the image down if asked. My thanks to seymourblogger (@abbeysbooks on Twitter) for the suggestion of this artist.


Everybody's Got It Wrong Except You

Justice is something nearly everyone cares about, but nobody shares their sense of what it is. This problem, upon which ethics has flailed since at least Nietzsche, divides moral philosophers into those who assert there is a perfect rational standard of justice, and those who claim there never could be such a thing. This conflict is not worth buying into on either side. What matters instead is the recognition that there is a set of ideals for justice, as well as for related concepts such as fairness and ‘just war’, and that these ideals will only possess a rational standard if they are sustained as a shared practice. When no such practice exists, chaos reigns, and agreement is fragmentary. I am an appreciator of this chaotic nova of individual freedom, even while I am a critic of its underlying lack of fidelity. For there is no path for justice to be found here, and in that absence, injustice proliferates and escalates.

Before considering what it would mean for a robot and its human to be just, we must consider the debilities and defects that result from the chaos nova around us, amply embodied in any of the gigacyberg digital public spaces, such as Twitter and Facebook. It is by seeing how things go wrong that we can discover ways to rescue justice, and thus propose a virtue of justice for cyborgs.

One defect is quick to discuss: nihilism, the self-defeating faith in there being nothing worth having faith in. Nihilism is born of a rigid conviction in the value of truth (such as the Abrahamic traditions bequeathed to the sciences) coming up against the unjustifiable circumstances of that conviction. Here is where a person from a faith tradition like Christianity or Islam or some of the Hindu practices makes a leap of faith. The nihilist makes a similar leap into believing nothing has value – without noticing the unwavering faith in standards of judgement required to get to such an absurd position. Thus, the nihilist is confused into justifying anything on grounds of nothing. There’s not much we can do to help such unfortunates: like viruses that infect humans and robots, you just have to put up with the trouble they cause, and defend against them as best you can.

Holding ideals of justice so strongly that faith passes into the condition of certainty describes the debility of indignance. Here, as with the certainty of blind faith in religion or nationalism, people know they are right, and this empowers them to act in all manner of atrocious ways – particularly when they are acting as internet cyborgs. Sometimes, this cyber-indignance manifests as a desire for punishment, a kind of digital vigilantism fuelled by the cyber-cruelty invoked by loners in masks, deploying despicable methods such as doxxing (releasing personal data, such as a home address) or death threats. Here, the underlying certainty has deceived the cyborgs in question into thinking methods of acting are not important – their moral certainty, ironically, makes them behave as nihilists, turning to terror as a weapon as if nothing mattered. They have an unfortunate role-model in this: both the megacyborg nations and their guerrilla enemies have been employing terror and death as ill-considered weapons for decades.

More often, however, cyber-indignance is all bark and little bite. Passing angry judgements on those you find vile and disgusting is only human, as is acting out behind a mask for that matter, but tirades of cyber-disdain do nothing but antagonise those being judged or encourage others to hurl further abuse. There is no persuasive force to this form of argument – you instantly drive a wedge between yourself and whomever is being reviled – and as with the resort to terror in its various forms, it rests on the certainty that others must be wrong. In these kinds of situation, however, this is at least restrained by an acute sense that there should be limits to which methods are permitted, and this is the principal redeeming quality to this kind of preachy outburst.

Those that do not lose sight of the importance of methods recognise that some courses of action could never be just, and should not be pursued. To think otherwise is to fall into the moral disaster of consequentialism, to think only outcomes matter because of the understandable emphasis we place on how things turn out. But choice of method is part of ensuring desired outcomes, and precisely what makes certain methods unjust is their exclusion from any viable justice that is not merely revenge dressed up in convenient justifications. Nothing is worth achieving at any cost, although there are things that ought to be defended with every just method available. Thinking otherwise is to fail at being a just cyborg of any kind, because all extreme methods make justice harder – even impossible – to attain, and thus entail an element of self-defeat.

Justice is fragile, a sliding puzzle that can be aligned but never completed – a Rubik’s Cube with the wrong number of pieces in each colour. The jury system is an acknowledgement of this: it is not enough to administer a rigid system of rules, as judges are charged with doing; there are times when an uncertain moral judgement must be made in order to render a legal judgement. In both justice and morality, reliable judgement is all but impossible to attain as individuals, since it is only when a particular set of ideals are weighed in a specific context that an answer to a moral question can be produced. This requires a common moral practice, and this depends on prior experience and the skill this develops. An individual can make judgements about others, but not reliably without the counsel of others, and in considering their own behaviour an individual is all too often an unreliable judge.

Can cyborgs be cyber-just? It is difficult for this to happen, because robots do not live in a world, and thus justice is as alien to them as bravery. However, a robot can still behave in cyber-just, cyber-indignant, or cyber-nihilist ways towards humans as a consequence of its design. Computer viruses are a perfect example of cyber-nihilism, since they act against the humans they encounter as a result of the absence of moral judgement entailed in their creation. Online discussion media that carelessly censor curse words can act with cyber-indignance, as becomes apparent when they censor words like ‘Dickensian’ because they contain a swear word. It is less clear that a robot could be cyber-just towards its human, although it could be argued that passwords and biometric security are cyber-just, in that they uphold property law and personal privacy with the unquenchable zeal of a machine.
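This failure mode is familiar enough to programmers that it has a name – the ‘Scunthorpe problem’ – and a toy sketch shows how easily such machine indignance arises. The blocklist and functions below are purely illustrative, not any real platform’s filter:

```python
import re

BLOCKLIST = ["dick"]  # hypothetical one-entry blocklist

def naive_censor(text):
    # Substring matching: condemns innocent words like 'Dickensian'.
    return any(word in text.lower() for word in BLOCKLIST)

def boundary_censor(text):
    # Whole-word matching leaves 'Dickensian' alone.
    return any(re.search(rf"\b{re.escape(word)}\b", text.lower())
               for word in BLOCKLIST)

print(naive_censor("A Dickensian tale"))     # True  -- a false positive
print(boundary_censor("A Dickensian tale"))  # False
```

A word-boundary check is hardly a complete solution, but the contrast shows where the cyber-indignance lives: in a design that punishes without judgement.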

The social sense of cyber-just – encouraging a cyborg to act with a clear sense of justice – is a case where there may yet be possibilities. Key to ethical knowledge is moral representation, the sharing and discussing of quandaries and dilemmas. This happens temporarily in a jury room, intermittently in a confessional, and quite often in a pub too. Whether between priest and parishioner or two friends, ethical thought requires a shared perspective. Moral representation is in no way exclusive to religion, although it is inherent to authentic religious practice to engage in some kind of moral representation, and when this is absent – particularly when demagogues stir up indignance instead – the result is an ethical failure.

To form a moral practice requires discourse to offset the inherent fragility of justice, which is never wholly reducible to rule-following, as Alasdair MacIntyre cautions. Even Kant, who is most associated with rule-focussed ethics, recognised this, and thus (in contrast to how his views are usually presented) held that virtue and self-improvement were central to an ethical life. Besides, as Wittgenstein made clear, rule-following itself is a practice for beings such as humans. We are not robots who can exercise instructions unthinkingly: there is always a background of understanding that gives a set of rules its distinctive context – and in the absence of this, a thing lacks what is required to understand justice.

Designing a robot to facilitate cyber-just cyborgs might yet be possible by allowing for moral representation. We could leverage the network properties of digital public spaces to help people make moral judgements, or to defer the cyber-indignance inflamed by the culture clash of the internet. In some respects, this is already happening wherever humans discuss ethical problems online, but it could be enhanced by the design of mechanisms to foster discourse of the relevant kind. Sometimes, this will occur in private between individuals, as in the confessional or the bar booth; sometimes in closed groups, like the jury room; perhaps it could even happen in public, if indignance could be appropriately constrained. All that is required to head in a more helpful direction is to get beyond thinking that everyone has it wrong except you and those who happen to agree with you. This blight of certainty has become the greatest threat to moral practices of all kinds, for ethical knowledge is neither scientific nor calculable, but always entails the doubt that comes from knowing that you might be wrong.

The opening image is Sand Dune by Alisha Edinburgh, which I found here on her Custommade page. As ever, no copyright infringement is intended and I will take the image down if asked.


Technological Cowardice

What do internet trolls and drone assassinations have in common? An absence of courage brought about by creating the most grotesque kinds of cyborg.

In the heroic ages of ancient Greece, the Norse Vikings, and the Celtic warriors, courage was the central virtue around which society revolved. This was not just in battle, but everywhere in life: to lack the courage to do what was required of you was to bring shame upon yourself and your kin. Fidelity was an important part of this, and Alasdair MacIntyre suggests this was the primary virtue expected of women during this time, but that is not to say it only affected them; indeed, in feudal China, fidelity was more central to a virtuous man than courage. To be alive in the heroic age was to be bound to blood kin that you were expected to defend in both body and honour – and in so doing, sooner or later you would meet your death. To die was everyone’s fate, and this awareness – which we have lost sight of today – provided the backdrop against which courage gained its meaning.

Today, we are inclined to view such situations negatively, emphasising not the culture of valour that mattered to the people of that time, but the ways these stifling strictures of honour suppressed individual liberty and freedom. Yet there is a danger here, one entangled with the moral disaster of individualism and brought into focus by the problems with fidelity. For without a common bond against which the exercise of courage acquires its meaning, we either lose sight of it completely or mistakenly identify our outrage with valour. The ease with which our digital public spaces permit us to scratch this itch only deepens the crisis. How do we even know if we are brave when all measure of courage has been lost to us?

A robot cannot show cyber-courage in any personal manner, for it fears nothing and is thus incapable of valour as we understand it. This very absence of both fearfulness and courageousness is precisely why robots are such appealing ‘soldiers’ when war is conditioned solely by the moral disaster of consequentialism. But before we consider the abject failure of the battlefield we ought to consider whether cyber-courage is even a plausible concept – for the evidence of the effects of technology upon this virtue points primarily to the exact opposite.

For Alasdair MacIntyre, courage was not only the primary virtue of the heroic age, but a central virtue in any situation. Virtues are qualities that acquire their meaning from the practices that people pursue together, since only in a shared context do qualitative judgements possess a common ground. MacIntyre suggests three virtues are indispensable to any tradition, since without them even maintaining a practice becomes implausible. Truthfulness and a sense of justice are the two virtues required to maintain a viable community; courage is required to do the right thing even when it is difficult – indeed, the most basic understanding of courageousness is as the capacity to act when others would not, and this is vanishingly far from the mere willingness to display outrage, which need not be virtuous.

For a cyborg to display cyber-courage, a robot would need to be capable of encouraging its human to assert themselves virtuously: but how would it know? Precisely the failure of Artificial Intelligence has been the discovery, slow to be accepted, that computational intelligence is divorced from the practices of beings. All animals understand their situation through being able to coordinate their memories and habits within their own imagination, which ‘fills in the blanks’ of every circumstance through means so familiar to us that we take them for granted. Yet no robot can do this. The computational efficiency of silicon chips creates an impression of greater mental power because complex calculations are hard for us yet easy for robots. But calculation is a very small aspect of our cognitive capabilities – and for computers, it is all they have. To exist as a being is to live within a world, and this capacity is something no robot possesses, nor is it likely that they will acquire it on the current design principles for software.

Rather than cyber-courage, what we have seen in the growing presence of computers in all aspects of human life is an erosion of courage, as robots become the point of confrontation and humans are able to distance themselves from their actions. The internet troll – the 21st century’s resident bully – is emboldened to make verbal attacks on strangers precisely because it is only a computer that is in personal contact with their victim. Bullying has long been associated with cowardice, its psychological appeal resting on the illusion of power created by picking on those who are powerless to stop you. In the playground or workplace, the bully chose to target only those who could be successfully intimidated. The cyber-cowardice engendered by our digital public spaces so successfully isolates trolls from their actions that the risk of reprisal falls to almost nothing. The virtual mask stokes the confidence of trolls, but courage is more than blind assertiveness, and there is nothing courageous about skulking in the shadows and preying upon others who have no capacity for reprisal or restitution.

In the heroic age, the fundamental display of courage was upon the battlefield. There, warriors braved death to defend their brothers in arms, and their families and clans, for whom defeat could mean slavery, rape, or death. There is still courage to be found among today’s soldiers, but it is threatened by the cyber-cowardice that offers the capacity to kill without any risk of injury in return. Armed drones, a grotesque modification of equipment originally intended merely for surveillance, allow missile strikes on distant lands without any risk of personal harm to the operator. Here is the ultimate example of cyber-cowardice, a technology that extinguishes the flame of valour that burns in all those who serve in armed forces, and dishonours entire nations such as the United Kingdom and the United States, which have turned to these robotic weapons as a means of assassination.

Bradley Strawser is the ethicist who has made the strongest case for the moral permissibility of drones. He points to the psychological stress upon drone pilots, and the terrible post-traumatic stress caused by watching people die on a screen. He suggests it takes “intellectual bravery and perhaps some moral courage” to fly drones... but is this not the cyber-cowardice of the internet troll elevated to its most extreme degree? Laurie Calhoun draws exactly the opposite conclusion from the psychological impact of being a killer drone pilot: it demonstrates that they do feel remorse for taking the lives of their victims. Perhaps the most that can be said in defence of the armed drone pilot is that, unlike the troll, they suffer for what they do.

I have respect for Strawser, who has engaged with the moral problems of armed drones in a way that is honourable, for all that I radically disagree with his conclusions. He has suggested that the perceived problems with armed drones spring from the intuitive asymmetry of a battlefield where one side can kill without risk. His claim is that this imbalance was already present when jet fighters faced off against guerrillas armed with shoulder-mounted missiles, who could not be deemed remotely equivalent in power. Yet the fighter pilot still put themselves at risk in this scenario: there is not just a difference of degree involved in the use of armed drones – the ratio of risk between combatants has become infinite. Courage cannot survive this asymptotic chasm, and the psychological cost of being part of an armed drone cyborg is evidence of the depravity of this technology, not of any form of courage.

What makes the armed drone seem acceptable is the moral disaster of consequentialism, which sees morality as reducible to calculation. Thus Strawser’s view is that the capacity to complete a mission without risking a soldier is morally obligatory – provided, he repeatedly stresses, that the cause is just. But good ends cannot justify despicable means, and a battlefield emptied of valour ceases to be a site of anything honourable. Indeed, it is no longer a battlefield, but merely the place where extermination takes place in pursuit of a victory that slips further out of reach whenever such robotic weapons are deployed. Every civilian killed or injured in a drone strike sees nothing but the horror of death brought about by a cyborg enemy too cowardly even to show its face.

More cybervirtues next week.