Cyberg Applications

Over on the Journals of Doc Surge, Chris Billows has some additional thoughts about cybergs, the networks of technology and humans that are a definitive feature of our time. He suggests that literacy, toys, telephone networks, and vision correction technology also constitute cybergs.

While there are certainly technological networks around these four things, only two of them are strictly cybergs in my sense, since the term assumes “we must be dealing with a network that spans its entire breadth with some kind of active relation, even if merely one of potential.” I should note, however, that this isn’t really a criterion of exclusion: all technology forms a network of connections at some scale; the discussion of megacybergs and gigacybergs merely sets a criterion for what to count together.

Literacy meets this large-network criterion, because the exchange of written media through all the available channels flows in every direction. Indeed, the advent of writing methods was a significant turning point in human thought. This might even be the second largest cyberg after money.

Toys do not meet the criterion directly, which is to say, traditional toys (like hammers) do not form large networks, but only small networks of production. However, with most of the toy industry now intimately caught up with film and television (go to a toy superstore and see how many unbranded toys you can find!) this is all but subsumed in the movie and television cybergs.

Telephone networks meet the criterion, even if it is merely the potential for global communication that elevates this to the big leagues. As, for that matter, does the global mail service – which was the cyberg that made the original Republic of Letters possible!

Finally, vision correction technologies like glasses and contact lenses do not obviously meet the large-network criterion, since their networks are all relatively small (national scale). There is no connection to my knowledge (even of potential) between, say, French contact lenses and US contact lenses. That said, most contact lenses are manufactured by large multinational corporations such as Johnson & Johnson or Novartis – and in that sense they are part of megacybergs. But it would be the corporation, more than these specific tools, that would be the obvious network to point to.

What’s interesting about these four examples, and about the cyberg concept in general, is that thinking in this way about technology immediately draws us into a different point of view on tools. I had not really thought about toys as a technology before (despite defining them as a ‘tool for play’ in 21st Century Game Design), and putting them into this framework really does emphasise the way play is conditioned by existing media properties. There’s something faintly disturbing about that.

Many thanks to Chris for sharing his perspective on this! I’d also like to thank him for his helpful feedback on the manuscript for The Virtuous Cyborg, not to mention his continued friendship and support!


Is the Truth Still Out There?

Is it possible to be truthful without understanding the truth? When our sense of truth vanishes, what does it mean to be honest?

At a time when ‘fake news’ and ‘alternative facts’ have become everyday points of discussion, it may seem we have lost our moorings with respect to the truth. This growing crisis of knowledge has been long recognised by philosophers, and indeed many other academics, and what we are seeing now is less the breakdown of truth than the dawning public realisation of the extent of a problem that had previously been corralled within the ivory towers. Part of the problem we are facing is that understanding knowledge in terms of what is or is not true makes it seem as if knowing is just a matter of having the correct set of statements. The situation seems very different when the facts are understood as something created through the exercise of authentic knowledge.

You have knowledge whenever you are able to act reliably. You have practical knowledge of a car when you can repair it reliably; you have social knowledge of people when you can communicate reliably with them; you have scientific knowledge of an alpha particle when you can detect it reliably. Telling me how a car works, or where people go, or what an alpha particle is doesn’t necessarily show knowledge. It is often just repeating what those with authentic knowledge have said. The facts are true when the practices from which they emerge are reliable.

When we are not the person with the knowledge, what matters is whether we are being a reliable witness to those that do. This idea, drawn out in a novel way by chemist-turned-philosopher Isabelle Stengers, connects all situations where facts are being claimed. For Stengers, when a researcher acquires knowledge of an alpha particle, they are a reliable witness for the alpha particle, which makes a great deal of scientific research a matter of finding ways to make inanimate objects ‘speak’. To have the facts is to be connected to the knowledge that established them by a chain of reliable witnesses, where you are the final link in the chain. On this understanding, which I developed in Wikipedia Knows Nothing, the truth of the facts is peripheral to there being both an authentic knowledge at the root of any claim, and a chain of reliable witnesses from this to whomever is speaking.

When we take being truthful to be about stating true facts, it can obscure the moral qualities relevant to honesty. There is something to be said for the Christian conception of ‘bearing false witness’ as a description for the debility of deceitfulness: it stresses the breaking of the chain of reliable witnesses, and makes the standard of virtue hinge upon the act of witnessing for others rather than having true facts, as if honesty were a surrogate for forensic investigation or empirical experiments. To be honest or sincere is to be a reliable witness, whenever that is what is called for. When someone has a terrible haircut, for instance, remarking on how striking it looks is a very different response from telling them they should get that haircut again! Withholding comment is not necessarily a sign of dishonesty, and other virtues such as tact may make remaining silent or being selective with remarks appropriate choices in many situations. Only when we bear false witness have we acted with tangible deceitfulness.

As cyborgs, however, we are constantly tempted into bearing false witness. When a story comes to us through the digital public spaces, we sometimes retweet, reshare, or otherwise propagate unreliable witnesses – our sharing of these stories is acting as a witness, and as such there is a tacit duty to at least minimally confirm the veracity of what we are passing down the chain. Our robots are, in these cases, incapable of being cyber-deceitful: they reliably repeat what was said. Yet at the same time, cyborgs communicating in this way are cyber-gullible: the sheer ease with which whatever attracts our attention gets flagged for repeating discourages careful checking of what is being claimed – especially when the political and ideological leanings of the message align with the human’s view of the world. Thus ‘fake news’, which is the propagation of false witness, can proliferate easily.

Could we design the digital public spaces to be cyber-honest instead? Even the decision to work towards this would be a severe break from the tenor of the internet, where click-bait is carefully cultivated to misrepresent or to package vacuous fluff as if it were interesting, all in the relentless pursuit of our attention. Yet it does not seem inherently impossible to design for cyber-honesty, although any mechanism for drawing attention to gaps in the plausibility of claims being repeated runs severe risks of being abused for cyber-cruelty or exacerbating cyber-bluntness. A system for flagging errors or deceit would risk producing all the same problems as public karma systems that track negative behaviour – the potential for abuse could make anything like this untenable, unless the cost of blowing the whistle was high.

One possibility worth considering is a ‘warning flag’ that could be used to challenge a story as deceitful. To prevent abuse, these flags might have to be kept hidden from public eyes until a sufficient number had been raised, or until at least one flag had been raised by someone who had earned a trustworthy status in such matters. Trust in such a system could be staked: raising your flag unduly might result in the availability of that flag being withdrawn for a progressive period of time afterwards. In such an arrangement, deploying a flag frivolously would be to lose it for a while whilst a vindicated objection would raise trust ratings and perhaps provide additional flags. This system, if it could be made to work, would offer cyborgs the chance to become akin to honesty sheriffs on the digital frontier, for those willing to put in the time required to check the veracity of claims, while most would be free to ignore flagging procedures yet still be informed when a duplicitous story was being shared. It perhaps would be best if no story was ever prevented from being shared, and if flag-raising only affected the trust rating of that individual story, since the alternative would be censorship.
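
How might such staking work in practice? Below is a minimal sketch of the arrangement just described – in Python, and every name in it (Flagger, Story, raise_flag, adjudicate, the two thresholds) is hypothetical rather than drawn from any existing system:

```python
from dataclasses import dataclass, field

REVEAL_THRESHOLD = 3   # hidden flags needed before any become public
TRUSTED_LEVEL = 10     # trust at which a single flag suffices to reveal

@dataclass
class Flagger:
    name: str
    trust: int = 1
    flags_available: int = 1
    misuses: int = 0                   # drives the escalating withdrawal

@dataclass
class Story:
    url: str
    trust_rating: float = 1.0          # the only thing ever penalised
    flags: list = field(default_factory=list)
    revealed: bool = False

def raise_flag(flagger: Flagger, story: Story) -> bool:
    """Stake one of the flagger's warning flags against a story."""
    if flagger.flags_available < 1:
        return False                   # flag still withdrawn from earlier misuse
    flagger.flags_available -= 1
    story.flags.append(flagger)
    # Flags stay hidden until enough accumulate, or until at least one
    # comes from a flagger with earned trustworthy status.
    if (len(story.flags) >= REVEAL_THRESHOLD
            or any(f.trust >= TRUSTED_LEVEL for f in story.flags)):
        story.revealed = True
    return True

def adjudicate(story: Story, vindicated: bool) -> None:
    """Resolve the staked flags once the claim has been checked."""
    if vindicated:
        story.trust_rating = 0.0       # marked duplicitous, never blocked
    for f in story.flags:
        if vindicated:
            f.trust += 1
            f.flags_available += 2     # staked flag returned, plus a bonus
        else:
            f.misuses += 1
            # The staked flag is not returned here: a real system would
            # restore it only after a lockout that lengthens with each
            # misuse (timestamps omitted for brevity).
    story.flags.clear()
```

The salient design choice is that a frivolous flag is simply not returned at adjudication, making abuse self-limiting, while the story itself is never blocked – only its trust rating is touched.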

Such a system requires moderators at first, but as cyborgs earn trust within the system they could eventually operate without direct oversight, helping to overcome the scale problem that afflicts networks like Twitter and Facebook. In some respects, this proposal is not dissimilar to the way spam reporting works in certain contexts. The blogging platform Typepad, for instance, allows blog owners to manually flag spam comments, immediately deleting them and then watching for the same messages elsewhere in their blogs. Such an approach would not work in the digital public spaces precisely because of the collective qualities of these networks, but with a blog (where ownership is clear) it is adequate to the task it was designed for. It’s not clear whether anyone has thought about expanding these kinds of nuisance defences to ‘fake news’, but this is something worth exploring now the cyber-deceitfulness of ‘social’ media has become a significant problem.
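
The general shape of such a nuisance defence is easy to sketch – to be clear, this is not Typepad’s actual implementation, merely a hypothetical reconstruction of the ‘flag once, then watch for repeats’ idea:

```python
import hashlib

flagged_fingerprints: set[str] = set()

def fingerprint(comment: str) -> str:
    # Normalise whitespace so trivial edits don't evade the match.
    return hashlib.sha256(" ".join(comment.split()).encode()).hexdigest()

def flag_as_spam(comment: str) -> None:
    """The owner manually flags one comment; its fingerprint is kept."""
    flagged_fingerprints.add(fingerprint(comment))

def admit_comment(comment: str) -> bool:
    """Silently reject any comment matching a previously flagged one."""
    return fingerprint(comment) not in flagged_fingerprints
```

A single owner with clear authority is what makes this workable; the same trick fails in collective spaces precisely because no one cyborg holds the flagging rights.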

An objection may occur: how to deal with messages like a Christian posting “Jesus is the saviour of the world” or a Discordian posting “Did you know god’s name is Eris and that he is a girl?” If we think in terms of the truth of a statement, flags might be raised. But these cases entail no false witness, any more than do claims like “The USA is the greatest nation on Earth” or “The US is the greatest threat to the planet”. It is not because these are ‘just opinions’ that they are not examples of deceit, but because they are sincerely-made statements entailing no claim to knowledge (understood as guiding reliable actions). Faith traditions do entail knowledge (of rituals, hymns, community life and so forth), but never metaphysical knowledge, as such – if they did, there would be no question of faith in connection with them, and unwavering blind faith is always bad faith.

Stemming the growing tide of cyber-deceit – the use of the internet as a self-selecting propaganda machine – will entail taking a stand on knowledge. This will involve improving our collective critical skills, or at least taking greater care to distinguish deceitful statements from sincere but untestable, accidentally mistaken, or reliably grounded ones. False witness depends upon someone falsifying a claim or repeating a falsified claim carelessly; these are not the same as bearing witness in a situation where no viable knowledge can be claimed, and we need to be clear about how we know things to understand the boundaries. Sharing the claims of someone who disagrees with ‘scientific consensus’ is not bearing false witness, no matter how much we disagree with what is said. Viewing knowledge as a bag of true statements deceives us in this regard, and we need to become smarter about tolerating dissensus, since, as Jacques Rancière warns, democracy is impossible without disagreement. The key problem here is deceit, which is always a moral weakness even when applied in pursuit of a supposed greater good. The unintentional false witness of our digital public spaces is merely the amplifier that transforms ‘fake news’ from problem to crisis.

Deceit goes beyond the tactful presentation of your views – there are always infinite ways to present your perspective, after all, and bluntness should not be unduly conflated with honesty. Deceit is the intent to deny respect by purposefully bearing false witness. We have let ourselves become so accustomed to duplicity that we think there is nothing we can do about it. We can still change this, but only if we discern treacherous speech from its alternatives. The truth was never out there – reality is out there, and that was never true but merely, unwaveringly there. The truth is what is sometimes revealed by authentic knowledge, and what is always destroyed by false witness. If we aim to be reliable witnesses, and place our trust in those who know how to repeat honestly, sincerely, and, indeed, truthfully, we can get far closer to the truth than we have so far managed.

The opening image is Ladybug, a 1957 painting by Joan Mitchell, which I found here at Alain R. Truong’s site. As ever, no copyright infringement is intended and I will take the image down if asked. My thanks to seymourblogger (@abbeysbooks on Twitter) for the suggestion of this artist.


Everybody's Got It Wrong Except You

Justice is something nearly everyone cares about, but nobody shares their sense of what it is. This problem, upon which ethics has flailed since at least Nietzsche, divides moral philosophers into those who assert there is a perfect rational standard of justice, and those that claim there never could be such a thing. This conflict is not worth buying into on either side. What matters instead is the recognition that there are a set of ideals for justice, as well as for related concepts such as fairness and ‘just war’, and that these ideals will only possess a rational standard if they are sustained as a shared practice. When no such practice exists, chaos reigns, and agreement is fragmentary. I am an appreciator of this chaotic nova of individual freedom, even while I am a critic of its underlying lack of fidelity. For there is no path for justice to be found here, and in that absence, injustice proliferates and escalates.

Before considering what it would mean for a robot and its human to be just, we must consider the debilities and defects that result from the chaos nova, amply embodied in any of the gigacyberg digital public spaces around us, such as Twitter and Facebook. It is by seeing how things go wrong that we can discover ways to rescue justice, and thus propose a virtue of justice for cyborgs.

One defect is quick to discuss: nihilism, the self-defeating faith in there being nothing worth having faith in. Nihilism is born of a rigid conviction in the value of truth (such as the Abrahamic traditions bequeathed to the sciences) coming up against the unjustifiable circumstances of that conviction. Here is where a person from a faith tradition like Christianity or Islam or some of the Hindu practices makes a leap of faith. The nihilist makes a similar leap into believing nothing has value – without noticing the unwavering faith in standards of judgement required to get to such an absurd position. Thus, the nihilist is confused into justifying anything on grounds of nothing. There’s not much we can do to help such unfortunates: like viruses that infect humans and robots, you just have to put up with the trouble they cause, and defend against them as best you can.

Holding ideals of justice so strongly that faith passes into the condition of certainty describes the debility of indignance. Here, as with the certainty of blind faith in religion or nationalism, people know they are right, and this empowers them to act in all manner of atrocious ways – particularly when they are acting as internet cyborgs. Sometimes, this cyber-indignance manifests as a desire for punishment, a kind of digital vigilantism fuelled by the cyber-cruelty invoked by loners in masks, deploying despicable methods such as doxxing (releasing personal data, such as a home address) or death threats. Here, the underlying certainty has deceived the cyborgs in question into thinking methods of acting are not important – their moral certainty, ironically, makes them behave as nihilists, turning to terror as a weapon as if nothing mattered. They have an unfortunate role-model in this: both the megacyborg nations and their guerrilla enemies have been employing terror and death as ill-considered weapons for decades.

More often, however, cyber-indignance is all bark and little bite. Passing angry judgements on those you find vile and disgusting is only human, as is acting out behind a mask for that matter, but tirades of cyber-disdain do nothing but antagonise those being judged or encourage others to hurl further abuse. There is no persuasive force to this form of argument – you instantly drive a wedge between yourself and whomever is being reviled – and as with the resort to terror in its various forms, it rests on the certainty that others must be wrong. In these kinds of situation, however, this is at least restrained by an acute sense that there should be limits to which methods are permitted, and this is the principal redeeming quality to this kind of preachy outburst.

Those that do not lose sight of the importance of methods recognise that some courses of action could never be just, and should not be pursued. To think otherwise is to fall into the moral disaster of consequentialism, to think only outcomes matter because of the understandable emphasis we place on how things turn out. But choice of method is part of ensuring desired outcomes, and precisely what makes certain methods unjust is their exclusion from any viable justice that is not merely revenge dressed up in convenient justifications. Nothing is worth achieving at any cost, although there are things that ought to be defended with every just method available. Thinking otherwise is to fail at being a just cyborg of any kind, because all extreme methods make justice harder – even impossible – to attain, and thus entail an element of self-defeat.

Justice is fragile, a sliding puzzle that can be aligned but never completed – a Rubik’s Cube with the wrong number of pieces in each colour. The jury system is an acknowledgment of this: it is not enough to administer a rigid system of rules, as judges are charged with doing; there are times when an uncertain moral judgement must be made in order to render a legal judgement. In both justice and morality, reliable judgement is all but impossible to attain as individuals, since it is only when a particular set of ideals are weighed in a specific context that an answer to a moral question can be produced. This requires a common moral practice, and this depends on prior experience and the skill this develops. An individual can make judgements about others, but not reliably without the counsel of others, and in considering their own behaviour an individual is all too often an unreliable judge.

Can cyborgs be cyber-just? It is difficult for this to happen, because robots do not live in a world, and thus justice is as alien to them as bravery. However, a robot can still behave in cyber-just, cyber-indignant, or cyber-nihilist ways towards humans as a consequence of its design. Computer viruses are a perfect example of cyber-nihilism, since they act against the humans they encounter as a result of the absence of moral judgement entailed in their creation. Online discussion media that carelessly censor curse words can act with cyber-indignance, as becomes apparent when they censor words like ‘Dickensian’ because they contain a swear word. It is less clear that a robot could be cyber-just towards its human, although it could be argued that passwords and biometric security are cyber-just, in that they uphold property law and personal privacy with the unquenchable zeal of a machine.
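
This particular failure – sometimes called the ‘Scunthorpe problem’ – is easy to reproduce: a filter that matches banned strings anywhere inside words mangles innocent text, while one that matches whole words only does not. A minimal sketch, with a deliberately tiny and hypothetical banned list:

```python
import re

BANNED = ["dick"]  # hypothetical single-entry list, for illustration only

def naive_censor(text: str) -> str:
    """Substring matching: the source of 'Dickensian' false positives."""
    for word in BANNED:
        text = re.sub(word, "*" * len(word), text, flags=re.IGNORECASE)
    return text

def word_boundary_censor(text: str) -> str:
    """Match whole words only, leaving innocent containers untouched."""
    for word in BANNED:
        pattern = rf"\b{re.escape(word)}\b"
        text = re.sub(pattern, "*" * len(word), text, flags=re.IGNORECASE)
    return text

print(naive_censor("A Dickensian plot twist"))          # A ****ensian plot twist
print(word_boundary_censor("A Dickensian plot twist"))  # unchanged
```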

The social sense of cyber-just – encouraging a cyborg to act with a clear sense of justice – is a case where there may yet be possibilities. Key to ethical knowledge is moral representation, the sharing and discussing of quandaries and dilemmas. This happens temporarily in a jury room, intermittently in a confessional, and quite often in a pub too. Whether between priest and parishioner or two friends, ethical thought requires a shared perspective. Moral representation is in no way exclusive to religion, although it is inherent to authentic religious practice to engage in some kind of moral representation, and when this is absent – particularly when demagogues stir up indignance instead – the result is an ethical failure.

To form a moral practice requires discourse to offset the inherent fragility of justice, which is never wholly reducible to rule-following, as Alasdair MacIntyre cautions. Even Kant, who is most associated with rule-focussed ethics, recognised this, and thus (in contrast to how his views are usually presented) held that virtue and self-improvement were central to an ethical life. Besides, as Wittgenstein made clear, rule-following itself is a practice for beings such as humans. We are not robots who can exercise instructions unthinkingly: there is always a background of understanding that gives a set of rules its distinctive context – and in the absence of this, a thing lacks what is required to understand justice.

Designing a robot to facilitate cyber-just cyborgs might yet be possible by allowing for moral representation. We could leverage the network properties of digital public spaces to help people make moral judgements, or to defer the cyber-indignance inflamed by the culture clash of the internet. In some respects, this is already happening wherever humans discuss ethical problems online, but it could be enhanced by the design of mechanisms to foster discourse of the relevant kind. Sometimes, this will occur in private between individuals, as in the confessional or the bar booth, sometimes in closed groups like the jury room, perhaps it could even happen in public if indignance could be appropriately constrained. All that is required to head in a more helpful direction is to get beyond thinking that everyone has it wrong except you and those who happen to agree with you. This blight of certainty has become the greatest threat to moral practices of all kinds, for ethical knowledge is neither scientific nor calculable, but always entails the doubt that comes from knowing that you might be wrong.

The opening image is Sand Dune by Alisha Edinburgh, which I found here on her Custommade page. As ever, no copyright infringement is intended and I will take the image down if asked.


Technological Cowardice

What do internet trolls and drone assassinations have in common? An absence of courage brought about by creating the most grotesque kinds of cyborg.

In the heroic ages of ancient Greece, the Norse Vikings, and the Celtic warriors, courage was the central virtue around which society revolved. This was not just in battle, but everywhere in life: to lack the courage to do what was required of you was to bring shame upon yourself and your kin. Fidelity was an important part of this, and Alasdair MacIntyre suggests this was the primary virtue expected of women during this time, but that is not to say it only affected them; indeed, in feudal China, fidelity was more central to a virtuous man than courage. To be alive in the heroic age was to be bound to blood kin that you were expected to defend in both body and honour – and in so doing, sooner or later you would meet your death. To die was everyone’s fate, and this awareness – which we have lost sight of today – provided the backdrop against which courage gained its meaning.

Today, we are inclined to view such situations negatively, emphasising not the culture of valour that mattered to the people of that time, but the ways these stifling strictures of honour suppressed individual liberty and freedom. Yet there is a danger here, one entangled with the moral disaster of individualism and brought into focus by the problems with fidelity. For without a common bond against which the exercise of courage acquires its meaning, we either lose sight of it completely or mistakenly identify our outrage with valour. The ease with which our digital public spaces permit us to scratch this itch only deepens the crisis. How do we even know if we are brave when all measure of courage has been lost to us?

A robot cannot show cyber-courage in any personal manner for it fears nothing and is thus incapable of valour as we understand it. This very absence of both fearfulness and courageousness is precisely why robots are such appealing ‘soldiers’ when war is conditioned solely by the moral disaster of consequentialism. But before we consider the abject failure of the battlefield we ought to consider whether cyber-courage is even a plausible concept – for the evidence of the effects of technology upon this virtue points primarily to the exact opposite.

For Alasdair MacIntyre, courage was not only the primary virtue of the heroic age, but a central virtue in any situation. Virtues are qualities that acquire their meaning from the practices that people pursue together, since only in a shared context do qualitative judgements possess a common ground. MacIntyre suggests three virtues are indispensable to any tradition, since without them even maintaining a practice becomes implausible. Truthfulness and a sense of justice are the two virtues required to maintain a viable community; courage is required to do the right thing even when it is difficult – indeed, the most basic understanding of courageousness is as the capacity to act when others would not, and this is vanishingly far from the mere willingness to display outrage, which need not be virtuous.

For a cyborg to display cyber-courage, a robot would need to be capable of encouraging its human to assert themselves virtuously: but how would it know? The failure of Artificial Intelligence has been precisely the discovery, slow to be accepted, that computational intelligence is divorced from the practices of beings. All animals understand their situation through being able to coordinate their memories and habits within their own imagination, which ‘fills in the blanks’ of every circumstance through means so familiar to us that we take it for granted. Yet no robot can do this. The computational efficiency of silicon chips creates an impression of greater mental power because complex calculations are hard for us yet easy for robots. But calculation is a very small aspect of our cognitive capabilities – and for computers, it is all they have. To exist as a being is to live within a world, and this capacity is something none of our robots possesses, nor is it likely that they will on the current design principles for software.

Rather than cyber-courage, what we have seen in the growing presence of computers in all aspects of human life is an erosion of courage as robots become the point of confrontation, and humans are able to distance themselves from their actions. The internet troll – the 21st century’s resident bully – is emboldened to make verbal attacks on strangers precisely because it is only a computer that is in personal contact with their victim. Bullying has long been associated with cowardice, its psychological appeal resting on the illusion of power created by picking on those who are powerless to stop you. In the playground or workplace, the bully chose to target only those who could be successfully intimidated. The cyber-cowardice engendered by our digital public spaces so successfully isolates trolls from their actions that the risk of reprisal falls to almost nothing. The virtual mask stokes the confidence of trolls, but courage is more than blind assertiveness, and there is nothing courageous about skulking in the shadows and preying upon others who have no capacity for reprisal or restitution.

In the heroic age, the fundamental display of courage was upon the battlefield. There, warriors braved death to defend their brothers in arms, and their families and clans for whom defeat could mean slavery, rape, or death. There is still courage to be found among today’s soldiers, but it is threatened by the cyber-cowardice that offers the capacity to kill without any risk of injury in return. Armed drones, a grotesque modification of equipment originally intended merely for surveillance, allow missile strikes on distant lands without any risk of personal harm to the operator. Here is the ultimate example of cyber-cowardice, a technology that extinguishes the flame of valour that burns in all those who serve in armed forces and dishonours entire nations such as the United Kingdom and the United States, who have turned to these robotic weapons as a means of assassination.

Bradley Strawser is the ethicist who has made the strongest case for the moral permissibility of drones. He points to the psychological stress upon drone pilots, and the terrible post-traumatic stress caused by watching people die on a screen. He suggests it takes “intellectual bravery and perhaps some moral courage” to fly drones... but is this not the cyber-cowardice of the internet troll elevated to its most extreme degree? Laurie Calhoun draws exactly the opposite conclusion from the psychological impact of being a killer drone pilot: it demonstrates that they do feel remorse for taking the lives of their victims. Perhaps the most that can be said in defence of the armed drone pilot is that unlike the troll, they suffer for what they do.

I have respect for Strawser, who has engaged with the moral problems of armed drones in a way that is honourable, for all that I radically disagree with his conclusions. He has suggested that the perceived problems with armed drones spring from the intuitive asymmetry of a battlefield where one side can kill without risk. His claim is that this imbalance was already present when jet fighters faced off against guerrillas armed with shoulder-mounted missiles, who could not be deemed remotely equivalent in power. Yet the fighter pilot still put themselves at risk in this scenario: there is not just a difference of degree involved in the use of armed drones, the ratio of risk between combatants has become infinite – courage cannot survive this asymptotic chasm, and the psychological cost of being part of an armed drone cyborg is evidence of the depravity of this technology, not of any form of courage.

What makes the armed drone seem acceptable is the moral disaster of consequentialism, which sees morality as reducible to calculation. Thus Strawser’s view is that the capacity to complete a mission without risking a soldier is morally obligatory – provided, he repeatedly stresses, that the cause is just. But good ends cannot justify despicable means, and the battlefield emptied of valour ceases to be a site of anything honourable. Indeed, it is no longer a battlefield, but merely the place where extermination takes place in pursuit of a victory that gets further from reach when such robotic weapons are deployed. Every civilian killed or injured in a drone strike sees nothing but the horror of death brought about by a cyborg enemy too cowardly even to show its face.

More cybervirtues next week.


Lessons from the MUD

Anonymity and technology mix badly. While you are required in most countries to pass a test of skill with cars, our most dangerous tool, and even the US licenses and records the identity of firearm owners, any fool can appear on Twitter or Facebook with a fictional or falsified identity and act abusively towards the cyborgs they encounter there. However, eliminating anonymity by forcing the use of public identities is a heavy-handed solution that would almost certainly prove insufficient for eliminating the problem, as Brian Green has carefully outlined. But there are lessons that can be learned from earlier digital public spaces that offered anonymity but had less of a problem with abuse, and this can put a different slant on these kinds of problems.

The Multi-User Dungeons, or MUDs, began as spaces for creative adventures, greatly conditioned by the play of the pivotal tabletop role-playing game Dungeons & Dragons. These imaginary worlds were entirely made of databases of text, arranged to create the impression of connected rooms and spaces, within which players could move around and encounter one another. Players would join a MUD using network protocols from the early days of the internet, creating an account with a name that would become their identity in the shared space of the game world. The MUDs would go on to provide the basis for graphical games like EverQuest and World of Warcraft that would achieve tremendous commercial success.
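
The underlying structure was remarkably simple: each room is a record of text joined to other rooms by named exits, and the ‘world’ is just the graph these form. Here is a toy sketch of the idea in Python – hypothetical throughout, not taken from any actual MUD codebase:

```python
from dataclasses import dataclass, field

@dataclass
class Room:
    description: str
    exits: dict = field(default_factory=dict)    # direction -> Room
    occupants: set = field(default_factory=set)  # player names present

def move(player: str, here: Room, direction: str) -> Room:
    """Move a player through an exit, so others may encounter them."""
    there = here.exits.get(direction)
    if there is None:
        return here                              # "You can't go that way."
    here.occupants.discard(player)
    there.occupants.add(player)
    return there

# Two connected rooms already create the impression of a space.
tavern = Room("A smoky tavern. A door leads north to the square.")
square = Room("The village square, bustling with adventurers.")
tavern.exits["north"] = square
square.exits["south"] = tavern
```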

A player coming to a MUD for the first time was likely to have been invited by someone else, and as such was not strictly alone. Nonetheless, players typically entered the text world as individuals, and since players would connect at different times they were often (if not always) alone. Starting players were always unknown to the existing players, so there was always an element of uncertainty about the arrival of someone new. Nonetheless, the community surrounding each MUD, which was typically a few hundred players or so, generally welcomed newcomers, and there was an air of hospitality extended in most MUD communities. Abusive players, then as in the larger digital spaces today, were the minority, and would quickly come into conflict with the more responsible players, who would report them to the administrators, typically entitled Wizards.

The Wizard system provided legislative, judicial, and executive power within the MUD. While the first Wizards would be those who set up the software and provided the hardware to run the game, many MUDs used a democratic system to elect additional Wizards, who worked as a collective to maintain order and community. Legislative acts concerned the code of conduct that applied, and thus set the boundaries of acceptable behaviour – such matters were always resolved by the Wizards working together, and generally involved consulting the wider community of players as well. Judicial and executive power was expressed by taking action against troublemakers – in many MUDs, miscreants could be ‘toaded’, which reduced a character to a powerless amphibian. Wizards would hold tribunals in this regard to determine the justice of any such punishment meted out. Although I have heard of some instances of ‘corrupt Wizards’, my own experiences showed the Wizard system to be highly effective at minimising abuse in MUDs.

While on the surface, MUDs were play spaces, in practice the division between game and communication system blurred. This was especially so because MUDs provided the first networked text communication system that didn’t require manual delivery, like a telegram. As such, many attracted a community of ‘players’ using them solely as a chat interface. These were the original chatrooms, since players would typically congregate in a room of the MUD’s fictional world to engage in conversation. This occasionally caused tension with other members of the community who were using the game differently, but for the most part it was a fact of life in MUDs that some people were there solely to chat, and facilities to do so were expanded in the code for MUDs as the 1990s progressed.

The MUD was the direct precursor to Facebook and Twitter, which descend from earlier copies of the chatroom concept, such as AOL’s offering, which lacked the fictional world but kept the name. Yet abuse in MUDs was comparatively rare, and rapidly resolved by Wizards whenever it occurred. Anonymity may still have fostered abuse, but the systems were in place in MUDs both to guard against it, and to discourage it from happening in the first place. The most effective deterrent against online abuse is community – and the MUDs fostered this far more than the latest digital public spaces.

Thus while a new MUD player might arrive alone and unknown, they were never unguarded – both in the sense of protected from the abuse of others, and watched for signs of conducting abuse. Conversely, a ‘tweep’ (as a user of Twitter is colloquially termed) is alone, unknown, and essentially unguarded – and these are the optimal conditions for abuse to fester. Twitter has an abuse reporting system, but it is distant and bureaucratic, with no community to manage the warnings and reporting, and no community-engaged Wizards to act as magistrates.

Here we have three different senses of ‘anonymous’, all of which contribute to cyber-disdain, and thus a greater risk of cyber-cruelty. To be alone in a digital public space is to lack a community, and crucially ‘follows’ and ‘friends’ do not mark the authentic social bonds of a community relationship but merely an open communication channel. To be unknown is to be anonymous in the sense of having a concealed identity – a situation that fosters abuse if it is not offset by community relations. Lastly, unguarded marks an invisibility to the systems of justice within a digital public space – a situation worsened by being unknown, and by being alone.

Thus Facebook’s requirement to use conventional identities (to eliminate being unknown) is insufficient to stop abuse, both because its users are mostly alone and unguarded, and also because the size of its membership means that with random encounters, cyborgs are still effectively unknown to each other. This is the fertile soil in which abusive behaviour online grows: as the cybernetic networks increase in scale, community is unsustainable since humans can only sustain viable communities at a scale of hundreds and never at a scale of billions. Two Facebook users, even with public identities, are effectively unknowable to each other – and nothing much can solve this problem short of managing encounters in a way that most would find intolerable. Guarding against problematic behaviour is more tractable when there is a village-scale community to engage, respond, and react – while at planetary-scale even robot-assisted magistrates are rendered impotent by the sheer scope of the network.
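
These three senses can be stated compactly by treating them as independent dimensions, with the risk of abuse rising as more of them hold. A schematic sketch (all names hypothetical) covering the cases discussed above:

```python
from dataclasses import dataclass

@dataclass
class AnonymityProfile:
    alone: bool      # no community of authentic social bonds
    unknown: bool    # concealed or unverifiable identity
    unguarded: bool  # invisible to any system of justice

    def abuse_risk(self) -> int:
        """0 = well-embedded; 3 = optimal conditions for abuse to fester."""
        return sum((self.alone, self.unknown, self.unguarded))

mud_newcomer = AnonymityProfile(alone=False, unknown=True, unguarded=False)
tweep = AnonymityProfile(alone=True, unknown=True, unguarded=True)
facebook_user = AnonymityProfile(alone=True, unknown=False, unguarded=True)

print(mud_newcomer.abuse_risk(), tweep.abuse_risk(), facebook_user.abuse_risk())
# -> 1 3 2
```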

Anonymity is the root of online abuse, but there are at least three senses of this term that matter. We tend to focus on unknown anonymity, and thus miss the importance of alone anonymity and unguarded anonymity. My emphasis on being alone may seem misplaced. For instance, in his discussion of the problems of anonymity, Brian reports that “people in groups are more likely to transgress.” I agree with this claim, even though this may seem to run counter to my suggestion that alone anonymity is a key part of the problem. However, Brian’s point concerns ‘mob mentality’, and a mob is not a community in any relevant sense. Indeed, precisely what makes a mob dangerous is that people are alone together when they are a part of it – and this anonymity of the crowd (which also operates fairly innocently in audiences for musicians and so forth) becomes dangerous when the people concerned are also unknown and unguarded, as happens all the time in our digital public spaces.

When Sherry Turkle raises concerns about the way we are alone together online, she is not talking about the mob as such, but her work also emphasises this same concern: the undermining of authentic community by the current design features of the new communication systems. Yet different designs will produce different results. It is notable that blogs, which assign executive power to the blog owner (and thus are close to unguarded), and are ambiguous on the question of being unknown (since it is the blog owner’s choice how they identify) still manage to be less of a locus of abuse than the large-scale digital public spaces since bloggers are never alone. Forums tolerate contributions that are alone and unknown because they are not unguarded, thanks to the presence of moderators who can work effectively because the scale of the network of contributors is manageable. When a moderator ‘knows that such-and-such is a troublemaker’, they mean that particular cyborg is not anonymous in the sense of being unguarded. Different solutions to fostering cyber-respect (and minimising cyber-cruelty) hinge upon the different senses of anonymity.

What does not work – indeed, cannot work – is expecting our robots to plug the gap caused by scaling networks beyond human capacity to form a viable community. Abuse will remain endemic on Facebook and Twitter for as long as their cyborg participants can be functionally alone, effectively unknown, and inadequately guarded. If there are solutions to this problem, and it is not clear that there are, the most important lessons to learn are those revealed by the stories of the MUDs, the pioneering digital public spaces, the first cyborg communities of their kind.

With grateful thanks to Peter Crowther, both for feedback on this piece, and for running UglyMUG, a game that changed both his life and my own. Also, my thanks to Brian Green for his outstanding summary of the relationship between privacy and anonymity, which this piece responds to.


Coming Soon: The Virtuous Cyborg

It gives me great pleasure to formally announce my new book project, The Virtuous Cyborg, which builds upon the cybervirtue discussions going on here at Only a Game this year. A publisher has already invited me to place the book with them, but I will not announce who it is until later this year when we’ve dotted the i’s and crossed the t’s. I will just say for now that they are a small independent press with a commitment to virtuous publishing and nurturing talent outside of the mainstream corporate system, and they feel like a great fit for my philosophy writing.

The new book will be another short form text like Wikipedia Knows Nothing, but will not have a free edition as I am committed to repaying the investment of both trust and money the publisher is making by publishing me, and their situation means they have to be practical about making ends meet. (ETC Press, as an academic publisher, had different priorities, and I’m very grateful to them for being there when WKN needed a home).

At this time, I am inviting pre-readers for the draft manuscript, which will be ready some time in April. All you need to be a pre-reader is to be someone I have already spoken to (in any medium) with an interest in my philosophy and the time to read a 30,000 word manuscript in April-May this year. I will need feedback in less than a month, so please make sure you have the time to help out.

Interested? Contact me through the usual channels, including comments here if you have no other option.

The opening image is a free texture from TextureX.com which I found here, used under their license and remaining under their copyright.


Tenacity and the Domination of Things

Our robots never tire, and always pursue what we have instructed them to do if nothing disrupts them along the way. Can their tenacity be made to work on us, to bring out our perseverance where we most need it? Or are we doomed to be dominated by the systems we have designed?

The question of when persistence and determination constitute a virtue is parallel to the ambiguity that accompanies fidelity. Indeed, these are closely related virtues – one binds us to an ideal, a practice, or a community, the other to a course of action. Yet clearly not all activities are equal when it comes to tenacity: a heroin addict’s perseverance in their habit, and their dedication to acquiring money for it, do not count as any kind of virtuous tenacity. The shift in our understanding of agency brought about by re-assessing the role of things in our decisions gives us a way of appreciating why: the heroin is in charge of that chain of events, and the human is reduced to its servant.

To construct a virtuous understanding of tenacity we need a viable understanding of what Enlightenment philosophers called ‘will’ – the resolve to take a certain path; to commit to an uncertain future and make it happen. This is distinct from impulses – I can hanker for a sandwich but I cannot will one, although I could will that I mastered the art of sandwich making, or baking bread, or that I would open a sandwich shop. But what does this distinction consist in? Is it a difference of kind, or merely one of degree?

The one place in our language where the concept of will survives is in talk of ‘willpower’ – but our current understanding of biology renders this concept suspect. If there were a viable concept of willpower, it would distinguish between two kinds of people: those who had it, and thus would show tenacity in all things, and those who lacked it and would thus be condemned to perpetual sloth. But this isn’t what happens in life. Although we do see differences in persistence both in terms of people and in terms of activities, a person who persists in all tasks does not seem ‘high in willpower’ so much as obsessive-compulsive, unable to stop themselves from attending to whatever happens to be in their attention. Both willpower and the earlier concept of will from which it descends presume our capacity to assert agency in a selective fashion, such that we appear to be in charge of our own actions.

What we find in our biology wherever we look for persistence is habit. Take getting up in the morning. I recall a time in my life when I had been staying up late most nights, carousing with friends or playing games. At some point, I resolved to get my sleeping pattern back in order – but was dismayed to discover that setting my alarm early made little difference to my routines. Barely awake, I would snooze or switch off any alarm before my half-conscious mind knew what was going on. Today, I get up at the same time every day and getting up is comparatively easy to do, even at 5:30 am, a time I had previously associated with calling it a night. This transformation has nothing to do with willpower but everything to do with habit. It was not enough to commit while awake to something that would happen before I would be fully conscious: I had to establish the habit. This, as it happens, is far easier when we act in the context of other people: exercise groups and dieting clubs establish successful habits more easily than people acting alone.

Here, then, is a way of tracing a boundary between will and impulse, tenacity and capriciousness. To will something entails founding and sustaining habits that are steps towards what is imagined. Our impulses, on the other hand, strike us on a moment-to-moment basis – and when these impulses become habits, as with heroin, we are sacrificing what we might will for forming circumstantial habits; we are enslaved to the will of other beings, or the inclinations brought on by things. While there are certainly debilities corresponding to an absence of diligence and determination (apathy, for instance) perhaps the more interesting contrast is this one between tenacity of the will, and submissiveness to impulse.

When it comes to thinking of cyber-tenacity, it may initially seem that we have a context where our robots might indeed foster enthusiasm and perseverance in their humans. We only have to look at videogames for endless examples of cyborgs persisting against rage, confusion, or boredom, or indeed establishing ostensibly positive habits such as walking, which Pokémon Go (for instance) makes essential to its play. If we are comparing tenacity to apathy, our robot-mediated games clearly come up trumps – if there is indeed a form of cyber-apathy I have yet to see it, and every commercially successful game encourages its players to come back for more.

But then, whose will is being served here? If the player is truly imagining a future and pursuing it, we might very well call the desire to keep playing the cyber-tenacity of the human-robot pairing. Yet when a videogame has us in its grip we are submissive to it: our desire to keep playing is often more like the heroin addict’s habit than the will to become a master baker. In particular, if we look at what the lazier exponents of what is called ‘gamification’ have recommended, this seems indistinguishable from the Behaviourist’s schedules of reinforcement – habit formation through repetition and reward... dog training for humans. This is submission, not tenacity.

As I have argued elsewhere, gamification is all too often stultification. Jacques Rancière makes the claim in The Ignorant Schoolmaster that education is counter-productive when teachers attempt to force upon students their understanding of a particular topic or skill, rather than encouraging the student to acquire their own competences. He calls the effect of an education that teaches a specific way of understanding (rather than encouraging learning without specifying a particular form of comprehension) stultifying. Learning avoids this when the teacher’s will is bound to the student’s solely in terms of committing to the learning being achieved; whenever the means by which learning will proceed eclipses this binding of a common will, the outcome is stultification, and learning is hindered or entirely stifled.

Gamification risks stultification because the game developer (or behavioural engineer) is specifying what is being learned, and there is no engagement of the will of the player (or employee). Submission is the inevitable outcome of this failure to create a common vision. What’s more, through mandatory achievements and scoring systems like Xbox’s Gamerscore we have witnessed the gamification of games... an emphasis on cyber-submission over the more engaging alternatives. This state of affairs is now endemic in software design: what are Twitter and Facebook’s follower counters if not an invitation to judge quantity over quality? Everywhere game-like scoring systems occur, there is a degradation of our judgement as we are drawn away from even asking what we will, and into submission to the designed system and its values – the ultimate manifestation of which is money itself, our greatest and most dominating cybernetic network.

Yet the cyber-submission of videogames is by no means the whole story. Videogames also demonstrate cyber-tenacity in the way humans form teams and co-operate towards goals together, and although competitive play often brings out the worst in people, there are virtuous communities of players in a great many situations where their will is being exercised, albeit within the limited context of the games in question. The player who commits to the pursuit of a digital sporting victory is not, perhaps, the paragon of tenacity – but they are not so far removed from the physical athlete, whose determination we justly admire. Add to this the exercise of imagination, in the narrative play of MMOs and elsewhere, or the creative projects realised in Minecraft, and the situation does not seem so resolutely submissive.

These examples occur in the context of play, which is always a negotiable, transient experience. But they point to ways that our robots can elicit cyber-tenacity in cyborgs. There are possibilities here worthy of exploration, but they must avoid the stultifying risks of cyber-submission and empower us to set our own wills in motion – and see matters through. Here is somewhere that our robots have a natural advantage, for they are automatically cyber-tenacious in the personal sense – they do not tire or flag, and keep progressing towards what we have willed unless prevented by inability or malfunction. If we can couple that indomitable spirit with our own wills, without being dragged down into submission along the way, there might be no limit to what we cyborgs might achieve.

The opening image is a detail from Anselm Kiefer’s Morgenthau Plan, which I found at the Royal Academy page for their Kiefer exhibition. As ever, no copyright infringement is intended and I will take the image down if asked. My thanks to seymourblogger (@abbeysbooks on Twitter) for the suggestion of this artist.


The Dependent World

Either the dog is the paragon of fidelity, expressing boundless loyalty to their human, or dogs are incapable of fidelity. It comes down to whether the bond a dog forms with their pack leader counts as a promise, and there are good reasons to say that it doesn’t. Nonetheless, I come down on the other side of the argument and see dogs as practicing fidelity in their own unique and admirable ways. The counterpoint amounts to claiming a dog’s commitment is merely instinctual habit. This contrast – habit as against fidelity – is precisely the battleground upon which cyborgs are losing.

In Imaginary Games, I draw upon Félix Ravaisson’s remarkable 1838 conception of habit as what sets beings apart from things. Habit, for Ravaisson, has two sides – it is the foundation of all our skills and excellences, which only achieve anything through the repetition of training and application. Yet it is also the source of addiction, and it is not coincidental that phrases such as ‘habitual user’ and ‘habit-forming’ have attached to substances such as heroin. The virtue of fidelity that I have been carefully tracing is what allows our skills to achieve their excellence, for the artist, athlete, researcher and so on achieves their proficiency only through commitment to their chosen path. If my argument in Wikipedia Knows Nothing is accepted, this means all knowledge comes from fidelity, since if knowledge is understood as a practice, only fidelity to a practice attains it.

Yet there is something missing in this characterisation, a hidden aspect I tried to draw attention to last week by taking marriage as an exemplar of the relationship between faith and fidelity. Whenever we exercise fidelity, we form a bond with other people. This is most visible in marriage, but it can be found in all cases that fidelity occurs (even if it is sometimes a commitment to honour the dead in some way, rather than the living). The athlete’s fidelity to their sport binds them to their trainers and fellow competitors; the researcher’s fidelity to scientific methods binds them to their research community (for all that the empirical sciences sometimes foster a perverse obfuscation of their human dimension); the artist’s fidelity to their craft binds them not only to the lineages of art that inspire them, but to communities of appreciators without whom their work is incomplete. Fidelity, therefore, is both the root of knowledge and the wellspring of community and culture. To lack fidelity is to become, as traced at the beginning of this discussion, a cultural nomad, and this is not freedom but a kind of ephemeral prison.

As cyborgs, we are assaulted with habit-forming situations because commercial technology is designed, from the ground upwards, to be addictive, to form habits that turn to desire rather than fidelity, to addiction rather than knowledge. Take, as the smallest example, your relationship with your smartphone. By design, this robot is not intended to last, it is not meant for repair beyond trivial interventions (a broken screen, for instance). It is intended to habituate you to its action before being rendered obsolete by the escalating scales of computing power that drive hardware sales. The announcement of a new iPhone or Android phone is intended to push our buttons and draw us into ‘upgrading’, a euphemism for indulging an addiction to the new. This critique can certainly be challenged, but to do so on grounds of increasing utility is to fall prey to the moral disaster of consequentialism and thus be shallow sighted.

Although I am no fan of motor vehicles, I would like to compare the way cars were designed fifty years ago to the way they are designed now. For it is not a coincidence that classic cars are still in service: they were built to last, and designed for repair. A mechanic could express fidelity towards these machines and thus gain knowledge of them. Today, the core function of an automobile is barred to all but the wizards of manufacturing, and an onboard robot controls almost all functions, thus reducing the role of mechanics to merely substituting faulty components when instructed. These are machines built for obsolescence that bar all practical knowledge of their workings except as proprietary trade secrets. In short, the design of contemporary machines aims at dependence, and this cyber-dependence is the first principle of commercial technology. It is not a coincidence that the clockwork torch (or flashlight) was designed for Africa and not the ‘developed’ world. ‘Developed’ here is a synonym for ‘dependent’.

Thus Facebook (or any other social media platform, for that matter) is designed not for fidelity, nor for binding people together in practices that foster knowledge, but for dependence and addiction. Follows and shares are the motivating force by design, and this pursuit of metrics to measure ‘social success’ serves to substitute dependence for fidelity, addiction for community. That is not to say that fidelity cannot be expressed through these purportedly conversational media – merely that they are not designed to support it. They are created for cyber-dependence, and the utility of the communicative networks they create blinds us to this, in yet another example of shallow sightedness. It is scarcely surprising that propaganda, ‘fake news’ as it has been dubbed, thrives in systems that discourage fidelity and thus minimise productive community. Knowledge requires fidelity to a practice; when it is reduced to mere repetition, we come adrift from our epistemic moorings, as the Wikipedia, that methodical aggregator of corporate artefacts, epitomises.

What would cyber-fidelity mean, and could we imagine technology built for it? Fidelity is founded on a promise, literal or figurative – a commitment to be part of something, and thus to foster knowledge within that community (whether we are talking about sports, research, art, or whatever). Cyber-fidelity would therefore apply whenever our robots aided our commitments and our communities without simultaneously engendering dependency. At the moment, whatever fidelity is expressed via the internet is expressed against the prevailing winds of dependency. If you wish to learn about fidelity, you will find exemplars more easily in the so-called Third World than in the Dependent World we live in. Hence the suggestion that there is a pressing need to technologise the planet is another aspect of the moral disaster of consequentialism: the free ‘Third’ world does not need to learn our dependencies from us. Colonial occupation already established dependencies that will not be resolved by adding technological addiction to economies that were optimised for colonial export – economies that acted as cyber-dependencies long before computers upped the ante.

What I am calling cyber-fidelity is another name for what Ivan Illich called convivial tools: technology that empowers individuals within their communities, rather than creating dependence and dividing or destroying community in the name of ‘progress’ (the consequentialist war-cry par excellence). The bicycle versus the car is just one example of cyber-fidelity versus cyber-dependence – and here it is not a mere footnote that the former fosters physical fitness and mechanical skill through maintenance, while the latter fosters ‘road rage’ and planned obsolescence. Note that both cars and bicycles are products of overlapping technological networks – tyres, gears, steering – but one empowers its human and community, while the other fosters dependencies upon manufacturing, oil, and infrastructures that are far from egalitarian.

In asking earlier whether dogs could express fidelity, what was at stake was a distinction between habit and dependence, and now I can suggest another aspect of this question: the dog’s commitment to its pack is the evidence of its fidelity. The dog not only belongs to a community – and for domestic dogs, that means both the humans they live with and the neighbourhood dogs they fraternise with – but it has knowledge of that community. Indeed, it is the principal knowledge that any dog learns. The dog cares which other dogs have been in the park recently, and cannot wait to be reunited with members of its pack as they come back home. The dog, in other words, is a convivial being, as (in its own way) is the cat. The human too has this capacity; we are, as Donna Haraway suggested, a companion species to our dogs and cats, and rather less so in the context of our robots.

Like cars, computers opened up a space that could be convivial or could fall into dependency – and at this point it seems clear which way they have gone. Nothing marks me out as a heretic quite as spectacularly as my suggestion in Chaos Ethics that we have more to learn from the traditional cultures of the Third World than they can gain from moving uncritically towards the Dependent World we live in. If we wish to build computers that can foster cyber-fidelity, perhaps we should look to the clockwork torch and the way it was designed to be of use outside our enmeshing networks of technology. I do not know what a convivial computer might be, nor whether cyber-fidelity is even possible in a world of robots – but we have truly narrowed our horizons of possibility to mere technological addiction if we cannot even imagine trying to explore this uncharted frontier.

The opening image is by Banksy. As ever, no copyright infringement is intended and I will take the image down if asked by Banksy, which seems unlikely.

More cybervirtue next week.


Brian Green on Online Anonymity

Over on Psychochild’s Blog, Brian Green has a fantastic four-part series exploring the relationship between privacy and anonymity, and arguing against the idea that removing anonymity would address the problem of online abuse – both because this means giving up privacy, which we value, and because it is not practical to do so. Highly recommended reading for game designers and anyone interested in online abuse and privacy:

  • Part 1 looks at the relationship between privacy and anonymity, and the key questions about anonymity.
  • Part 2 examines the harms entailed in removing anonymity.
  • Part 3 makes the case for the impossibility of enforcing public identity and restricting anonymity.
  • Part 4 looks at dealing with the problems of online behaviour, and the changes that might be required.

I shall respond in full in about two weeks’ time with a piece entitled Lessons from the MUD, but in the meantime, a few quick remarks.

Brian’s example in the final part – that we are now used to people pulling out their phones all the time – sits badly with me; I do not think this is an example of a cultural shift to deal with the consequences of technology so much as a sign that we have institutionalised our rudeness and now accept a higher degree of impoliteness towards each other. The same thing happens in big cities, of course: we learn to be less polite. I do not think this specific example upholds the point Brian wishes to make about adapting to technology, although I do agree with him that this adaptation both needs to and will happen. We just need to be careful to recognise the active role required in shaping norms.

At several points, Brian trots out the example of people who need to protect their identity. I do not think this is as strong an objection as he and others take it to be; his more general arguments about everyone’s need for privacy are much stronger in my view, precisely because they apply to everyone. If we thought public identities would solve all the problems, the need for some people to adjust their permanent identity online would be a manageable issue. But as Brian outlines so clearly in part 3, public identities are not a guaranteed fix – indeed, not even a likely one.

We need to be having these discussions, and I am enormously grateful to Brian for wading in here, and making such a thorough report on the issues. I heartily recommend you check out all four parts.


Living with Machines: A Dialogue

Living with Machines was a seven-part dialogue between veteran Nietzsche scholar Babette Babich and ‘outsider philosopher’ Chris Bateman, looking at our relationship to corporate power and influence, the possibility of virtuous behaviour against a backdrop of pervasive technology, life with social media, dinosaur hands, sex robots, and smartphones.

The dialogue originally ran from 26th January to 9th March 2017. Each of the parts ends with a link to the next one, so to read the entire serial, simply click on the first link below, and then follow the “next” links to read on.

The seven parts are as follows:

  1. Corporate Venality
  2. Monopoly and Other Games
  3. Mediaddiction
  4. Godzilla’s Tiny Hands
  5. Touching Robots
  6. Your Plastic Pal Who’s Fun To Be With
  7. Techdolls

If you enjoyed this dialogue, please leave a comment! A new Babich and Bateman dialogue begins later this year.