
Everybody's Got It Wrong Except You

Justice is something nearly everyone cares about, but nobody shares their sense of what it is. This problem, upon which ethics has flailed since at least Nietzsche, divides moral philosophers into those who assert there is a perfect rational standard of justice, and those who claim there never could be such a thing. This conflict is not worth buying into on either side. What matters instead is the recognition that there are a set of ideals for justice, as well as for related concepts such as fairness and ‘just war’, and that these ideals will only possess a rational standard if they are sustained as a shared practice. When no such practice exists, chaos reigns, and agreement is fragmentary. I am an appreciator of this chaotic nova of individual freedom, even while I am a critic of its underlying lack of fidelity. For there is no path for justice to be found here, and in that absence, injustice proliferates and escalates.

Before considering what it would mean for a robot and its human to be just, we must consider the debilities and defects that result from the chaotic nova around us, amply embodied in any of the gigacyborg digital public spaces around us, such as Twitter and Facebook. It is by seeing how things go wrong that we can discover ways to rescue justice, and thus propose a virtue of justice for cyborgs.

One defect is quick to discuss: nihilism, the self-defeating faith in there being nothing worth having faith in. Nihilism is born of a rigid conviction in the value of truth (such as the Abrahamic traditions bequeathed to the sciences) coming up against the unjustifiable circumstances of that conviction. Here is where a person from a faith tradition like Christianity or Islam or some of the Hindu practices makes a leap of faith. The nihilist makes a similar leap into believing nothing has value – without noticing the unwavering faith in standards of judgement required to get to such an absurd position. Thus, the nihilist is confused into justifying anything on grounds of nothing. There’s not much we can do to help such unfortunates: like viruses that infect humans and robots, you just have to put up with the trouble they cause, and defend against them as best you can.

Holding ideals of justice so strongly that faith passes into the condition of certainty describes the debility of indignance. Here, as with the certainty of blind faith in religion or nationalism, people know they are right, and this empowers them to act in all manner of atrocious ways – particularly when they are acting as internet cyborgs. Sometimes, this cyber-indignance manifests as a desire for punishment, a kind of digital vigilantism fuelled by the cyber-cruelty invoked by loners in masks, deploying despicable methods such as doxxing (releasing personal data, such as a home address) or death threats. Here, the underlying certainty has deceived the cyborgs in question into thinking methods of acting are not important – their moral certainty, ironically, makes them behave as nihilists, turning to terror as a weapon as if nothing mattered. They have an unfortunate role-model in this: both the megacyborg nations and their guerrilla enemies have been employing terror and death as ill-considered weapons for decades.

More often, however, cyber-indignance is all bark and little bite. Passing angry judgements on those you find vile and disgusting is only human, as is acting out behind a mask for that matter, but tirades of cyber-disdain do nothing but antagonise those being judged or encourage others to hurl further abuse. There is no persuasive force to this form of argument – you instantly drive a wedge between yourself and whomever is being reviled – and as with the resort to terror in its various forms, it rests on the certainty that others must be wrong. In these kinds of situations, however, this is at least restrained by an acute sense that there should be limits to which methods are permitted, and this is the principal redeeming quality of this kind of preachy outburst.

Those that do not lose sight of the importance of methods recognise that some courses of action could never be just, and should not be pursued. To think otherwise is to fall into the moral disaster of consequentialism, to think only outcomes matter because of the understandable emphasis we place on how things turn out. But choice of method is part of ensuring desired outcomes, and precisely what makes certain methods unjust is their exclusion from any viable justice that is not merely revenge dressed up in convenient justifications. Nothing is worth achieving at any cost, although there are things that ought to be defended with every just method available. Thinking otherwise is to fail at being a just cyborg of any kind, because all extreme methods make justice harder – even impossible – to attain, and thus entail an element of self-defeat.

Justice is fragile, a sliding puzzle that can be aligned but never completed – a Rubik’s Cube with the wrong number of pieces in each colour. The jury system is an acknowledgment of this: it’s not enough to administer a rigid system of rules, as judges are charged with doing; there are times when an uncertain moral judgement must be made in order to render a legal judgement. In both justice and morality, reliable judgement is all but impossible to attain as individuals, since it is only when a particular set of ideals are weighed in a specific context that an answer to a moral question can be produced. This requires a common moral practice, and this depends on prior experience and the skill this develops. An individual can make judgements about others, but not reliably without the counsel of others, and in considering their own behaviour an individual is all too often an unreliable judge.

Can cyborgs be cyber-just? It is difficult for this to happen, because robots do not live in a world, and thus justice is as alien to them as bravery. However, a robot can still behave in cyber-just, cyber-indignant, or cyber-nihilist ways towards humans as a consequence of its design. Computer viruses are a perfect example of cyber-nihilism, since they act against the humans they encounter as a result of the absence of moral judgement entailed in their creation. Online discussion media that carelessly censor curse words can act with cyber-indignance, as becomes apparent when they censor words like ‘Dickensian’ because they contain a swear word. It is less clear that a robot could be cyber-just towards its human, although it could be argued that passwords and biometric security are cyber-just, in that they uphold property law and personal privacy with the unquenchable zeal of a machine.

The social sense of cyber-just – encouraging a cyborg to act with a clear sense of justice – is a case where there may yet be possibilities. Key to ethical knowledge is moral representation, the sharing and discussing of quandaries and dilemmas. This happens temporarily in a jury room, intermittently in a confessional, and quite often in a pub too. Whether between priest and parishioner or two friends, ethical thought requires a shared perspective. Moral representation is in no way exclusive to religion, although it is inherent to authentic religious practice to engage in some kind of moral representation, and when this is absent – particularly when demagogues stir up indignance instead – the result is an ethical failure.

To form a moral practice requires discourse to offset the inherent fragility of justice, which is never wholly reducible to rule-following, as Alasdair MacIntyre cautions. Even Kant, who is most associated with rule-focussed ethics, recognised this, and thus (in contrast to how his views are usually presented) held that virtue and self-improvement were central to an ethical life. Besides, as Wittgenstein made clear, rule-following itself is a practice for beings such as humans. We are not robots who can execute instructions unthinkingly: there is always a background of understanding that gives a set of rules its distinctive context – and in the absence of this, a thing lacks what is required to understand justice.

Designing a robot to facilitate cyber-just cyborgs might yet be possible by allowing for moral representation. We could leverage the network properties of digital public spaces to help people make moral judgements, or to defuse the cyber-indignance inflamed by the culture clash of the internet. In some respects, this is already happening wherever humans discuss ethical problems online, but it could be enhanced by the design of mechanisms to foster discourse of the relevant kind. Sometimes, this will occur in private between individuals, as in the confessional or the bar booth; sometimes in closed groups like the jury room; perhaps it could even happen in public, if indignance could be appropriately constrained. All that is required to head in a more helpful direction is to get beyond thinking that everyone has it wrong except you and those who happen to agree with you. This blight of certainty has become the greatest threat to moral practices of all kinds, for ethical knowledge is neither scientific nor calculable, but always entails the doubt that comes from knowing that you might be wrong.

The opening image is Sand Dune by Alisha Edinburgh, which I found here on her Custommade page. As ever, no copyright infringement is intended and I will take the image down if asked.



". Here, as with the certainty of blind faith in religion or nationalism, people know they are right, and this empowers them to act in all manner of atrocious ways – particularly when they are acting as internet cyborgs. Sometimes, this cyber-indignance manifests as a desire for punishment, a kind of digital vigilantism fuelled by the cyber-cruelty invoked by loners in masks, deploying despicable methods such as doxxing (releasing personal data, such as a home address) or death threats. "

Of course none of that concerns "anti-nationalists", "globalists", and whatever other name they find society rewards well for taking.
Especially in an age where it's globalists who have their hands on all the levers of power (from which popular opinion descends), you'd expect the strong, and their followers, to be the most kindly tolerant and open-minded.
That's exactly how it always is, in fact!

"All that is required to head in a more helpful direction is to get beyond thinking that everyone has it wrong except you and those who happen to agree with you."

That is human nature, and in the nature of Power as well.
And how would it not be, given that humans (the most successful among them, at least) are power-seeking beings, shouldering the burden of having to do their seeking while seeming moral to the others and their self.

Ethics are there because they help the whole machine function, nothing else. They are a tool. Everybody will think what makes them feel/function better if they think it.
Doubt and doubt-proponents have ever been a nuisance among nuisances. They help nothing.

You called the people who don't run against the dominant grain "trolls" in a recent post, and you did right.
That's how losers who don't know how to believe and like what is dominating in their time have to be judged, their hindrance dealt with.

Hi anonymous passerby,
The layers of apparent sarcasm in this comment make it difficult to adequately respond! :) So I shall just focus on a few key points...

'Ethics are a tool' strikes me as a difficult-to-defend position. If you mean that the people in power use ethical systems to control those lower in the hierarchy, you are close to Nietzsche's position on morality. This is a highly defensible viewpoint, although holding it doesn't really help anything. It's one of those places where you can feel content with the sufficiency of your perspective, while not being able to do anything practical with it.

In general, one of the problems I have with the 'will to power' perspective - which Nietzsche championed, and which Foucault saw through to the bitter end - is that the motive for power is less a foundational quality than it is one of the (many) key motives around us (one that is involved in a lot of specific problems). While I share your perspective that there are those with power and influence taking steps to ensure that the situation that benefits them remains that way, I wouldn't invoke a term like 'levers of power' as it suggests greater control than exists. Influence, even excessive influence, should not be mistaken for control. Nobody has any control at the moment... and that is as much an opportunity as it is a problem.

Thanks for sharing your views, and my apologies for not quite being able to adequately separate your sarcasm from your sincerity. I feel there are other points here that would be worth discussing.

