Towards Cyberethics
October 25, 2016
The question of how we act in the light of our technological situation has become, almost unnoticed, the central question in ethics. This transformation is rooted in the extent to which the networks underlying our purportedly neutral tools – cars, nuclear weapons, computers, armed drones, DNA analysers, anonymous communications, programmable agents – serve to make the moral impact of technology no longer a question of individual agency. It is not that any individual car is a problem, but that our networks of roads and cars kill millions of humans each year, and that the networks of metal and oil that manufacture and power them add to this a terrible contribution towards the on-going devastation of our planet that is far wider and deeper than the anxieties about climate change.
Descartes and Kant cleaved subject from object rather too well – giving all the ethical qualities to subjects alone. But free will, while far from an illusion, operates in a landscape of possibilities carved by objects, which were in turn shaped by design processes driven by subjects, motivated by concerns that utterly straddled the divide. To create cars was to have a moral impact far beyond the imagined purpose of transportation; to continue to participate in the automotive network at all is to be complicit in extinctions and denudations we are aware of but still do not grasp – and yet despite our collective emphasis on ideals of ‘freedom’, not participating in the reigning paradigm of transportation seems as if it is barely an option for most people. Things have their own ethical agency, their own trajectories within the moral perspective, even without conscious awareness of this. Thus ethics is not the exclusive domain of the human, and never was, for all that our powerful imagination shapes moral concerns like no other animal before us.
The twentieth century was the site of three tragic failures of moral thinking. Firstly, the imperialist mythos of a clockwork universe begotten by the ‘natural philosophers’ of the Enlightenment and made firm by Victorian scientists opened up a path of power and control that made technological progress seem not only desirable but inevitable, such that no matter what purportedly neutral tool was invented, it never brought into question the process of researching new technology. Thus mechanized transport and flight gave us the World Wars, ostensibly ended by the invention of a nuclear weapon that was far worse, and which led directly to the end of conventional warfare, such that now extermination, rather than (say) honourable conflict, is a primary activity of the United States military forces, dishonouring the country that did the most to usher in the era of human rights. So here we are suggesting, without a trace of irony, that our robot cars should kill pedestrians to save drivers, while designing our cars to be limited to 30 mph and thus saving millions of lives each year isn’t even on the table for discussion.
None of this could have happened without the second tragic failure of moral thinking, the reduction of moral philosophy to a contest between competing systems of thought. The rule-focussed systems that led to human rights and the outcome-focussed systems that are the de facto moral mythos of corporations set themselves up to fight one another, assured (for some reason) that one of them had to be right. Virtue ethics, meanwhile, was reduced to a small crowd of refugees on the moral stage, staring in horror at the calamities wrought by the two competing views of the moral ‘ought’. Kant’s faith that morality was expressible as a unity was based on the idea that all religions encoded a common core of moral truth; cleaved from ties to any tradition, however, moral thought lost its secure footing at precisely the time it was most forcibly demanding adherence to singular views of ethical truth.
This in turn was the product of the third tragic failure of moral thinking, the breach with tradition opened – in radically different but intimately related ways – by Kierkegaard and Nietzsche. Nihilism is not the problem here, since it marks nothing more than a mere loss of faith, with merely pathetic consequences. The problem is that the crisis of the existentialists was rooted in the individual – the isolated subject handed to de Beauvoir, Sartre, and Camus could then be isolated from other subjects as well as objects (despite honourable attempts, especially by de Beauvoir, to prevent this). The successes of this newly empowered mythic individual have been consistently less than is claimed; de Beauvoir’s feminism, for instance, led paradoxically to feminist oppressions of the kind brought to brutal light by the long-overdue intersectionality critiques. Freedom does not mean isolation; indeed, we have very nearly lost any concept of autonomy worth having.
My own Chaos Ethics is an analysis of these and other problems, and a tentative step towards illuminating possible paths forward. Wikipedia Knows Nothing is a more positive response, sketching both a rescue of equality as an ideal and an understanding of knowledge as a practice that invites a repositioning of ‘expertise’ in order to better understand the diversity of knowledge. Both books build upon the idea of an ethical and phenomenal multiverse, which was always where we lived. In such a place, finding the correct moral system is the wrong grail to quest after, for moral truth can be translated only, as with all translation, through distortion. The question cannot be ‘which moral system must we impose?’ – that very proposition is self-defeating – but ‘which moral practices do we still have?’ and ‘how do we make them speak to one another?’
If the epistemology (i.e. view of knowledge) in Wikipedia Knows Nothing holds, and I contend that it does, the next step for me is applied ethics – practical reason, as Kant puts it. If knowledge is indeed best understood as practices, then moral knowledge is also a practice, as Alasdair MacIntyre has more-or-less argued already. That doesn’t make virtue ethics the only game in town – far from it – but it does mean that any authentic moral knowledge must be at the very least expressible as virtues, and that eliminates from plausibility all attempts to calculate the good, as I already argued in Chaos Ethics. The notorious Trolley Problem, as Allen Wood has argued, all too successfully misleads us into accepting a mathematics of harm born from the horrors of battlefield logic as a surrogate for authentic ethical thought. Both States and corporations must therefore be challenged to adapt to new conditions of moral being where calculable consequences are at best only part of the process of reaching ethical decisions.
Cyberethics, or the moral craft of living within technological networks, is not exactly new (the Amish chose a particular kind of cyberethics, for instance) – but it does not yet exist in the form we need for our own peculiar situation. A craft in good order has to be embodied in a tradition in good order, as MacIntyre says, and this means that if we lack the moral craft we need (and we do) we must either create or identify a suitable tradition that might be able to embody it, or allow it to develop. Yet knowing we live in a multiverse makes this problem somewhat harder… a new tradition is not likely to be the best path forward, or even a plausible option for that matter, even though the existing traditions are so numerous that the challenge of working with them all can seem intractable.
One way around the impasse of needing a tradition to carry the moral craft required might be to offer a kind of ‘cyberethics add-on’, an ‘expansion pack’ for moral thinking that could be used to upgrade any and all existing traditions to take into account our new situation. Of the three ways of conducting ethical thought, outcome-focussed thought can only emphasise the extent of our problems, while rights-focussed thinking is (sadly) not currently a tradition in good order, leaving agent-focussed ethics to carry the day. This implies the need to imagine what a virtuous cyborg would be like, and thus to identify cybervirtues for humans, and cybervirtues for our robots too. It is this substantial challenge that I want to pursue, but I clearly cannot get there alone; my path from here must lead me away from the comfort of the philosopher’s armchair, and towards some authentic form of cyberethics practice.
The opening image is Geometric Blaze Blue by Daniel DeLuna, which I found here at Mutant Space. As ever, no copyright infringement is intended and I will take the image down if asked.