Tenacity and the Domination of Things

Anselm Kiefer (2013), Morgenthau Plan (detail)

Our robots never tire, and always pursue what we have instructed them to do if nothing disrupts them along the way. Can their tenacity be made to work on us, to bring out our perseverance where we most need it? Or are we doomed to be dominated by the systems we have designed?

The question of when persistence and determination constitute a virtue is parallel to the ambiguity that accompanies fidelity. Indeed, these are closely related virtues – one binds us to an ideal, a practice, or a community, the other to a course of action. Yet clearly not all activities are equal when it comes to tenacity: a heroin addict’s perseverance in their habit, and their dedication to acquiring money for it, do not count as any kind of virtuous tenacity. The shift in our understanding of agency brought about by re-assessing the role of things in our decisions gives us a way of appreciating why: the heroin is in charge of that chain of events, and the human is reduced to its servant.

To construct a virtuous understanding of tenacity we need a viable understanding of what Enlightenment philosophers called ‘will’ – the resolve to take a certain path; to commit to an uncertain future and make it happen. This is distinct from impulses – I can hanker for a sandwich but I cannot will one, although I could will that I mastered the art of sandwich making, or baking bread, or that I would open a sandwich shop. But what does this distinction consist in? Is it a difference of kind, or merely one of degree?

The one place in our language where the concept of will survives is in talk of ‘willpower’ – but our current understanding of biology renders this concept suspect. If there were a viable concept of willpower, it would distinguish between two kinds of people: those who had it, and thus would show tenacity in all things, and those who lacked it and would thus be condemned to perpetual sloth. But this isn’t what happens in life. Although we do see differences in persistence both in terms of people and in terms of activities, a person who persists in all tasks does not seem ‘high in willpower’ so much as obsessive-compulsive, unable to stop themselves from attending to whatever happens to be in their attention. Both willpower and the earlier concept of will it descends from presume our capacity to assert agency in a selective fashion, such that we appear to be in charge of our own actions.

What we find in our biology wherever we look for persistence is habit. Take getting up in the morning. I recall a time in my life when I had been staying up late most nights, carousing with friends or playing games. At some point, I resolved to get my sleeping back in order – but was dismayed to discover that setting my alarm early made little difference to my routines. Barely awake, I would snooze or switch off any alarm before my half-conscious mind knew what was going on. Today, I get up at the same time every day and getting up is comparatively easy to do, even at 5:30 am, a time I had previously associated with calling it a night. This transformation has nothing to do with willpower but everything to do with habit. It was not enough to commit while awake to something that would happen before I would be fully conscious: I had to establish the habit. This, as it happens, is far easier when we act in the context of other people: exercise groups and dieting clubs establish successful habits more easily than people acting alone.

Here, then, is a way of tracing a boundary between will and impulse, tenacity and capriciousness. To will something entails founding and sustaining habits that are steps towards what is imagined. Our impulses, on the other hand, strike us on a moment-to-moment basis – and when these impulses become habits, as with heroin, we are sacrificing what we might will in favour of circumstantial habits; we become enslaved to the will of other beings, or to the inclinations brought on by things. While there are certainly debilities corresponding to an absence of diligence and determination (apathy, for instance), perhaps the more interesting contrast is this one between tenacity of the will and submissiveness to impulse.

When it comes to thinking of cyber-tenacity, it may initially seem that we have a context where our robots might indeed foster enthusiasm and perseverance in their humans. We only have to look at videogames for endless examples of cyborgs persisting against rage, confusion, or boredom, or indeed establishing ostensibly positive habits such as walking, which Pokémon Go (for instance) makes essential to its play. If we are comparing tenacity to apathy, our robot-mediated games clearly come up trumps – if there is indeed a form of cyber-apathy I have yet to see it, and every commercially successful game encourages its players to come back for more.

But then, whose will is being served here? If the player is truly imagining a future and pursuing it, we might very well call the desire to keep playing the cyber-tenacity of the human-robot pairing. Yet when a videogame has us in its grip we are submissive to it: our desire to keep playing is often more like the heroin addict’s habit than the will to become a master baker. In particular, if we look at what the lazier exponents of what is called ‘gamification’ have recommended, this seems indistinguishable from the Behaviourist’s schedules of reinforcement – habit formation through repetition and reward... dog training for humans. This is submission, not tenacity.

As I have argued elsewhere, gamification is all too often stultification. Jacques Rancière makes the claim in The Ignorant Schoolmaster that education is counter-productive when teachers attempt to force upon students their understanding of a particular topic or skill, rather than encouraging the student to acquire their own competences. He calls the effect of an education that teaches a specific way of understanding (rather than encouraging learning without prescribing any particular form of comprehension) stultifying. Learning avoids this when the teacher’s will is bound to the student’s solely in terms of committing to the learning being achieved; whenever the means by which learning will proceed eclipses this binding of a common will, the outcome is stultification, and learning is hindered or entirely stifled.

Gamification risks stultification because the game developer (or behavioural engineer) is specifying what is being learned, and there is no engagement of the will of the player (or employee). Submission is the inevitable outcome of this failure to create a common vision. What’s more, through mandatory achievements and scoring systems like Xbox’s Gamerscore we have witnessed the gamification of games... an emphasis on cyber-submission over the more engaging alternatives. This state of affairs is now endemic in software design: what are Twitter’s and Facebook’s follower counters if not an invitation to judge quantity over quality? Everywhere game-like scoring systems occur, there is a degradation of our judgement as we are drawn away from even asking what we will, and into submission to the designed system and its values – the ultimate manifestation of which is money itself, our greatest and most dominating cybernetic network.

Yet the cyber-submission of videogames is by no means the whole story. Videogames also demonstrate cyber-tenacity in the way humans form teams and co-operate towards goals together, and although competitive play often brings out the worst in people, there are virtuous communities of players in a great many situations where their will is being exercised, albeit within the limited context of the games in question. The player who commits to the pursuit of a digital sporting victory is not, perhaps, the paragon of tenacity – but they are not so far removed from the physical athlete, whose determination we justly admire. Add to this the exercise of imagination, in the narrative play of MMOs and elsewhere, or the creative projects realised in Minecraft, and the situation does not seem so resolutely submissive.

These examples occur in the context of play, which is always a negotiable, transient experience. But they point to ways that our robots can elicit cyber-tenacity in cyborgs. There are possibilities here worthy of exploration, but they must avoid the stultifying risks of cyber-submission and empower us to set our own wills in motion – and see matters through. Here is somewhere that our robots have a natural advantage, for they are automatically cyber-tenacious in the personal sense – they do not tire or flag, and keep progressing towards what we have willed unless prevented by inability or malfunction. If we can couple that indomitable spirit with our own wills, without being dragged down into submission along the way, there might be no limit to what we cyborgs might achieve.

The opening image is a detail from Anselm Kiefer’s Morgenthau Plan, which I found at the Royal Academy page for their Kiefer exhibition. As ever, no copyright infringement is intended and I will take the image down if asked. My thanks to seymourblogger (@abbeysbooks on Twitter) for the suggestion of this artist.


A Fabulous Planet to Die On

Delighted to report that Justin Robertson’s interview with me for Ransom Note went up today! Here’s an extract:

I trust my mechanic to fix my car when it’s broken – except when the manufacturer has made the car into a black box that people can’t actually acquire practical knowledge of. And I trust that my physicist friends can calculate how to adjust satellite data for distortions. All in all, I think there’s plenty of expertise around today. But you don’t ask a mechanic to decide on the base rate of the Bank of England or a physicist to bake a soufflé. Most of the problems with expertise happen because we’re failing to recognise who has knowledge of what.

Check out the entirety of A Fabulous Planet to Die On when you get a chance.


The Dependent World

Banksy, Robot Barcode

Either the dog is the paragon of fidelity, expressing boundless loyalty to their human, or dogs are incapable of fidelity. It comes down to whether the bond a dog forms with their pack leader counts as a promise, and there are good reasons to say that it doesn’t. Nonetheless, I come down on the other side of the argument and see dogs as practicing fidelity in their own unique and admirable ways. The counterpoint amounts to claiming a dog’s commitment is merely instinctual habit. This contrast – habit as against fidelity – is precisely the battleground upon which cyborgs are losing.

In Imaginary Games, I draw upon Félix Ravaisson's remarkable 1838 conception of habit as what sets beings apart from things. Habit, for Ravaisson, has two sides – it is the foundation of all our skills and excellences, which only achieve anything through the repetition of training and application. Yet it is also the source of addiction, and it is not coincidental that phrases such as ‘habitual user’ and ‘habit-forming’ have attached to substances such as heroin. The virtue of fidelity that I have been carefully tracing is what allows our skills to achieve their excellence, for the artist, athlete, researcher and so on achieves their proficiency only through commitment to their chosen path. If my argument in Wikipedia Knows Nothing is accepted, this means all knowledge comes from fidelity, since if knowledge is understood as a practice, only fidelity to a practice attains it.

Yet there is something missing in this characterisation, a hidden aspect I tried to draw attention to last week by taking marriage as an exemplar of the relationship between faith and fidelity. Whenever we exercise fidelity, we form a bond with other people. This is most visible in marriage, but it can be found in all cases where fidelity occurs (even if it is sometimes a commitment to honour the dead in some way, rather than the living). The athlete’s fidelity to their sport binds them to their trainers and fellow competitors; the researcher’s fidelity to scientific methods binds them to their research community (for all that the empirical sciences sometimes foster a perverse obfuscation of their human dimension); the artist’s fidelity to their craft binds them not only to the lineages of art that inspire them, but to communities of appreciators without whom their work is incomplete. Fidelity, therefore, is both the root of knowledge and the wellspring of community and culture. To lack fidelity is to become, as traced at the beginning of this discussion, a cultural nomad, and this is not freedom but a kind of ephemeral prison.

As cyborgs, we are assaulted with habit-forming situations because commercial technology is designed, from the ground upwards, to be addictive, to form habits that turn to desire rather than fidelity, to addiction rather than knowledge. Take, as the smallest example, your relationship with your smartphone. By design, this robot is not intended to last; it is not meant for repair beyond trivial interventions (a broken screen, for instance). It is intended to habituate you to its action before being rendered obsolete by the escalating scales of computing power that drive hardware sales. The announcement of a new iPhone or Android phone is intended to push our buttons and draw us into ‘upgrading’, a euphemism for indulging an addiction to the new. This critique can certainly be challenged, but to do so on grounds of increasing utility is to fall prey to the moral disaster of consequentialism and thus be shallow sighted.

Although I am no fan of motor vehicles, I would like to compare the way cars were designed fifty years ago to the way they are designed now. For it is not a coincidence that classic cars are still in service: they were built to last, and designed for repair. A mechanic could express fidelity towards these machines and thus gain knowledge of them. Today, the core function of an automobile is barred to all but the wizards of manufacturing, and an onboard robot controls almost all functions, thus reducing the role of mechanics to merely substituting faulty components when instructed. These are machines built for obsolescence that bar all practical knowledge of their workings except as proprietary trade secrets. In short, the design of contemporary machines aims at dependence, and this cyber-dependence is the first principle of commercial technology. It is not a coincidence that the clockwork torch (or flashlight) was designed for Africa and not the ‘developed’ world. ‘Developed’ here is a synonym for ‘dependent’.

Thus Facebook (or any other social media platform for that matter) is designed not for fidelity, nor for binding people together in practices that foster knowledge, but for dependence and addiction. Follows and shares are the motivating force by design, and this pursuit of metrics to measure ‘social success’ serves to substitute dependence for fidelity, addiction for community. That is not to say that fidelity cannot be expressed through these purportedly conversational media – merely that they are not designed to support it. They are created for cyber-dependence, and the utility of the communicative networks they create blinds us to this in yet another example of shallow sightedness. It is scarcely surprising that propaganda, ‘fake news’ as it has been dubbed, thrives in systems that discourage fidelity and thus minimise productive community. Knowledge requires fidelity to a practice; when it is reduced to mere repetition, we come adrift from our epistemic moorings, as the Wikipedia, that methodical aggregator of corporate artefacts, epitomises.

What would cyber-fidelity mean, and could we imagine technology built for it? Fidelity is founded on a promise, literal or figurative, a commitment to be part of something and thus to foster knowledge within that community (whether we are talking sports, research, art, or whatever). Cyber-fidelity would therefore apply whenever our robots aided our commitment and our communities without simultaneously engendering our dependency. At the moment, whatever fidelity is expressed via the internet does so against the prevailing winds of dependency. If you wish to learn about fidelity, you will find exemplars more easily in the so-called Third World than in the Dependent World we live in. Hence the suggestion that there is a pressing need to technologise the planet is another aspect of the moral disaster of consequentialism – the free ‘Third’ world does not need to learn our dependencies from us; colonial occupation already established dependencies that will not be resolved by adding technological addiction to economies that were optimised for colonial export and that always acted as cyber-dependencies, long before computers upped the ante.

What I am calling cyber-fidelity is another name for what Ivan Illich called convivial tools, technology that empowers individuals within their communities, rather than creating dependence and dividing or destroying community in the name of ‘progress’ (the consequentialist war-cry par excellence). The bicycle versus the car is just one example of cyber-fidelity versus cyber-dependence – and here it is not a mere footnote that the former fosters physical fitness and mechanical skill through maintenance, while the latter fosters ‘road rage’ and planned obsolescence. Note that both cars and bicycles are products of overlapping technological networks: tyres, gears, steering... but one empowers its human and community, and the other fosters dependencies, on manufacturing, oil, and infrastructures that are far from egalitarian.

In asking earlier if dogs could express fidelity, what was at stake was a distinction between habit and dependence, and now I can suggest another aspect of this question: the dog’s commitment to its pack is the evidence of its fidelity. The dog not only belongs to a community – and for domestic dogs, that means both the humans they live with and the neighbourhood dogs they fraternise with – but it has knowledge of that community. Indeed, it is the principal knowledge that any dog learns. The dog cares which other dogs have been in the park recently, and cannot wait to be reunited with members of its pack as they come back home. The dog, in other words, is a convivial being, as (in its own way) is the cat. The human too has this capacity; we are, as Donna Haraway suggested, a companion species to our dogs and cats, and rather less so in the context of our robots.

Like cars, computers opened up a space that could be convivial or could fall into dependency – and at this point it seems clear which way they have gone. Nothing marks me out as a heretic quite as spectacularly as my suggestion in Chaos Ethics that we have more to learn from the traditional cultures of the Third World than they can benefit from moving uncritically towards the Dependent World we live in. If we wish to build computers that can foster cyber-fidelity, perhaps we should look to the clockwork torch and the way it was designed to be of use outside our enmeshing networks of technology. I do not know what a convivial computer might be, I do not know whether cyber-fidelity is even possible in a world of robots – but we have truly narrowed our horizons of possibility to mere technological addiction if we cannot even imagine trying to explore this uncharted and unimagined frontier.

The opening image is by Banksy. As ever, no copyright infringement is intended and I will take the image down if asked by Banksy, which seems unlikely.

More cybervirtue next week.


Brian Green on Online Anonymity

Over on Psychochild’s Blog, Brian Green has a fantastic four part series exploring the relationship between privacy and anonymity, and arguing against the idea that removing anonymity would address the problem – both because this means giving up privacy, which we value, and because it is not practical to do so. Highly recommended reading for game designers and anyone interested in online abuse and privacy:

  • Part 1 looks at the relationship between privacy and anonymity, and the key questions about anonymity.
  • Part 2 examines the harms entailed in removing anonymity.
  • Part 3 makes the case for the impossibility of enforcing public identity and restricting anonymity.
  • Part 4 looks at dealing with the problems of online behaviour, and the changes that might be required.

I shall respond in full in about two weeks’ time with a piece entitled Lessons from the MUD, but in the meantime a few quick remarks.

Brian’s example in the final part – that we are now used to people pulling out their phones all the time – sits badly with me; I do not think this is an example of a cultural shift to deal with the consequences of technology so much as a sign that we have institutionalised our rudeness and now accept a higher degree of impoliteness towards each other. The same thing happens in big cities, of course: we learn to be less polite. I do not think this specific example upholds the point Brian wishes to make, in terms of adapting to technology, although I do agree with him that this adaptation both needs to and will happen. We just need to be careful in recognising the active role required in shaping norms.

At several points, Brian trots out the example of people who need to protect their identity. I do not think this is as strong an objection as he and others take it to be; his more general arguments about everyone’s need for privacy are much stronger in my view, in particular because they apply to everyone. If we thought public identities would solve all the problems, the need for some people to adjust their permanent identity online would be a manageable issue. But as Brian nicely outlines, public identities aren’t a guaranteed fix. This is not even a likely fix, as Brian elaborates very clearly in part 3.

We need to be having these discussions, and I am enormously grateful to Brian for wading in here, and making such a thorough report on the issues. I heartily recommend you check out all four parts.