Politeness is not merely an arcane code of conduct; it serves to smooth over the rough edges of human interaction by making requests more tactful, and thus less irritating. Yet as cyborgs we are not good at displaying tact towards one another – so if our robots could exhibit cyber-tact, does that mean your laptop should start saying ‘please’?
One of the unexpected side-effects of linking most of the human race together in a cybernetic communication and data retrieval network has been to throw everyone, regardless of background or circumstances, into random contact. Because the internet was conceived primarily as an institutional tool for combining computational capacities – that is, for networking robots rather than humans – the collision of people it has facilitated can only be considered unanticipated. We are still far from prepared for the consequences.
When cyborgs understand others and act considerately towards them, they display tact. This is a virtue that can mean many different things in many different situations, but the core habit behind all of them is an attentiveness to the emotional impact of speech and behaviour. Politeness can be seen as an aspect of tactful behaviour – indeed, the easiest part of tact to master, since it is so formulaic. But politeness is a fairly narrow virtue, while tact is broad and versatile, having the beneficial quality of helping both those who master it and those it is displayed towards. The corresponding debilities are bluntness, which marks a disregard for courtesy or an inflexible obsession with truth, and tactlessness, which manifests as a failure to correctly anticipate the interests of other cyborgs. Tact need not entail lying; honesty is not what is at stake here, but rather awareness of the effects of language and action upon others.
The internet has made tact far harder to master. When you deal solely with people from your local culture, you usually appreciate what you can or can’t get away with saying without causing offence. In our digital public spaces, however, someone from New York or Paris can collide with someone from rural Georgia or a remote part of Micronesia. This inherent culture clash is concealed by the indirectness of online connections (the vagueness of the digital other), and leads to substantially worse bluntness than occurs in face-to-face interactions. The mask of anonymity here, as with kindness and respect, only makes the situation worse.
Tact manifests both in what is said and in what remains unspoken or untyped. There is substantial overlap in this regard with respect and cyber-respect, but while respect is probably a requirement for tact, it is possible to respect another cyborg without displaying tact. Furthermore, attempts to enforce tact tend to end in a lack of respect. Thus while providing suitable warnings is a thoughtful expression of tact, it can never be entirely ethical to make such warnings mandatory. To do so is to demand respect by denying respect, a peculiar contemporary moral blindness that comes from practising the rules-focussed ethics of ‘rights talk’ in a complete absence of appreciation for the ethical traditions that lead to rights claims (that is, to fall prey to the moral disaster of individualism).
Robots display personal cyber-tact when they act considerately towards their humans in how they trigger information, and do not force unwanted displays of media or information upon them. Pop-ups are a classic example of cyber-tactlessness, as are embedded videos that play when accidentally touched while scrolling through text (the BBC news website is especially bad for this). Our robots are inherently cyber-blunt (although they needn’t be): when was the last time your laptop said ‘please’ when it wanted to download and install an update? Not that long ago, computers said ‘please’ when you had to insert a disc into a drive (see the image above); now, they just bully you into updating whether you want to or not.
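To make the contrast concrete, here is a minimal sketch (in Python, purely illustrative – the function name and prompt wording are invented, and not taken from any real operating system) of what a cyber-tactful update request might look like: the robot supplicates, and takes ‘no’ for an answer.

```python
# Hypothetical sketch of a 'polite' update prompt: the robot asks permission,
# accepts refusal gracefully, and defers rather than forcing the inconvenience.

def request_update(update_name: str) -> bool:
    """Ask permission before downloading an update; never install by force."""
    answer = input(
        f"Please may I download and install '{update_name}'? "
        "(yes / no / remind me later): "
    ).strip().lower()

    if answer == "yes":
        print("Thank you. Installing now.")
        return True
    if answer == "no":
        print("Understood. I won't ask about this update again.")
        return False
    # Any other answer is treated as 'remind me later': defer without nagging.
    print("No problem. I'll ask again another time.")
    return False
```

The design choice being illustrated is simply that the inconvenience is requested, not imposed – the opposite of the forced update.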
Cyber-tact can also hypothetically manifest socially, when a robot encourages its human to behave with tact. It is far from clear that this ever happens in practice, and all the problems of maintaining respect against the mask of anonymity apply to tact as well. The root problem here is that concepts such as politeness, consideration, or toleration require a social imagination, something that beings of various kinds are capable of, but which lies well beyond the programmatic capabilities of robots. This means any viable form of social cyber-tact must leverage human capabilities in order to work.
Designing robot systems to augment tact presents a significant challenge. Suppose a social network were to attempt to train its humans in tact by adding a policing system, such that tactless or blunt remarks were flagged by the community as such. The net result would rapidly devolve into carnage, since humans in digital public spaces will always abuse systems that are capable of causing harm. Of course, not everyone does so – but it only takes a small proportion of people to turn a minor design flaw into a disaster.
A classic example occurred in the design of The Sims Online. In an early version of the game, players could declare other players ‘trustworthy’ or ‘untrustworthy’. However, a group of players calling themselves the ‘Sims Mafia’ realised they could use this feature to shake down new players – threatening to blackball them as ‘untrustworthy’ if they didn’t hand over all their in-game money. The design of ‘public karma’ systems (as they are known) has avoided negative scores for precisely this reason, not to mention that humans will abandon tainted account credentials if necessary, in what has been called ‘karma bankruptcy’.
Now it may seem that this is irrelevant to the question of cyber-tact: couldn’t you just have the robot provide a positive tact score? Yes, this would be the minimal case for cyber-tact. A positive tact system records when people report that others have been tactful, but such a system necessarily depends upon humans who are already capable of tact. The robot has displayed cybervirtue, but merely by tracking human virtue and thus encouraging the use of tact that a human already possessed. But our problem is precisely that the kind of tact we now need exceeds our prior experience. What is most needed in terms of cyber-tact is a way for a robot to teach its human how to act tactfully in the cultural collision of the internet. It is far from clear this design question is actually soluble.
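As a thought experiment, that minimal case might be sketched as follows (a Python illustration with invented names; no real platform’s system is being described): only positive reports of tact are recorded, so there is no ‘untrustworthy’ mark for anyone to weaponise.

```python
# Sketch of a positive-only 'tact score': humans can report that someone has
# been tactful, but no negative score exists for a shakedown to exploit.

from collections import defaultdict

class TactLedger:
    """Tracks positive tact endorsements only; there is nothing to downvote."""

    def __init__(self):
        self._scores = defaultdict(int)    # recipient -> tact score
        self._endorsed = defaultdict(set)  # endorser -> recipients already endorsed

    def endorse(self, endorser: str, recipient: str) -> None:
        """Record that `endorser` found `recipient` tactful (once per pair)."""
        if endorser == recipient:
            return  # no self-endorsement
        if recipient in self._endorsed[endorser]:
            return  # each human can endorse another only once
        self._endorsed[endorser].add(recipient)
        self._scores[recipient] += 1

    def score(self, user: str) -> int:
        return self._scores[user]

ledger = TactLedger()
ledger.endorse("alice", "bob")
ledger.endorse("carol", "bob")
print(ledger.score("bob"))  # 2 -- nothing here a 'Sims Mafia' could use as a threat
```

Note, though, that this only records tact that already exists; it does nothing to teach it, which is the harder problem raised above.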
Whereas designing for social cyber-respect may be a matter of giving up the mask of anonymity, social cyber-tact seems more challenging. In both cases, however, the design of robots can at least aim at personal cybervirtue, by (for example) affording their humans adequate control over what they see or read, defending against unwanted displays of media, and supplicating when requesting an inconvenience (instead of demanding, as is far more common). If we think of our robots as ‘neutral tools’, the idea that virtue could apply to their function is lost on us. Yet we do not use a computer like a hammer (except when we are especially irate!), and we are more entitled than it might seem to expect it to say ‘please’ when it wants to do something that we do not.
More cybervirtues next week.