Should Your Laptop Say Please?
February 21, 2017
Politeness is not merely an arcane code of conduct: it serves to smooth over the rough edges of human interaction by making requests more tactful, and thus less irritating. Yet as cyborgs we are not good at displaying tact towards one another. If our robots could exhibit cyber-tact, would that mean your laptop should start saying ‘please’?
One of the unexpected side-effects of linking most of the human race together in a cybernetic communication and data retrieval network has been throwing everyone, regardless of background or circumstances, into random contact. Because the internet was conceived primarily as an institutional tool for combining computational capacities - that is, for networking robots rather than humans - the collision of people it has facilitated can only be considered unanticipated. We are still far from prepared for the consequences.
When a cyborg understands others and acts considerately towards them they display tact. This is a virtue that can mean many different things in many different situations, but the core habit behind them all is an attentiveness to the emotional impact of speech and behaviour. Politeness can be seen as an aspect of tactful behaviour – indeed, the easiest part of tact to master, since it is so formulaic. But politeness is a fairly narrow virtue while tact is broad and versatile, having the beneficial quality of helping both those who master it and those it is displayed towards. The corresponding debilities are bluntness, which marks a disregard for courtesy or an inflexible obsession with truth, and tactlessness, which manifests through a failure to correctly anticipate the interests of other cyborgs. Tact need not entail lying; honesty is not at issue here, but rather awareness of the effects of language and action upon others.
The internet has made tact far harder to master. When you deal solely with people from your local culture, you usually appreciate what you can or can’t get away with saying without causing offence. In our digital public spaces, however, someone from New York or Paris can collide with someone from rural Georgia or a remote part of Micronesia. This inherent culture clash is concealed by the indirectness of online connections (the vagueness of the digital other), and leads to substantially worse bluntness than occurs in face-to-face interactions. The mask of anonymity here, as with kindness and respect, only makes the situation worse.
Tact manifests both in what is said and what remains unspoken or untyped. There is substantial overlap in this regard with respect and cyber-respect, but while respect is probably a requirement for tact, it is possible to respect another cyborg without displaying tact. Furthermore, attempts to enforce tact tend to end in a lack of respect. Thus while providing suitable warnings is a thoughtful expression of tact, it can never be entirely ethical to make such warnings mandatory. To do so is to demand respect by denying respect, a peculiar contemporary moral blindness that comes from practicing the rules-focussed ethics of ‘rights talk’ in a complete absence of appreciation for the ethical traditions that lead to rights claims (that is, to fall prey to the moral disaster of individualism).
Robots display personal cyber-tact when they act considerately towards their humans in terms of when and how information is presented, and do not pursue unwanted displays of media or information. Pop-ups are a classic example of cyber-tactlessness, as are embedded videos that play when accidentally touched while scrolling through text (the BBC news website is especially bad for this). Our robots are inherently cyber-blunt (although they needn’t be): when was the last time your laptop said ‘please’ when it wanted to download and install an update? Not that long ago, computers said ‘please’ when you had to insert a disc into a drive (see the image above): now, they just bully you into updating whether you want to or not.
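To make this concrete, here is a minimal sketch of what a cyber-tactful update flow might look like - a hypothetical illustration in Python, with a console prompt standing in for a real operating system dialogue (nothing here is an actual vendor API):

```python
from dataclasses import dataclass
from enum import Enum

class Consent(Enum):
    YES = "yes"
    NOT_NOW = "not now"
    NEVER = "never"

@dataclass
class Update:
    name: str
    size_mb: int

def ask_human(prompt: str) -> Consent:
    # Stand-in for a real consent dialogue: ask on the console.
    answer = input(prompt + " [yes / not now / never] ").strip().lower()
    if answer == "yes":
        return Consent.YES
    if answer == "never":
        return Consent.NEVER
    return Consent.NOT_NOW  # anything else is treated as a deferral

def tactful_updater(update: Update) -> None:
    # The cyber-tactful pattern: supplicate, accept refusal,
    # and remember the refusal so the human is not nagged again.
    answer = ask_human(
        f"Please may I download and install {update.name} "
        f"({update.size_mb} MB)?"
    )
    if answer is Consent.YES:
        print(f"Thank you. Installing {update.name}...")
    elif answer is Consent.NOT_NOW:
        print("Of course. I'll ask again next week.")  # back off, don't nag
    else:
        print("Understood. I won't mention this update again.")

if __name__ == "__main__":
    tactful_updater(Update(name="Security patch", size_mb=120))
```

The specifics don’t matter; the shape of the interaction does: the robot asks, accepts refusal, and remembers it, rather than downloading and rebooting regardless.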
Cyber-tact can also hypothetically manifest socially, when a robot encourages its human to behave with tact. It is far from clear that this ever happens in practice, and all the problems of maintaining respect against the mask of anonymity apply equally to tact. The root problem here is that concepts such as politeness, consideration, or toleration require a social imagination - something that beings of various kinds are capable of, but which lies well beyond the programmatic capabilities of robots. This means any viable form of social cyber-tact must leverage human capabilities in order to work.
Designing robot systems to augment tact presents a significant challenge. Suppose a social network were to attempt to train its humans in tact by adding a policing system, such that tactless or blunt remarks were flagged by the community as such. The net result would rapidly devolve into carnage, since some humans in digital public spaces will always abuse systems that are capable of causing harm. Not everyone does so, of course – but it only takes a small proportion of people to turn a minor design flaw into a disaster.
A classic example occurred in the design of The Sims Online. In the early version of this game, players could declare other players ‘trustworthy’ or ‘untrustworthy’. However, a group of players calling themselves the ‘Sims Mafia’ realised they could use this feature to shake down new players – threatening to blackball them as ‘untrustworthy’ if they didn’t hand over all their in-game money. The design of ‘public karma’ systems (as they are known) has avoided negative scores for precisely this reason, not to mention that humans will abandon tainted account credentials if necessary, in what has been called ‘karma bankruptcy’.
Now it may seem that this is irrelevant to the question of cyber-tact: couldn’t you just have the robot provide a positive tact score? Yes, and this would be the minimal case for cyber-tact. A positive tact system records when people report that others have been tactful – but such humans must necessarily already be capable of tact. The robot has displayed cybervirtue, but merely by tracking human virtue and thus encouraging the use of a tact its humans already possessed. Yet our problem is precisely that the kind of tact we now need exceeds our prior experience. What is most needed in terms of cyber-tact is a way for a robot to teach its human how to act tactfully amid the cultural collision of the internet. It is far from clear that this design question is actually soluble.
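As a sketch of what that minimal case might look like – entirely hypothetical names, and deliberately recording only positive reports, for exactly the reason the Sims Mafia example illustrates:

```python
from collections import defaultdict

class TactScores:
    """A positive-only 'public karma' sketch: the robot records
    reports of tactful behaviour and never records negative ones,
    so the system cannot be weaponised for shakedowns."""

    def __init__(self):
        self._scores = defaultdict(int)
        self._reported = set()  # (reporter, subject) pairs already counted

    def report_tact(self, reporter, subject):
        if reporter == subject:
            return False  # no self-endorsement
        if (reporter, subject) in self._reported:
            return False  # one report per pair, to blunt ballot-stuffing
        self._reported.add((reporter, subject))
        self._scores[subject] += 1
        return True

    def score(self, user):
        # Scores only ever grow, so there is nothing to threaten
        # anyone with - and nothing to escape via 'karma bankruptcy'.
        return self._scores[user]

karma = TactScores()
karma.report_tact("alice", "bob")
karma.report_tact("carol", "bob")
print(karma.score("bob"))  # 2
```

Note that such a system still only rewards tact that already exists; it records virtue rather than teaching it.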
Whereas designing for social cyber-respect may be a matter of giving up the mask of anonymity, social cyber-tact seems more challenging. In both cases, however, the design of robots can at least aim at personal cybervirtue: for example, by affording their humans adequate control over what they see or read, defending against unwanted displays of media, and supplicating when requesting an inconvenience (instead of demanding, as is far more common). If we think of our robots as ‘neutral tools’, the idea that virtue could apply to their function is lost on us. Yet we do not use a computer like a hammer (except when we are especially irate!), and we are more entitled than it may seem to expect it to say ‘please’ when it wants to do something that we do not.
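By way of illustration, here is a hypothetical ‘tact policy’ a robot might expose to its human – none of these settings correspond to any real product, but they capture the design aims just listed:

```python
from dataclasses import dataclass

@dataclass
class TactPolicy:
    # Hypothetical settings for a cyber-tactful client: everything
    # intrusive defaults to off, leaving the human in control.
    autoplay_video: bool = False    # never play on scroll or a stray touch
    allow_popups: bool = False      # no unsolicited displays
    ask_before_update: bool = True  # supplicate rather than demand

def should_play(policy: TactPolicy, user_clicked_play: bool) -> bool:
    # Media plays only when the human explicitly asked for it.
    return user_clicked_play or policy.autoplay_video

print(should_play(TactPolicy(), user_clicked_play=False))  # False
print(should_play(TactPolicy(), user_clicked_play=True))   # True
```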
More cybervirtues next week.
Usefully provocative as always. Some quick thoughts:
1. I believe that anonymity is a strong component of the toxic advantage-taking behaviors we see online, including when robots stop saying "please." Others believe I exaggerate its influence.
2. That said, I mulishly persist in thinking that when anonymity is baked into the structure of large-group communication among humans, advantage-taking emerges naturally. This belief comes from a combination of observation and what's still one of the most remarkable studies of the origins of cooperative behavior, Robert Axelrod's "The Evolution of Cooperation."
TEOC is very clear on this: humans are less likely to take advantage of each other when they recognize each other. Additional structural components are required for islands of cooperation to emerge, notably "indefinitely repeated interactions," but anonymity visibly reduces the perceived cost of defecting (in Prisoner's Dilemma terminology) and of advantage-taking in interactions (a toy sketch of this follows below).
I mention this because you might find it interesting to explore communication structure as an extrinsic component for expressing cybervirtue, and in particular how the lessons of The Evolution of Cooperation may apply.
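For concreteness, here's a toy simulation of that point – the payoff numbers are my own, following Axelrod's usual T > R > P > S constraints, not anything taken from the book:

```python
# Prisoner's Dilemma payoffs for the row player:
# T=5 (defect on a cooperator), R=3 (mutual cooperation),
# P=1 (mutual defection), S=0 (cooperate with a defector).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def titfortat_pair(rounds=200):
    # Repeated play between players who recognize each other:
    # tit-for-tat opens with cooperation and echoes the opponent's
    # last move, so mutual cooperation locks in from round one.
    my_last = their_last = "C"
    total = 0
    for _ in range(rounds):
        me, them = their_last, my_last
        total += PAYOFF[(me, them)]
        my_last, their_last = me, them
    return total

def anonymous_crowd(rounds=200):
    # One-shot encounters with strangers who can never retaliate:
    # defecting beats cooperating against either move (5>3, 1>0),
    # so everyone defects and collects the mutual-defection payoff.
    return sum(PAYOFF[("D", "D")] for _ in range(rounds))

print(titfortat_pair())   # 600: recognition + repetition sustains cooperation
print(anonymous_crowd())  # 200: anonymity makes defection the rational default
```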
3. TEOC doesn't go into why anonymity reduces the cost of advantage-taking, but I think you touched on it. It's that the better you know another person, the more likely you are to see that person as human. In other words, less anonymity produces more empathy, or the ability to perceive other persons as "human like me." As you felicitously put it, empathy is "social imagination."
Empathy on its own doesn't guarantee cooperation (felt as respect and expressed as tact). If you hate yourself, then knowing someone else as human just makes it easier to hate them, too. Fortunately for humanity, self-loathing is usually not a majority position. When most people equate "human like me" with "worthy of respect like me," structures that reduce anonymity (and thus increase empathy, even marginally) reduce advantage-taking and increase cooperative behaviors such as tactful communication.
4. This then guides the exploration of cyber-tact, whose presence or absence can be understood as a function of how well the structures that mediate communication promote, tolerate, discourage, or prevent anonymity.
Organizations that don't see their clients as individuals, as named and known human beings who are worthy of cyber-respect, will consider cyber-tact -- to the extent that the concept occurs to them at all -- to be an unnecessary cost. We can see examples of this in business organizations: Microsoft's shift with Windows 10 to push updates to your computer whether you want them or not, because Microsoft has decided it knows what's best for the faceless herd of its clients. We can see it in governmental organizations: nanny-state bureaucrats and authoritarian tyrants imposing regulations and requirements on anonymous millions, either because they believe they know what's best for them or simply because they can.
Neither the shepherd nor the wolf sees any reason to be tactful to sheep.
In both cases, advantage-taking follows dehumanization as night follows day. This is not a trivial point. Advantage-taking can be annoying when incidental, as when robots (whether software or bureaucrats) stop saying "please," but it can be very literally genocidal when the dehumanization of large groups of persons is deliberate.
I don't know that this is a slippery slope. But what if it is?
5. So: what I'm suggesting here is that (internalized) cyber-respect and (externalized) cyber-tact are emergent phenomena from the combination of the human capacity for empathy and communication-mediating rules that promote the growth of empathy by reducing anonymity.
This still leaves many unaddressed questions. How can robots have empathy if they aren't people? Do cyborgs deserve less empathy? (This is related to how "bioethicists" such as Peter Singer and "health care" bureaucrats -- ironic quotes very much on purpose -- are increasingly defining some persons as less than fully human in order to cut costs by euthanizing such unpersons. Another example of "sheep need no tact" resulting from dehumanization, and a perfectly rational concern for cyborgs.) Is cybervirtue even conceivable in a world where humans have stopped respecting each other? Why should we expect our robots to be better people than we are?
I'm enjoying this project of exploration, even if (or maybe because) the questions it raises about our future in a massively-connected world are pretty uncomfortable. Looking forward to the next installments!
Posted by: Bart Stewart | February 21, 2017 at 06:29 PM
Hi Bart,
Like you, I contend that anonymity is a root problem in digital public spaces, and indeed argue against anonymity in Wikipedia Knows Nothing in situations where individuals have any significant power. I concur with you on a number of points as a result. However, we have to be clear about what we mean by 'anonymity'; assuming an alternative identity is different from being disconnected from those we interact with.
With this in mind - and in deference to what I suspect grounds Brian's objections to your position - I would like to distinguish a MUD community (and its equivalents) from digital public spaces: in the MUD community, participants are anonymous in the sense that their assumed identities are not related to their conventional world identities. But of course, MUD players don't typically know who these people are either way, and so what happens is the formation of a new community. It's a village-versus-city situation: you don't get muggings in villages, because there is not the anonymity of the crowd to conceal you afterwards. The fact that the 1990s MUD communities frequently held 'MUDmeets' where people could meet face-to-face reiterates this point for me: there is a distinction between anonymity and joining a community under an assumed identity.
Regarding Axelrod's work, I wrote about this back in 2007 as part of the Ethics Campaign. You can find that discussion here, and I'd say it hasn't significantly aged:
http://onlyagame.typepad.com/only_a_game/2007/06/tit_for_tat.html
However, I would side with Mary Midgley in pointing to parental behaviours (specifically maternal ones) as a prerequisite for social behaviour in animals, with co-operation arising along this vector. I'm not enormously swayed by the concept of an 'evolutionarily stable strategy'... the word 'evolutionary' there is thought important by those who use this phrase, but I don't think it's actually doing any appreciable work. The long-term development of animal behaviour is swayed by whatever provides advantages - such as co-operation (as discussed at great length by Kropotkin). It is a mistake to presume organisms begin with self-interest: the primal state is interest. Self-interest is merely a stage along that path.
"Neither the shepherd nor the wolf sees any reason to be tactful to sheep."
This is a great phrase! And as for Windows 10, I very nearly namechecked this within the piece, along with Apple's relentless downloading of iOS updates (which frequently disable my wife's phone by using up all available memory). In the end, I left the name-and-shame out to save space. :)
I think you are correct to draw attention to 'dehumanisation' (or, equivalently, desensitisation to other humans) as a key issue here. This, indeed, is the difference between village and metropolis: it is not coincidental that street gangs, muggings and the like increase with population size. Also (Tokyo might be the exception), politeness tends to decline with population density. New York's brusque style is the epitome of this, but you'll find it in London and Paris too, to a lesser extent.
The large-scale digital public spaces such as Twitter and Facebook have equivalently greater risks of desensitising, dehumanising... anonymity is part of this problem, yet removing this option is not enough to solve the problem. Anonymity is not about whether you use your birth name: it's about whether you are part of a community. That, it seems to me, is where we are failing in the online metropolis.
Many thanks for this extremely thoughtful, detailed, and engaging comment - you made my day!
All the best,
Chris.
NB: Brian Green (mentioned in this comment) has responded to Bart on G+. That sidebar discussion can be found here: https://plus.google.com/115870743941134092422/posts/M1WivrSxV9d?_utm_source=199-1-1
Posted by: Chris | February 22, 2017 at 07:36 AM
I'll expand slightly on my comment here as well.
The main problem with blaming anonymity is twofold. First, it's treated like a silver bullet: as if we could just eliminate anonymity and thereby clean up online communities. As I pointed out in my G+ comment, we have these same problems in the offline world, where it's theoretically harder to be anonymous. Even in the village you have people who take advantage of others, using their actual names! The difference is that usually there's an established power structure, so that the victims feel helpless against the aggressors; and there a lack of anonymity (usually through social standing and reputation) often stops people from being able to resist being taken advantage of.
And this leads to my second point: anonymity can be a useful tool. Some people have legitimate pain in their personal histories they want to keep at bay; a spurned partner or lover may exploit a lack of anonymity to hunt someone down. Or some people may just want to reserve parts of themselves and not expose them to the world - things like their real names, locations, or ages. The reasons may be perfectly legitimate.
As you say, Chris, the real issue here is the lack of empathy between people. It's easy for some people to casually disregard the feelings of others online, and this leads to hurtful, harmful behavior when people don't make a connection. But having to post under a "real name" won't prevent this; it doesn't enforce empathy. I think education and cultural shifts will do more good here than taking away anonymity, which brings harms of its own.
Maybe I should write a blog post about this...
Posted by: Brian Green | February 22, 2017 at 04:56 PM
Hi Brian,
Oddly, I agree that enforcing 'real name' policies does not resolve the problem. The reason for this, I contend, is that the kind of anonymity causing the problem is not eliminated solely by fixing identities - although abuse might well be reduced under such circumstances (at least in principle) by making it tougher to create fake identities as cover for abuse. The ease of creating disposable accounts with no culpability on Twitter is part of the problem in that particular space.
In Wikipedia Knows Nothing my argument against anonymity does allow for the kind of situation you allude to here, where someone has reasons to conceal their identity. Yet systemic anonymity and legitimate reasons for concealing identity are nearly unrelated in the wider picture: using the minority's need for masks to uphold a culture of anonymity strikes me as problematic. The problem with systematic anonymity occurs when it runs contra to community, as it does in social media and, I argue, on the Wikipedia. However, since Wikipedia editors have some (marginal yet significant) power, the need for systems of culpability applies more strongly there than in social media.
MUDs make anonymity work because players are not entirely anonymous, but merely masked. They are masked, yet accountable and connected to the community (usually... there are 'free riders' even in MUDs, but they tend to be rooted out precisely because there is a community capable of taking action). In this respect, social media - which I view as having descended from the MUDs, at least in principle if not in practice - have lost their way by expanding in scale without making any allowance for what this entails in terms of the diffuseness of community. The asynchronous encounters of social media are also more likely to be emotionally charged and less likely to be diplomatic and cordial (as they were, and are, for the most part in MUDs - at least in my experience) because the format highlights fault lines.
As I said on G+, I would love for you to write a blog post on this topic. I think this is a supremely important issue right now, and advocates for different views on this ought to make their cases clear so we can all get a better grip of both the issues and our options regarding them. I contend, and you may not disagree, that different design features are a possible way of moving forward in the larger digital public spaces: but which features?
As I also mentioned on G+, I think we are running up against different meanings of 'anonymity'... I'm going to have a go at teasing these apart. Getting a firm statement from you would be extremely helpful in co-ordinating my thinking.
If you don't have time, I will understand - but I always welcome your involvement in discussions such as these, and hope that you might be able to write something that bears on the topic.
With love and respect,
Chris.
Posted by: Chris | February 23, 2017 at 07:33 AM
I wrote a four part series on online anonymity and privacy over on my own blog:
http://psychochild.org/?p=1465
http://psychochild.org/?p=1466
http://psychochild.org/?p=1467
http://psychochild.org/?p=1468
Enjoy!
Posted by: Brian Green | March 10, 2017 at 03:33 PM
Oh Brian, that's awesome! And you instantly got comments too - great stuff. I will try and plug this here when I get a chance... all is about to go crazy in my home life, though, as I'm off to the US in a week and the entire house has to be rearranged before we go. Madness. I will find some time somewhere.
Thanks again - this is brilliant!
Posted by: Chris | March 10, 2017 at 04:44 PM