
Voice Assistants

[Image: Siri claiming “Zelda 1926”]

Try this simple test with whichever voice assistant you use (Siri, Alexa, Google Assistant, or whatever): ask it “Which year was Star Wars: A New Hope released?” It is almost certain to tell you 1977. But this is incorrect. That is the year Star Wars was released, but the film did not become A New Hope until the modified version was released in 1981, adding the subtitle and episode number to the name and the opening crawl.

Now this is clearly a pedantic point, but shouldn’t your robot provide you with accurate information? Our sci-fi androids, like Data in Star Trek: The Next Generation, were characterised precisely by this kind of obsession with detail. But your smartphone has no intelligence of any kind: all it can do is search the internet and parrot back answers, sometimes wildly wrong answers, such as the one shown in the picture of Siri telling me The Legend of Zelda: Ocarina of Time was released in 1926, and attributing this ‘fact’ to Wikipedia.

What a voice-activated robot offers is convenience, and this is seldom something that encourages virtue, although that doesn’t make it inherently negative. Some forms of convenience are cyber-indolent, i.e. they encourage laziness; some (such as cars) are nowhere near as convenient as we tend to think; and some are relatively benign. My ice-making robot, for instance, is a more convenient way of freezing water than ice cube trays, and doesn’t obviously instil any bad behaviour in me. The risks of using a voice assistant depend entirely upon what it is deployed for: using it to fact-check, as the opening example highlights, is a rather bad idea; asking it to play a song or phone someone is rather less risky.

That said, I found that Siri repeatedly failed to action my command “call after school club”, because it forgot this was a number in my directory and instead started searching the internet for nearby after school clubs. So I changed the contact to ‘After School Club Sausages’, which briefly worked, before Siri again reverted to internet search. Now, my son’s after school club is a contact named ‘Regina Sausages’, and I say “Call Regina Sausages” to call them. This works reliably. But notice how I have had to adapt to my voice assistant, and not the other way around.

Voice-activated robots are little more than heuristic computer programs that trigger certain set functions and pass unknown commands to a search engine. But when we use them, we are relying on the computer systems of a central corporate-owned server to do the legwork. In the process, they gain information about us that helps the company advertise to us and monetise us. There is nothing cybervirtuous about this arrangement, and perhaps we ought to be more cautious about what we are trading away for apparent convenience.
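To make that mechanism concrete, here is a minimal sketch in Python of the kind of heuristic dispatch described above: a handful of pattern-matched commands, with everything else handed off to a web search. The intent patterns and handler names are entirely hypothetical, invented for illustration, and do not reflect any real assistant's code.

```python
import re

# Hypothetical handlers for the few commands the assistant actually "knows".
def play_song(title):
    return f"Playing '{title}'..."

def call_contact(name):
    return f"Calling {name}..."

def web_search(query):
    # Anything the heuristics do not recognise is handed to a search engine,
    # and whatever comes back is read out as if it were an answer.
    return f"Here's what I found on the web for '{query}'."

# Illustrative pattern-to-handler table; real assistants are far larger,
# but the shape of the logic is the same.
INTENT_PATTERNS = [
    (re.compile(r"play (?P<title>.+)", re.IGNORECASE), play_song),
    (re.compile(r"call (?P<name>.+)", re.IGNORECASE), call_contact),
]

def handle(command):
    command = command.strip()
    for pattern, handler in INTENT_PATTERNS:
        match = pattern.fullmatch(command)
        if match:
            return handler(**match.groupdict())
    # No pattern fired, so fall back to the search engine.
    return web_search(command)

print(handle("Call Regina Sausages"))
print(handle("Which year was Star Wars: A New Hope released?"))
```

The point of the sketch is simply that there is no understanding anywhere in the loop: either a set pattern fires, or the query is shipped off to someone else's server.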

A Hundred Cyborgs, #1

Comments


I appreciate you making some of yer thinking available for those of us who no longer have book budgets. My broad concern here is that people in general (via the cognitive biases that make up our takes on the world) can't grasp (aren't grasped by) effects that are complicated/multifactorial, that occur over time and/or space, that are somewhat abstract, and so on, whether it's supply chains or global warming. So what is the new "hack" that might let them act/be other than how they are now?
https://www.youtube.com/watch?v=7g0hx9LPBq8

Hey dmf,
Your question here is a good one: what 'hack' can we use to put into perspective everything hidden from view in our relationship with technology? The new book can broadly be understood as an attempt at such a hack, by constructing this concept of 'cybervirtue' that takes a relational perspective on moral and behavioural issues. (There is also the concept of 'cyberg', and the shallow sightedness that goes with it, which is an explicit acknowledgement of the difficulties - we only see 'the tip of the cyberg' in each case.)

As with so much of my philosophy, and indeed philosophy in general, I don't think we can bring about substantial change by aiming at changing everyone's thinking, so you start with a shift among the abstract thinkers. When enough momentum is gathered towards a change in abstract thought, then comes the possibility of shifting perspective more widely. Usually, that shift comes from concrete examples - thus Blue Planet 2 was able to wield influence regarding the plastic problem by presenting a very abstract environmental issue through concrete examples of plastic contamination.

If this is too optimistic a view of the problem, that for me is offset by the necessity of coming at these problems with some faith in our capacity to resolve them. If we take the opposite view, and start from the impossibility of action, doom is guaranteed. Therefore, if we hope for change, we must start by 'swallowing the dilemma' and deciding we can bring about change. More than that, as I point out in the book, we should not doubt our capacity to change the world, since we already did.

Thanks for getting involved in the discussions!

Chris.
