Voice Assistants
May 17, 2018
Try this simple test with whichever voice assistant you use, be it Siri, Alexa, Google Assistant, or whatever: ask it “Which year was Star Wars: A New Hope released?” It’s almost certain it will tell you 1977. But this is incorrect. That is the year Star Wars was released, but the film did not become A New Hope until the modified version was released in 1981, adding the subtitle and episode number to the name and the opening crawl.
Now this is clearly a pedantic point, but shouldn’t your robot provide you with accurate information? Our sci-fi androids, like Data in Star Trek: The Next Generation, were characterised precisely by this kind of obsession with detail. But your smartphone has no intelligence of any kind: all it can do is search the internet and parrot back answers, sometimes wildly wrong ones, such as the one shown in the picture of Siri telling me The Legend of Zelda: Ocarina of Time was released in 1926, and attributing this ‘fact’ to Wikipedia.
What a voice-activated robot offers is convenience, and this is seldom something that encourages virtue, although that doesn’t make it inherently negative. Some forms of convenience are cyber-indolent, i.e. they encourage laziness; some (such as cars) are nowhere near as convenient as we tend to think; and some are relatively benign. My ice-making robot, for instance, is a more convenient way of freezing water than ice cube trays, and doesn’t obviously instil any bad behaviour in me. The risks in using a voice assistant depend entirely upon what it is deployed for: using it to fact-check, as the opening example highlights, is a rather bad idea; asking it to play a song or phone someone is rather less risky.
That said, I found that Siri repeatedly failed to action my command “call after school club” because it forgot this was a number in my directory and instead started searching the internet for nearby after school clubs. So I changed the contact to ‘After School Club Sausages’, which briefly worked, before again reverting to internet search. Now, my son’s after school club is a contact named ‘Regina Sausages’, and I say “Call Regina Sausages” to call them. This works reliably. But notice how I have had to adapt to my voice assistant and not the other way around.
Voice-activated robots are little more than heuristic computer programs triggering certain set functions and passing unknown commands to a search engine. But when we use them, we are relying on the computer systems of a central, corporate-owned server to do the legwork. In the process, they gain information about us that helps the company advertise to us and monetise us. There is nothing cybervirtuous about this arrangement, and perhaps we ought to be more cautious about what we are trading away for apparent convenience.
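The routing pattern described above amounts to something like the following minimal sketch. This is my own illustration in Python: the handle_utterance function and its keyword triggers are hypothetical, not any vendor’s actual code, but they capture the shape of the thing: a few hard-wired intents, and everything else shipped off to a search engine.

def handle_utterance(utterance: str) -> str:
    """Route a voice command to one of a few fixed functions, else fall back to web search."""
    text = utterance.lower().strip()

    # A handful of hard-wired intents, triggered by crude keyword heuristics.
    if text.startswith("call "):
        return "dialling contact: " + text[len("call "):]
    if text.startswith("play "):
        return "playing: " + text[len("play "):]
    if text.startswith("set a timer for "):
        return "timer set for: " + text[len("set a timer for "):]

    # Everything else is handed to a remote, corporate-owned search service.
    return "searching the web for: " + text


if __name__ == "__main__":
    print(handle_utterance("Call Regina Sausages"))
    print(handle_utterance("Which year was Star Wars: A New Hope released?"))

The first command matches a fixed intent; the second does not, so it goes straight to search, which is exactly where the wrong answers of the opening example come from.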
I appreciate you making some of yer thinking available for those of us who no longer have book budgets. My broad concern here is that people in general (via the cognitive biases that make up our takes on the world) can't grasp (aren't grasped by) effects that are complicated or multifactorial, that occur over time and/or space, that are somewhat abstract, etc., whether it's supply chains or global warming. So what is the new "hack" that might let them act/be other than how they are now?
https://www.youtube.com/watch?v=7g0hx9LPBq8
Posted by: dmf | July 09, 2018 at 09:08 PM
Hey dmf,
Your question here is a good one: what 'hack' can we use to put into perspective everything hidden from view in our relationship with technology? The new book can broadly be understood as an attempt at such a hack, by constructing this concept of 'cybervirtue' that takes a relational perspective on moral and behavioural issues. (There is also the concept of 'cyberg', and the shallow sightedness that goes with it, which is an explicit acknowledgement of the difficulties - we only see 'the tip of the cyberg' in each case.)
As with so much of my philosophy, and indeed philosophy in general, I don't think we can bring about substantial change by aiming directly at changing everyone's thinking, so you start with a shift among the abstract thinkers. When enough momentum is gathered towards a change in abstract thought, then comes the possibility of shifting perspective more widely. Usually, that shift comes from concrete examples - thus Blue Planet 2 was able to wield influence regarding the plastic problem by presenting a very abstract environmental problem through concrete examples of plastic contamination.
If this is too optimistic a view of the problem, that for me is offset by the necessity of coming at these problems with some faith in our capacity to resolve them. If we take the opposite view, and start from the impossibility of action, doom is guaranteed. Therefore, if we hope for change, we must start by 'swallowing the dilemma' and deciding we can bring about change. More than that, as I point out in the book, we should not doubt our capacity to change the world, since we already did.
Thanks for getting involved in the discussions!
Chris.
Posted by: Chris | July 16, 2018 at 06:22 AM
I will not be very formal in my comment; I take this as a blog and not an academic space. I think the medium or environment conditions the format of communication, like you writing a book from a blog, constrained to entries of 500 words or less.
This perspective you are presenting on our relationship with technology is new to me and opens up an interesting space for reflection.
Do you think that talking about cybervirtue is talking about cyber morality? If that's the case, do you consider cybernetics to be a qualitatively different tool than, say, a drill?
Finally, a small aside to reflect on: I do not know you at all, this is only the second time I have visited the blog, and I really do not remember how I discovered it. But just by looking at the image that illustrates the post I can deduce that you are from England, you were away from home, and perhaps you live in a rural area or travelled by metro. Cyber technology betrays our intimacy.
Posted by: Patricio | April 14, 2020 at 10:27 AM
Hi Patricio,
The question of whether talking about cybervirtue is talking about cyber morality depends somewhat on what we mean by 'morality'... if we take this to mean the difference between right and wrong, then talking in terms of virtue can be a way to sidestep the attempt at a clear division between the permissible and the forbidden.
For me, virtue ethics is still moral philosophy... there is more to morality than the line between right and wrong. Especially when it comes to thinking about our technological situation, I find it helpful to talk about the effects without leaping ahead to questions concerning where to draw a hard line on the acceptable. After all, precisely what has led us to where we are now is the assumption (now, I hope, brought into question) that since technology provides solely means towards ends, it is all permissible to make, regardless of what effects it has upon us!
As for your suppositions: I do live in England, I was outside of the house, but I don't live in a rural area - you may be right that this was taken while I was on a train, though. It is remarkable how much information is encoded in such an apparently tangential image! :)
As for the blog, this is a place where all comments are welcome. If we stray sometimes into academic territory, no-one should ever think that they are not free to voice their mind here. One of the great mistakes of the university world in my view happens when academics begin behaving like a mystery cult.
Thanks for commenting!
Chris.
Posted by: Chris | April 15, 2020 at 07:07 AM