My wife frequently accepts a playlist generated algorithmically by Spotify’s robots based on a starting point of her choosing. I rarely do this myself... it leaves a bad taste in my mouth. I much prefer listening to a playlist hand-picked by Don Letts or Justin Robertson, or indeed any other human whose musical knowledge I trust. It is not that I fear being manipulated by such algorithmic selections (although clearly that can and does happen in certain cases); it’s that relating tracks through a common pool of listeners inevitably favours the popular over the obscure, irrespective of the reason for that popularity... I ideally want to share in someone else’s experience of music, to find new things – or old things I didn’t previously know about. The robot has no knowledge it can share, only a capacity to blindly surf information derived from vast oceans of collected data via prescriptive algorithms.
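That popularity bias is visible in even the simplest co-occurrence recommender. A toy sketch (the listening data and track names here are invented for illustration, and real services are vastly more elaborate):

```python
from collections import Counter

# Hypothetical listening histories (one set of tracks per listener).
# Invented data for illustration only.
histories = [
    {"hit_single", "obscure_gem"},
    {"hit_single", "album_track"},
    {"hit_single", "album_track"},
    {"hit_single"},
]

def recommend(seed, histories):
    """Naive co-occurrence recommender: rank tracks by how often
    they appear alongside the seed track, most common first."""
    counts = Counter()
    for listened in histories:
        if seed in listened:
            counts.update(listened - {seed})
    return [track for track, _ in counts.most_common()]

print(recommend("hit_single", histories))
```

The widely-heard "album_track" outranks the "obscure_gem" purely because more listeners happened upon it – the ranking encodes nothing about why.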
The enthusiasm with which we have taken to accepting all manner of guidance from our robots – who, practically speaking, can have absolutely no understanding of what they are doing or recommending – is staggering. My son likes to watch YouTube videos of friendly people playing Minecraft, or kids unboxing rather expensive-looking Lego playsets. He discovered most of these interests because YouTube’s robots recommended them to him on the basis of what he'd already watched... which they also recommended. It doesn't bother him one whit, but it troubles me greatly. The entire concept of automated recommendation rests on the assumption that the input data was originally the result of voluntary choice – in my son’s case, it has been machine-curated from the very outset.
Yet whatever my concerns here, it is worth reflecting upon the way the robots are merely doing programmatically what humans already do voluntarily: constructing intellectual and aesthetic echo chambers. One of my many complaints about closed academic peer review is precisely the way it serves so effectively to cultivate a community of narrow vision – precisely the opposite of what we would hope for from our universities, if only we cared. Similarly, news media attract their audiences by sharing their political bias, which also allows their owners to influence the audience’s opinions rather effectively. Next to these kinds of intellectual prisons, algorithmic recommendations seem positively innocent!
Nonetheless, the cyborgs we make with these recommendation robots are cyber-blinkered – they risk a narrowness of vision that can range from the innocently circular to the disturbingly self-affirming. It’s hard not to wonder sometimes whether certain extreme views of the world have proliferated on the internet precisely because of the ease with which any perspective can find validation simply by entering the appropriate keywords into a search engine. That’s where the trouble began, in algorithms for effective indexing... but it certainly won’t be where any of this ends.
A Hundred Cyborgs, #23, requested by Tom Gerbicz (@YungGilbu)