Is it possible to be truthful without understanding the truth? When our sense of truth vanishes, what does it mean to be honest?
At a time when ‘fake news’ and ‘alternative facts’ have become everyday points of discussion, it may seem we have lost our moorings with respect to the truth. This growing crisis of knowledge has long been recognised by philosophers, and indeed many other academics, and what we are seeing now is less the breakdown of truth than the dawning public realisation of the extent of a problem that had previously been corralled within the ivory towers. Part of the problem we face is that understanding knowledge in terms of what is or is not true makes it seem as if knowing is just a matter of having the correct set of statements. The situation looks very different when the facts are understood as something created through the exercise of authentic knowledge.
You have knowledge whenever you are able to act reliably. You have practical knowledge of a car when you can repair it reliably; you have social knowledge of people when you can communicate reliably with them; you have scientific knowledge of an alpha particle when you can detect it reliably. Telling me how a car works, or where people go, or what an alpha particle is doesn’t necessarily show knowledge. It is often just repeating what those with authentic knowledge have said. The facts are true when the practices from which they emerge are reliable.
When we are not the person with the knowledge, what matters is whether we are being a reliable witness to those who do have it. This idea, drawn out in a novel way by chemist-turned-philosopher Isabelle Stengers, connects all situations where facts are being claimed. For Stengers, when a researcher acquires knowledge of an alpha particle, they become a reliable witness for the alpha particle, which makes a great deal of scientific research a matter of finding ways to make inanimate objects ‘speak’. To have the facts is to be connected to the knowledge that established them by a chain of reliable witnesses, where you are the final link in the chain. On this understanding, which I developed in Wikipedia Knows Nothing, the truth of the facts is peripheral to there being both an authentic knowledge at the root of any claim, and a chain of reliable witnesses running from this to whoever is speaking.
When we take being truthful to be about stating true facts, it can obscure the moral qualities relevant to honesty. There is something to be said for the Christian conception of ‘bearing false witness’ as a description of the debility of deceitfulness: it stresses the breaking of the chain of reliable witnesses, and makes the standard of virtue hinge upon the act of witnessing for others rather than upon possessing true facts, as if honesty were a surrogate for forensic investigation or empirical experiment. To be honest or sincere is to be a reliable witness, whenever that is what is called for. When someone has a terrible haircut, for instance, remarking on how striking it looks is a very different response from telling them they should get that haircut again! Withholding comment is not necessarily a sign of dishonesty, and other virtues such as tact may make remaining silent or being selective with remarks the appropriate choice in many situations. Only when we bear false witness have we acted with tangible deceitfulness.
As cyborgs, however, we are constantly tempted into bearing false witness. When a story comes to us through the digital public spaces, we sometimes retweet, reshare, or otherwise propagate unreliable testimony – in sharing these stories we are acting as witnesses, and as such there is a tacit duty to at least minimally confirm the veracity of what we are passing down the chain. Our robots are, in these cases, incapable of being cyber-deceitful: they reliably repeat what was said. Yet at the same time, cyborgs communicating in this way are cyber-gullible: the sheer ease with which whatever attracts our attention gets flagged for repeating discourages careful checking of what is being claimed – especially when the political and ideological leanings of the message align with the human’s view of the world. Thus ‘fake news’, which is the propagation of false witness, can proliferate easily.
Could we design the digital public spaces to be cyber-honest instead? Even the decision to work towards this would be a severe break from the tenor of the internet, where click-bait is carefully cultivated to misrepresent or to package vacuous fluff as if it were interesting, all in the relentless pursuit of our attention. Yet it does not seem inherently impossible to design for cyber-honesty, although any mechanism for drawing attention to gaps in the plausibility of claims being repeated runs severe risks of being abused for cyber-cruelty or exacerbating cyber-bluntness. A system for flagging errors or deceit would risk producing all the same problems as public karma systems that track negative behaviour – the potential for abuse could make anything like this untenable, unless the cost of blowing the whistle was high.
One possibility worth considering is a ‘warning flag’ that could be used to challenge a story as deceitful. To prevent abuse, these flags might have to be kept hidden from public eyes until a sufficient number had been raised, or until at least one flag had been raised by someone who had earned a trustworthy status in such matters. Trust in such a system could be staked: raising your flag unduly might result in the availability of that flag being withdrawn for a progressive period of time afterwards. In such an arrangement, deploying a flag frivolously would be to lose it for a while whilst a vindicated objection would raise trust ratings and perhaps provide additional flags. This system, if it could be made to work, would offer cyborgs the chance to become akin to honesty sheriffs on the digital frontier, for those willing to put in the time required to check the veracity of claims, while most would be free to ignore flagging procedures yet still be informed when a duplicitous story was being shared. It perhaps would be best if no story was ever prevented from being shared, and if flag-raising only affected the trust rating of that individual story, since the alternative would be censorship.
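The staked flag mechanism sketched above can be made concrete. The following is a minimal illustration only, under stated assumptions: all names (`FlagSystem`, `Flagger`, `Story`) and all numbers (the quorum of three, the trust threshold of five, the doubling cooldown) are hypothetical choices of mine, not part of any existing system.

```python
# Hypothetical sketch of the staked 'warning flag' mechanism described above.
# The thresholds and class names are illustrative assumptions, not a real API.
from dataclasses import dataclass, field

TRUSTED_THRESHOLD = 5   # trust rating at which one flag alone goes public
QUORUM = 3              # hidden flags needed otherwise
BASE_COOLDOWN = 1       # days a frivolous flag is withdrawn for

@dataclass
class Flagger:
    trust: int = 0
    flags_available: int = 1
    cooldown: int = BASE_COOLDOWN

@dataclass
class Story:
    flaggers: list = field(default_factory=list)  # who has challenged it
    trust_rating: int = 0                         # only this story is affected

class FlagSystem:
    def raise_flag(self, story: Story, flagger: Flagger) -> bool:
        """Record a hidden flag; return True once the challenge goes public."""
        if flagger.flags_available < 1:
            return False          # staked flag already spent
        flagger.flags_available -= 1
        story.flaggers.append(flagger)
        # Flags stay hidden until a quorum is reached, or until at least
        # one trusted flagger has weighed in - this is the abuse guard.
        return len(story.flaggers) >= QUORUM or any(
            f.trust >= TRUSTED_THRESHOLD for f in story.flaggers)

    def resolve(self, story: Story, deceitful: bool) -> None:
        """A moderator's verdict: vindicate or penalise the flaggers."""
        for f in story.flaggers:
            if deceitful:
                f.trust += 1
                f.flags_available += 1   # vindicated objections earn flags
            else:
                f.cooldown *= 2          # progressive withdrawal period
        if deceitful:
            story.trust_rating -= 1      # the story is marked, never blocked
```

Note that `resolve` lowers only the individual story’s trust rating and never suppresses it, matching the suggestion that flagging should inform rather than censor.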
Such a system would require moderators at first, but as cyborgs earned trust within it they could eventually operate without direct oversight, helping to overcome the scale problem that afflicts networks like Twitter and Facebook. In some respects, this proposal is not dissimilar to the way spam reporting works in certain contexts. The blogging platform Typepad, for instance, allows blog owners to manually flag spam comments, immediately deleting them and then watching for the same messages elsewhere on their blogs. Such an approach would not work in the digital public spaces precisely because of the collective qualities of those networks, but on a blog (where ownership is clear) it is adequate to the task it was designed for. It’s not clear whether anyone has thought about expanding these kinds of nuisance defences to ‘fake news’, but this is worth exploring now that the cyber-deceitfulness of ‘social’ media has become a significant problem.
An objection may occur: how do we deal with messages like a Christian posting “Jesus is the saviour of the world” or a Discordian posting “Did you know god’s name is Eris and that he is a girl?” If we think in terms of the truth of a statement, flags might be raised. But these cases entail no false witness, any more than do claims like “The USA is the greatest nation on Earth” or “The US is the greatest threat to the planet”. It is not because these are ‘just opinions’ that they are not examples of deceit, but because they are sincerely made statements entailing no claim to knowledge (understood as that which guides reliable action). Faith traditions do entail knowledge (of rituals, hymns, community life and so forth), but never metaphysical knowledge as such – if they did, there would be no question of faith in connection with them, and unwavering blind faith is always bad faith.
Stemming the growing tide of cyber-deceit – the use of the internet as a self-selecting propaganda machine – will entail taking a stand on knowledge. This will involve improving our collective critical skills, or at least taking greater care to distinguish deceitful statements from sincere but untestable, accidentally mistaken, or reliably grounded ones. False witness depends upon someone falsifying a claim, or repeating a falsified claim carelessly; this is not the same as bearing witness in a situation where no viable knowledge can be claimed, and we need to be clear about how we know things in order to understand the boundaries. Sharing the claims of someone who disagrees with ‘scientific consensus’ is not bearing false witness, no matter how much we disagree with what is said. Viewing knowledge as a bag of true statements deceives us in this regard, and we need to become smarter about tolerating dissensus since, as Jacques Rancière warns, democracy is impossible without disagreement. The key problem here is deceit, which is always a moral weakness even when deployed in pursuit of a supposed greater good. The unintentional false witness of our digital public spaces is merely the amplifier that transforms ‘fake news’ from problem to crisis.
Deceit goes beyond the tactful presentation of your views – there are always infinite ways to present your perspective, after all, and bluntness should not be unduly conflated with honesty. Deceit is the intent to deny respect by purposefully bearing false witness. We have let ourselves become so accustomed to duplicity that we think there is nothing we can do about it. We can still change this, but only if we discern treacherous speech from its alternatives. The truth was never out there – reality is out there, and that was never true but merely, unwaveringly there. The truth is what is sometimes revealed by authentic knowledge, and what is always destroyed by false witness. If we aim to be reliable witnesses, and place our trust in those who know how to repeat honestly, sincerely, and, indeed, truthfully, we can get far closer to the truth than we have so far managed.
The opening image is Ladybug, a 1957 painting by Joan Mitchell, which I found here at Alain R. Truong’s site. As ever, no copyright infringement is intended and I will take the image down if asked. My thanks to seymourblogger (@abbeysbooks on Twitter) for the suggestion of this artist.