October 02, 2007
We all care about the future – after all, we hope to live there one day, despite the pesky inconvenience of always being stuck in the present. But is it possible to have an ethics of the future? And if not, why not?
Future ethics is necessarily concerned with outcome-focussed ethics. After all, the outcomes about which we are talking are projected into the future. Thus, the question of future ethics is tied up with Consequentialism – the ethical school that is primarily concerned with outcomes, rather than rights (rules) or the virtues of agents. The basis of Consequentialism is the idea that an action is morally good if it has good consequences, a belief that is often summarised in the maxim “the ends justify the means”.
There are three essential problems with Consequentialism: firstly, the conception that an action is good if it has good consequences fails to provide a working definition of ‘good’, although in practice the issue of determining what is ‘good’ plagues all moral philosophy. (Indeed, G.E. Moore coined the term ‘naturalistic fallacy’ for attempts to define ‘good’ in other terms, arguing that the question “what is good?” always remains open). Secondly, judging an action as morally good based on its consequences presumes we can accurately predict the outcome of our actions – as Nietzsche comments: “But does one know the consequences? Perhaps as far as five steps.” There are many processes about which we can confidently predict an outcome, but even taking all these into consideration our capacity to accurately predict the future remains feeble. Finally, the idea that “the ends justify the means” allows for the most fearsome atrocities to be conducted in the name of a sufficient end – as indicated by the horrors of the Inquisition. Do we really want to employ a system of ethics that permits any evil provided we can conceive a justification in terms of some future good?
The most commonly espoused form of Consequentialism is Utilitarianism, which is generally attributed to Jeremy Bentham. Bentham considered pleasure and pain to be the only intrinsic values in the world, and thus conceived of an ethical system whose purpose was to minimise pain and maximise pleasure, which he termed “the greatest happiness principle.” At its heart, the goal of Utilitarianism is sound – it expresses the idea that “the needs of the many outweigh the needs of the few”, to quote (of all people) Leonard Nimoy as Spock in Star Trek II: The Wrath of Khan. The problem with Utilitarianism is that it is utterly untenable as a formal system, and as an informal system becomes little more than a mildly altruistic form of egoism (i.e. acting in one’s own self interest).
The problems with Utilitarianism are legion. For a start, it inherits all the problems of Consequentialism in general, and in particular the issue that we are not able to predict the future accurately and thus relying solely on outcome-focussed ethics leaves us guessing far too often. Additionally, the assumption that happiness is a comparable quality is untenable. Bentham believed (having faith in the remarkable progress of scientific thinking in his time) that it would be possible to construct a calculus to compare happiness – what he called a felicific calculus, but which later became termed a hedonic calculus. Unfortunately for Utilitarianism, no such calculus seems possible. In fact, perhaps the only form of calculation of this kind that can be made is when we are comparing the loss of lives – in this instance, and perhaps in this instance only, some form of Utilitarianism can be applied, for instance, in the case of the Trolley Problem.
The real death knell for Utilitarianism (or at least, the real challenge for people who wish to defend it) came with John Rawls’ A Theory of Justice. In this book, Rawls attempts to provide an interpretation of his idea of “justice as fairness”, and to develop Kantian ethics into a formal system that could be pragmatically implemented. Rawls’ project largely fails because of the number of assumptions he makes which are open to debate, although it still contains a great many interesting ideas. An unexpected side effect of Rawls’ work, however, is the exposure of fatal issues in Utilitarianism. For although Rawls’ formal Kantian system certainly would not be universally accepted, he was at the very least able to develop such a formal system – while it is becoming increasingly apparent there is little hope that an equivalent formal system could be developed for Utilitarianism. In fact, Rawls’ book is more effective in undermining Utilitarianism’s credibility than it is in advancing his own position.
The guiding idea behind Utilitarianism is sound – the idea that we should act towards “the greater good” – but alas no form of Utilitarianism proposed thus far has provided any viable mechanism for establishing how such decisions should be made. Since it now seems that human beings are particularly poor at predicting how future events will make them feel, and thus at gauging the best actions to bring about their own happiness (as shown by Daniel Gilbert’s research), it is hard to imagine any credible form of hedonic calculus. Happiness is not only incommensurable between people, it is not within our power to predict!
None of these issues are reasons for individuals not to adopt a Utilitarian or similar outcome-focussed ethical system. As individuals we are free to choose whatever ethical system we wish, including the ‘minimal case’ of egoism. But collectively, these problems give us sound reasons for denying our politicians the capacity to use Utilitarian justifications for their actions.
State leaders are always able to cite ends of sufficient seriousness to validate their actions – “national security” is an oft touted but rarely justified excuse for all manner of human rights violations. In the light of the severe problems with Utilitarianism, do we really want the leaders of nations using the logic of Consequentialism as the basis for their actions? If we do, we face governments that are willing to use any means to achieve ends of their own devising, and which need not reflect the wishes or best interests of the citizens they (in democratic countries, at least) represent. As Hannah Arendt said: “As long as we believe that we deal with ends and means in the political realm, we shall not be able to prevent anybody’s using all means to pursue recognised ends.”
In order to make Utilitarianism viable, it is necessary for its application to depend on something more than the judgement of the individual (which, since we are assessing future outcomes that are inherently difficult or impossible to predict, allows the individual to justify almost anything they please). One way to recover credibility is Rule Utilitarianism, which combines outcome-focussed ethics with rights-focussed ethics. The idea here is that although utility (“greater good”) is still the guiding principle, it is not applied on a case-by-case basis (as in Act Utilitarianism) but rather used as a method of determining which rules will apply, and therefore which rights we shall have. At least in Rule Utilitarianism, we have the capacity to say certain means are not acceptable. However, this assertion may also be equivalent to saying that outcome-focussed ethics are insufficient by themselves – we must have some agent or rights-focussed aspect as well if the ethical system is not to collapse under the capacity to abuse it.
If Utilitarianism is to be rescued from the scrapheap of failed ideas, it may be necessary to begin by denying the viability of “the ends justifying the means” as a political tool. We should not allow our leaders to validate their actions in terms of the importance of their supposed ends – they can always claim that the threat to the nation is real, and thus justify any means to be employed. Instead, I suggest we should stand firm behind the human rights agreements we have already made and insist that some means are simply not permitted by these covenants. Torture, inhuman or degrading treatment, slavery or involuntary servitude, and propaganda advocating either war or cultural hatred are all prohibited under the United Nations’ Universal Declaration of Human Rights. We should hold our governments to this agreement.
We cannot get by without using outcome-focussed ethics in certain situations – as the Trolley Problem amply demonstrates. But equally, we should not allow solely outcome-focussed ethics to govern our societies when the consequences of doing so can be so dramatically detrimental. I contend that it is never in the interests of the many to allow politicians to use Consequentialist logic to justify their actions. There can be no comprehensive and reliable ethics of the future – and to continue to believe otherwise is to hand our leaders a powerful tool to commit atrocities in our names. We should demand more from our societies – and by our actions, we can ensure that these demands must be met.
The opening image is Mechanics 3000 by Dawid Michalczyk, which I found here. Prints of Dawid's work are available from the artist's site. As ever, no copyright infringement is intended and I will take the image down if asked.
The attempt to predict the future lies at the heart of strategic planning in almost any of today's companies, especially if the company intends to invest or bet a lot of capital on future market success - so it is in the economic arena that the prediction of future outcomes is now the norm rather than the exception. I suppose it is because of this widespread practice that outcome-focussed ethics has also become so commonplace (as Arendt suggested some 50 years ago).
Posted by: translucy | October 03, 2007 at 05:44 PM
Despite the aspiration to challenge my core beliefs and provoke my most ardent debating faculties, I haven't felt too provoked or inspired to answer...a dull but busy few weeks of data mining, and not much intellectual inspirado. So all I have are these bullet-point-style notes.
Utilitarianism : pleasure and pain are entirely subjective measures, and measures of utility at the societal level are even harder to gauge. Even at the individual level one doesn't exist in splendid isolation. So this is an entirely artificial metric on which to judge ethical decisions.
Prediction : Well, never say never. People are creatures of habit. Prediction for a person is hard - but this doesn't mean it is methodologically intractable (it is quite hard, but we're improving). Data mining is a wonderful thing.
On a larger scale, some experts claim that everything is ultimately engineerable, some disagree. Penrose claimed that consciousness was non-algorithmic, and that's all you need to deny any predictability of human action (based on current technological capability or potential). Of course, we may improve in that regard. The issue here is that the means of prediction must belong to processing-resource-rich parties. Then we need to envision an existence where we are all processing-resource-rich (approximately, you'd need processing for each person that would dwarf that possessed by all the world and all its computers today).
Again, that is not inconceivable, but you're into science fiction and the possibility that we would cease to be human in any meaningful way.
(Wrote about prediction before, not sure how much I agree with my past self though!)
Ethics : since it's the future, whose ethics are we discussing anyway? It's the future, it doesn't belong to us. The people who'll live there may not thank us for our democratic decisions, even if they are our future selves. After all, when you talk about the government's will vs the will of the many, who has the better insight when it comes to the future? If one person is very bad at prediction, then a large group of them will be statistically worse, because of crowd influence effects.
Ends justify the means : well of course not! Except, maybe...sometimes this is all we have to go on! The entire green movement is based on this, since we cannot tell if what we do to combat global warming has any effect for many years. We can barely prove that we contributed to the warming in the first place!
So yes, we need a mechanism to prevent leadership abusing this principle. National security, to my mind, is one of the less important things we have to worry about (since I don't rate the concept of the nation very highly). So human rights accords are a good start. More than this, we want input from all informed parties in a timely fashion. This is why I'm interested in the online petitions regarding Burma - democracy needs to be fast. Four-year-old verdicts let leaders off the hook. People should be allowed to voice their opinion instantly, in a way which can be tallied - massively multiplayer nation states. I believe that inherent human morality should serve us sufficiently, when it is expressed in large enough numbers. If there were fewer rules, but more inclusive, flexible and timely systems of emancipation, I think we could begin to see the best aspects of a humanity which evolved in tight-knit social groups exported to global effect.
The Future : the biggest issue with most arguments in principle, and thus with most ethical discussions, is that they don't sufficiently address change. All is change, and most trends of change are slower, faster or a different macro shape (i.e. cyclical, linear, etc.) than we intuitively experience them as. From my perspective, the beauty of living now is that we have come to realise our experience of existence is a model provided by our senses, and this gives us the perspective to understand that much of local, relevant reality is amenable to modelling. In turn, this will allow us to formulate, with greater or lesser accuracy, the circumstances and trends of change that we find ourselves in. There is hope in that, the hope of control. Much as your hope, Chris, seems to lie in basic human goodness, mine is bet on basic human intelligence.
Posted by: zenBen | October 17, 2007 at 03:04 AM
zenBen: thanks for getting around to this! It's interesting to see your perspective here...
On the subject of prediction, while we will no doubt get better at identifying trends, this is insufficient to predict the future to any reasonable degree, and we already see many limits to our methods. I don't see even a hypothetical future predictive capacity sufficient to base ethics solely on consequences.
Regarding whether future ethics are other people's ethics, well, yes from a certain perspective, but what we are really talking about are present ethics about the future - for which you need predictive capacities, which as already discussed are not robust enough for the task. :)
I feel it is important to deny that the ends justify the means, and stand firm that certain means are not acceptable. This is the human rights position, of course.
"massively multiplayer nation states" I have mixed feelings on this... there are benefits to that speed of response - but there are disadvantages too. We are already well aware of the limits of majority rule, and this kind of approach can heighten these problems even further.
Your optimism about the capacity to predict change is not something I can share; I just don't see anything like the capacity you are hoping for coming about. The odd thing might be predicted in isolation - we will certainly get better at this in certain cases - but I don't see this ever forming a capacity to fully anticipate change. The nature of change is that we are so busy worrying about what we think we know that we fail to anticipate where change will come from. :)
"Much as your hope, Chris, seems to lie in basic human goodness, mine is bet on basic human intelligence."
You're betting on human intelligence? You are a brave man! :) Be careful you don't project your own intelligence onto the rest of the world...
So, in summary, you seem to reject my attack on Utilitarianism on the grounds that you believe the problems of predicting the future will be overcome in the future. But what will you do for your ethical system until then? ;)
Many thanks for taking the time to write this up, and best wishes!
Posted by: Chris | October 17, 2007 at 01:32 PM
No, I don't think I am rejecting your attack on Utilitarianism as it stands - it's sound enough to withstand simple attacks, and I have no urgent need to defend Utilitarianism. I may be in that box as it stands, but that's due to a muddiness of personal definitions. I am only ever aware of my position with respect to some thing. As I've said, principle arguments (where you argue to the general case) require more precision than they usually get short of world-class professional philosophy (those that stand on porches for 20 years).
Thus, I've offered my ideas on the topics at hand, but they are only facets of the argument, and the whole is too far to go. If you want some kind of rebuttal, I can only say that I don't think your post has the necessary depth to be a convincing argument either. Both of us, I suppose, are constrained by working out of our armchairs.
Posted by: zenBen | October 17, 2007 at 05:21 PM
"Both of us, I suppose, are constrained by working out of our armchairs."
Indeed; in order to keep this piece short enough I had to cut various details, including all of Rawls' arguments against Utilitarianism, thus missing out the meat of the argument.
Also, I accept your point that utilitarianism is simply the name of the country within which your ethics can currently be found, and the destruction of the "Government of Utilitaria" doesn't stop you living there. :)
Posted by: Chris | October 17, 2007 at 08:28 PM
zenBen: "From my perspective, the beauty of living now is that we have come to realise our experience of existence is a model provided by our senses, and this gives us the perspective to understand that much of local, relevant reality is amenable to modelling..."
This is pretty much what Plato already talked about, is it not?
"In turn, this will allow us to formulate, with greater and lesser accuracy, the circumstances and trends of change that we find ourselves in."
It's the combined ideas of phase/state space (Galilei-Newton-type Mechanics and all that followed), convergence and calculability that are new, but are they meaningful to what you hope will arrive one day?
I'd argue (following the late Douglas Adams ;-) ) that the only way to calculate "The Model" is to do one thing - live.
Posted by: translucy | October 17, 2007 at 09:45 PM
...and add the idea of probability to the above list...
Posted by: translucy | October 17, 2007 at 10:13 PM
"This is pretty much what Plato already talked about, is it not?"
Plato didn't have computers and fMRI!
Posted by: zenBen | October 18, 2007 at 12:18 PM
"Plato didn't have computers and fMRI!"
Just as well. We've spent two thousand years trying to dig out from under the problems Plato handed to us - I dread to think what mess he could have made had he had access to computers! >:-)
Posted by: Chris | October 18, 2007 at 01:13 PM