The Ultimate Moral Computer?
March 20, 2012
Deep Judge is the universe’s most advanced computer, tasked with solving every possible moral question and dilemma once and for all. The question is, could such a machine be built – and should we want to?
What I am calling moral law is the position that questions of ethics and morality have definite solutions. It may be the case that our understanding of these matters gets better with time, and thus moral law is still compatible with ethical progress, but an adherent to this philosophical viewpoint holds that our dealings with one another depend ultimately upon reason, and that morality is not chimerical or mutable. Derek Parfit, whose gigantic On What Matters I am steadily chipping away at, is one contemporary moral philosopher who qualifies as a Law ethicist in my terms, and Immanuel Kant would be another example.
Kant and Parfit are unlikely bedfellows in many respects. Kant was an eighteenth century Prussian Christian, while Parfit is a twentieth- and twenty-first century positivist whose early philosophical work is drenched in science fiction thought experiments involving brain fission and teletransporters. However, they share a faith in reason and logic – in the reliability of mathematics, and the applicability of such systems to the world around them. When I first discovered that the later Parfit had turned Kantian, it came as something of a shock, since he had been so dedicated to outcome-focussed ethics (such as Utilitarianism) in his younger days, but in fact the ‘alliance’ is perfectly sensible. Both have a measured commitment to reason as a means of solving problems. Kant’s faith in God supported his belief that ethical matters – while left to humanity to address – would converge on a single solution. Presumably Parfit has sufficient faith in empiricism, mathematics and objectivity to believe similarly. He writes: “Like answers to mathematical problems, moral judgements can be objective in the sense that they can be right and wrong, by being true or false.”
Supposing the moral law approach is correct, it would hypothetically be possible for future civilizations to build a mighty artificial intelligence that, given access to all the facts or the means to uncover them, could resolve every moral dilemma as a simple matter of computation. After Deep Thought, Douglas Adams’ famous mega-computer tasked with determining the answer to the ultimate question of life, the universe and everything, I suggest we call this hypothetical ethics machine ‘Deep Judge’. Deep Judge would not be a law-giver, although law-makers might certainly adjust statutes based on its conclusions, but simply an ethical problem solver. State all the morally relevant facts of a situation, and it would calculate the right or correct course of action.
On the basis of the way Parfit writes about ethics, I believe he would accept the idea of Deep Judge quite willingly, while recognising that it is also a flight of fancy. He might be keen to point out that what constitutes ‘morally relevant facts’ would leave some situations undetermined, and also that the premises Deep Judge would be given might be subject to revision as the moral philosophy of its creators solved new problems. Conversely, I suspect Kant (after a primer in contemporary technology!) would consider Deep Judge something of an abomination – God, he might say, may know the answers to all matters ethical, but man should not presume the same breadth of vision. To Kant, as a Christian, Deep Judge might well seem to be idolatry.
Without an appeal to any form of theism, I too would consider Deep Judge a horror rather than a blessing. What appals me in this concept is the idea that moral matters are ultimately calculations to be computed, an attitude that privileges outcome-focussed ethics (consequentialism) over the alternative approaches of agent-focussed (virtue ethics) or duty-focussed (deontological) ethics. This bias is extremely contemporary – the majority of philosophers working in ethics today seem to lean towards utilitarianism or other consequentialist approaches, although this is by no means a consensus view. Since philosophers, like scientists, are at heart nerds, this preference for mathematical solutions to ethical problems is perhaps inevitable.
Outcome-focussed ethical views from their very outset were based upon faith in mathematics. Eighteenth and nineteenth century social reformer Jeremy Bentham, considered the founding father of utilitarianism, appealed to a concept of ‘felicific calculus’ in expounding his ideas. This algorithm purported to measure the quantity expressed in the utilitarian aphorism “the greatest happiness for the greatest number”, which depended upon the idea that pleasure and pain would somehow be measurable. Oddly, even though our understanding of neurobiology has advanced to the point that it is clear this kind of measuring of happiness is patent nonsense, there has been little or no revision of prevailing utilitarian assumptions. Ethics, on this view, is still a form of algebra in disguise.
In Reasons and Persons, published in 1984, Parfit explored the felicific calculus concept and gave a name to one of its most questionable consequences – the Repugnant Conclusion. The idea is simple to grasp: if we can calculate ‘the greatest happiness for the greatest number’ as a product of enumerated quality of life and population, we can raise the overall felicity of a society by lowering the quality of life but increasing the size of the population. This leads to an optimal state with the highest conceivable population experiencing only marginally positive quality of life – Parfit’s Repugnant Conclusion. Parfit rejected the conclusion as unacceptable, but could not at the time provide a viable ethical theory that avoids it. His problem, which he and other outcome-focussed philosophers never seem to want to deal with, is that ethics is not at heart a mathematical exercise.
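The arithmetic behind the Repugnant Conclusion can be made vivid in a few lines of code. This is only a toy sketch, assuming (as a crude reading of the felicific calculus suggests) that total welfare is simply population multiplied by average quality of life; the population figures and the 0–100 quality scale are illustrative inventions, not anything Parfit or Bentham specifies:

```python
# Toy model of the Repugnant Conclusion: total welfare is assumed to be
# population size multiplied by average quality of life.

def total_welfare(population, avg_quality):
    """Aggregate welfare as population x average quality of life."""
    return population * avg_quality

# World A: ten billion people with excellent lives (quality 90 of 100).
world_a = total_welfare(10_000_000_000, 90)

# World Z: a hundred times more people, with lives barely worth living.
world_z = total_welfare(1_000_000_000_000, 1)

# On this purely aggregative measure, World Z counts as "better" than
# World A, despite every life in it being barely worth living.
print(world_z > world_a)  # True
```

Any aggregative measure of this form can be pushed to the same result: as long as quality stays marginally above zero, a large enough population always wins.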
Kant, I believe, would argue against this entire approach to moral philosophy since it fails to recognise the dignity of humanity in its individual persons. As Allen Wood (the reigning expert on Kant’s moral thought) has cogently argued, the essence of Kant’s approach lies in his Formula of Humanity, which states that we should act in such a way as to treat other people always as ends and never merely as means (that is, to recognise people have their own goals, and thus to avoid treating them merely as tools). This kind of approach to ethics, which amounts to an attitude of mutual respect towards other people, is simply not compatible with resolving ethical quandaries as a matter of pure calculation. To do so is an attempt to abstract away from people as living, autonomous entities, and instead to deal with them simply as quantities to be maximised. This may appeal to introverted maths nerds, but it is not acceptable as an ultimate moral theory.
Deep Judge represents a way for an imaginary future society to cede to a machine the difficult task of working out how to live together. Rather than having the discussions and arguments, and facing down the conflicts and collisions, Deep Judge is the reclusive Law ethicist’s fantasy of solving human problems without having to deal with all the messy business of real human relations. Kant, for all his commitment to moral law and his borderline autistic introversion, would not have accepted this as a reasonable solution to the problems of ethics. Morality occurs within and around those interactions that occur between imaginative beings, and no computer program is capable of replacing that experience.
As a mere thought experiment that's probably been done: what happens if you add (let's say) a distance measure of "relatedness" or "careness" to the "number" measure of a consequentialist ethics, such that different people can calculate different answers to the same question based on their relationships with the people involved?
Certainly this can neatly solve the Repugnant Conclusion if the measure is designed so that any number of low-careness individuals can be outweighed by one high-careness individual.
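A minimal sketch of what such a careness-weighted measure might look like, purely as an illustrative assumption (the function name, welfare values and care weights are all hypothetical):

```python
# Hypothetical careness-weighted consequentialist measure: each person's
# welfare counts in proportion to the deciding agent's care for them, so
# different agents can compute different answers to the same question.

def weighted_welfare(people):
    """Sum of welfare x care-weight over (welfare, care) pairs."""
    return sum(welfare * care for welfare, care in people)

# One deeply cared-for person versus many near-strangers whose care
# weights are close to zero.
family = [(10, 1.0)]                 # one person, high careness
strangers = [(1, 0.0001)] * 50_000   # many people, negligible careness

print(weighted_welfare(family) > weighted_welfare(strangers))  # True
```

For the single high-careness individual to outweigh *any* number of strangers, the care weights would need to shrink towards zero fast enough; with a fixed small weight, a sufficiently vast population still wins.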
If one assumes that Deep Judge could be built with such a framework, I think some very interesting discussions ensue :-).
Posted by: Peter Crowther | March 22, 2012 at 11:16 AM
In Metaethics, the notion of an idealised agent, or some better version of you, is posited as a thought to illustrate the conceivability of moral realism.
Perhaps put in another way, can we imagine a version of ourselves where we would be a much more upstanding and better moral agent? If we can, and we can also imagine that this agent would make different decisions, then we can also see the conceivability of moral facts.
This is a 'second order' ethical issue, and your post about Deep Judge is about the moral desirability of knowing if there is a 'correct' answer to a moral issue. This issue presumes moral realism is true in my view, which, if we are doing metaethics, we would have to argue independently for, but that's all an aside.
You make a sound point when you say that some moral situations are necessarily messy: difficult situations will arise in which you will have to lose in some way, or cause harm, whatever you do. It is sometimes said that a good theory is one that does not solve problems but illuminates what makes something problematic, and moral agency certainly presents problematic situations of this kind, which an ethical theory must illuminate as a desideratum.
I'm reminded of an episode of Red Dwarf, "The Inquisitor" (which I'm sure you are familiar with), where a cyborg-like being serves as a moral arbiter for the characters, judging whether they are 'worthy' of existence. When Lister is inevitably judged unworthy, he is set to be replaced by an alternative self, conceived from a different sperm, who would have lived a much better life.
There is something insightful about having the option of knowing that a better moral life or set of decisions is conceivable and possible. It's fair to say that if we live by a set standard of moral behaviour, there is always scope for individuals to stray from or fail those ideals.
A single, certain calculus or outcome may not be feasible in any specific instance, but counterfactuals serve a morally instructive purpose, especially when we look back at decisions we regret and consider whether we could have done otherwise.
Interesting and provoking post. I think you overplay Kant's theism though, but that's a minor point.
With Regards
Michael
Posted by: Michael Pereira | March 25, 2012 at 07:08 PM
Peter: this kind of constraint to relatedness has been discussed, though more often informally than formally. Mary Midgley explored it via the notion of concentric circles of care when discussing our relationship with non-human animals, for instance. I suspect that as a solution to the Repugnant Conclusion this wouldn't fly, though, since it would mean immediately giving up the core tenet of Utilitarianism that everyone is treated equally. :)
Incidentally, this morning I was continuing my reading of volume 2 of Parfit's "On What Matters" and found (quite near the back end) an argument he presents against numeric valuation of "goodness" that is somewhat pertinent to this issue!
Michael: Volume 2 of Parfit's "On What Matters" is precisely a carefully presented argument in favour of moral realism. It is because he argues for this that I presume he would approve of Deep Judge. I actually like a lot of Parfit's argumentation precisely because it moves away from some highly defeatist positions, although he goes too far in the other direction for my tastes.
"I think you overplay Kant's theism though, but that's a minor point."
I wondered about this... But even if this is the case, I don't think Kant would approve of Deep Judge, since it would substitute a machine for humanity. Given his beliefs about the special qualities of human existence, I presume he would oppose this.
Best wishes!
Posted by: Chris | March 27, 2012 at 10:49 AM
Minor addendum:
This quote from volume 2 of Parfit's "On What Matters" may somewhat contradict what I claim here regarding his opinion of Deep Judge:
"There are... difficult moral questions... which may have no answer".
Nonetheless, Parfit does subscribe to moral realism and cognitivism, and it seems to me that in essence he believes Deep Judge is at least possible.
Posted by: Chris | March 27, 2012 at 11:23 AM