
As a mere thought experiment that's probably been done: what happens if you add (let's say) a distance measure of "relatedness" or "careness" to the "number" measure of a consequentialist ethics, such that different people can calculate different answers to the same question based on their relationships with the people involved?

Certainly this can neatly solve the Repugnant Conclusion if the measure is designed so that any number of low-careness individuals can be outweighed by one high-careness individual.
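The weighting proposed above can be made concrete with a small sketch. This is purely illustrative, assuming hypothetical people, welfare values, and careness weights; note that with strictly positive weights, a large enough number of low-careness individuals can still dominate, so "outweighed by one high-careness individual" would require a zero (or lexically inferior) weight for strangers.

```python
# A minimal sketch of a "careness-weighted" consequentialist calculus.
# All names, welfare values, and weights are hypothetical illustrations.

def weighted_utility(outcomes, careness):
    """Sum each person's welfare change, scaled by the evaluator's
    careness weight for that person (tiny default for strangers)."""
    return sum(welfare * careness.get(person, 0.01)
               for person, welfare in outcomes.items())

# Two evaluators assess the same outcome, differing only in their
# relationships to the people involved.
outcomes = {"alice": +5, "bob": -3, "stranger": +1}

alice_friend = {"alice": 1.0, "bob": 0.1}
bob_friend   = {"alice": 0.1, "bob": 1.0}

u1 = weighted_utility(outcomes, alice_friend)  # 5*1.0 - 3*0.1 + 1*0.01 = 4.71
u2 = weighted_utility(outcomes, bob_friend)    # 5*0.1 - 3*1.0 + 1*0.01 = -2.49
```

The same situation yields a positive verdict for one evaluator and a negative verdict for the other, which is exactly the "different people calculate different answers" property described above.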

If one assumes that Deep Judge could be built with such a framework, I think some very interesting discussions ensue :-).

In metaethics, the notion of an idealised agent, or some better version of you, is posited as a thought experiment to illustrate the conceivability of moral realism.

Put another way: can we imagine a version of ourselves that would be a much more upstanding moral agent? If we can, and we can also imagine that this agent would make different decisions, then we can also see the conceivability of moral facts.

This is a 'second order' ethical issue, and your post about Deep Judge concerns the moral desirability of knowing whether there is a 'correct' answer to a moral issue. In my view this presumes moral realism is true, which, if we are doing metaethics, we would have to argue for independently, but that's an aside.

You make a sound point when you say that some moral situations are necessarily messy: genuinely difficult situations will arise in which you must lose in some way or cause harm. It is sometimes said that a good theory is one that doesn't solve problems but illuminates what makes them problematic, and moral agency certainly presents problematic situations of this kind that an ethical theory ought, as a desideratum, to illuminate.

I'm reminded of the Red Dwarf episode "The Inquisitor" (which I'm sure you are familiar with), in which a cyborg-like being serves as a moral arbiter for the characters, judging whether they are 'worthy' of existence. When Lister is inevitably judged unworthy, he is to be replaced by an alternative version of himself, from a different sperm, who lived a much better life.

There is something insightful about knowing that a better moral life, or set of decisions, is conceivable and possible. It's fair to say that if we live by a set standard of moral behaviour, there is always scope for individuals to stray from or fall short of those ideals.

A single, certain calculus or outcome may not be feasible in that specific instance, but counterfactuals serve as a morally instructive lesson, especially when we look back at decisions we regret and consider whether we might have done otherwise.

Interesting and provoking post. I think you overplay Kant's theism though, but that's a minor point.

With Regards

Peter: this kind of constraint by relatedness has been discussed, but more often informally than formally; Mary Midgley, for instance, explored this idea via concentric circles of care when discussing our relationship with non-human animals. I suspect that as a solution to the Repugnant Conclusion this wouldn't fly, though, since it would amount to giving up the core tenet of Utilitarianism that everyone is treated equally. :)

Incidentally, this morning I was continuing my reading of volume 2 of Parfit's "On What Matters" and found (quite near the back end) an argument he presents against numeric valuation of "goodness" that is somewhat pertinent to this issue!

Michael: Volume 2 of Parfit's "On What Matters" is precisely a carefully presented argument in favour of moral realism. It is because he argues for this that I presume he would approve of Deep Judge. I actually like a lot of Parfit's argumentation precisely because it moves away from some highly defeatist positions, although he goes too far in the other direction for my tastes.

"I think you overplay Kant's theism though, but that's a minor point."

I wondered about this... But even if this is the case I don't think Kant would approve of Deep Judge since it would be to substitute a machine for humanity - given his beliefs about the special qualities of human existence, I presume he would oppose this.

Best wishes!

Minor addendum:

This quote from volume 2 of Parfit's "On What Matters" may somewhat contradict what I claim here regarding his opinion of Deep Judge:

"There are... difficult moral questions... which may have no answer".

Nonetheless, Parfit does subscribe to moral realism and cognitivism, and it seems to me that in essence he believes Deep Judge is at least possible.
