Previous month:
July 2007
Next month:
September 2007

Wii Sets Records in UK

The Daily Telegraph reports that the Wii has set a sales record in the UK, becoming the fastest-selling console to reach a million units. It took just 38 weeks for the Wii, compared to 49 weeks for the PlayStation 2, the previous record holder. (For comparison, the Xbox 360 took 60 weeks to clear a million units, and the PS3 has yet to do so).

There is unconfirmed speculation that the Wii may now have the largest global installed base of the current generation of home consoles.


Rumours of God's Death

In 1882, Friedrich Nietzsche published the first edition of The Gay Science, which contained his story of a madman who comes to town asking after God, and crying out that “God is dead”. The madman’s message is one of the more memorable parts of Nietzsche’s writings:

“Whither is God?” he cried; “I will tell you. We have killed him – you and I. All of us are his murderers. But how did we do this? How could we drink up the sea? Who gave us the sponge to wipe away the entire horizon? What were we doing when we unchained this earth from its sun? Whither is it moving now? Whither are we moving?...” 

The madman rants at the crowd about the murder of God, but the crowd merely stares at him in astonishment. Finally, the madman realises that his message is not understood:

“I have come too early… my time is not yet. This tremendous event is still on its way, still wandering; it has not yet reached the ears of men. Lightning and thunder require time; the light of the stars requires time; deeds, though done, still require time to be seen and heard. This deed is still more distant from them than most distant stars – and yet they have done it themselves.”

The idea that “God is dead” is expanded in Nietzsche’s next book, Thus Spake Zarathustra, which does little to clarify his meaning, and much to obfuscate it. Perhaps because of this, after finishing Zarathustra, Nietzsche went back to add a new fifth and final section to The Gay Science that laid out his position in more explicit terms. In this he explains precisely what he means by “God is dead” – namely that “the Christian God has become unbelievable” (for reasons we shall shortly explore). But although Nietzsche was undoubtedly an atheist, the meaning of his famous phrase is not that atheism must replace religion, but that Christian morality cannot survive without God – and that therefore the death of God means the erasing of the established moral horizon (at least throughout the Christian world).

Nietzsche believed that what was coming in the wake of this ‘realization’ was a total collapse of morality, but he optimistically saw a brighter world beyond it, in which “at long last the horizon appears free to us again”. In his view, Christianity was going to unravel entirely in the wake of this abandonment of the Christian God – and he was convinced that this was imminent. In a later work he notes that he might yet live to see the last Christian – an absurd overconfidence, as it happens, since more than a century later Christianity remains the most popular belief system on the planet. In one respect he was right – the twentieth century did see the establishment of atheism as a respectable belief system, and indeed as a new religion, if one will accept Humanism as a religion despite its adherents’ niggly protestations. But Nietzsche vastly misread the situation in believing that Christianity would wither and die in the new landscape of thought that had come to a head in his own time.

Unlike our modern “New Atheists”, Nietzsche did not believe that evolution was the murderer of God. In fact, he took a rather cool view on Darwin, and especially on Darwin’s already abundant and fanatical followers, noting in Twilight of the Idols that the famous struggle for existence had been “asserted rather than proved”. Nietzsche was shrewd enough to see beyond the science entailed in Darwin’s theory and recognise that a new idol was being made of it – Nietzsche’s hope was for the end of all such idols, not for the replacement of one with another.

So why was “God dead” if not at the hands of evolution? The answer can be found in one of Nietzsche’s last books, The Antichrist (which is perhaps better translated as The Antichristian, since it is Nietzsche’s rant against all things Christian). In this, he notes how the actions and beliefs of the Church had completely obliterated the original message of “the evangel” (that is, Jesus). I need not, I hope, explicate the horrors that have been perpetrated throughout history in the name of God, since these are well acknowledged. Nietzsche’s charge against God rests on this point: he asks why God did not intervene. Given the extent to which the Church was mauling and misrepresenting the message of Jesus, why did God not stop them? This is the crux of Nietzsche’s complaint, although he develops it in more detail than this alone. (As a critic of Christianity, Nietzsche actually has many sensible observations, although his impassioned hatred for Christianity blinds him, and his comments are often quite uneven).

This remains a theological question that must be addressed, and consequently much of the theology of the twentieth century and beyond has moved into strange new worlds. Indeed, one branch of theology that grew in the twentieth century was “theothanatology” – the ‘science of the death of God’ – which produced many curious and interesting ideas, including Thomas Altizer’s view that the crucifixion of Jesus, being the incarnation of God, marked quite literally ‘the death of God’, but imparted to the world his immanent spirit (the Holy Spirit) – a view which famously provided Time with its 1966 cover story, “Is God Dead?” Other approaches descended from Kierkegaard’s Christian existentialism, such as Paul Tillich’s, which saw God as “the ground of being” (following Heidegger’s ontology).

But we are not done with Nietzsche’s philosophy quite yet. Nietzsche may have rejected the Darwinists’ metaphysical beliefs, but he did not reject science (and indeed would not have been opposed to evolution, only to the political beliefs projected from Darwin’s theory). Indeed, part of the force behind Nietzsche’s formula “God is dead” is an incompatibility between science and the Christian God of the nineteenth century and earlier, as when he claims “science makes godlike – it is all over with priests and gods when man becomes scientific” – a view the “New Atheists” would agree with, but which the majority of modern religious people deny, despite a minority movement to the contrary.

It is important to understand the nature of the god that Nietzsche was objecting to, for even within Christian terms this god was an idol. This is a god that not only sits in judgement over all mankind, but is directly involved in acts of punishment and retribution, and that is in absolute and personal control of everything – it is thus a god that denies free will, and as a result cannot genuinely be God at all, since without free will all religion is pointless. Hence Nietzsche’s objection: why did this God not interfere to ensure that Jesus’ message was properly followed? The idolatrous god in question came from centuries of embellishment and interpretation by a priesthood that had muddled its own theology. To this idol of god had been ascribed absolute Truth, and the source of this truth was the Bible.

It is here that the events of the nineteenth century, and Nietzsche’s “God is dead”, come into clearer focus – for before this point, in the Christian world at least, “the Truth” was the exclusive domain of the Church. But with the rise of science in the wake of the Enlightenment, many of the “truths” espoused by the Church (based on interpreting the Bible as inerrant, that is, literal) fell into disrepute. Evidence accumulated that the world was older than had been imagined, and that life had emerged gradually over a great length of time – all contradicting the conventional interpretation of the Bible that had been enshrined as the Truth.

There came, therefore, a mighty metaphysical rending. Truth, which in the era of Christian domination had been the exclusive domain of the Church, ceased to belong to religion and seemed to fall instead into the hands of the scientists. While the metaphysical and ethical domains of religion still fell to God (in theistic belief), Truth could no longer do so. But in this great separation Truth – previously foreknown and thus certain – could no longer be what it once was. Science could not establish big-T Truth – it was beyond its limited remit. It could measure, it could postulate, it could even gradually refine its theories to greater precision, but as Kuhn noted, none of these actions, however remarkable, was equivalent to a constant journey towards Truth. This was merely the mythology of science.

Neither was Nietzsche unaware of this issue; he was simply too preoccupied with his hatred for Christianity to consider this a battle worth fighting. While adding to The Gay Science his explication of the phrase “God is dead”, he followed it immediately with a warning to scientists, noting that in genuine science “convictions have no rights of citizenship” – that is, one may only produce provisional beliefs, hypotheses, which only then may earn their status as knowledge through experiment, and even then this status remains provisional. Nietzsche shrewdly observed that convictions must cease to be so before they can be science, and thus even before such a process can begin there must be a prior conviction, “one that is so commanding and unconditional that it sacrifices all other convictions to itself.”

He explains:

We see that science also rests on a faith; there simply is no science “without presuppositions.” The question whether truth is needed must not only have been affirmed in advance, but affirmed to such a degree that the principle, the faith, the conviction finds expression: “Nothing is needed more than truth, and in relation to it everything else has only second-rate value.”

This critique of science, often overlooked but crucial to an understanding of Nietzsche’s position, occurs under the heading “How we, too, are still pious” – for Nietzsche was well aware of how science could become akin to religion if misunderstood and mishandled. He observes:

…it is still a metaphysical faith upon which our faith in science rests – that even we seekers after knowledge today, we godless ones and anti-metaphysicians still take our fire, too, from the flame lit by a faith that is thousands of years old, that Christian faith which was also the faith of Plato, that God is the truth, that truth is divine…

Thus, in the light of the shift of perspective that occurred after Nietzsche, we are forced to conclude that it is not so much that “God is dead” – that was Nietzsche attacking those who in his time most forcefully asserted an absolute version of the Truth, namely the nineteenth century priesthood and its predecessors. Rather, Nietzsche’s most famous observation can be understood in its wider meaning as “Truth is dead”. For the tearing apart of Truth from religion did not leave much certainty in the hands of science. Science can determine certain matters of fact by measurement, it can establish true and false subject to certain presuppositions, and it can even create great and effective instruments of prediction, but it cannot manufacture big-T Truth, “The Truth” – the certainty of knowledge. This can only come from faith – whether faith in an idol of religion, such as the confused idea of a megalomaniacal god, or in an idol of science itself.

Thus Nietzsche saw reality becoming entirely infinite – unbounded by preconceived ideas of what must be factual. He comments in one of his notes that “it is precisely facts that do not exist, only interpretations” and in this way reached his formulation of existentialism; the atheist or agnostic existentialism to accompany Kierkegaard’s religious existentialism – two sides of the same coin. It is here that Nietzsche hoped his philosophy would lead. He notes that “we cannot look around our own corner” but says: 

But I should think that today we are at least far from the ridiculous immodesty that would be involved in decreeing from our corner that perspectives are permitted only from this corner. Rather has the world become “infinite” for us all over again: inasmuch as we cannot reject the possibility that it may include infinite interpretations. 

And of course, once we allow for infinite interpretations, we may not exclude God, especially in a world where so many people have had and continue to have numinous experiences of the wholly other. We are free to interpret these experiences however we will. Some, presupposing the absence of God, will see them as ‘delusions’. Others, presupposing God, see them as an experience of God. And who now will be so arrogant as to assume that the view from their corner is the only permissible view?

Ironically, this had been a central tenet of the Dharmic religions, such as Hinduism and Buddhism, for thousands of years. It is the essential meaning of the Sanskrit word maya – reality as illusion. It is simply the case that in the West, where the Abrahamic faiths dominated, we never quite managed to get the message.

Rumours of God’s death have been greatly exaggerated. In the late nineteenth century, a failed assassination attempt against God left Truth mortally wounded, a mere shadow of the certainty it once represented. We may measure some things to establish their veracity, we may develop grand theories with great predictive power, but the certainty of knowledge is gone from us. The Truth is dead. Long live freedom of belief!

All translations of Nietzsche quoted in this piece are by Walter Kaufmann.


Ethics of Science

How can a scientist square their research decisions with the applications they are put to? Do scientists bear some obligation in respect of the research they conduct? Or is all scientific practice essentially blameless? What are the issues in the ethics of science?

When interviewed in 1965 about the Manhattan Project, the creation of the first atomic bomb, the physicist J. Robert Oppenheimer noted his conflicted feelings about his success by making reference to a line in the Hindu scripture the Bhagavad Gita which came to him in connection with this achievement, albeit misremembered. Oppenheimer’s misquote became famous: “Now I am become Death, the destroyer of worlds.”

As it happens, humanity has thus far managed to avoid eradicating itself in a sudden nuclear oblivion – a fear that abounded in the short-lived “Atomic age”. Yet there can be no doubt that scientists worked together to create nuclear weapons, the most terrible and destructive force on the planet. Was it ethical for them to do so? All ethics, we now realise, are relative. So to ask if the Manhattan Project was an ethical research project is to ask about the ethics of scientists.

Science, as with most traditions, has its own inherent values. In the case of science in general and scientists in particular, the central value is truth; therefore, from an agent-focussed perspective, the central virtue is honesty. From a rights-focussed perspective, this commitment to honesty manifests as the unwritten rules of scientific practice that render fabrication of results and plagiarism forbidden and highly disreputable. From an outcome-focussed perspective, scientists are usually not held accountable for the consequences of their research; as long as they have been suitably honest in the conduct and reporting of their research they have done more or less all that is required of them by conventional scientific ethics.

The trouble is, this enshrinement of truth as the central value of science leads to a naive attitude towards the consequences of research. If we are honest about scientific experiments, we must acknowledge a tremendous realm of future events that we cannot adequately anticipate. Oppenheimer could not have known that forty years after the Manhattan Project, a nuclear power station in Ukraine would explode, contaminating the city of Chernobyl and the surrounding region, and having effects throughout the world. This was a consequence of the early research into nuclear weapons, which were a step on the road to nuclear power. Was Oppenheimer in any way responsible for Chernobyl?

To hold Oppenheimer (and the other scientists involved in the development of nuclear power) entirely blameless, we must say that knowledge and application are separate. The creation of knowledge is thus rendered inculpable, and only its application can be “evil”. But by this approach, a scientist can research a biological weapon capable of killing all life on the planet and claim no responsibility when it is eventually employed. This separation of duties into a required commitment to truth and an optional commitment to society or humanity is ludicrous.

But equally ludicrous is to blame Oppenheimer for Chernobyl, in which, after all, he was not directly involved. His contribution to the knowledge that led to the building of the Chernobyl nuclear power station connects him tangentially, at best, to the disaster that occurred there. One may also argue: if it hadn’t been Oppenheimer, it would have been someone else.

Yet what kind of ethics are we suggesting if the justification for taking an action with the potential for tremendously negative ultimate effects is ‘if I don’t do it, someone else will’? Is it really acceptable for someone to conduct research which has as its natural consequence a fatal weapon simply on the grounds that if they didn’t, someone else would? This sounds rather akin to the excuse, so often touted in the context of war crimes, that one was only following orders.

The basis of ethics is individual responsibility. If scientists can’t take responsibility for what they choose to research, the natural response from society would be to deny them access to the funding that they need to conduct that research. That this doesn’t happen is a sure indicator of just how addicted to technological progress we have become.

Of course, it is a simplification of the facts to claim that the only aspect of ethics of science that applies is a commitment to honesty. After all, there are ethical guidelines for the treatment of people (and, to a lesser extent, animals) in experiments – such as the requirement that test subjects provide informed consent. Yet science as a field is a long way shy of the kind of commitments we expect from our doctors – informally captured in the phrase “first, do no harm”. It may once have been the case that the potential for abuse was greater from doctors than from scientists, but in our modern world we see disaster scenarios behind every scientific frontier, from genetic engineering to high energy particle physics.

Many of the ways in which the public respond to these issues are reactionary – disproportionate to the matter at hand. But the reason that public outcries against the genetic engineering of food and so forth happen is that it seems to the general public that scientists will not take the consequences of their research into account as a factor in whether or not to pursue it. This inevitably leads to excessive backlash in those cases where the possible problems have the potential to be extremely severe. The scientific community will not and cannot make a commitment to ‘do no harm’ – because no scientist has any greater capacity to predict the future than any non-scientist. Humans are just naturally poor at predicting the future. Just as Oppenheimer could not foresee Chernobyl, no scientist can truly foresee the negative consequences of their research.

But of course, that Oppenheimer could not foresee Chernobyl does not bear on the question of Hiroshima. Oppenheimer could certainly see that what they were making was a bomb, and that this bomb would be used on human targets. It seems that Oppenheimer justified this in his own mind in the belief that the creation of the atomic bomb would necessitate an end to war – in his own words “The atomic bomb made the prospect of future war unendurable.” He hoped that the threat of nuclear war would drive humanity into global co-operation. Was this hope enough to make it ethical to pursue the research for such a weapon in the first place?

Ultimately, Oppenheimer acknowledged that there was no ethical decision at the root of the Manhattan Project. He said of it: “When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb.” Here, we have the honest assessment of a scientist: I must conduct my research; what it is used for is a problem only thought about after the fact.

Scientists are reluctant to admit the dangers inherent in unrestricted research. There is a tacit assumption among many scientists that the scientific community – as intelligent, educated men and women – would not instigate research with severe negative consequences, despite the many historical instances to the contrary. An idealised view of science persists in the minds of many scientists, one which sees science as not only harmless but presumptively beneficial, while other traditions are publicly berated for their hypothetical dangers.

I believe we can stave off the worst disasters that might come out of the scientific process. But we would have a much better chance if we acknowledged that science is not a blameless process, uncovering truth independently of its consequences, but the force feeding the rapacious technological progress that has led us to a world in which we consume our natural resources in a manner apparently leading to a global disaster of our own creation. What is the cause of the rape of the environment if not the technologies born of our science, and the people that use them?

Now the most urgent voice in this apparent calamity is concerned with global warming, which has become a highly politicised topic. This is a contentious issue, and it goes to the heart of science. Basically: climate scientists mostly agree that we are the cause of a currently small but potentially disastrous greenhouse effect that is warming the planet. Some climate scientists, albeit a minority, disagree. The public is more divided. Who do you believe? There are scientists on both sides, so do you trust the majority or the minority? Do you look at the evidence and make up your own mind? Well, some do the former and some do the latter, but either way there's no way to resolve this issue except by exercising your own freedom of belief.

It's up to you to interpret science. It's up to you to interpret everything.

Here are the facts, as far as I can ascertain. The greenhouse effect is a genuine theory, first reported by the French mathematician and physicist Joseph Fourier, and subsequently verified by the Swedish chemist Svante Arrhenius. We have an excellent example of what it can do in the form of the planet Venus, which is a roiling hell world thanks to the positive feedback of a greenhouse effect caused by the carbon dioxide in its atmosphere. Venus is believed to have got that way because there was no life there to recycle the carbon dioxide; what keeps our planet so delightfully habitable is in part its diverse ecology, which moderates the gas cycle. We're killing off that ecology in a variety of ways, and in a manner far more direct than the global warming we're worrying about.
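
To give a sense of what that positive feedback means, here is a minimal sketch – the gains and figures are purely illustrative assumptions, not values taken from any climate model – of how a feedback loop either settles down or runs away:

```python
def total_warming(initial_push=1.0, feedback_gain=0.4, steps=60):
    """Toy feedback loop: an initial push of warming feeds back a further
    `feedback_gain` of itself on each round (a simple geometric series)."""
    total, increment = 0.0, initial_push
    for _ in range(steps):
        total += increment
        increment *= feedback_gain
    return total

# A gain below 1 converges on a finite total; a gain of 1 or more grows without limit.
print("moderated feedback (gain 0.4):", round(total_warming(feedback_gain=0.4), 2))
print("runaway feedback (gain 1.1):", round(total_warming(feedback_gain=1.1), 2))
```

A gain below one converges on a finite total; a gain of one or more grows without limit – the Venus scenario in caricature.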

On the other hand, the evidence for global warming is very slight: the mean temperatures have risen, but the claim that humanity is responsible for this change is quite tenuous and difficult to prove. Crucially, the existence of a majority of climate scientists on one side of the debate doesn't change the fact that climate predictions are very difficult to make. It would be premature to conclude the matter of human-influenced global warming one way or the other, and the threat of disaster is apparently insufficient by itself to force decisive action at this time.

What it comes down to is this: do you trust the climate scientists? In the case of global warming there seems to be no reliable way of establishing the extent of the crisis or the degree of urgency. But the climate scientists aren't the only people warning of the onset of an environmental catastrophe. According to a 1998 survey, 70% of biologists believe we are at the beginning of a mass extinction event, not unlike the ones that killed the dinosaurs and many other earlier forms of life, but this time driven by human activity. We have had at least 36 extinctions in the last century, not counting the species that went extinct before scientists could identify them, and in geological terms that's precipitously rapid. The technological society we have built with the assistance of science is killing off all life on the planet – slowly from our perspective, in the blink of an eye from the viewpoint of the planet itself.

Having said all this, science also carries the potential to rescue us from the very environmental disaster it has given birth to. After all, it is the scientists who have made us aware of the collapse of biodiversity, and there are scientists committed to research into sustainable energy solutions. With co-operation from society, we may yet be able to rescue much of the ecology of our planet, which, I might add, is essential to our continued existence – even the extinction of a small collection of species such as the worms could lead ultimately to our own extinction by collapsing the food web. There is some hope that it won’t come to this, and if not, we will have in part the action of committed scientists to thank, although it will not be the value they have placed upon truth that will have led us back from the brink of catastrophe.

How long can we continue with this situation whereby scientists value truth and only truth, especially in the wake of Kuhn’s realisation that science is not so much a journey towards truth, but rather a process of adaptation and specialisation? What is it that we want science to adapt itself to? Do we even know? Perhaps we need to rethink the ethics of science so that we can position this tradition, with its potential for tremendous benefits, into a better relationship with society... perhaps it is up to society to create the necessary pressures to ensure that science is its servant, and not its master.

Beyond the current catastrophe of environment, and the future crisis that genetic engineering will place squarely at our doorstep, there are further disasters and crises that science will bring about in time, presuming we survive long enough for it to do so. (Not to mention the many other distant threats such as asteroid impacts that science could help protect us from if we were to consider such things important.) Given this, we should seriously consider how we wish to proceed with the scientific process.  To do that, we must first be willing to talk about the role of science in our societies and decide what kind of future we will try to make.

For translucy, who may have different views on the subject.


The Mythology of Science

What is the nature of change in scientific theories? Is it reasonable to consider science as a journey from ignorance to truth? Or is there a mythology of science that distorts the nature of the scientific endeavour?

When I first went to university, it was to study astrophysics. I was enthusiastic about the subject, and eager to learn. My expectation to some degree was that the workings of the universe were going to be explicated to me, and that I was going to be shown experiments which underpinned the theories of nature that had been developed, and from which those theories could be derived. But in fact, the theories were taught as strictly factual, and only then were the experiments enacted. Students, having already learned the prevailing models of reality, approached each experiment with the certain knowledge of the expected results.

When we were asked, for instance, to take measurements from which to calculate the universal gravitational constant, there was no doubt as to what the correct answer was expected to be. In practice, few if any students produced results that would yield an answer sufficiently close to that dictated by prior theory. In the face of conflicting experimental evidence, most students would attempt the experiment again, usually once again yielding results which were not of the kind expected. After a few failed attempts, many students then adjusted their data in order to more closely resemble the expected results of the experiment.

This experience greatly challenged my expectations of science. It was not that the theory in question was fundamentally in error – if we had, for instance, averaged all the data gathered in the lab by all the students, the mean result would almost certainly have resembled the expected results. But what I observed in the physics laboratory was students of science faced with unexpected data and then, instead of reporting this honestly as my preconceptions of the scientific method demanded, changing their measurements to conform with the expected results. That this was the way to get the highest marks from the laboratory experiments is not in doubt, but if the central value of science is truth, the students were not learning this value – they were learning how to toe the line with existing theory.
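
The statistical point (though not, of course, the adjusting of data) can be shown with a minimal sketch; it assumes nothing about the real laboratory results, only normally distributed noise around the accepted value of G:

```python
import random

ACCEPTED_G = 6.674e-11  # accepted value of the gravitational constant, m^3 kg^-1 s^-2

def simulate_measurement(noise_fraction=0.15):
    """One student's result: the accepted value plus sizeable random error."""
    return random.gauss(ACCEPTED_G, noise_fraction * ACCEPTED_G)

# A lab full of students, each producing an individually unreliable figure...
results = [simulate_measurement() for _ in range(40)]

# ...yet the class mean lands close to the accepted value.
mean_g = sum(results) / len(results)
worst_error = max(abs(r - ACCEPTED_G) for r in results) / ACCEPTED_G
mean_error = abs(mean_g - ACCEPTED_G) / ACCEPTED_G
print(f"worst individual error: {worst_error:.1%}")
print(f"error of the class mean: {mean_error:.1%}")
```

Any single run may be well wide of the mark, yet the mean of many runs lands close to the accepted figure – which is why averaging the whole lab's data would have vindicated the theory even though individual attempts did not.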

Scientists, in general, are not taught philosophy of science, and as a result rarely question their abstract beliefs about the nature of the scientific process. As a result, science has developed a persistent mythology, central to which is the idea that science uncovers the truth, and as the theories and techniques of science develop, science moves closer and closer to absolute truth.

The celebrated historian and philosopher of science Thomas Kuhn, in his seminal work The Structure of Scientific Revolutions (first published in 1962), put forward one of the decisive criticisms of this view. He observed, from his study of the development of European science, that scientists’ commitment to particular theoretical frameworks acted as a barrier to seeing data in a different light – even when that data was entirely contradictory to the theory. As with the students I encountered in the physics lab, theory was to some degree considered to trump observation.

Kuhn’s historical analysis resulted in a model of science that denied the conventional belief in science as a process of accretion. The idea that science gradually assimilated building blocks and advanced in steady and discrete steps did not match up to the historical record. Instead, Kuhn saw periods of what he termed normal science, during which scientists worked to adapt their current theories to a range of experimental observations, and periods of scientific revolution – when the old theories came into crisis, and were eventually supplanted by new theories.

The view of scientific fields proposed by Kuhn was that a particular field does not emerge within science until scientists working in this area begin to develop a common framework. Kuhn terms this framework a paradigm, a term he uses ambiguously in a number of subtly different contexts. On the one hand, a paradigm describes the collection of symbolic generalisations, experimental methods and common assumptions shared by practitioners of a given scientific field. On the other, paradigms represent specific exemplars of scientific puzzle-solving, including both experimental and theoretical solutions to problems. We can see a paradigm as a set of common beliefs that a group of scientists share. Despite the intense faith that is placed in science, these beliefs do not necessarily correspond to reality – rather, they provide a framework that enables the scientists to investigate reality in a particular way.

Kuhn suggests that during periods of normal science, scientists are mostly attempting to “force nature into the preformed and relatively inflexible box that the paradigm supplies.” No paradigm explains all the facts with which it can be confronted, and the general process of science therefore works on the solutions to puzzles – problems in adapting the theoretical models of that paradigm to the description of reality. This commitment to a particular paradigm is essential to the scientific process in Kuhn’s opinion. Normal science is only able to proceed on the premise that scientists know “what the world is like.”

Anomalous data – that which cannot be made to fit into the box provided by the paradigm – is usually ignored, or interpreted in a manner consistent with the paradigm’s assumptions, or else the theoretical framework of the paradigm is adjusted to compensate for the discrepancy. Experienced scientists, having had great success with their paradigm, are unwilling to abandon it in the light of contradictory evidence. This can be seen as a necessary commitment, because to give up a particular paradigm without adopting a new one would be to abandon science entirely: there can be no science without a particular model by which the investigatory process can proceed.

Kuhn observes that new paradigms gradually replace old ones as a result of a growing view (usually among newcomers to the field, since these are least committed to the old beliefs) that the old methods are no longer able to guide the exploration of a particular area in which the old paradigm used to lead the way. But the transition between paradigms does not often proceed without difficulties – the commitment to the old paradigm remains strong, and it is hard for rational discussion to take place between two individuals who are using different paradigms.

Often, the elder scientists are unwilling or unable to abandon the paradigm which has led to such success and progress during their time, and this is only natural. As Max Planck observed: “a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

There is no objective means of resolving these disputes between competing explanations. Scientists must necessarily presuppose their own paradigm in order to argue in its favour, a circular process that can nonetheless be quite persuasive. Although arguments are usually couched in terms of the capacity of the competing theories to explain experimental facts, it is not generally possible to determine which is the better theory by comparison with facts. All significant scientific theories fit the facts to some extent. But when two theories are competing with each other, the issue is not how much each theory fits the facts, per se, but rather whether or not one theory fits the facts better than another.

Even this may not be the means by which the commitment to a paradigm is changed. What is being asked for is a fundamental alteration to the way a particular scientific field is practiced, and for this argument to be persuasive what must be offered is something more than correspondence to facts – as already mentioned, most scientific theories can be made to adequately explain the facts they are faced with. For example, chemists abandoned phlogiston theory, but at the time that the theory of combustion in the presence of oxygen was forming, phlogiston adequately explained the majority of the observed facts. Many of these facts presumed phlogiston in their formulation. But the new idea carried with it the promise of future progress, and it was faith in this possibility that helped drive this paradigm change forward.

The model of reality suggested by one paradigm generally involves quite different entities to its rival. Consider the view of the world presented by Newton versus the view presented by Einstein. In Newtonian physics, discrete matter had a property, mass, which innately attracted other objects with mass by a process called gravity. This innate attraction was at first seen as a flaw, but later accepted as the nature of gravity once Newtonian mechanics grew in stature. Compare Einstein’s theories of Special and General Relativity, which saw mass as a form of energy, and gravitational attraction as the result of a curvature in space caused by the presence of energy. These are quite different models of reality – they contain entirely different entities, and even those elements they seem to have in common are not equivalent. Newtonian mass does not in any way anticipate the idea of mass that results from Special Relativity.

Part of the mythology of science is to see changes like these (from an old successful theory to a new one) both as progressing towards truth, and also to view the old theory not as having been disproved but as a special case of the new theory. This tendency is particularly prevalent in relation to the paradigm shift from Newtonian to Einsteinian physics, and scientists can indeed demonstrate how the Newtonian equations can be derived from the later theories. But this process is misleading. An abandoned theory can always be viewed as a special case of its successor, but to do so one must transform the original theory in the terms of the new one – and this is something that can only be done in hindsight.

This rewriting of history is endemic to science. The scientists of earlier ages are represented in textbooks and the like as having worked on the same set of problems, and in accordance with the same guidelines, as modern science. This makes science appear to be cumulative, but it is a fiction created by ignoring the crises that accompany scientific revolution. Earlier scientists worked on very different problems, because their models of the world led them to different puzzles to solve. Because the results of scientific research do not seem to depend upon the history of their development (that is, scientific theories do not depend upon their historical origins for their veracity) it seems acceptable to abstract over the details of their development. In doing so, the nature of the scientific endeavour becomes distorted to appear both linear and cumulative.

Furthermore, this view distorts the nature of the abandoned paradigms. There is a tendency to look at certain discarded theories as myths that have been disproved – but this perspective will not stand up to scrutiny. The old paradigms were derived through the same essential scientific process as the new paradigms, and from a historical perspective both the old and the new must be seen as scientific – else all scientific knowledge can be accused of being a myth. It is not that the old beliefs were false and the new ones true – for a future paradigm shift may well render the current ideas false by this reasoning. Rather, the science of the past contains views and beliefs wholly incompatible with the models of present scientists, yet all such paradigms are still by their very nature scientific. They proceeded from fact and measurement, with the goal of providing explanations of these facts.

Kuhn challenges our preconceptions of science as evolving towards objective truth, and instead suggests we should understand scientific progress as evolution from the community’s prior state of knowledge. There is no perfect conception of reality that science is travelling towards; there is no ideal goal state that science will ultimately evolve into. Such ideas are anachronistic fallacies that do not match up with the history of science.

The idea of goal-directed progress was abandoned with the adoption of Darwin’s theory of natural selection, and biology was no longer seen as evolution towards something. Instead, we now see a process that moves from primitive beginnings into the steady yet intermittent appearance of more complex and more specialised organisms. Kuhn contends the same situation applies in science – the progress we observe is not advancing towards anything, but proceeds from primitive theoretical beginnings into the steady yet intermittent appearance of more complex and more specialised paradigms.

Ultimately, scientific knowledge is, like language, an intrinsic property of a group of people. To understand that knowledge, it is necessary to understand the nature and characteristics of the groups that create and use this knowledge. Science is the name we give to the practices of scientists, who by dedication to an empirical view of the world gradually refine their ideas, and produce exceptional instruments which in turn allow for the creation of new technologies. Later scientific theories show progress – they are better at solving puzzles in often very different environments to those of their predecessors – but it is a mythological view of science that sees science as truth, or evolving towards truth. Science evolves as scientists refine their perspectives, but this refinement is an adaptation to new conditions of knowledge, and not an inevitable march towards perfection.

The opening image is Dancing Light, a refraction caustic by Alan Jaras which I found here. As ever, no copyright infringement is intended and I will take the image down if asked.


Top Ten Science & Religion

What are the top scientific theories? What are the top religious belief systems? For the purpose of this piece, we shall be ranking both kinds of tradition by the same criterion – the number of years that people have found them useful. We shall only consider theories and religions which are to some degree extant, and the existence of people who uphold a particular tradition (scientific or religious) shall be treated as proof of utility as a matter of expediency.

The numbers of years stated have been approximated and rounded for convenience, and many of the dates of origin or tracings of lineage are open to dispute.

 

Top 10 Science

Fields are listed in italics

(1) 2,500 years: Thales (and Anaxagoras), Theory of Eclipses (Astronomy)
(2) 2,400 years: Democritus, Theory of Atoms (Physics)
(3) 2,300 years: Aristarchus, Theory of Heliocentrism (Astronomy)
(4) 2,250 years: Eratosthenes, Theory of Earth’s Diameter (Geography)
(5) 1,850 years: Galen, Principles of Physiology (Biology)
(6) 1,475 years: Li Tao Yuan, Principles of Palaeontology (Biology)
(6) 1,475 years: John Philoponus, Theory of Inertia (Physics)
(8) 1,400 years: Chao Yuan Fang, Diagnosis of Disease (Biology)
(9) 1,125 years: Al-Kindi, Experimental Method (N/A)
(10) 800 years: Maimonides, Psychosomatic Medicine (Biology)

Runners up:

(11) 600 years: Ulugh Beg, Astronomical Tables (Astronomy)
(12) 500 years: Nicolaus Copernicus, Observational Astronomy (Astronomy)
(13) 450 years: William Gilbert, Electromagnetism (Physics)
(14) 400 years: Galileo Galilei, Motion of Falling Bodies (Physics)
(15) 375 years: William Harvey, Theory of Blood Circulation (Biology)

 

Top 10 Religion

Experiences are listed in italics

(1) 30,000 years: Shamanism (Panenhenic)
(2) 6,000 years: Fertility Cults e.g. Ninna in Mesopotamia (Numinous)
(3) 3,900 years: Canaanite religion/Judaism (Numinous)
(4) 3,800 years: Zoroastrianism (Numinous)
(5) 3,350 years: Vedic pantheon/Hinduism (Numinous/Contemplative)
(6) 2,950 years: Kami worship/Shinto (Panenhenic)
(7) 2,550 years: Taoism (Panenhenic)
(8) 2,525 years: Jainism (Contemplative/Numinous)
(9) 2,500 years: Confucianism (Panenhenic)
(10) 2,475 years: Buddhism (Contemplative)

Runners up:

(11) 2,000 years: Christianity (Numinous)
(12) 1,400 years: Islam (Numinous)
(13) 500 years: Sikhism (Contemplative/Numinous)
(14) 150 years: Baha’i (Numinous/Panenhenic)
(15) 50 years: Discordianism (Contemplative)

Note that many of these religions have other experiences entailed; for instance, Islam has Sufism as its contemplative form, but is primarily focussed on numinous experience. For more on religious experiences, see here.

Disagree on some point of fact or interpretation? Why not share your views in the comments!


Lying

Should we always tell the truth, or are there times when it is better to lie? What are the negative consequences of lying? Are there circumstances under which it is permissible to lie? And what, on careful consideration, constitutes a lie? 

We have likely all lied at some time in our lives, even if only as a child, discovering this ability for the first time. Indeed, it is one of the psychological benchmarks – the onset of Machiavellian intelligence around four and a half years of age. Children tend to lack not only the moral awareness of when to avoid lying, but the skills required to lie convincingly, while in adults (and especially in politics), lying can become quite sophisticated.

It is worth being clear about what is entailed in lying. To suggest something, only to discover later that this was incorrect, is not prima facie lying as we would generally consider lying to be an act with the intention to deceive – if we are mistaken, there can have been no intent, and thus no lie. Thus the usual conception of lying is when one says something that one believes is false, with the intent that the listener will believe it is true.

Already we run into problems: must we be able to distinguish true from false with accuracy in order to know what is lying? This, in the subjective world of human affairs, is far too much to ask. What is at issue, therefore, is the aforementioned intention to deceive. If a theist says “there is no God” this is a lie, just as it is if an atheist says “there is a God”. Lying should therefore be understood as an action; one does not establish whether a statement is a lie by comparison to the facts, although one may of course show that a statement is false by such a comparison (but of course issues of metaphysics – such as discussion of God – are never testable).

As a practical test of this: did Bill Clinton lie when he said “I did not have sexual relations with that woman” in 1998? The facts of the case seem to show that he did receive oral sex from Monica Lewinsky, but one could make the case that ‘oral sex is not sex, per se’. But clearly, by saying “I did not have sexual relations with that woman” President Clinton acted with the intention to deceive. Even if this was a lie of omission (“I did not have sex, but I did have oral sex”) the intention was still there. Furthermore, he made an equivalent denial under oath – it was not just a lie, but perjury – not merely dishonest, but illegal.

There are many reasons why we might lie, so what are the reasons for not lying? Kant held an exceptionally hard line against lying, claiming it was never permissible. It was his view that by lying we failed to treat the ethical as the universal (the first part of Kant’s yardstick) – since if everybody lied, no-one could trust anything that was said. Furthermore, he claimed that lying violated mutual respect, since by lying we denied the person being lied to their right to make a rational decision. Famously, Kant argues in his essay "On a Supposed Right to Lie" that even in the case of a murderer at the door asking for the whereabouts of an innocent victim, it would not be permissible to lie, since to do so would be to treat the murderer as a means and not an end, i.e. to deny mutual respect. This is not a viewpoint that many modern individuals are likely to agree with!

The idea that lying erodes trust is perhaps the primary argument against lying, since our societies are all based upon trust. From this perspective, Kant’s position is understandable, even if it seems more than we are willing to commit to. If we accept that lying is undesirable because it destroys trust, we open up the question: is it ever permissible to lie?

Most people disagree with Kant in the case of the murderer-at-the-door, and feel that to hold honesty as more valuable than human life is wrong. This is an agent-focussed approach to the problem: lying is undesirable if honesty is held as a value, but another value – compassion, for instance – might be more important. Kant’s position is rights-focussed, and all such approaches tend to prohibit lying, but even from this perspective it is possible for there to be a conflict of rights that might create permissibility – does the victim’s right to life not outweigh the murderer’s right to honesty, after all? Finally, from an outcome-focussed position, lying is permitted if the consequences of the lie are more beneficial than the consequences of the truth – although how this would be determined is very difficult to gauge.

We can see that whichever style of ethics one employs, lying produces something of a grey area. From our modern sensibilities, we strongly disagree with Kant about the murderer-at-the-door. Few if any of us believe that it would have been morally wrong for a person harbouring Jewish refugees in Nazi Germany to lie to the Gestapo if they knocked on the door – the lives of the people being protected outweigh the duty to honesty for most of us.

Does this undermine Kant’s yardstick? It need not. Whatever Kant’s view on the application of his system, there is always room for alternative interpretations. The Kantian commitment to communal autonomy – to attempt to act in such a way that the goals of all people can be aligned – accepts the difficulty of any such attempt, and in doing so accepts the inevitability of conflicts between competing ends. It may be the case that lying to the murderer usurps their end (to murder), but it also protects the ends of the victim (to live) and for that matter the whole of society (to not endorse murder). One must seriously consider whether there are ends for which no mutual respect is due, and murder (and by extension genocide) are obvious candidates – since the murderer fails to provide mutual respect to their victim, why should we respect this end?

Presumably one of the reasons that Kant takes such a firm line is that he was a Christian (although his views on God were far from traditional theism). It is often presumed that Christianity is against lying on account of the ninth commandment (eighth for Catholics and Lutherans): “thou shalt not bear false witness against thy neighbour”. But this commandment explicitly prohibits perjury (and by extension false allegations) – it does not expressly preclude lying in general. President Clinton therefore violated this commandment, but someone lying to the murderer-at-the-door would not be doing so.

Of course, a lack of prohibition is not the same as an endorsement, and the Old Testament does include a considerable number of verses which prohibit lying, such as Leviticus 19:11: “You shall not lie: neither shall any man deceive his neighbour” – although here there are problems, as Leviticus famously records the laws of an ancient Jewish culture, much of which simply does not apply to the modern world. After all, Leviticus 19:19 prohibits wearing garments made from two types of thread, yet few modern Christians consider cotton-polyester blends to be sinful!

Buddhism takes a firmer line against lying. One of its precepts prohibits lying, and part of the Eightfold Path that is central to most Buddhist practice calls for “Right Speech”, which asks for more than just refraining from lying: 

Giving up false speech he becomes a speaker of truth, reliable, trustworthy, dependable, he does not deceive the world. Giving up malicious speech he does not repeat there what he has heard here nor does he repeat here what he has heard there in order to cause variance between people. He reconciles those who are divided and brings closer together those who are already friends. (Anguttara Nikaya, 10.176) 

In Buddhist thought, therefore, the issue is about more than simply not lying; it is a request that people think carefully about how they use their words, and aim to do so wisely. Interestingly, while lying is considered against “Right Speech”, Buddhism cautions against presuming that one person’s truth is the only truth. This understanding of subjectivity is common to both Buddhism and the Hindu tradition (which predates and was an influence upon Buddhist thought).

(In respect of Hindu beliefs, there is no explicit prohibition on lying in the Veda, although there is a responsibility to avoid “untruth”, but this falls more on the shoulders of the listener than the speaker.) 

Of all the major world religions, Islam has perhaps the most forgiving attitude to lying. While lying is definitely undesirable – “And do not say that of which you have no knowledge” (Surah 17:36) and “Truly Allah guides not one who transgresses and lies” (Surah 40:28) – there are circumstances under which Muhammad does seem to allow for a permissible lie. The Muslim oral tradition (hadith) quotes Muhammad as saying: “Lying is wrong, except in three things: the lie of a man to his wife to make her content with him; a lie to an enemy, for war is deception; or a lie to settle trouble between people.” (Ahmad, 6.459).

Islam actually specifically allows for a Muslim to lie for the preservation of life. The concept of taqiyyah (self-protection), based upon Surah 16:104-108 of the Qur’an, permits Muslims to hide information, and even to conceal their own faith, if to do so will avoid persecution or harm and no useful purpose could be served by being open. In the case of Kant’s murderer-at-the-door, most Muslims would have no issue protecting the victim and deceiving the killer. 

The grey areas surrounding the question of lying are so vast as to defy summation – is it acceptable for parents to lie to children by teaching them about Santa Claus or telling them that babies are brought by the stork? I see little harm – such stories are part of the special world of childhood – but some Christians feel otherwise. Is pretending to be someone else to a stranger, perhaps in a nightclub, or while hitch-hiking, morally wrong if no advantage is taken of the other person? I cannot condemn imagination, and playing is not necessarily lying (we lie in jokes without qualms!) but the willingness to ‘come clean’ must be there, or else one can dig a dangerous hole.

Ultimately, it is up to the individual to establish their own stance on lying – both in terms of what constitutes a lie, and also under what circumstances it is permissible to lie. For myself, lies pertain solely to matters of fact. In matters of opinion, we have such a wide range of responses that I find it impossible to establish truth and falsehood, and I therefore treat such subjective issues as beyond the issue of lying entirely. If you ask my opinion of a particular painting, for instance, I can couch my response in terms from the forgivingly supportive to the harshly critical – it is my choice how to interpret my own internal mental and emotional states, and such an opinion is never a matter of fact, always a question of interpretation. Such is often the case in ethics.

And what about you? Do you believe it is permissible to lie? Would you lie to the killer-at-the-door? To a stranger? To a friend? Or do you believe that there is a duty to truth, a virtue of honesty? It is a question that only you may answer. 


Game Literacy

What is the distinction between a Hardcore gamer and a Casual gamer? Are these distinctions still useful to us? Is it valuable to define a third state in between? And what, if anything, can we learn from this terminology?

For many years now, and with origins cloaked in mystery, the crudest audience model has persisted as the one most commonly used – namely the split of players into Hardcore gamers and Casual gamers. It is probably the simple nature of this dichotomy that has allowed it to spread, as humans take to ‘us and them’ distinctions rather too easily. Marcus of Verse Studios suggests that the focus on these terms is entirely misleading, and we should just concentrate on making games that are fun. I admire the sentiment – as long as we remember that one person’s fun can be another person’s horror.

When my company began the research into the gaming audience that produced the DGD1 model, we were investigating a particular hypothesis: that those people who constituted the majority of so-called Hardcore gamers belonged to a particular psychological mindset, denoted in Myers-Briggs typology by I_TJ – Introverted, Thinking and Judging. In effect, we expected Hardcore gamers to be people who kept themselves to themselves, favoured pragmatism over affiliation, and who displayed obsessive-compulsive tendencies.

To conduct the research, we had to decide how to determine if someone could be considered a Hardcore gamer, and to do this we used two methods: firstly, self-selection – players were asked to say whether they considered themselves a Hardcore player or a Casual player, or whether they didn’t know. Secondly, we enquired after how much time was spent playing games, and how many different games were purchased and played. As it happened, all of these measures proved to be broadly equivalent: there is significant statistical overlap between players who self-identify as Hardcore (sometimes reluctantly!), players who spend a lot of time each week playing games, and players who buy and play a lot of games. We shouldn’t be entirely surprised!
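
To illustrate what ‘broadly equivalent’ means here, the following is a minimal sketch using invented respondents and invented thresholds – not the actual DGD1 survey data – to check the agreement between the three classifications:

```python
# Hypothetical respondents: (self-identifies as Hardcore?, hours played per week, games bought per year)
respondents = [
    (True, 25, 14), (True, 30, 20), (False, 3, 2), (False, 20, 3),
    (True, 10, 6), (False, 2, 1), (True, 22, 12), (False, 8, 4),
]

# Three ways of flagging a "Hardcore" player, mirroring the measures described above.
by_self_id = [r[0] for r in respondents]
by_time    = [r[1] >= 15 for r in respondents]   # assumed threshold: 15+ hours per week
by_buying  = [r[2] >= 8 for r in respondents]    # assumed threshold: 8+ games per year

def agreement(a, b):
    """Fraction of respondents classified the same way by both criteria."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

print("self-identification vs time spent:", agreement(by_self_id, by_time))
print("self-identification vs purchasing:", agreement(by_self_id, by_buying))
print("time spent vs purchasing:", agreement(by_time, by_buying))
```

With real survey data the proportions would differ, but the point is the same: the three measures flag substantially the same group of players.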

The hypothesis behind our research was largely disproved. Although the data confirmed that Hardcore players were more Introverted (in Myers-Briggs terms) than non-Hardcore players, the presumed Thinking and Judging preferences turned out to be indicative of a pattern of play independent of Hardcore status, however determined. In fact, Intuitive bias (i.e. a preference for abstract thinking) turned out to be a better indicator of Hardcore status. This discovery completely changed the way I thought about the gaming audience, and led to the development of our research into the many different play styles that exist as an entirely separate issue to Hardcore or Casual status.

(More on the subject of this research can be found in our book 21st Century Game Design).

It is important to understand that while this research showed that Hardcore players (both in terms of self-identification and commitment of time) were predominantly Introverted and Intuitive (in Myers-Briggs terms), neither of these factors is necessarily a reliable indicator of a Hardcore player. In particular, there are people who express both traits but have no interest in videogames at all!

So what do we mean when we talk about a “Hardcore player”? Putting aside the subtle and confusing shades that this term has acquired, at heart we mean a player who spends a lot of time playing videogames. I have suggested that a better term for such a player is gamer hobbyist – someone who pursues videogames in the manner of a hobby, rather than as a distraction or diversion. This term is more descriptive than “Hardcore”, and comes without the baggage the old term has acquired.

Is a “Casual player” then someone who doesn’t spend much time playing videogames? Well, as it happens there are many players who do not self-identify as “Hardcore players” and who do not buy and play many games who still rack up a lot of hours playing games. They play the same games over and over again (especially games such as Tetris and Solitaire).

I therefore suggest that if we are to rescue the crude “Hardcore versus Casual” partition and make something more worthwhile of it, we should consider the underlying distinction to be game literacy. By this, I mean the individual’s familiarity with the conventions of videogames, and thus by extension their ability to pick up and play new games with little or no instruction. 

It will probably not have escaped notice that videogames evolve along quite channelled lines – that is, genres within videogames show many marked similarities. The potential deviation between one first person shooter and another is vast, yet most have a lot in common; similarly for real time strategy, or for almost any well-established genre. This is inevitable: the audience likes to buy games that are similar to the games they have enjoyed in the past (and the games industry, as an employer of gamer hobbyists, likes to make games similar to the games they enjoyed in the past). The inevitable consequence of this dependency is “genre conventions”. They may be bent, twisted and eventually superseded, but each game genre has its own habitual tenets, and game literacy represents in part a player's ability to interpret a new game in the context of their prior experience of these conventions.

The “Hardcore gamer” or gamer hobbyist therefore represents a player with high videogame literacy. Such a player can play any and all games they choose – they have the requisite knowledge to do so, although the actual games they enjoy will vary from person to person. They require little or no tutorial for a game that fits comfortably into their existing experience – perhaps just an explanation of how the conventions of the new game differ from their expectations. A typical gamer hobbyist will have acquired between 15 and 50 man months of experience playing games, and will also have played 20 to 100 different games in that time. (Note that when I say ‘man month’, I mean a month of continuous play time totalled up.)
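To put those man month figures into more familiar units, here is a rough back-of-envelope conversion. The figure of roughly 730 hours of continuous play per month is an assumption of mine for illustration, not a number from the original research.

    # Back-of-envelope conversion: a 'man month' here is taken to be roughly
    # 730 hours of continuous play (about 30.4 days x 24 hours) - my assumption.
    HOURS_PER_MAN_MONTH = 730

    def man_months_to_hours(man_months):
        return man_months * HOURS_PER_MAN_MONTH

    def calendar_years_of_play(man_months, hours_per_week):
        # How many calendar years of playing at the given weekly rate this amounts to
        return man_months_to_hours(man_months) / (hours_per_week * 52)

    print(man_months_to_hours(15))                   # 10950 hours
    print(round(calendar_years_of_play(15, 20), 1))  # roughly 10.5 years

On that reckoning, even the low end of 15 man months works out at around a decade of playing twenty hours a week.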

The “Casual gamer” therefore becomes the player with low or limited game literacy. This is an explanation for the simplicity of successful Casual games like Zuma, Bejewelled, Bookworm and Solitaire – to succeed with an audience of low game literacy, one must make games that do not require this domain-specific knowledge. Thus a successful Casual game draws upon experiences familiar to people from outside of videogames – accuracy (for Zuma), logic puzzles (for Bejewelled), word puzzles (for Bookworm) and card games (for Solitaire). Equally, a successful Casual game requires the player to learn only two or three rules. Thus, the barrier to entry is lowered.

I prefer to refer to Casual gamers as the mass market, in keeping with the usual terminology of business. After all, that's what we're talking about here: the largest group of consumers, those who lie under the long tail of a particular industry – that's the mass market, and that's what I believe we are usually talking about when we talk about Casual gamers.

But of course, what we are talking about here is a continuum: from the spike of the gamer hobbyists, the most game literate, to the tail of the mass market, the least game literate. “Hardcore”, as previously used, refers to that spike, and “Casual” refers to the tail. (See the diagram above for a visualisation of the distinction between the 'head' of the market (the gamer hobbyists) and the 'long tail' (the mass market players).)

Of course, being a continuum we can break it up in many different ways. We could split it into three, as Jenova Chen and That Game Company did by defining “Core” as a midpoint or intersection between the two extremes, or we could split it into any other number of segments – say, a sevenfold division into hardcore, hobbyist, experienced, core, inexperienced, casual and mass market – but what would be the point in doing so? We know we are dealing with a continuum; the clearest way to denote such a phenomenon is to label the poles (hobbyist and mass market, or Hardcore and Casual) and remember that the majority of people fall between the extremes. That said, render whatever models help you make your games – a model is just a model, after all.
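To make the arbitrariness of the cut explicit, here is a trivial sketch. The literacy 'score' and the segment labels are purely hypothetical; the point is only that the same position on the continuum lands in different buckets depending on how many segments we choose to draw.

    # Purely illustrative: the same game literacy 'score' (between 0 and 1)
    # lands in different buckets depending on how many segments we draw.
    def segment(score, labels):
        # Divide the 0-1 continuum evenly among the labels, least to most literate
        index = min(int(score * len(labels)), len(labels) - 1)
        return labels[index]

    two   = ["Casual", "Hardcore"]
    three = ["Casual", "Core", "Hardcore"]
    seven = ["mass market", "casual", "inexperienced", "core",
             "experienced", "hobbyist", "hardcore"]

    score = 0.62   # a hypothetical player's position on the continuum
    print(segment(score, two))     # Hardcore
    print(segment(score, three))   # Core
    print(segment(score, seven))   # experienced

The labels change, but the player's position on the continuum does not, which is really the only point the model needs to make.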

As my company gets ready to launch its new research into the gaming audience (the long-delayed DGD2 survey), the issue of Hardcore versus Casual has slipped into obscurity for us. We will be exploring issues of game literacy instead – although we are still including the self-assessment question from DGD1 so that we can compare game literacy against self-assessment. But even this is a tangential element of the research we are conducting this time. The survey will go live in two weeks' time, just in time for me to promote it at the Austin Game Developer's Conference.

Thinking of the issue of “Hardcore and Casual” games in terms of game literacy adds clarity to the situation, and allows us to reason about how to proceed. Those games that used to be the centre of the marketplace are increasingly becoming niche markets for gamer hobbyists, while some – sometimes against all odds – have transitioned to the edge of the mass market (World of Warcraft and GTA being just the two best-known examples).

This gives game designers a choice in how they approach their games. They can target the gamer hobbyist side of the audience, in which case the game either needs to be developed on a prudent budget or be lucky enough to be supported by a platform licensor (Sony, Microsoft, Nintendo) as a possible driver of early adoption or brand loyalty, since this audience is a minority (albeit a minority currently responsible for a significant share of expenditure on videogames). Or, they can target the “long tail” with simple games that do not require much, if any, game literacy to play. Or they can work in the space with the greatest potential for both profit and failure – the elusive middle ground between the two extremes. There is success to be found here, but it requires careful consideration of how such games will support players with low game literacy, intelligent structuring, and more than a modicum of luck.

It's possible (even probable) that the genres beloved by hobbyists can support commercially viable niche markets, and that we will see a widening of the gap between such players and the mass market. It is perhaps more likely that the mainstream videogames of the future will need to learn how to balance the needs of the game literate player against those of mass market players with little prior gaming experience in order to maintain commercial viability. But the problems to solve on this journey – riddles of difficulty and related issues in game modes, to name just two – ensure that the field of game design still has much to learn about how to take videogames forward into the twenty-first century.


Biblical Literalists Can't Have It Both Ways

This piece addresses Christians, therefore it presupposes God.

A common belief among modern fundamentalist Christians living in the United States is that of Biblical inerrancy, the idea that there can be no errors in the Bible. This is a strange position, since the Bible was written and composed by fallible humans, and no prophecy or testimony exists which supports the idea that a particular collection of books should enjoy such special stature. Beginning from the idea that the Bible cannot be in error, a certain psychological perspective emerges in which the Bible is taken literally, instead of the far more viable view that parts of the Bible represent allegories, metaphors and parables. In almost all cases, Biblical literalism seems to result from an ignorance of either the history of the Bible, or of the content of the Bible itself.

Although I support people's freedom to choose their own beliefs, this does not exclude the possibility of criticism. In particular, Biblical literalism has become so confused as to what it represents that it can no longer be seriously considered a consistent belief system. I call upon those who believe in Biblical inerrancy either to adopt a position consistent with their own beliefs, or to abandon Biblical literalism and thus join the billions of Christians who regard the Bible as inspired by God, but not as something to be read literally.

One of the most apparent flaws in a literal interpretation of the Bible comes from taking the Book of Revelation (also known as the Apocalypse of John) as a literal description of the end of days. This idea is not necessarily problematic, except that many such believers seem also to believe that the end of days is now. As we shall see, these two propositions cannot both be true!

Consider this: if the Book of Revelation is to be taken literally, then there is no ambiguity as to how to interpret its events. For instance, Revelation 8:7 says: “The first angel sounded, and there followed hail and fire mingled with blood, and they were cast upon the earth: and the third part of trees was burnt up, and all green grass was burnt up.” This clearly states that a rain of hail, fire and blood will occur, and that one third of all trees will be destroyed in flame. Has this happened? No. The rest of this chapter contains equally explicit descriptions of disasters, none of which are happening today.

Consequently, Biblical literalists must bear this vital point in mind: until you see the Four Horsemen of the Apocalypse described in Revelation chapter 6; until you experience first-hand the disasters described in the later chapters; until you see with your own eyes the seven-headed red dragon cast down from heaven in chapter 12, you have no basis for concluding that “the end of the world is nigh”.

Of course, one can subscribe to Biblical inerrancy and not take the book of Revelation literally – opening the door for it to be read in the manner of, say, Nostradamus, with the content being interpreted figuratively or metaphorically. For example, Irvin Baxter Jr. takes the fact that Chernobyl is Ukrainian for “wormwood” as the fulfilment of Revelation 8:10-11, which tells of a star named wormwood that falls upon the Earth, causing many to die because a third of the waters become bitter.

But if one opens the door to such interpretations, one has ceased to take the Bible literally (a positive step!). If the Book of Revelation does not need to be taken literally, then why should Genesis be taken literally? Abandoning the commitment to literalism allows for a vast variety of interpretations, and once this step has been taken one is no longer in a position to insist that such-and-such a part of the Bible must be taken literally. If the opposition to evolution by those who believe in Biblical inerrancy is based on a literal interpretation of Genesis, then taking Revelation figuratively opens the door to the not-so-challenging idea that evolution and the Bible need not be incompatible – that evolution was part of God's plan for bringing life to our planet.

A historical analysis of the Book of Revelation does not support the idea that the events it describes are occurring now. In fact, there are strong reasons to suppose that the preterist view of this book (the most common view outside of Christian denominations founded in America after the 16th century) is valid. In particular, Hebrew gematria (a type of numerology practised by Jewish and early Christian mystics) renders the Greek form of Emperor Nero's name (Neron Caesar), transliterated into Hebrew, as the number 666, suggesting that he was the “beast” whose number is given in chapter 13. Additional support for this idea comes from certain early manuscripts of the Book of Revelation giving this number as 616 – the value obtained when the Latin form of the name (Nero Caesar) is transliterated into Hebrew instead. This being so, many of the prophecies contained in this book would have been fulfilled in 70 AD, with the destruction of Jerusalem.
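For the curious, the arithmetic behind that claim is easy to check using the standard Hebrew letter values; the little sketch below simply sums them for the two spellings (the transliterations are the conventional ones given in commentaries, not something established here).

    # Checking the sums with the standard Hebrew letter values.
    letter_values = {"nun": 50, "resh": 200, "vav": 6, "qof": 100, "samekh": 60}

    # 'Neron Qesar' - the Greek form of the name written in Hebrew letters
    neron_qesar = ["nun", "resh", "vav", "nun", "qof", "samekh", "resh"]
    # 'Nero Qesar' - the Latin form, which drops the final nun
    nero_qesar  = ["nun", "resh", "vav", "qof", "samekh", "resh"]

    print(sum(letter_values[l] for l in neron_qesar))  # 666
    print(sum(letter_values[l] for l in nero_qesar))   # 616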

This is broadly the Catholic view on the matter: that all that remains to be fulfilled is the second coming of Christ. The Eastern Orthodox Church, meanwhile, does not read this book of the Bible publicly at all, and chastises those who would presume to know when the end times will be, since they will come at a time of God's choosing.

If one wishes to accept the Biblical idea that the world will end in fire and destruction, I would suggest there is no stronger interpretation than the idea that this will happen as the sun reaches the end of its life cycle. In that case, we have some five billion years remaining before the sun becomes a red giant, boiling away the Earth's oceans and rendering the planet uninhabitable. This interpretation is consistent with both the Bible and scientific investigation, and it suggests a serious re-evaluation of Christian behaviour: if God intends that we inhabit this planet for a few billion years more, shouldn't Christians take seriously their God-given obligation to take care of it until then?

There is another reason for Biblical literalists to reconsider their beliefs, and it rests solely upon sound reflection on the Bible. One of the central tenets of all the Abrahamic faiths is the avoidance of idolatry – for a believer, God must be above all else. Treating the Bible literally makes the words and sentences of the Bible more important than the content of the teachings of Jesus, the commandments given to Moses by God, and the teachings of the Prophets. These teachings do not focus on wishing for the end of days, but upon loving one's neighbour and striving to be a good person. To ignore them in favour of esoteric readings of scripture is to elevate the stature of the Bible too greatly. This is a kind of idolatry – a book is treated as higher than God.

If Biblical literalists can cast aside the arrogance of believing that they alone have the true interpretation of the Bible, admit to God their vanity, and enter into a spirit of fellowship and discussion with other Christians who hold diverse yet equally sacred views, perhaps Christianity can be saved from an idolatry that poisons the very message of Jesus: a message that should and must be central to the beliefs and behaviour of all honest Christians.


Towards the Future

A brief review of where we are in the “Ethics Campaign” is apposite; permit me to think aloud for a moment.

The first part of the Ethics Campaign (from Relative Ethics to Ethics of Metaphysics) was primarily about Meta-Ethics, and could be entitled Part One: Relative Ethics. This intermezzo, in which we are having open discussions about ethical issues, can be considered part of our discussions of Applied Ethics, which will go on in the cracks between the main sections.

The final part of Ethics as a field is Ethical Theory – of which there are three approaches (as we saw earlier): agent-focussed, rights-focussed and outcome-focussed. I am unable to pursue agent-focussed ethics much further without some book recommendations and thus additional reading (suggestions welcome!), and we have already had more than half of the story of rights-focussed ethics from examining Kant’s yardstick. This still leaves outcome-focussed approaches, or Consequentialism, which is something we will have to look at quite directly.

Thus, the next part of the “Ethics Campaign” must be Part Two: Future Ethics, whose subject will be outcome-focussed ethics, particularly Utilitarianism, and the vital question: can there be a viable Ethics of the Future, or does any such attempt devolve into metaphysics? You may already be able to derive an answer to this question, but as the Trolley Problem shows, even if Consequentialism (outcome-focussed ethics) has fatal problems, we still fall back upon it when all else fails.

Several of the issues we are going to discuss depend upon establishing a position on Future Ethics (or at least upon a discussion of Utilitarianism): justice between generations, freedom in the face of crisis, environmental preservation, and hedonism versus self-development all fall into this camp.

(And, as a ‘note to self’, I really should write up Hannah Arendt’s The Human Condition as either a part of, or an introduction to, Future Ethics.) 

Then, beyond Future Ethics lies the borderland between ethics and politics, which will be our last major theme in this campaign. Let us suppose that this final section will be Part Three: Justice, but for all I know the tenor and focus of discussions may shift and this might not be the way our discussions of ethics will conclude – we will have to see when the time comes!

This section would naturally be home to several other issues, including civil disobedience and its related topic of extreme acts of protest, as well as freedom and drugs, for which we should first look at both Consequentialism and civil disobedience.

That leaves just three issues for the current interlude, namely animal rights (which could also wait for Justice), lying, and the ethics of science. If I put up the piece on Kuhn that I drafted for Freedom of Belief, we will be ready to discuss the ethics of science, and I don't need much preparation to discuss lying, so I'll aim to make these the topics for the two weeks that remain before the Austin Game Conference, and endeavour to make a start on Future Ethics after that.

We now return you to your regular scheduled programming…