
Questioning the Human Genome Project

In 2003, after 13 years of research, the Human Genome Project (HGP) was completed. On the one hand, it was heralded as a major scientific breakthrough, and a landmark in our biological understanding; on the other, it was criticised on ethical and other grounds, and accused of being radically less valuable than had been claimed, most consistently by evolutionary geneticist and philosopher of science Richard Lewontin. Bill Clinton lyrically described the goal of the project as “learning the language in which God created life”, but five years after completion, what exactly have we learned?

Like Lewontin, I was highly sceptical of the value of the Human Genome Project, which seemed to be woefully naïve in its assessment of what was being attempted, although ultimately it seems some good has come from the $2.7 billion project – but not so much of the kind that its advocates advanced at the time. The HGP has hammered fresh nails into the coffin of naïve belief in genetic determinism and dogmatic adaptationist philosophies, and undermined the assumptions that motivated the research in the first place. I consider this to be a great success in terms of philosophy of science!

I originally intended to title this piece, rather cryptically: “One Down, Four Billion to Go”, since a first and obvious criticism of the HGP is that it doesn’t map the human genome, it maps a human genome. Much of the DNA for the public project came from an anonymous male donor from Buffalo, New York, codenamed RP11. One wag noted in respect of the project that once all the chromosomes were fully mapped and sequenced “they'll tell us everything there is to know about one French farmer and a lady from Philadelphia.” We can’t claim to have complete human genetic knowledge until we’ve mapped a reasonable sample size, and that isn’t what’s been done. In fact, because of the genetic variation between individuals, there simply can’t be one definitive human sequence – and suggesting there can be raises serious questions about how we should define what counts as ‘normal’.

But that’s just the tip of the iceberg when it comes to issues. The HGP claims to map a complete genome, but in fact what it has focused upon is identifying the 20-25 thousand genes present in DNA – and this isn’t necessarily the same as the complete genome. In fact, it’s about 5% of the genetic data. Some scientists, secure in their beliefs concerning genes, think this is sufficient, but this rests on very simplistic ideas about DNA. The non-coding regions, conventionally dismissed as “junk DNA”, actually serve very important roles. Putting aside the putative role in the creation of new genes I have touted previously, the non-coding DNA contains many regulatory signals that are vital to understanding the relevant biology.

To cast light upon some of the major misconceptions, I’d like to make an analogy between the data contained within DNA and a telephone directory. Mapping the genetic data is much like collating a list of telephone numbers – that volume of data is certainly useful, when you know what it’s for. But having a human gene sequence without the biological knowledge to go with it is like publishing a telephone directory without inventing the telephone. Actually, if you examine some of the bizarrely optimistic claims concerning the possibilities of genetic engineering that might be unlocked by the Human Genome Project, the metaphor becomes something like the belief that one can work out how to build a telephone by studying the telephone directory. 

There's a similar problem here to the one I discussed previously concerning genes and behaviour: scientists had ideas about inheritance prior to Watson and Crick discovering the DNA double helix, then assumed that genes would explain everything that was known about inheritance. This has led to what Lewontin has called “the doctrine of DNA” – a belief that genes are the ultimate explanatory device, that genes can be linked to everything in biology, psychology and beyond, and that deciphering the genetic code gives us the knowledge to program our own organisms. He parodies this by saying: “Any computer that did as poor a job of computation as an organism does from its genetic ‘program’ would immediately be thrown into the trash and its manufacturer would be sued by the purchaser.”

The genetic engineering we are currently conducting is not really akin to what a programmer does in writing a functioning utility; it is much more like what a hacker does when they subvert an existing utility they didn’t write, and don’t necessarily understand. There are serious ethical issues surrounding modern genetics that stretch far beyond just the Human Genome Project. David Heyd noted that “tampering with the natural biological process of species evolution and individual evolution… [is] a form of metaphysical trespass.” The Senior Editor of Science, Barbara Jasny, admitted that “the Human Genome Project [stretches] the limits of the technology and the limits of our ability to ethically and rationally apply genetic information to our lives.”

Abby Lippman of Montreal’s McGill University is another outspoken critic of the project. She contends that “Everything that's been done so far is about managing the genome project instead of questioning the whole issue of whether there should be a human genome project.” She accuses the molecular geneticists involved of focussing on the wrong part of the problem if the goal is better health, and of not having adequate public consultation on the project. She asks:

Why are we so busy mapping the genome? Why don't we map the environment instead of mapping the genome and worry about things that really make us sick that we don't know anything about? Why do we think it's so much easier to change genes than environmental conditions that put us at risk? Because it's more expensive to clean up the environment than to deal with people who are at medical risk because of the environment. 

Her view is that most human disabilities happen after we are born, and are caused by accidents, injuries or environmental factors. She also questions the eugenic implications of this kind of project: “Do we want to live in a society where nobody is born with Down's syndrome? If so, why? That's an ultimate aim of these tests. Does this make us a better society? I'd like to make geneticists think about these questions as they do their work.”

If the critics are correct that the health benefits of the HGP to the public are far lower than has been suggested, who are the beneficiaries of this research? One obvious candidate is biotechnology and pharmaceutical companies. Lewontin notes: “No prominent molecular biologist of my acquaintance is without a financial stake in the biotechnology business.” The data produced by this research could have been extremely valuable to pharmaceutical companies as a potential source of new drugs, and to other biotech companies, but even this value appears to be less than was anticipated. Craig Venter, the president of Celera, the company which conducted the private portion of the genome mapping, stated that “the drug industry has been saying 'one gene, one patent, one drug', but the uses for this approach can be counted on the fingers… The notion that one gene equals one disease, or that one gene produces one key protein, is flying out of the window.”

More likely beneficiaries of the research are health insurance companies, which can potentially use information obtained from a sample of DNA to reject people who show genetic predisposition to certain medical conditions. Thomas Murray suggests that “the most important movement in the ethics of workplace genetic testing has been away from the original vision of a public health measure, to screening as a way of reducing illness-related costs with no effect on the overall incidence of disease.” So now we are looking at the Human Genome Project not as a great advance in health care, but as a means of insurance companies denying medical cover – the health benefits for the general public are thus worse than they were before the project began.

All this makes it sound as though nothing good has come from the Human Genome Project, but as I said earlier there is a significant silver lining: by exposing the naivety of overly simplistic interpretations of genetics, the HGP has helped improve our understanding of biology by undermining old fashioned deterministic models. Venter notes, in assessing the impact of the project his company has assisted in completing:

There are two fallacies to be avoided: determinism, the idea that all characteristics of a person are hard-wired by the genome; and reductionism, that now the human genome is completely known it is just a matter of time before our understanding of gene functions and interactions will provide a complete causal description of human variability. In everyday language the talk is about a gene for this and a gene for that. We are now finding that that is rarely so. The number of genes that work in that way can almost be counted on your fingers, because we are just not hard-wired in that way.

We have learned that organic biochemistry is not as simple as dialling a telephone number. There are complex multi-dimensional relationships between the tens of thousands of genes involved in building and regulating our bodies, and these can only be fully understood when also taken in the context of the conditions we live in. To understand an organism requires more than a knowledge of its genes: we must know how those genes interact (something we barely understand) and how those interactions relate to the organism’s environment. The assumption of the linear influence of individual genes has suffered a fatal, yet long overdue, blow. Critics of the Human Genome Project have been saying this for decades now, but the biotechnology community has simply ignored the practical – and ethical – implications of their work, in the blind lust for either knowledge, money, or both.

A statement issued by Celera in 2001 admitted “because of the relatively low number of genes… it will be necessary to look elsewhere for the mechanisms that generate the complexities inherent in human development.” Responding to this, Ari Patrinos of the US Department of Energy (who funded much of the public research) said: “It’s kind of humbling, isn’t it?” Any project that can inspire a meek outlook among scientists is certainly exceptional, but at a cost of $2.7 billion I can’t help but wish that there were cheaper ways of teaching humility to the scientific community.


Black Squirrel Map

[Map: black squirrel sightings, September 2008]

It gives me great pleasure to present this map tracking sightings of black squirrels, whose range principally lies along the eastern seaboard of the North American continent. Black squirrels are actually members of the grey squirrel species, but with black fur, and appear to have originated in this part of the world in Canada. They have been sighted further and further south during 2007 and 2008. Please continue to send in your black squirrel reports and I will annotate the map accordingly. Keep watching the trees!

You can explore the complete set of sightings thus far on Google Maps.

UPDATE: I have started a new blog for black squirrel sightings and other sciurophile nonsense. Please go to Shadowtail to report any black squirrels in your area! Thank you!


A Secular Age (5): The Nova Effect

Taylor describes an important aspect of the modern conditions of belief by saying that “we are now living in a spiritual super-nova, a kind of galloping pluralism on the spiritual plane.” This is what he terms the nova effect, which he sees as originating in the establishment of a viable exclusive humanism. The development of secularity in Taylor’s third sense (a world of diverse belief) passes through several stages; the first was the development of exclusive humanism as an alternative to Christianity in Western society (which we looked at two weeks ago), the second was the diversification this triggered as the different positions began to argue amongst themselves:

The multiple critiques leveled at orthodox religion, Deism, and the new humanism, and their cross-polemics, end up generating a number of new positions, including modes of unbelief which have broken out of the humanism of freedom and mutual benefit (e.g., Nietzsche and his followers) – and lots else besides. So that our present predicament offers a gamut of possible positions which extend way beyond the options available in the late eighteenth century. It’s as though the original duality, the positing of a viable humanist alternative, set in train a dynamic, something like a nova effect, spawning an ever-widening variety of moral/spiritual options, across the span of the thinkable and perhaps even beyond. This phase extends up to the present. 

The increasing awareness of other cultural traditions, and the cosmopolitanism born of so many different cultures living side by side in an ever-shrinking world has only served to accelerate this nova effect in recent decades.

As mentioned last week, the polar extremes of the space opened up by the nova effect are orthodox religion and its mirror image, an orthodox materialist scientism which denies the validity of other belief systems as fervently as orthodox Christianity insists upon itself as the only valid mode of belief. In both cases, the ridiculousness of so rigid a position, in light of the vast panoply of ways that people now approach both belief and unbelief, is rendered invisible by a premature certainty, inflated by contrasting one’s own position solely with its polar opposite and not considering the intermediate positions at all.

Taylor explores the materialist positions in order to demonstrate that there is more to unbelief than its opponents would tend to admit. From the materialist perspective:

We are alone in the universe, and this is frightening; but it can also be exhilarating. There is a certain joy in solitude, particularly for the buffered identity. The thrill at being alone is part sense of freedom, part the intense poignancy of this fragile moment, the “dies” (day) that you must “carpere” (seize). All meaning is here, in this small speck. Pascal got at some of this with his image of the human being as a thinking reed. 

This can be contrasted with what people of belief experience in the face of the modern cosmic imaginary:

Here is where a religious person will easily confess a sense of mystery. Materialists usually want to repudiate this; science in its progress recognizes no mysteries, only temporary puzzles. But nevertheless, the sense that our thinking, feeling life plunges its roots into a system of such unimaginable depths, that consciousness can emerge out of this, fills them too with awe.

In demonstrating that the sense of mystery exists among unbelievers as well as believers, Taylor quotes from Douglas Hofstadter who expresses a sense of “cosmic awe” at the image of the world opened by a reductionistic perspective, that from which “the most substantial and familiar of objects or experiences fades away, as one approaches the infinitesimal scale, into an eerily insubstantial ether, a myriad of ephemeral swirling vortices of nearly incomprehensible mathematical activity. This in me evokes a cosmic awe. To me, reductionism doesn’t ‘explain away’; rather, it adds mystery.” Hofstadter is thus sharing in the mystery of the universe experienced by believers, but from a materialistic perspective.

The sense of awe that is experienced from a position of unbelief can serve to “recapture the sense of connection and solidarity with all existence”, leading to a kind of “nourished materialism”:

And so materialism has become deeper, richer, but also more varied in its forms, as protagonists take different strands to the complex facets [of the modern cosmic imaginary]. The reasons to opt for unbelief go beyond our judgments about religion, and the supposed deliverances of “science”. They include also the moral meanings which we now find in the universe and our genesis out of it. Materialism is now nourished by certain ways of living in, and further developing, our cosmic imaginary; certain ways of inflecting our sense of the purposelessness of this vast universe, our awe at, and sense of kinship with it.

Of course, the shift in the cosmic imaginary did serve to change attitudes towards the idea of God, and not just because the early modern apologetics of design (‘there must be a God, because everything shows signs of design’) were thoroughly undermined in the Victorian era by a scientific explanation for this teleology (namely evolutionary theory). The need for an ordering presence behind the visible order was thus shaken. Taylor observes that the “vast unfathomable universe in its dark abyss of time makes it all too possible to lose sight of this ordering presence altogether.” Yet the mystery that is revealed can in itself be a powerful source of spirituality: “Our sense of the universe is not unequivocal… It can occlude all sense of order and meaning, but it also can be the locus of powerful spiritual meanings. When these are denied, the result is often a narrow and philistine scientism.”

In exploring the nova effect, Taylor considers various materialist beliefs without any obvious prejudice. These are not his beliefs, but he is not willing to dismiss them just because they are different to his own. In a fascinating footnote, he even considers what his beliefs might be were he a materialist:

…you don’t have to have faith to believe in the continuing saliency of independent religious motivation. You could think that evolution had played a cruel trick on the human race, and given us an unquenchable thirst for transformation to which no objective possibility corresponded. This seems to me the next most likely hypothesis after theism, and still more plausible than the Disappearance Thesis.

Although we can now see the nova effect spread throughout the whole of society (and the internet serves to make this grand variety of beliefs even more apparent), it took time for this to become part of the experience of everyday life for everyone. Initially, the widening of positions of belief and unbelief was constrained to “happening among social élites, sometimes – when it comes to the development of new forms of unbelief – only among the intelligentsia.” Only in the twentieth century did it begin to influence the lives of everyone in society – and in particular, after World War II. Taylor recognises that “the trajectories differ significantly between national cultures”, but lacks “the space and the competence” to explore all these changes in full. (He also notes that “probably the reader lacks the patience” – the story so far has filled four hundred pages, and is only half-way complete).

He summarises his interpretation of the process of secularisation, contrasted staunchly against mainstream secularisation theory’s idea of the inevitable decline of religion, as follows:

Positively, my aim is to suggest, in place of the supposed uniform and unilinear effect of modernity on religious belief and practice, another model, in which these changes do, indeed, frequently destabilize older forms, but where what follows depends heavily on what alternatives are available or can be invented out of the repertory of the populations concerned. In some cases, this turns out to be new religious forms. The pattern of modern religious life under “secularization” is one of destabilization and recomposition, a process which can be repeated many times. 

Thus the principal characteristic of the religious landscape of today is not that faith or religion has declined – considering the world as a whole it is far from clear that this is the case – but rather that it has diversified.

The third and final phase of the move towards secularity in Taylor’s third sense is the arrival of a new “mass phenomena of ‘expressive’ individualism”. Expressivism had been the invention of the Romantic period of the late eighteenth century, as part of the growing nova effect. Intellectual and cultural elites had been searching for some authentic way of living or expressing themselves throughout the nineteenth century, and in the twentieth century this quest came to affect a much larger part of the population – particularly in the wake of the social revolutions in the Sixties. 

This is the dawn of what Taylor terms the Age of Authenticity – a movement within the nova effect which brings about a large-scale shift in the understanding of the good to a position whereby each individual is granted the freedom to determine their own beliefs and morality, with the sole injunction that it must be ‘authentic’, which is to say “each one of us has his/her own way of realizing our humanity, and that it is important to find and live out one’s own, as against surrendering to conformity with a model imposed on us from outside, by society, or the previous generation, or religious or political authority”.

Thus the spiritual super-nova, accelerated by the adoption of an expressivist ethic of authenticity, has multiplied the number of possible beliefs to a near-infinite degree so that the situation in the West (and elsewhere) is such that:

…the gamut of intermediate positions greatly widens: many people drop out of active practice while still declaring themselves as belonging to some confession, or believing in God. On another dimension, the gamut of beliefs in something beyond widens, fewer declaring belief in a personal God, while more hold to something like an impersonal force; in other words a wider range of people express religious beliefs which move outside of Christian orthodoxy. Following in this line is the growth of non-Christian religions, particularly those originating in the Orient, and the proliferation of New Age modes of practice, of views that bridge the humanist/spiritual boundary, of practices which link spirituality and therapy. On top of this more and more people adopt what would earlier have been seen as untenable positions, e.g., they consider themselves Catholic while not accepting many crucial dogmas, or they combine Christianity with Buddhism, or they pray while not being certain they believe. This is not to say that people didn’t occupy positions like this in the past. Just that now it seems to be easier to be upfront about it. In reaction to all this, Christian faith is in the process of redefining and recomposing itself in various ways, from Vatican II to the charismatic movements. All this represents the consequence of expressivist culture as it impacts on our world. It has created quite a new predicament.

In the wake of the nova effect, and the impact of the Age of Authenticity, what is the modern situation concerning religious belief? It is this to which we shall turn our attention next week.

The opening image is a Hubble mosaic of the Crab Nebula, the six-light-year-wide remnants of a supernova witnessed in 1054 AD by Chinese and Japanese astronomers.

Next week: Religion Today


The Battle for the Hardcore

In the market war for domination between Sony and Microsoft, which console is winning?

We’ve looked quite a lot at the tremendous success Nintendo is enjoying right now on the back of their decision to court mass market “casual” players, using techniques that leave me struggling to resist saying “I told you so” when I examine the situation. What I haven’t looked at recently is the other side of the marketplace battle – the struggle between Sony and Microsoft for the hearts and minds of the hardcore players.

Now I don’t want to suggest that Nintendo have given up or abandoned the hardcore audience – they are still supporting their popular franchises like Zelda and Super Smash Brothers that don’t really have mass market appeal, but it takes time for Nintendo to produce these games, and even the Nintendo zealots need other games to keep them busy between fixes, which means you need another console on the side. (With hardcore players and Nintendo, either you’ve been sucked in and accept that you will have to have a Nintendo console alongside whatever else you decide to buy, or you hate Nintendo and would never consider buying their machines. There doesn’t seem to be much middle ground). The net result is that the market war for hardcore loyalty is essentially a two company race between Sony and Microsoft.

Sony came into this round of the console wars with a huge advantage – a record-breaking 140 million PS2s sold, versus a paltry 24 million units of Microsoft’s freshman green-and-black brick. Unfortunately, this commanding lead gave way to hubris, and Sony’s insistence that they were untouchable helped drive a mass exodus from Sony fandom in favour of Microsoft’s shiny new console. The collapse of the culture of console exclusives, typified by Capcom's shareholder report statement earlier this year (“All major titles launched during the next fiscal year or thereafter will be developed as multi-platform games”) has levelled the playing field, making gaining an early lead even more important this time around. At the time of writing, Sony’s PS3 is coming up on about 15 million units, while the Xbox 360 is enjoying a marginal lead at 19 million units. (The Wii, meanwhile, is approaching 30 million units and still selling briskly).

Last week, Microsoft announced a price drop for the Xbox 360, bringing its cheapest version below the retail price of even the Wii, and Microsoft UK’s regional director said in respect of this: “We think that for high definition entertainment Xbox is first choice... once you get to the price points we’re now at, the mass market can see the opportunity to step into the HD world.” I particularly like the phrase “for high definition entertainment Xbox is first choice” which translates roughly as “we recognise we have no hope of beating the Wii this season, but we think we can thrash Sony’s PS3.” And they may well be right, unless demand for Blu-ray is increasing faster than the sales data suggests. (Blu-ray players are now a more popular purchase than DVD players, I’m told, but this doesn’t say much). More farcical is the claim “the mass market can see the opportunity to step into the HD world” – sure, if they already have a hi-def television. It looks like about 25% of households in the US have at least one HDTV and I’ve seen 20% quoted as an estimate for HD penetration in Western Europe.

Let me put this bluntly: if you choose to make a high definition videogame console at this point in time, you choose to target a fraction of the market instead of the whole of the market. That will change, over time, but who has the time to wait for consumers to catch up on the household technology when you are engaged in an epic market share struggle with your competitors?

Microsoft’s talk about hitting the mass market with the Xbox 360 is fairly ridiculous – only 18% of consumers buying an HDTV do so in order to connect it to a games console, according to one source; having the 360 cheaper than the Wii isn’t much of a draw if you have to spend three times the price of the console on the new TV you need to run it on. I appreciate that the marginally improved graphics on the power consoles practically require HD to look any better, but that just reiterates why banking on a step up in graphics power at this time wasn’t necessarily the best choice. The hardcore gamers are techheads who salivate at the thought of new kit; the mass market are nothing of the kind.

At least talk of the Red Ring of Death has declined. That’s not to say that the hardware flaw which transforms Xbox 360s into a seven and a half pound brick has gone away, just that people seem to have stopped caring. Microsoft have extended their warranty, but to my knowledge they haven’t fixed the fault which causes this fatal crash. It might even have helped them slightly: it looks like the failure rate was about 15% or so; if you subtract these units from the total number of Xbox 360s shipped, Microsoft’s real lead may be only a million or two, instead of 4 million. Now that’s irony!
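For the curious, the back-of-the-envelope arithmetic runs as follows (a sketch only: the unit figures are the rough ones quoted above, and since the true failure rate was never officially confirmed, the adjusted lead is shown for a range of assumed rates):

```python
# Rough arithmetic behind the adjusted-lead claim above. The unit figures
# are the approximate ones quoted in this post; the true Red Ring of Death
# failure rate was never officially confirmed, so several assumed rates
# are shown to illustrate how sensitive the result is.
SHIPPED_360 = 19_000_000  # approximate Xbox 360 units shipped
SHIPPED_PS3 = 15_000_000  # approximate PS3 units shipped

for failure_rate in (0.10, 0.15, 0.20):
    working_360 = SHIPPED_360 * (1 - failure_rate)
    lead = working_360 - SHIPPED_PS3
    print(f"assuming {failure_rate:.0%} failures: lead of {lead / 1e6:.1f} million")
```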

But no matter what uncomfortable situation is facing Microsoft right now, Sony’s bright red cheeks can outshine all competitors. It’s embarrassing enough for the multinational corporation that they have squandered the biggest market lead in the history of videogames, without examining the manifest problems with the PS3 itself.

The main thing I want to draw attention to today is the mediocre quality of the PS3’s operating system. Japanese companies are rarely known for producing great interfaces, to be honest, perhaps because Japanese consumers are happy to learn to use whatever they are given. In the West, the geeks have more demanding standards, and like to attack any company that transgresses their ideals. Next to the 360’s OS, the PS3 seems positively inadequate – and I don’t often say nice things about Microsoft operating systems which, frankly, have a long history of being subpar.

I’m a big fan of older game styles, so I appreciate the PS3 supporting lots of 2D games in its online selection. What I wasn’t expecting was for Sony to fully replicate the experience of playing on an old 8-bit machine by bringing back loading times. Not since the days of the Commodore 64 and the ZX Spectrum have I spent so long waiting for games to ready themselves... Whereas the Xbox 360 downloads and installs demos automatically, the PS3 does neither of these things without first wasting the user's time. When you select a downloaded game, it must first install – a process which can take several minutes – time during which you must impotently sit and watch the progress bar slowly fill up. (At least most Spectrum games treated you to a loading image during this time!)

Without even beginning to dig into the corners of the situation, the poor quality of the PS3 operating system next to the Xbox 360’s surprisingly well-planned effort (it not only installs downloads automatically, it does so even when the machine is playing a game or on standby) has helped many hardcore gamers – even those previously loyal to Sony – to switch their allegiance to Microsoft’s latest console, and for the most part to remain there. Indeed, the reason that noise about the Red Ring of Death is subsiding might be because once the furious ire of the player whose console has self-destructed has passed, the majority still prefer to buy a replacement Xbox 360 if the alternative is buying a PS3.

Plus, may I say that the PS3 is an astonishingly ugly machine, only slightly more attractive than the original Xbox, although doubtless different people have different tastes in this regard.

In the battle for the hearts and minds of the hardcore gamers, Microsoft appears to be winning by a wide margin. With fewer exclusive titles to turn the tide, small extras can make a difference – the bonus downloadable content for GTAIV on 360, for instance. Some analysts say this mega-title did not generate additional hardware sales in the month of its release, but since we're dealing with hardcore players the decision to buy would largely have been made months prior to the game launch (Microsoft announced the exclusive content at E3 two years earlier, remember). With the players for whom this was a key title, the 360 must surely have had an advantage.

It’s almost indisputably the case that the PS3, with its secret weapon of a built-in Blu-Ray player, maintains better prospects for crossing into the mass market when compared to the Xbox 360 – but remembering that the principal evangelists of any system to the mass market are the hardcore players, Sony’s ability to capitalise upon their sole mass market advantage is going to be severely hamstrung unless they do something to win back hardcore support in the next year or so. Exclusive titles will be vital to this: Metal Gear Solid 4 has already given Sony a burst of extra sales, and LittleBigPlanet may even court some mass market players, but there's still a lack of breadth to the PS3 exclusives. Sony may be quite surprised to find the PixelJunk games giving them a boost of appeal with the hardcore players – especially since I hear rumours that a top Sony executive chewed out Dylan Cuthbert's Q-Games for making 2D games. (Personally, I feel PixelJunk Eden is the most interesting game so far on the PS3 – and I don't just mean on PSN).

Despite gaining a crucial lead that could potentially land them in second place (after Nintendo) in this round of the infinite console wars, Microsoft have to be careful. Sony have one other advantage up their sleeve, and that’s the design of their console’s chipset which, much as happened with the PS2, is not being fully leveraged in the current games because it’s too hard to program for, and it takes a while for the programmers to work it out. In a few years’ time, the PS3 could be outperforming the Xbox 360 in technical terms, leaving Microsoft with the tricky question of whether to replace the 360 in an attempt to jump over Sony’s technical edge (at massive cost to Microsoft, and possibly angering the fans if it is done too soon), or whether instead to limp on through this console cycle... Honestly, though, I find it hard to believe that any step up in graphical quality (which is likely what is entailed) would be enough to turn the tide in Sony’s favour if they’ve already yielded the loyalty of the hardcore audience to Microsoft.

The battle is far from over, and Sony have more up their sleeve right now than Microsoft appear to be able to muster, but by stealing hardcore loyalty from their market rivals, Microsoft have gained an edge that could allow them to give Sony a seriously bloody nose this time around. But if it came to 360 versus Wii in a battle for the mass market, Microsoft should save their war chest and call it quits – it may be the hardcore gamer’s ideal machine right now, but even in a dream scenario the 360’s installed base is probably going to top out at about 40 million (not coincidentally, roughly the size of the installed base of Sony’s hardcore-friendly PSP handheld).

The Wii might not have the legs to beat Sony’s 140 million PS2s, since that was the consequence of a convergence between gamer hobbyist and mass market support which doesn’t exist now that the console manufacturers have torn the market dynamic into two very different halves, but with a good tailwind it could match or exceed the 75 million DS handhelds Nintendo have sold. I suspect it will outsell the 360 by 2:1, perhaps 3:1.

And the PS3? Its final unit sales may depend more on the uptake of the Blu-ray format than anything else, so at least it will help Sony with the promotion of their media format, even while it teaches the multinational the lesson the N64 taught Nintendo: you still have to be nice to people when you get to the top, as it’s a long way down...


Branded Stockholm Syndrome

You know Stockholm syndrome, the psychological reversal that sometimes causes hostages to become emotionally attached to their captors? I can’t help but feel that something quite similar happens with the corporations from whom we purchase an increasing proportion of the equipment and supplies for daily life.

Take my current situation. While living in the US, I had a Hewlett Packard printer – nice unit, actually. But both of its ink cartridges contained a pathetically small volume of ink, forcing me to replace them almost weekly, and the printer had no setting to force printing in black and white (to do this, I had to physically remove the colour cartridge – and I had to do so regularly). I grew to loathe HP for this, and swore blind I wouldn’t buy another HP printer. Yet, when the office here in the UK needed a new printer, I ended up actioning the purchase of, yes, another HP printer.

Or look at my new laptop purchase decision. Ever since seeing Matt Mower’s swanky MacBook do things my laptops have never done (like come back from standby in less than an eternity), I’ve been thinking of ditching Microsoft’s bloated and embarrassingly ill-constructed operating systems for the stylish dictatorship of Apple. Yet, when I played with a MacBook Air in the shop the other week, I got cold feet – just using a mouse with a single button was enough to freak me out. I’m sure I’d learn to use it, but it was sufficiently different to make me nervous. And now it looks like I will be getting yet another Windows-based laptop (although I pray I will not be lumbered with Vista any time soon).

Which makes me wonder: am I suffering from some kind of brand-version of Stockholm syndrome? I hate these companies, yet continue to give them my money. And this is doubly bizarre since elsewhere in my life I maintain such staunch boycotts on political grounds – Nestlé are still a company I will not purchase from after the African powdered milk scandal of the 1970s (despite the fact that I can’t work out what action the multinational packaged food company could take that would end the boycott at this point), and my pointless boycotts of EA (such as FEAP, the Futile EA Protest) have remained a feature of my life for some time as a vain attempt to protest their lack of investment in original titles.

Psychologists suggest that Stockholm syndrome, and similar situations such as that of abused spouses who remain loyal to their partner (which they term trauma bonding, or bonding to the perpetrator), are a result of power relations – we bond with our powerful abuser, they say, because it is in our best interests to do so. I have some sympathy with this view – I certainly see it to some degree in non-human social animals, such as meerkats, who need to maintain tight social groups in order to survive. From the individual perspective, though, the fear generated by the abusive situation can become unbearable; the switch of loyalty towards the abuser could thus be seen from another perspective as anxiety management (psychology is never as neat as physics; there are always multiple forces at work, and thus multiple explanations available).

Wondering whether I have trauma bonded with Microsoft and HP after years of abuse is a disturbing idea… these companies have brought me such suffering over the years, admittedly on a grossly trivial scale as far as human suffering goes, yet I stick with them – albeit with great reluctance. (In respect of the triviality, I am reminded of Mel Brooks’ line: “tragedy is when I slip on a banana peel and fall down, and comedy is when you fall down a manhole and die”). 

Partly, however, my failure to escape has been a result of other circumstances. I was not keen on getting another HP, but of the units available in the shops around me (I’m reluctant to buy equipment I haven’t at least seen, although I sometimes examine units in the shops and then order online) only Hewlett Packard had one that checked all the boxes. Plus, it was grossly reduced in price, and this seemed like an incentive rather than, say, a trap. Similarly, I can’t ever truly escape Windows while I work in the videogame industry and am developing for Microsoft operating systems on a regular basis. These other factors cloud the issue somewhat.

I wonder if other people have any similar experiences. Have you ever noticed something akin to Stockholm syndrome in your dealings with companies? Ever stuck with a company that has abused you, or found yourself buying repeatedly from a corporation you hate? Let me know your thoughts in the comments!


Internet Burp

Owing to an internet burp of some kind, Thursday's serial post was inadvertently posted to the site today. I have taken it down, and it will reappear on Thursday as intended. Anyone who saw the post over the RSS feed and has come to the site to comment, I apologise for the inconvenience. Please wait until Thursday when the post will appear as scheduled.


Don't Talk About Creationism, Or Else

Following this week's exploration of “religion” versus “science”, I’d like to conclude this theme by looking at this issue in current affairs.  

Earlier this week, Professor Michael Reiss (pictured) stepped down as director of education for the Royal Society, a prestigious learned society which serves as the academy of science in the United Kingdom. The implication of this resignation is that Professor Reiss has been forced out of his job as a result of his comments concerning creationism last week.

What were these scandalous comments? Professor Reiss suggested that, rather than dismissing creationism as a “misconception”, teachers should see it as a cultural world view and take the time to explain why creationism had no scientific basis – something he argued was more valuable than simply banging on about evolution. In a later interview he added:

An increasing percentage of children in the UK come from families that do not accept the scientific version of the history of the universe and the evolution of species. What are we to do with those children? My experience after having tried to teach biology for twenty years is if one simply gives the impression that such children are wrong, then they are not likely to learn much about the science that one really wants them to learn. I think a better way forward is to say to them ‘look, I simply want to present you with the scientific understanding of the history of the universe and how animals and plants and other organisms evolved.’

(Why are there an increasing number of children with creationist views? It is because of the increase in Muslim families in the UK, a proportion of which have traditional creationist beliefs. Belief in creationism in the UK was estimated in 2006 at about 20%; Professor Reiss suggests roughly one in ten students hold creationist beliefs).

The problem, it seems, was not the actual comments that Professor Reiss made, but the way they were reported in the media. Even the moderate BBC news coverage ran with the misleading headline “Call for creationism in science”, and the fact that Professor Reiss is a Church of England Minister as well as a biologist doubtless added to the confusion.

The media coverage inflamed the anti-creationist lobby, leading to comments like the following from Keith Porteous Wood of the National Secular Society: “Creationism is anti-science. Teaching it to children is a form of intellectual child abuse, because it gives them the wrong facts about life.” This viewpoint clearly draws on Professor Dawkins’ polemic that teaching anything but ‘the correct science’ is child abuse, a position that is monstrously naïve in terms of philosophy of science, presuming that the science of any one given era can be treated as absolute truth despite the manifest evidence that scientific beliefs change over time.

My view on this debacle is echoed in the voices of other scientists in the UK. Lord Robert Winston, professor of science and society at Imperial College London, said in respect of the resignation: “I fear that in this action the Royal Society may have only diminished itself. This is not a good day for the reputation of science or scientists. This individual was arguing that we should engage with and address public misconceptions about science - something that the Royal Society should applaud.”

The chief executive of the British Association for the Advancement of Science, Roland Jackson, stated: “I was at the actual discussion and what I heard him say, however it has been reported, was essentially the position advocated by the Royal Society.” He also suggested the Royal Society “should have supported him and used this opportunity to further a reasoned debate”.

All this dovetails with my position that creationism is a reasonable topic to discuss in schools when the students hold this belief, and attempting to suppress discussion is utterly counter-productive. Creationism is no longer considered a valid scientific theory, but this does not make it anti-science, and the discussion of why creationism is not a scientific theory is a legitimate scientific topic. It lays out the boundary conditions for our modern understanding of the scientific process. Similarly, phlogiston is no longer considered a valid scientific theory, but there is nothing anti-scientific about discussing phlogiston – in fact, discussing failed theories teaches far more about the scientific method than enforcing a dogmatic interpretation of science. The use of phrases like “anti-science” reveals partisan psychology at work.

Modern scientists should be sufficiently secure in their scientific beliefs to have discussions on any topic relating to science without it being a cause of cognitive dissonance. When this is not possible, the objective perspective which scientists strive to attain (but never truly can) is entirely lost, and in the absence of even the intent of objectivity the essence of the modern scientific method is obscured. Since the scientific community disagrees on so many facts, it is absurd to treat any aspect of modern scientific knowledge as dogma – when this happens, we have fallen into the nightmare of epistemic dictatorship I lampooned earlier this week in the nonsense piece about the Science Pope.

Professor Reiss’ resignation is an embarrassment to the Royal Society, whose distinguished history spans almost 350 years. When the Royal Society was founded, various forms of creationism were the dominant theories of the origin of life. To contend that these older beliefs are no longer permissible topics for discussion is to irreparably harm the credibility of the scientific endeavour, to violate its assumption of neutrality, and to radically fail to uphold the grand tradition of discussing any and all aspects of human knowledge in a spirit of open debate, upon which all scientific institutions are alleged to be founded.

Sir Isaac Newton, one of the progenitors of modern science, held views that accorded with those of the proponents of Intelligent Design, saying of the Universe: “This most beautiful system could only proceed from the dominion of an intelligent and powerful Being.” Holding this belief did not invalidate his scientific work, which included the publication in 1687 of the most influential book in the history of science, Philosophiæ Naturalis Principia Mathematica. But he knew full well that future scientists would see further than him:

I do not know what I may appear to the world, but to myself I seem to have been only a boy playing on the sea-shore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.

Today’s would-be Science Popes would do well to heed Sir Isaac’s advice that “tact is the art of making a point without making an enemy.”


A Secular Age (4): “Religion” versus “Science”

Why are these terms in quotes? It is because if one were to examine the whole of the phenomena collected under the term religion, and under the term science, one would find more conflict internal to each domain than between them. Yet there is undoubtedly a sense in which people believe in the reality of this conflict. What we mean by “religion” in this sense is not the entirety of religious phenomena, but simply a position of artificial confidence created by particular orthodox kinds of Christianity, and what we mean by “science” is rather a position of artificial confidence created by “the success of post-Galilean explanations”. Taylor notes:

The pure face-off between “religion” and “science” is a chimaera, or rather, an ideological construct. In reality, there is a struggle between thinkers with complex, many-leveled agendas, which is why the real story seems so confused and untidy in the light of the ideal confrontation…

The roots of this conflict lie several centuries in the past – in the “scientific revolution” that commenced in the sixteenth century, and in the resulting tensions which reached their apex in the nineteenth century – but even then (as we shall see shortly) the situation is not as clear cut as it is usually imagined. There is an obvious public side to the echoes of this conflict today in the battle between ideological Darwinians and Biblical fundamentalists in the United States, about which Taylor notes the following:

So then as now, in post-Galilean Europe and post-Scopes trial America, a fragilization of faith partly due to disenchantment, combined with an internalization of this disenchantment, produces a face off between “religion” and “science” of a strangely intra-mural quality. This is the face-off which figures so prominently in the ex parte “death of God” story so popular among unbelievers. One party, moved purely by the interests of “science”, that is, finding an adequate explanation for the undeniable facts, squares off against another, mainly actuated by an extra-scientific agenda, that of maintaining cherished beliefs and/or traditional authority… But the actual history doesn’t fit this dramatic picture. If we look at the period we’re examining, we see that the mantle of sober scientists was often seized by the defenders of orthodoxy.

This was a point that the philosopher Paul Feyerabend was also keen to observe in his reassessment of the Galileo controversy: it may seem in retrospect that Galileo was fighting for “truth”, but there were profound flaws in his actual research, and the establishment was quite justifiably cautious in accepting many of his claims. The fact that his position would come to be seen as factual was not enough in this case – Galileo may have intuited the actual situation from his observations, but his scientific research was in fact insufficient to carry his claim at the time. This does not, of course, exonerate the Church’s behaviour in this controversy, but it does render the historical situation more complex than is usually considered.

The driving forces behind this conflict were changes to the cosmic imaginary (that which “makes sense of the ways in which the surrounding world figures in our lives”, and is the parallel to the social imaginary discussed two weeks ago). No longer was the idea of an ordered cosmos the basis of humanity’s concept of its position in the grand order of things. Rather, a vast – perhaps infinite – universe was seen to lie in the space beyond our world: “Cosmos to universe: the way the world is imagined changed…”  

This change created conflict with particular religious beliefs at the time of Galileo, and the problem was to reoccur with increasing force in the Victorian era with Darwin’s theory of natural selection (Darwin himself never used the term evolution, as this had a different implication at the time). Taylor is keen to try and place this aspect of the story in adequate context, as it is easy to be misled by focusing on the replacement of one theory by another, but the transformation of the cosmic imaginary requires much more than this. Kuhn and other philosophers of science have demonstrated in recent decades that “without an adequate alternative framework of explanation, the most refractory facts will not budge us from our established beliefs, that they can indeed, often be recuperated by these old beliefs.” Taylor concedes:

This is not to deny that science (and even more “science”) has had an important place in the story; and that in a number of ways. For one thing, the universe which this science reveals is very different from the centred hierarchic cosmos which our civilization grew up within: it hardly suggests to us that humans have any kind of special place in the story, whose temporal and spatial dimensions are mind-numbing. This, and the conception of natural law by which we understand it, makes it refractory to the interventions of Providence as these were envisaged in the framework of the earlier cosmos, and the connected understanding of the Biblical story. Seen in this light, “Darwin” has indeed, “refuted the Bible”.

But the usual perspective we have about the role of Darwin in the transition to widespread unbelief is misleading because it presents the theory of natural selection as the pivotal point of this conflict. Yet in fact the Victorian era was already grappling with this problem long before the publication of On the Origin of Species by Means of Natural Selection in 1859.

Taylor identifies the historian Thomas Carlyle (pictured above) as a central figure in this debate. Carlyle had been raised in a strict Calvinist family, and his family had expected him to become a preacher; however, while studying at the University of Edinburgh he lost his Christian faith. Carlyle's work brought into the public awareness a wholly new perspective on history, and revealed what Taylor terms “the dark abyss of time” against which humanity was an almost insignificant speck. This was an important part of the background conditions in which Darwin’s theory appeared:

…evolutionary theory didn’t emerge in a world where almost everyone still took the Bible story simply and literally… this world was already strongly marked by ideas of impersonal order, not to speak of the dark abyss of time; and… an influential formulation had already been given to the displacement of Christianity by a cosmic vision of impersonal order, that of Carlyle. This doesn’t mean that Darwin was without impact. His theory gave an important push towards a materialist, reductive view of the cosmos, from which all teleology was purged (because explained away on a deeper level). But it enters a field in which many people had already felt the pull of the primacy of impersonal order; it did not initiate this pull on its own.

One of the most fascinating parts of Taylor’s account in connection with “religion” versus “science” is his exploration of the experience of conversion that people have when they abandon (say) Christian beliefs in favour of scientific materialism. He demonstrates quite comprehensively that this move does not occur as a result of “some rigorously demonstrated scientific conclusion”; in fact:

…the appeal of scientific materialism is not so much the cogency of its detailed findings as that of the underlying epistemological stance, and that for ethical reasons. It is seen as the stance of maturity, of courage, of manliness, over against childish fears and sentimentality… the story that a convert to unbelief may tell, about being convinced to abandon religion by science, is in a sense really true. This person does see himself as abandoning one world view (“religion”) because another incompatible one (“science”) seemed more believable…

But by “science” here we mean scientific theory plus “a picture of our epistemic-moral predicament in which science represents a mature facing of hard reality”. It is this whole package which beats out “religion”, which is to say religious faith plus a rival epistemic-moral predicament. The actual findings of science are less important here, and certainly do not prove the impossibility of God – Taylor examines both the proofs and the disproofs of God (as have other philosophers before him), and shows once again that neither really hang together in any convincing fashion. As with all metaphysics, such issues can never be resolved decisively one way or another. Thus:

When “science” beats “religion”, it is one such [epistemic and moral] vision which expels another… But once this happens, then the very ethic of “science” requires that the move be justified retrospectively in terms of “proofs”. The official story takes over.

As a theist, Taylor has a unique perspective on conversions to unbelief, and notes that part of the appeal here is “the relief of revolt”. When one has a childish conception of God, for instance, as a protecting father who could prevent our suffering but does not, then the pain of holding onto one’s faith in the context of the unkindness of everyday existence can become unbearable – the only way to alleviate the cognitive dissonance is to ‘flip’ into atheism. Against this, Taylor shares his own image of God as suffering with us, but is keen to note that any kind of faith in God can be seen as childish from some perspective.

Returning to the core theme of secularization, Taylor identifies a pervasive “unthought” which can mislead unbelievers when they consider religion just as easily as equivalent “unthoughts” may lead believers into strange ideas. This hidden outlook is strong among intellectuals and academics who, having undergone a conversion to unbelief (or perhaps having begun there) believe that religion must decline either because science shows it to be false, or because disenchantment invalidates it, or because it depends upon authority and cannot survive the modern importance placed upon individual autonomy. This forms part of what Taylor terms the “intellectual hegemony” in the academic world, which excludes or renders irrelevant the study of religion, especially in the fields of social science, history, philosophy and psychology. Academics who study religion are often met with surprise, as if religious matters no longer had any bearing on the modern world. The extent of this problem, I can attest, is greater than most people credit.

The historical battle between “religion” and “science” marked a powerful transition of the cosmic imaginary, one which did indeed create problems for conventional Christian faith both in the Victorian era and beyond. But this transition was not a change to an imaginary where scientific materialism is inevitable and religious faith is impossible:

…the salient feature of the modern cosmic imaginary is not that it has fostered materialism, or enabled people to recover a spiritual outlook beyond materialism, to return as it were to religion, though it has done both these things. …it has opened a space in which people can wander between and around all these options without having to land clearly and definitively in any one. In the wars between belief and unbelief, this can be seen as a kind of no-man’s-land; except that it has got wide enough to take on the character rather of a neutral zone, where one can escape the war altogether. Indeed, this is part of the reason why the war is constantly running out of steam in modern civilization, in spite of the efforts of zealous minorities.

We live in a world where the landscape of belief has diversified into an unfathomably variegated patchwork quilt of possibilities, against which both the narrow cleaving to ancient tradition represented by “religion” and the equally blinkered flattening of religious beliefs to irrelevance represented by “science” are merely the polar extremes.

Next week: The Nova Effect


Difficulty is Difficult

My old sparring partner from Develop magazine and elsewhere, Daniel Boutros, has a piece about game difficulty up on Gamasutra (originally in Game Developer) entitled Difficulty is Difficult. Here's an extract:

When your company budgets for tuning of difficulty levels, the result is extended longevity for the game, and increased enjoyment for players at all levels. It goes without saying that games should be appropriate for their target players on all levels, providing challenges for each skillset. One important thing to remember, which has been creeping into games recently, is that players aren't often able to determine their own difficulty levels. They may think their skill level is far above or far below the actuality.

Dan's thoughts on game design have continued to improve with each passing year, and this piece is his best yet, combining his own perspective with interviews with the designers of various well-known games.


Playing Together

What’s the difference between a gamer who prefers to play multiplayer games, and a gamer who prefers to play alone? In this final look at the DGD2 survey (for now, at least – I have some major posts to come, but they will have to wait) I will very briefly examine the differences between players who prefer to play alone, and those who prefer to play with other people. (Values in brackets are the p-values of a two-tailed t-test – the lower the value, the more statistically significant the finding).

The 1,040 respondents in this survey divided more or less evenly into the two camps. 40.6% preferred to play single player games on their own, with an additional 7.1% preferring single player play, but enjoying playing such games with other people via pad passing and similar play-sharing techniques. The remaining 52.3% of the survey preferred a form of multiplayer gaming: multiplayer in the same room was the most popular at 17.1%, followed by virtual worlds and MMORPGs at 16.3%, and multiplayer gaming over the internet at 13.6%. Finally, team or clan play over the internet represented just 5.3% of the sample.
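For anyone who wants to check the arithmetic, the categories do partition the sample; the following sketch simply re-adds the percentages quoted above (the figures are from the survey, the variable names are mine):

    # Re-adding the preference percentages quoted above; the grouping
    # follows the post's two camps, the variable names are mine.
    single_player = 40.6 + 7.1                     # solo + shared single player
    multiplayer = 17.1 + 16.3 + 13.6 + 5.3         # same room + MMO + internet + clan
    print(round(single_player, 1))                 # 47.7
    print(round(multiplayer, 1))                   # 52.3
    print(round(single_player + multiplayer, 1))   # 100.0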

One thing immediately stands out from the results: those who prefer multiplayer are much more focussed on challenge (and thus fiero – the emotion of triumph over adversity) than those who prefer single player. Multiplayer respondents gave much higher ratings for challenge-oriented play (.000), and higher ratings both for the emotion of fiero (.009) and for the fiero-enhancing emotion of anger (.009). That’s not all: multiplayer-preference players had a statistically significant higher preference for social emotions (.000) and random elements in games (.000), and a lower preference for sandbox play (.017).
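For readers unfamiliar with the bracketed values, here is a minimal sketch of the kind of test that produces them: a two-tailed independent-samples t-test comparing the ratings of two groups. The ratings below are invented purely for illustration – they are not the DGD2 data – and scipy's ttest_ind is just one standard way to run such a test, not necessarily the tool used for this analysis.

    # Two-tailed independent-samples t-test, of the kind reported above.
    # All numbers here are invented for illustration only.
    from scipy import stats

    # Hypothetical challenge-orientation ratings on a 1-5 scale
    multiplayer = [5, 4, 5, 3, 4, 5, 4, 4, 5, 3]
    single_player = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3]

    # ttest_ind is two-tailed by default; the smaller the p-value, the
    # less likely the difference in group means is due to chance.
    t_stat, p_value = stats.ttest_ind(multiplayer, single_player)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")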

This paints a particular picture of these two kinds of players.

Multiplayer gamers (statistically speaking) tend to be challenge-oriented, and willing to be aroused to anger, as this enhances their eventual reward in fiero when they attain victory. They are not only enjoying fiero, though; they are also enjoying the social elements of multiplayer games, such as the sense of belonging to a team, feelings of envy and gratitude, and the feeling of naches – the satisfaction of seeing someone you taught to play perform well.

Conversely, single player gamers (statistically speaking) show greater interest in having control over the space of their play. This is one way to interpret their lower interest in random elements – these add variety to play, but they also mean the player has less direct control over outcomes. Their higher interest in sandbox play can also be interpreted as an increased interest in having complete control over the play space, although undoubtedly other interpretations are possible.

Regarding the skills of play, multiplayer gamers rated themselves fractionally higher on basic game literacy (.001) – perhaps a sign of higher self-confidence rather than anything connected with game literacy itself – and (in the context of Temperament Theory) had a slightly higher mean rating for Logistical skills, i.e. tolerance of repetition (.028), although this was a marginal result at best. However, the multiplayer gamers rated themselves much higher in terms of Tactical skills, i.e. real-time decision-making and action (.000), which is not surprising given that the most popular games to play in multiplayer all depend upon Tactical skills (first person shooters and racing games, for instance).

We do not usually think of the split between those who prefer single player and those who prefer multiplayer games as enormously significant, but there are clearly patterns of difference to be detected. For one thing, it seems that the emotional reward of fiero may be more attractive when it is earned against (or with the assistance of) human players – beating a single player game might be less satisfying because no person was overcome. For the 36% of gamers for whom competitive multiplayer play is appealing, playing together is doubly rewarding: not only do they get the emotional benefits of social play, but the taste of victory appears to be all the sweeter when it is won from a human opponent.

This post concludes the statistical analysis of the DGD2 data for now. I may add one more analysis concerning game genres in the future, but it has not yet been conducted. As for the conclusions of the DGD2 study – those will have to wait!