I was lucky enough to meet Edward Castronova when we were both speakers at an academic conference on the subject of games in Manchester... This particularly stuck in my memory as it was the first time that I met someone whose paper I had read, and whose name I remembered from the paper. Ed is one of those rarest of people, a liberal academic with a sound head on his shoulders who possesses great insight but wears it lightly. His (shared) blog, Terra Nova, was the first blog I signed up for when I started plunging into the curious world of blogging.
He recently posted a musing about machine intelligence, which can be found here. Here's an extract:
If AI is like electricity and engines, it too will dramatically change how we live. Many good things will be gained, but some good things will be surrendered, too. We should absolutely be, if not fearful, at least prudent. We could have done cars and electricity a different way. Los Angeles did not have to turn into what it is today. We really should think about how to build intelligent machines. Hawkins does not realize that we will share society with them, and for social reasons, we need to think about how to get along.
I agree with Ed; intelligent machines (which we will probably see in our lifetimes, although I do not believe we will see human-like cognition in mine) will share society with us, and will therefore need to relate to us. After all, we get emotionally attached to cars, and they are still clearly tools. As soon as we are dealing with an ambulatory device capable of following verbal commands, we will naturally anthropomorphise it, and consequently the machine will need to relate to us appropriately.
I've been thinking about issues such as this on and off for years. I did a Master's degree in Artificial Intelligence (slash Cognitive Science), and although I enjoy science fiction yarns with hostile robots as much as the next being (especially Greg Benford's Galactic Centre sequence), I do feel there are too many academics with excessive fears about AI. Ed's approach is balanced: he merely points out that another person's reasons for dismissing fears about "the robot menace" seem unfounded in the light of the social relations between very basic AI and humans in MOGs.
If you want to see how the intelligent machines of the future will relate to us, you only need to look at how we relate to other animal species today. Dogs all but never attack their owners because they are a fully social species, and as such surrender completely to the chain of command. Intelligent machines will similarly need to be programmed to be fully social. (This is the likely implementation of the first two of Asimov's laws of robotics, which in reality are only a thought experiment, as logical statements will not be the basis of complex AI.)
Similarly, I relate very well with squirrels, and indeed can sometimes be found sat in the park having them climb up onto my knees and eat nuts from my hand. They do this because they trust me, and know that I am not a threat. (Plus, of course, I am a source of easy food). They are not a fully social species, but they do not enact violence because their notion of confrontation is based upon brinkmanship, not physical harm. It does not even begin to occur to the squirrels that they could attack me and steal all the nuts from me - even though they sometimes have sufficient numbers to do so. (And if you're interested in the esoteric notion of what would happen if squirrels acquired human-like consciousness, then you should buy my obscure e-novel, Dreamtime). Their consciousness just doesn't include the notion of operating collectively for anything other than early warning of the approach of a predator.
My point is this: in principle, we have nothing to fear from intelligent machines themselves, as the mechanisms we will put in place to ensure their usefulness will inevitably assign them a very low status - lower even than the family dog, which conventionally occupies the lowest rung on the family ladder.
However, that does not mean there is nothing to fear from intelligent machines. I see two areas of risk. The first is the most obvious: that some foolhardy nation decides to replace soldiers with machines - a theme explored in my favourite anime series, Gundam Wing. However, the balance of power on the global stage could and should prevent this from happening, much as the threat of mutual destruction has prevented the nuclear catastrophe that the intelligentsia once believed was inevitable - the original parameters of the Drake Equation show that people in 1961 thought that a nuclear-powered civilisation would last only ten years before destroying itself.
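For reference, the Drake Equation is conventionally written as follows; the point about 1961 pessimism hinges on its final term, L, the expected lifetime of a communicating civilisation:

```latex
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{\ell} \cdot f_{i} \cdot f_{c} \cdot L
```

Here N is the number of detectable civilisations in the galaxy, R* is the rate of star formation, the f and n terms are fractions and counts concerning habitable planets and the emergence of life and intelligence, and L is the lifetime of a technological civilisation. A pessimistic estimate of L - a mere ten years, say - makes N tiny regardless of how generous the other terms are.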
The second area of risk is more serious, and it is one we face today: man-made viruses - that is to say, the malicious reprogramming of intelligent machines. To this there is no real defence. As long as there are marginalised intellectuals who are improperly socialised (by, for instance, bad experiences in high school, or aggressive foreign intervention in their country), there will be people whose only source of attention is clever vandalism. Since cryptography is only ever an arms race, it seems unlikely we can protect absolutely against interference.
It follows, therefore, that wide-scale implementation of intelligent machines may not be appropriate until we improve our social, cultural and intercultural foundations to minimise the risk of people feeling alienated. While individuals can feel alienated, and can wish to cause malicious harm, all technology will carry with it certain risks.