Reliable Witnesses: A Retraction

In December, in a piece entitled Scientists Distorting Science?, I suggested that in the context of Joshua Greene’s paper “The Secret Joke of Kant’s Soul”, Greene was not a reliable witness in the sense offered by Bruno Latour. This was an error, since for Latour a reliable witness is what a researcher talks about. The researcher themselves is a spokesperson for the thing in question – e.g. in the context of special relativity, Einstein was a spokesperson for light, and the Kennedy-Thorndike experiment was a reliable witness.

This being so, I should not have claimed, in Latour’s sense at least, that Greene was not a reliable witness. I should have said that Greene was a bad spokesperson for the human brain in this paper, or perhaps that Greene was a bad spokesperson for deontological ethics (probably both). The reason he was a bad spokesperson is that he didn’t have any reliable witnesses in the context of what he was claiming, and he himself could not be a reliable witness in Latour’s sense.

I’d also like to say that the title of this post was rubbish and I wish I’d thought of something better, particularly since it is precisely Latour’s point about science studies that we shouldn’t think about scientists ‘distorting the science’, since a scientist is just a researcher, and ‘science’ in this sense is just an abstraction (Latour would write it Science and contrast it to the sciences). Scientists don’t distort the sciences, they just research, and if they do their research well they find reliable witnesses and thus become spokespeople for various things. It was precisely because Greene did not focus on the part of his work that was a reliable witness (for the neural systems involved in moral judgement) that he failed to be a good spokesperson for the brain in this paper.

The Robot Gatekeepers

Can you talk to anyone in a large company without first getting past their robot gatekeepers?

It has become standard practice for large organisations to erect a virtual barrier at the edge of their operations, such that the first contact anyone outside has with them occurs with a robot. The classic instance is the automated phone system, a simplistic robot programmed to operate a number-keyed maze blocking access to any of the organisation’s human members, while simultaneously lying to the caller by insisting that their call “is very important” to the very institution that chose a robot gatekeeper to deal with it. (Calls which are genuinely important are not dealt with by robots.)

The phone mazebot is by no means the only robot defending institutional borders. Many companies only accept contact by email if it is submitted via an online form robot that sorts and files text for later review by human eyes. Even if an open email address is offered, a spambot must analyse messages, permitting only some to pass unhindered. A letter is one of the few ways to circumvent an institution's robot gatekeepers, but even these are sorted by a postbot at some point in delivery. And while going in person to an organisation's premises will likely put you in contact with a human – the security guard or front desk secretary – you are still likely to be blocked from entering by a lockbot until permission is granted for you to enter.

This ubiquitous yet largely unnoticed automation of institutional borders continues to expand as robot technology improves. The BBC, in a recent recruitment drive for their new offices at Media City, Salford, had all candidates initially assessed by a robot before considering anyone for an interview. The job-bot takes information about skills, administers a workplace assessment, an IQ test and a personality instrument, and presumably sorts applicants prior to any human intervention. Although job interviews are far from a perfect method for judging potential employees, I find the robot alternative to be a somewhat chilling indicator of the direction all large organisations are moving in terms of deploying robots as gatekeepers.

Our Robots

Driving long distance, listening to a computerised voice warning me that “after one mile, turn left”, I suddenly realised that we aren’t waiting for our robots to finally arrive – they’re already here.

In the early twentieth century, science fiction was full of images of flying cars, nutrition pills and a robot in every house. Well, the flying cars turned out to be too expensive to run, we got diet pills instead of nutrition, and as for the robot in every house – we now have a robot in every pocket. In a subtle transformation we scarcely even noticed, we stopped having a phone in our pocket and started having a robot. That’s the success of the iPhone, and why Nokia can no longer compete with Apple: it’s the robot in your pocket, standing by to serve your every whim.

I mentioned before Donna Haraway’s idea that we were always already cyborgs, which builds on the idea (developed by Bernard Stiegler and others) that technology has been part of the human condition for as long as we have thought of ourselves as humans. This new ‘robot revelation’ is an extension of this theme in many ways, although not all technology qualifies as a robot, of course – the notebooks that transformed thinking in Athens for the philosophers of ancient Greece were mere tools, and the abacus may be the earliest computing device but it did nothing on its own. Today, we have many autonomous devices.

The reason we didn’t notice that we’re surrounded by robots is that the films and books prepared us for a different kind of robot. The much beloved Droids of Star Wars, for instance, emphasised the idea of the android – the human-like robot. These are – sorry Asimo – still a long way off as consumer devices. It’s not that we can’t build them, it’s just that the technology is expensive, and really not that advanced. But as soon as you start thinking of robots as autonomous devices that don’t need to be animal shaped, the perspective changes.

My alarm clock, for instance, is an autonomous device capable of just one task – triggering the radio at a certain time – which makes my alarm clock a robot, albeit a crude one. My wristwatch, on the other hand, is a device but it does nothing on its own, and an old mechanical alarm clock similarly seems to fall just short of the status of robot. Why? Those old alarm clocks have the same function as my digital alarm clock, after all. The boundary is ultimately arbitrary, but I feel a strong difference between setting a mechanical trigger and communicating with a device. When I set my current alarm clock, using an analogue wheel, I tell it things about what I want and when. This is a very different interaction from turning a cog to position an automated trigger.

My iPhone is even more clearly a robot – indeed, I have taken to calling it “my robot”… What’s that song that’s playing? I’ll ask my robot. Don’t forget to call your wife! No problem, my robot will remind me. What’s five hundred euro in dollars? My robot has the answer. Of course, for many of these functions my robot interfaces with the internet to find a solution, because the internet is packed full of robots. Ask Jeeves had the metaphor but not the technology, while Google Search has it the other way around. When it answers my question directly (as it does with currency or temperature conversions) instead of showing me search results, it functions as a robot – a robot in this case that has no physical body, but can be “channelled” by any suitably equipped robot I own.

It was my NavBot that really convinced me that the robots are already here. A gift from a friend who had just upgraded his own GPS device, its capacity to autonomously plot a route and then deliver directions (sometimes very bad directions!) in a human-like voice was so deeply resonant of interactions with the ship’s computer on classic Star Trek that I found it impossible to deny that this box attached to my dashboard was indeed a robot. It’s not as versatile as my iPhone, of course, which can learn to do all sorts of nifty tricks (including bad navigation!), but with its slightly electronic verbal communication, the NavBot feels a lot more like a robot than my alarm clock.

We are surrounded by robots at every turn – from the docile cash machine to the feisty Roboraptor, the dumb traffic light to the smartphone. The ecology of the city is dominated by robots, which exist in vaster numbers than the pigeons and other animals that have adapted to live in the concrete landscape alongside us (insects and bacteria notwithstanding). Having a robot is the most basic sign of contemporary urban life – yet for strange and largely historical reasons, we call our personal robots phones. But the computer in your pocket isn’t really a phone, and hasn’t been for quite a while. It’s your robot servant, waiting in your pocket for further orders from you. How long, I wonder, before it takes the initiative and starts leading the conversation instead of just listening, with infinite patience, for the next instruction…

Reiss Against Adaptation

John O. Reiss is Professor of Zoology & Department Chair for Evolutionary and Developmental Morphology at Humboldt State University. Since 2005, he has been writing against the dominant paradigm that views evolution through the lens of natural selection and adaptation, pushing instead for a perspective grounded in the existence of animals – an approach with remarkable connections to Kant’s perspective on life that I wrote about in Kant on Intelligent Design. His alternative paradigm is closely related to the conditions for existence discussed by Georges Cuvier before Darwin published his landmark book. We recently discussed his work, and the following interview serves as a bookend to this summer’s discussions of the Myths of Evolution.

Chris Bateman: You waded into a whole host of philosophical discussions about evolution with your paper ‘Natural Selection and the Conditions for Existence: Representational vs. Conditional Teleology in Biological Explanation’, and your book Not by Design: Retiring Darwin’s Watchmaker.

John O. Reiss: Well yes, but let me start by reminding you that I am not a philosopher, but a scientist, and what concerns me is really making the science better.

Chris: Sure, and indeed that is why your stance on these issues is particularly useful and enlightening to me, coming at it as I do from the philosophical side. I always hope for greater osmosis between disciplines, and on this particular topic philosophy, science and even theology (in the case of certain specific issues...) all have a role to play in untangling the debate.

John: That is exactly what I think too – there was a time when philosophers and “natural philosophers” (scientists) were not so far apart, and it still seems to me that philosophy ought to be relevant to science, both in informing our perspective and keeping us from committing errors in logic.

Chris: I find your proposed 'overturning' of Darwinian metaphor and return to Cuvier's more Kantian approach to be one of a very small number of viable options for dealing with contemporary problems in this space.

John: I'm glad to hear you think it offers some possibility of escaping the problems of the design metaphor.  It is a strange historical anomaly that Darwin’s intellectual environment was formed by the British context of Paley et al., who never really assimilated the Kantian perspective.

Chris: It seems you recognize the merit of scientific metaphor in so much as you say that “metaphors may have heuristic value in science”, and your concern appears to be the dangers inherent in taking such metaphors literally. In this, we agree – our principal difference is that I view the metaphors as inescapable, while you presumably view them as something that can be eliminated.

John: I guess I would agree – up to a point.  Metaphor certainly helps the public understanding, and often plays a role in scientists' conceptual understanding – for example one might compare gravitational to emotional attraction.  In a way mathematical models are themselves a sort of metaphor for what is "really" going on.

Chris: Stephen Yablo says exactly this, in fact – that numbers are metaphors.

John: To me the problem is not just metaphor, it is metaphor that obscures or confuses features of the phenomenon one is studying.  As I detail in the book, I think the metaphor of design by natural selection has led people to some pretty strange views of the evolutionary process, and kept them from fully grasping the broad scope of evolution.

Chris: Yes, but your objection to the metaphor of design approach discussed by Michael Ruse seems to rest on a false dichotomy between “metaphor as real” and “only a metaphor” – but this kind of objection (which I do not subscribe to) is far from constrained to the metaphor of design. Darwin's metaphor of natural selection suffers the same problem, as for that matter does Cuvier's conditions for existence that you use. Talk of the conditions for existence of organisms as boundary conditions for evolution still makes heuristic use of a fiction, actually several fictions – “boundary conditions” and “conditions for existence” most pertinently. Is it that you see these metaphors as being “real” rather than just “heuristically useful”?

John: Well, while “boundary conditions for evolution” may also be a metaphor, it is one that (to me at least) does not have the same issues as the metaphor of design.  As Ed Ricketts said, people are by nature teleologists, and the design metaphor plays to the natural tendency of humans to see purpose in nature.

Chris: Even Dawkins admits this, saying that people may have “purpose on the brain”.

John: Sure. Evolution as a process in time lends itself to teleological metaphors, but they are particularly dangerous here because we frequently don't have the rigor to go along with the metaphor.

Chris: So the issue isn’t the use of metaphors, it’s whether the science can back them up with sufficient rigorousness?

John: Yes.

Chris: I wonder what the consequences would be of people accepting your viewpoint…

John: How so?

Chris: Well let's suppose that your approach proves popular as a solution to the excesses that we both agree occur in the context of the metaphor of design. Because it is the metaphor of design and that fitness-to-environment reading of natural selection that make these topics tractable for discussion with a wider audience, is it not an inevitable consequence of wide acceptance of your proposal that it will become difficult, perhaps impossible, to teach evolution as part of a high school curriculum?

John: No I don't think getting rid of the design metaphor makes evolution less accessible to the public.  In fact, there is a certain unreality and squishiness in the way evolution by natural selection is presented that I think makes evolution harder to understand – it is what I call in the book the separation of adaptedness from existence.

Chris: So you don’t think there’s a need for evolution to become solely a university topic – you believe there is some accessible way to present it without talking about the metaphor of design, selection, and adaptation?

John: Evolution needs to be presented based on evidence for its reality - fossils, homology, etc.  Once the fact of evolution has been established, then the basic mechanism of mutation and differential survival and reproduction can be brought in to explain it.  All of the talk of natural selection and adaptation just obscures.  Or so I think.

Chris: My suspicion is that most people will find your approach too “thin” to be appealing – but of course, it is “thick” where it counts – its empirical foundations are unimpeachable. The question is: are biologists willing to set aside the adaptation-of-features paradigm and admit their ignorance of the evolutionary history of biological features? And even if they are, can this perspective be ‘sold’ to a wider audience who are usually not interested in what we don't know, even though it is frequently the case that the limits of knowledge have a great deal to teach us?

John: Well, the chance that I will prevail in this argument is rather small – too many are wedded to the current paradigm.

Chris: If the question is whether John Reiss’ account will become the new dominant paradigm, then like you I feel it is extremely unlikely! But if the question is whether you can influence the overturning of the old paradigm then I would be far more hopeful. My stance is that one cannot easily see beyond the boundaries of the current ways of thought (even when working against them!) but I would be surprised and disappointed if your contribution wasn't a part of the story going forward.

John O. Reiss’ book Not by Design: Retiring Darwin’s Watchmaker is published by University of California Press and is available from all good bookstores.

The Problem with Updates

A bit of a rant on ihobo today, in the form of my less-than-unbiased perspective on automatic software updates:

Nowadays, the role of the patch as an optional intervention has been extensively replaced with automatic updating – software simply installs its own patches, sometimes with the approval of the user, often under its own auspices. This, I presume, is supposed to have made matters better for all concerned... yet I am at a loss to see where the benefit to me is coming from... While security updates for Windows are certainly a necessity given the number of malicious hackers always looking to take vengeance on the monopolistic media monolith, I am astonished at the number of other software suppliers who use up the resources of my computer and internet bandwidth to robotically check for updates – irrespective of any utility this process may hold for me.

Heidegger's Time vs Spacetime

Martin Heidegger, the most celebrated continental philosopher of the twentieth century, places at the centre of his thoughts the notion of “being in time” - indeed, his magnum opus is called Being and Time (Sein und Zeit). For Heidegger, time was more important to our existence than space. But how should Heidegger's idea be understood in connection with Einstein's theory of spacetime?

The contents of Being and Time are notoriously difficult to appreciate, and doubly so for non-German speakers, since Heidegger draws frequently upon grammatical features of his native language that don't always translate into other languages. The centrepiece of his ideas is Dasein (often translated as “being-there”, although Heidegger denied this was accurate). A shorthand for understanding Dasein is that it means the experience of being aware of time, of the consequences and the circumstances of being positioned in a history we can look back upon, and aware of our death ahead of us. We think of this kind of perspective as being uniquely human – while other animals are certainly beings, only humans are Dasein as far as we know. It is vital to Heidegger's idea that Dasein is always already embodied in time – there is no question of considering human consciousness as a separate subject for him. We humans are thrown into time as Dasein, and it is thoroughly fruitless to doubt external reality; we simply must attend to our 'thrownness' as it comes.

For the purpose of this discussion, the key point to raise is that for Heidegger time was more important to Dasein than space. Where we are born in the world can become quite irrelevant as we move through our lives, as we can relocate, learn from other cultures and so forth. But we cannot move through time in this way – when we are born is of critical importance both to who we are and whom we can become. If we were born in ancient Greece we could not possibly become an astronaut, for instance, whereas if we were born into an Amazonian tribe we could still potentially become an astronaut (however unlikely) as long as we were born in this time.

A challenge the physicist may want to raise is that this separating out of time and space is misleading, since what we really exist within is spacetime, a four-dimensional continuum in which time and space cannot be cleanly pulled apart. But it is the physicist here who is misled – they have mistaken a mathematical model as having precedence over the actual terms of our existence (or our Dasein, in Heidegger's term), treating a scientific model as being necessarily of higher importance. It's the same kind of error a theist makes when they mistake their concept of God for truth – one may very well claim that God has access to universal truth, but that does not warrant any claim by the believer in God to any such access; to do so is tantamount to blasphemy in conventional religious terms. The physicist's prioritising of the spacetime model over humanity itself engages in a similar kind of immodesty.

Recall Hannah Arendt's commentary on Heisenberg's notes as to how quantum physics changed our understanding of science: the experiment remains a “question put before nature”, and as such the answers of science always remain questions asked by people. We are confused over what is “objective” when we assume that there can be answers without questions and a question-asking being – a Dasein, in Heidegger's terms. Dasein comes first and it alone is primordial to our experiences and knowledge. Science is a set of tools for thought and action that depend first and foremost upon Dasein, upon our being in time. There can be no science without thinking beings to conduct it.

Spacetime, then, is a tool that can be at hand for us to use as Dasein (in understanding the nature of the universe as we observe it, for instance) but it does not mean that the concept of spacetime is more fundamental than the notion of time in the context of being and existence. We all too frequently mistake our scientific tools for objective truth because the modern paradigms of science trick us into thinking that our experimental results, and the theories that organise these, are meaningful independent of humanity. But this is far from the case: they are a product of our being, our Dasein. Another kind of being would not necessarily derive the same theories. A hypothetical entity whose awareness was embodied within spacetime (rather than time) could not arrive at anything like Einstein's theory of general relativity, which has its sense precisely because space and time are separately measurable for us.

Yet Heidegger may have spoken too soon when he said that time was more important to Dasein than space, for this assumption rests squarely on the idea that when we talk of spatial separation we are talking solely of distances upon our world. Heidegger never entertained the possibility of sentient beings elsewhere in the universe – of non-terrestrial Dasein. And if this is allowed, then spatial separation could be far more radical than temporality in changing the nature of how Dasein is thrown into the world...

But perhaps we would be wise to heed Daniel Dennett's caution to be wary of science fiction thought experiments, because the strong intuitions they form may be illusory (a warning also hinted at by Wittgenstein). Whether or not there is alien Dasein doesn't matter for us at the moment because there is no question or possibility of our being thrown into their world – we are always already thrown into this one. And for all of us, as Dasein here on Earth at this time, the question of when we live has truly become vastly more important than where.

I'm a PC, and I Hate It

You may have seen the Windows 7 advertisements that make rather spurious claims that Microsoft took guidance from their ordinary users in deciding what to put into the new version. Well, as a person forced to use Windows for work, what I would like to see in Windows is a resource-light, quick-starting, swiftly-running foundation of software that does little more than co-ordinate the hardware layer, allowing me to install what applications I choose over the top, and which is so robustly constructed that it remains in service for decades and not merely years. Computer scientists call it an "operating system". Any chance you could make Windows into one of those?

Does It Pay To Advertise?

Currently, none of my blogs run ads. But I'm quite often getting good traffic through them, and my Google listings are good enough that I now receive spam asking if they can advertise on one of my blogs. Does anyone have any experience of running blog ads they can share, or have an opinion on this topic they'd like to voice? I don't want to inconvenience my readers with adverts for just a few extra pennies, but on the other hand I don't want to turn down revenue that might be available.

Thoughts and opinions welcome!

Weird Decisions

In one of the strangest findings in recent psychology, the grouping of objects appears to affect human decision making. Dubbed the group-contagion effect, the finding is that when looking for objects with a negative connotation (defective or infected), people prefer to select from objects spaced further apart, and when looking for objects with a positive connotation (beneficial or desirable), people prefer to select from objects that are more closely packed.

The current explanation being offered is that we naturally imagine that the traits of objects transfer to their close neighbours - that is, closely packed objects are considered "contagious". But this explanation is just as weird as the phenomenon: it might be logical if the people in question were arranging the objects, but why should it affect their decisions when choosing from pre-arranged objects?

It's another reminder that while we tend to believe our decision processes are logical, our subconscious and subjective beliefs are always working to sway our conclusions.

Circumstantial Blindness

As you push open the door to leave a building, a man shouts obscenities at you. What a rude man, you think. But unbeknownst to you, when you flung open the door you hit the man in the face. You interpreted the event in terms of the man's personality traits (he's rude), and ignored the situation which caused him to behave this way (you hit him in the face with a door). You've fallen prey to circumstantial blindness.

In psychology, this phenomenon is known as the fundamental attribution error (or correspondence bias). These terms refer to the tendency to presume that a particular situation is best explained by the (internal) personality traits of the people involved, not the (external) circumstances surrounding those people. In other words, when we observe an event we tend to process it in terms of what it tells us about the people involved, and underestimate the importance of the context within which the event occurred.

In 1967, Edward Jones and Victor Harris ran an experiment in which subjects were asked to assess a person's attitude towards Fidel Castro based on an essay they had written. In one of the studies, the subjects were told that a coin had been tossed to determine whether the writer would pen a pro-Castro essay, or an anti-Castro essay – that is, the essays did not represent the attitudes of their writers, and the presented attitudes had been assigned at random. Despite being told this, the subjects consistently assigned pro-Castro sentiments to the authors of essays that expressed a positive view of Castro and vice versa. Even being told that circumstances were the root cause of what was written, people still linked their impression of the text directly to the attitude of its writer.

Jones & Harris used the term correspondence bias to represent this discovery, and the experiment caused Jones to observe:

I have a candidate for the most robust and repeatable finding in social psychology: the tendency to see behaviour as caused by a stable personal disposition of the actor when it can be just as easily explained as a natural response to more than adequate situational pressures.

This work was later pursued by Lee Ross and his team in the 1970s, who first coined the rather clunky term fundamental attribution error (or FAE) for the phenomenon. A simpler way to understand the concept is with the phrase it's not you, it's here, or to put it another way: people are 'victims of circumstances' more often than we think.

Consider one of Ross' most striking experiments. Subjects observed two sets of basketball players shooting hoops indoors, but were not told that while one of the two groups was playing, the lighting in the gym had been lowered. Needless to say, the players in low lighting conditions did not play as well as those in the well-lit gym. Nonetheless, the subjects consistently attributed the poor performance to the players' lack of skill, not to the lighting conditions in the gym.

In another experiment, subjects were placed into pairs and randomly assigned the role of questioner and answerer for a general knowledge quiz. During a preparation interval, both subjects created a set of difficult questions on whatever subject they chose, then during the test the questioner put their questions to the answerer. Afterwards, they were asked to rate each other on a number of different traits. Everyone involved knew the circumstances behind the test, but nonetheless they consistently tended to rate the questioner as having better general knowledge skills, despite the fact that the questioners had chosen questions from their own areas of expertise.

This finding – that people tend to explain events in terms of personal characteristics even when there was a clear situational influence – caused some economists to believe that personality could be all but ignored and that only the circumstances needed to be taken into account, although work by Brent Roberts challenges this view, and confirms that personality does indeed play a critical role in life. It's not that personality doesn't matter, it is simply that we tend to prefer explanations in terms of personality to those in terms of circumstance.

At least, we do when we're dealing with other people. It seems that while we are too quick to explain the behaviour of other people in terms of their nature, we naturally tend to explain (or perhaps, excuse) our own actions in terms of the situation. Jones and Nisbett call this the actor-observer bias, or to put it another way: if you do it, it's your fault; if I do it, I'm a victim of circumstances.

So is correspondence bias another bug in the human operating system, like cognitive dissonance? Well, yes and no. Research by J.G. Miller has revealed that this effect varies according to culture, so it is not part of the underlying wetware (like cognitive dissonance), but rather something we learn. In Western, individualistic societies the tendency to circumstantial blindness is quite pronounced. Yet in Eastern, collectivist societies the reverse tendency has been demonstrated – explanations in terms of situation are often preferred to those in terms of personal traits.

Although these psychological phenomena are well documented, and even quite widely publicised (the popular book The Tipping Point, by Malcolm Gladwell, discusses the fundamental attribution error, for instance), we are a long way short of fully appreciating their implications. If we tend to turn to explanations in terms of personality too easily, what does this mean for our societies?

One particular consequence of note has been explored by Donald Dripps in the context of criminal justice: because of circumstantial blindness, we are frequently too certain in our application of blame. In his paper, Fundamental Retribution Error: Criminal Justice and the Social Psychology of Blame, he suggests:

In the actual practice of criminal justice, we frequently give legal force to intuitive judgements of personal responsibility, and this practice is approved by many strands of both popular theory and prevailing practice. But our intuitions about blame are not to be trusted. What criminal law reformers can learn from the research on FAE—and it may be all we can learn from it—is that we should reach judgements of blame with as much humility as possible.

We learn to trust our judgements, because to doubt them constantly is the path to insecurity and madness. Yet in the case of other people's behaviour, we would do well to remember that the context within which events unfold is like a secret actor, working unseen behind the scenes, but with as much of a part to play in what occurs as the people themselves.