So I finally got around to reading the Mercier & Sperber paper that was buzzing around the blogosphere recently: Mercier, H., & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34(2), 57-74. I love an argument, so I was predisposed to like a paper that tries to show that argumentation is the basis of human reasoning. And it turns out that Mercier is going to be one of the featured speakers at this conference I'm going to.
In some ways this is a glaringly obvious thesis to me, but only because I've been turning my thinking in this direction for quite a while. It seems very similar to Latour, in some abstract way, although he's not cited (these guys are cognitivists, which means they think about what goes on inside the head, while Latour is a sociologist who thinks primarily about what goes on outside). But the emphasis on the agonistic nature of reasoning is the same: the idea that the purpose of representation and thinking is fundamentally to strengthen a position. Latour and M&S seem to be coming at the same insight from two rather different approaches. Latour arrives at it from a combination of sociology and advanced metaphysics; M&S muster tons of experimental evidence to show that people are better at shoring up existing beliefs with argument than at arriving at objective truths.
What's odd to me is that this position seems like a very natural fit for computationalists, yet most AI people are stuck thinking that representations are mere symbols that squat in the brain and have a magical rapport with things in the world. But the essence of computational thinking (in my version of it, anyway) is to be acutely aware of the relationship between representations and the processes that use and generate them. So if there is an argument or other interested process behind thoughts, that should come as no surprise, but apparently it still does.
Agre & Chapman and others analyzed this problem and tried to fix it, ages ago, but it didn't seem to take. In fact, I now recall that one of Agre's hacks was a dialectical situated-action agent that would argue with itself about how to cook breakfast, or something like that. I wonder if it's time for another run at the problem?