Tuesday, December 14, 2010

Nightmares of Reason

Over at tggp's blog I was sucked into a fairly pointless discussion about the meaning of "technocrat". That led me to read this article about Robert McNamara, which portrayed his career as somehow paradigmatic of a certain generation of mangers, and did so in a much more sympathetic way than I am used to. (via)

I have a certain idea of McNamara in my head as some kind of monster of rationalism, a bloodless bureaucrat presiding over horrific violence and death without the slightest bit of human compassion softening his considerations. Sort of Eichmann-lite. From this article (and also from Errol Morris's film The Fog of War) he appears to be an altogether more appealing person, a tragic figure who was simply led astray in his efforts to put his strengths into service. Those strengths were rationality, measurement, and goal-directed action. These talents worked pretty well for him in his career prior to the Kennedy administration, but utterly failed in government, once politics and conflict entered the picture.

So was Vietnam "blundering efforts to do good", as McNamara would have it, or just another in a long line of evil imperialist actions, as the Chomskyite left would have it? I find myself caught between these two irreconcilable views. Can't it be both? Can't McNamara be a good man who found himself unknowingly caught up in a bad system? Someone whose worldview left him blind to the effects of his own actions? Thinking along these lines leads to wondering about the nature of evil, and whether even Hitler was doing good by his own lights.

If McNamara's story is a tragedy of reason, the story of the left since the Vietnam era is a tragedy in the opposite direction. The war and the failure to put a stop to it led large segments of the cultural and political left to be suspicious of reason as such and to abandon it, for New Age nostrums or smug deconstructionist pseudo-critique. Essentially, it prompted a new round of romantic reaction to the failures of the modern world, in this case represented by the buttoned-down rational managers of the postwar military-industrial complex.

In my own career I've been on the fringes of the artificial intelligence field, which had its origins in the same cold war rationalism that McNamara exemplified. The field has also suffered from the failings of narrow instrumental rationalism, which constricted the set of allowable models of intelligence to a very small and boring set. When I was in grad school I was loosely connected to a set of people trying to reform and break away from those limitations. Most of those people, myself included, instead drifted away from AI to pursue other areas (biology, sociology, user interface research, Buddhism...). I now find myself in closer contact with the old-fashioned kind of AI than I have been in years, and remembering why I never could be as enthusiastic about the field as I needed to be to work in it. It's not just the explicit military applications; it's an entire concept of what it means to be intelligent that is just so overwhelmingly wrong that it makes me want to scream. Yet the field chugs on, possibly even making some advances, although it's hard to see what they are. The "peripheral" areas of AI, like robotics and vision, tend to make steady visible progress, but the more central areas like planning, reasoning, and representation seem to be stuck, working on the same problems they were 20 or 30 years ago.


Anonymous said...

"... a certain generation of mangers..."

I note you made the same typographical error both here and in TGGP's comment section. A Freudian slip?

Meaningness said...

Thanks for the (Buddhism) link!

Oddly, I was thinking about the history of AI research just yesterday. (I have the same opinion about it that you do, unsurprisingly.)

The reason I was thinking about AI history was odd. I had just read an appendix to Ken Wilber's postmodern philosophical novel Boomeritis. The appendix is titled "Boomeritis Buddhism". After a very large dose of tiresome Wilberist theory, the last three paragraphs deliver a right-on-target explanation of what's wrong with mainstream American Buddhism now.

Since that was quite accurate, I thought I probably ought to read more. Apparently the novel is about a character, named "Ken Wilber", who is a brilliant young researcher at the MIT AI Lab, who encounters Continental philosophy, and on the basis of that realizes that AI is bogus, and sets off to reform it.

That is rather disturbing -- since at one point, Phil Agre (whom you also linked) and I were the two young researchers at the MIT AI Lab who encountered Continental philosophy, and on the basis of that realized that AI was bogus, and set off to reform it.

Which is ancient history, and I wouldn't have thought that anyone remembered it, so I'm not sure that the novel is actually about me, but it doesn't exactly seem like something that would happen by coincidence, either.

In retaliation, I am tempted to introduce a character named "David Chapman" into my philosophical vampire romance. Logically, "David Chapman" would have to be a pop philosopher who recycles bad ideas from dead Germans.

mtraven said...

That is moderately weird, and amusing.

I think you underestimate your own influence; your work and Agre's still gets cited pretty regularly. I suppose Wilber could have come across it at some point; it's a small world.

I've never read anything by Wilber; he looked sort of interesting but a bit too in love with his own systemizing (or something like that). Sounds like you aren't a big fan. Although I guess he admires your work if he's recycling and re-inhabiting your past lives.

David Chapman said...

I had no idea we were still getting cited! I completely stopped paying attention sometime around about the Ordovician.

I've wondered how Wilber might have heard about us. It could be, of course, that he just read Bert Dreyfus' work, and did the extrapolation himself, and it has nothing to do with us. That would be rather impressive. Another theory is that he ran into Timothy Leary's AI Lab spy. They would probably run in the same circles.

Someday I am going to have to tell the story of how Timothy Leary sent a beautiful spy to the MIT AI Lab to turn us all on to MMDA and thereby immanentize the eschaton. It is a true story, as far as I know. It prominently features my stuffed Hedgepiggy, who you may recall was subsequently on the board of directors of Afferent. He also plays drums in the virtual band Spürîöùs Ḍĩąçṙi̋t̵íc̆åł M̱ar̛kṡ. When we get a major-label deal, the story will probably come out. (You can't seriously promote a band without wild tales of sex, drugs, and situated agent architectures.)

I haven't figured out what to think about Ken Wilber yet. I haven't read much of his stuff. Hegel is his #1 dude, which shows seriously bad taste, but some of the other things he says seem insightful, so I'm giving him the benefit of the doubt for now.

mtraven said...

Well, Google Scholar says so. I'm not involved enough in the AI world to tell firsthand (and probably don't want to be for reasons you can guess). I just heard a talk from one of the leading lights of the semantic web, where he described his struggles to do exactly the kinds of structural representation that Winston and others did in the early 70s, except this time with Logic. I was not impressed.

That Leary story certainly sounds worth telling. I actually hung out with him one time, through Media Lab/LA computer graphics world connections. My recollection is hazy as one might expect, but I think we were discussing ideas for the next iteration of his Mind Mirror software.

David Chapman said...

Those who are ignorant of the history of AI are doomed to repeat it.

I've had your same reaction to most of what I've read about the semantic web (although some less-ambitious parts of it will probably turn out to be pragmatically useful).

I think the history of AI could be summarized as "The first twenty-five ideas terrifyingly smart people have about how to do AI turn out not to work. And there isn't a twenty-sixth idea."