Monday, August 16, 2010

My Singularity Summit Awards


Here are my highlights and lowlights of the Singularity Summit. All in all it was a better use of my time than I had anticipated. For better or worse, libertarian memes were not very visible (the Seasteading Institute had a table, and that was about it). Links here.

Best Presenter:
Ramez Naam. The Digital Biome

I wouldn't say I learned anything radically new, but he had excellent slides and a very engaging speaking manner, and gave a talk that was mostly grounded in the reality of actual environmental problems.

Most New Information:
Brian Litt, The past, present and future of brain machine interfaces
Lance Becker Modifying the Boundary between Life and Death
Ellen Heber-Katz: The MRL mouse - how it regenerates and how we might do the same

These were all actual scientists (including two MD researchers) presenting interesting new developments.

Most Annoying:
Michael Vassar, The Darwinian Method
Ray Kurzweil, The Mind and How to Build One

Partly for speaking style (lispy and droning, respectively), but mostly for speaking in huge generalities and saying nothing new.

Weirdest:
Steve Mann, Humanistic Intelligence Augmentation and Mediation

For wearing computers on his head and playing a water piano. Weird in a good way. Runner-up: the guys making very lifelike and creepy robots modelled on Einstein and Philip K. Dick. Now that I think of it, there was nothing really all that weird, which was somewhat disappointing.

Person I would most like to argue with:
Eliezer Yudkowsky: Simplified Humanism and Positive Futurism

Highest BS level:
Jose Cordeiro: The Future of Energy and the Energy of the Future

For claiming, among other things, that in 30 years we will have unlimited free energy (from space-based solar, I think, but he was all over the place). Extra points for having an MIT degree and presumably being capable of thinking more critically.
[[update: PZ Myers takes Kurzweil to task. I pretty much agree, but I've heard Kurzweil's kind of stuff for so long that I just screen it out. Still, I'd put the odds of whole-brain simulation somewhat lower than free energy in 30 years, so maybe Cordeiro has to share his prize.]]

Most Disappointing:
Irene Pepperberg: Nonhuman Intelligence: Where we are and where we're headed

I'd heard a lot about this research, but the presentation had very little intellectual depth to it. Yes, birds and other animals can do some amazing things. And this proves what, exactly? She didn't say. Could be the talk was just pitched at too low a level.

Most entertaining:
James Randi, of course.

Oh well, I'm never going to be a member of this church but I didn't mind visiting for a service, in the ecumenical spirit.

7 comments:

Tyler said...

I attended this year's summit as well, and had similar observations. Pepperberg's research is interesting, but her pitch was juvenile and muddled. I also thought Cordeiro's talk was filled with bold declarations and little to no data. I thought the two MDs had the most polished and interesting presentations. I was also left with the feeling that I could have gotten something out of John Tooby's presentation if he weren't such a terrible presenter. Was he making it up on the spot? What did you think of Shane Legg's and Ben Goertzel's talks? They were probably the most relevant to the subject of the summit.

Matthew Fuller said...

Just a passer-by here.

I heard of Yudkowsky and singinst.org years ago. There probably won't be videos up for months, if things go on like last year.

So, what was it that made you disagree with his premises or his values (or both!)?

I can see general skepticism of the singularity but am not seeing what is wrong with his version of simplified humanism (but I didn't see the presentation).

mtraven said...

Tyler: Agree about Tooby. Interesting ideas, poor presentation -- he spent too much time griping about intellectual slights suffered 30 years ago. And he never quite got to saying how you could verify the reality of one of his hypothesized reasoning circuits. (paper is here).

I couldn't really form an opinion about Goertzel and Legg based on their presentations. Goertzel in particular had unreadable slides and was vague about what he was actually doing. Legg's talk seemed potentially interesting and at least told me enough that I might be motivated to look up his papers.

I guess it's a general rule that a talk is at best an advertisement; if you want to learn anything serious you have to go read something.

mtraven said...

Matthew: Well, it's hard to summarize my disagreement in a blog comment; maybe I'll do a post when/if his slides are up.

My most fundamental disagreement with singularitarians, I think, stems from their devaluation of the social and overemphasis on the individual. E.g., this childish desire for immortality. It's not wrong exactly; it just shows a severe emphasis on the wrong things.

TGGP said...

Kurzweil says Myers is attacking a second-hand strawman. Some fan of his by the name "Al Fin" provides commentary. Since you were actually present, could you tell us whether Kurzweil really was full of that straw?

mtraven said...

Al Fin's commentary was pretty good, so thanks.

My own impression of Kurzweil is that he drones on and on about the exponential growth of technological capabilities. But I have at least two big problems with him: first, not every technology improves at the same rate, and such exponential growth curves inherently can't go on indefinitely (for instance, processor speed has plateaued; further growth in computational capacity will have to come from parallelism). The interesting thing is predicting how the curves will go. Good technology prognostication requires a realist rather than a transcendentalist approach.

Second, the fact is that after some decades of AI research we still don't have very good theories about the relation between computation and intelligence, so blithely assuming that large computational capacity will inexorably lead to intelligence drives me crazy. It can't hurt, but it isn't magic either. Google has an awful lot of computational capacity, but they aren't Skynet yet.

TGGP said...

I read Kurzweil's "The Singularity is Near," and there he acknowledged that engineering improvements plateau. He just assumes that some other exponential will appear to take their place. Robin Hanson says that even assuming the most optimistic Kurzweil trajectory, eventually we will know everything there is to know and will have grabbed all of the available resources in the universe, ending in eventual Malthusianism. That's too far in the future for Kurzweil's horizons, though.