Sunday, July 16, 2017

The Machinery of Destruction

Who are you more afraid of – psychopathic individuals, like Ted Bundy, or psychopathic systems, like communism or Nazism? Or capitalism, which, while it may not be as inherently murderous as the others, seems to be destroying us far more efficiently? Which of these scares you most, and, emotional reactions aside, which is actually the most likely to do harm? And what if the entities in question were endowed with superhuman intelligence – the fictional but archetypal Hannibal Lecter, say, or capitalism with better technology?

This thought was prompted by another SSC post, which makes a case for putting more resources into preventing the possible catastrophic consequences of artificial intelligence. In the course of that argument, he dismisses some common counterarguments, including this:
For a hundred years, every scientist and science fiction writer who’s considered the problem has concluded that smarter-than-human AI could be dangerous for humans. And so we get these constant hot takes, “Oh, you’re afraid of superintelligent AI? What if the real superintelligent AI was capitalism?”

Well: my number one most popular post ever was exactly that hot take; I’m dismayed to learn that it’s a cliché. I posted that in 2013, so maybe I was ahead of the curve, but in any case I feel kind of deflated now.

But my deeper point was not that it’s dumb to worry about the risks of AI since capitalism is much more dangerous – it’s that AI and capitalism are not really all that different, that they are in fact one and the same, or at least descended from a common ancestor. And thus the dangers (both real and perceived) of one are going to be very similar to the dangers of the other, due to their shared conceptual heritage.

Why do I think that AI and capitalism are ideological cousins? Both are forms of systematized instrumental rationality. Both are human creations and thus imbued with human goals, but both seem to be capable of evolving autonomous system-level goals (and thus identities) that transcend their origins. Both promise to generate enormous wealth, while simultaneously threatening utter destruction. Both seem to induce strong but divergent emotional/intellectual reactions, both negative and positive. Both are supposed to be rule-based (capitalism is bound by laws, AI by the formal rules of computation), but both constantly threaten to burst through their constraints. They both seem to inspire in some a kind of spiritual rapture, whether of transcendence or of the eschaton.

And of course, today capitalism and AI have converged in a way that was not really the case 40 years ago – not that there weren’t people trying to make money out of AI back then, but it was a very different AI and a very different order of magnitude of lucrativeness. Back then, almost every AI person was an academic or quasi-academic, and the working culture was grounded in war (Turing’s and Wiener’s foundational work was done as part of the war effort) and the military-industrial-academic complex. The newer AI is conducted by immensely wealthy private companies like Google or Baidu. This is at least as huge a change for the field as the transition from symbolic to statistical techniques.

So AI and capitalism are merely two offshoots of something more basic – call it systematized instrumental rationality – and are now starting to reconverge. Maybe capitalism with AI is going to be far more powerful and dangerous than earlier forms – that’s certainly a possibility. My only suggestion is that instead of viewing superempowered AIs as some totally new thing that we can’t possibly understand (which is what the term “AI singularity” implies), we view them as a next-level extension of processes that are already underway.

This may be getting too abstract and precious, so let me restate the point more bluntly: instead of worrying about hypothetical paperclip maximizers, we should worry about the all too real money and power maximizers that already exist and are going to be the main forces behind further development of AI technologies. That's where the real risks lie, and so any hope of containing them will require grappling with real human institutions.
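
To make the shared-ancestry point concrete, here is a toy sketch (my own illustration, not from the original post): one generic greedy-optimization loop that serves equally well as a “paperclip maximizer” or a “profit maximizer”. Everything here – the function names, the objectives, the numbers – is invented for illustration. Nothing distinguishes the two agents except the objective function plugged in, which is the sense in which AI and capitalism can be seen as instances of the same machinery.

# Toy illustration only – all names and numbers here are invented.
from typing import Callable
import random

def maximize(objective: Callable[[float], float], x: float = 0.0,
             steps: int = 2000, step_size: float = 0.5) -> float:
    """Greedy hill-climbing: keep any perturbation that scores higher.
    The loop is pure instrumental rationality – it has no idea what
    the objective means, only that more of it is better."""
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if objective(candidate) > objective(x):
            x = candidate
    return x

# Two "agents" that differ only in what they count.
paperclip_output = lambda x: -(x - 7.0) ** 2   # hypothetical: output peaks at x = 7
quarterly_profit = lambda x: -(x - 3.0) ** 2   # hypothetical: profit peaks at x = 3

print(round(maximize(paperclip_output), 1))    # ~7.0
print(round(maximize(quarterly_profit), 1))    # ~3.0

Swap the objective and the optimizer neither notices nor cares – which is why worries about one kind of maximizer transfer so readily to the other.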

Note: the identification of AI with a narrow form of instrumental rationality is both recent and somewhat unfair – earlier generations of AI were more interested in cognitive modelling and were inspired by thinkers like Freud and Piaget, whose concerns were not primarily goal-driven rationality. But it’s this more constricted view of rationality that drives the AI-risk discussions.