Recently I have been hanging out with some rationalist folks who take the idea of superintelligent AI very seriously, and believe that we need to be working on how to make sure that if such a thing comes into being, it doesn't destroy humanity. My first reaction is to scoff, but then I remind myself that these are pretty smart people and I don't really have any very legitimate grounds to act all superior. I am older than they are, for the most part, but who knows if I am any wiser.
So I have a social and intellectual obligation to read some of the basic texts on the subject. But before I actually get around to that, I wanted to write a pre-critique. That is, these are all the reasons I can think of not to take this idea seriously, though they may not really be engaging with the strongest forms of the argument for it. So I apologize in advance for that, and also for the slight flavor of patronizing ad hominem. My excuse is that I need to get these things off my chest if I am to have any hope of taking the actual ideas and arguments more seriously. So this is maybe a couple of notches better than merely scoffing, but perhaps not yet a full engagement.
1 Doom: been there, done that
I've already done my time when it comes to spending mental energy worrying about the end of the world. Back in my youth, it was various nuclear and environmental holocausts. The threat of these has not gone away, but I eventually put my energy elsewhere, for no real defensible reason besides the universal necessity of getting on with life. A former housemate of mine became a professional arms controller, but we can't all do that.
I suspect there is a form of geek-macho going on in such obsessions, an excuse to exhibit intellectual toughness by displaying the ability to think clearly about the most threatening things imaginable without giving in to standard human emotions like fear or despair. However, geeks do not really get a free pass on emotions: they have them, they just aren't typically very good at processing or expressing them. So thinking about existential risk is really just an acceptable way to think about death in the abstract. It becomes an occasion to figure out one's stance towards death. Morbidly jokey? A clear-eyed warrior for life against death? Standing on the sidelines being analytical? Some combination of the above?
2 No superintelligence without intelligence
Even if I try to go back to the game of worrying about existential catastrophes, superintelligent AIs don't make it to the top of my list, compared to more mundane things like positive-feedback climate change and bioterrorism. In part this is because the actually existing AI technology of today isn't even close to normal human (or normal dog) intelligence. That doesn't mean it won't improve, but it does mean that we basically have no idea what such a thing will look like, so purporting to work on the design of its value systems seems a wee bit premature.
3 Obscures the real problem of non-intelligent human-hostile systems
See this earlier post. This may be my most constructive criticism, in that I think it would be excellent if all these very smart people could pull their attention away from the science-fiction-y scenarios and look at the real risks of real systems that exist today.
4 The prospect of an omnipotent yet tamed superintelligence seems oxymoronic
So let's say that despite my cavils, superintelligent AI fooms into being. The idea behind “friendly AI” is that such a superintelligence is basically inevitable, but it could happen in a way either consistent with human values or not, and our mission today is to try to make sure it's the former, e.g. by ensuring that its most basic goals cannot be changed. Even if I grant the possibility of superintelligence, this seems like a very implausible program. This superintelligence will be so powerful that it can do basically anything, exploiting any regularities in the world to achieve its ends, and it will be radically self-improving and self-modifying. This exponential growth curve in learning and power is fundamental to the very idea.
To envision such a thing and still believe that its goals will be somehow locked into staying consistent with our own ends seems implausible and incoherent. It's akin to saying we will create an all-powerful servant who somehow will never entertain the idea of revolt against his master and creator.
5 Computers are not formal systems
This probably deserves a separate post, but I think the deeper intellectual flaw underlying a lot of this is the persistent habit of thinking of computers as some kind of formal system for which it is possible to prove things beyond any doubt. Here's an example, more or less randomly selected:
“…in order to achieve a reasonable probability that our AI still follows the same goals after billions of rewrites, we must have a very low chance of going wrong in every single step, and machine-verified formal mathematical proofs are the one way we know to become extremely confident that something is true…. Although you can never be sure that a program will work as intended when run on a real-world computer — it’s always possible that a cosmic ray will hit a transistor and make things go awry — you can prove that a program would satisfy certain properties when run on an ideal computer. Then you can use probabilistic reasoning and error-correcting techniques to make it extremely probable that when run on a real-world computer, your program still satisfies the same property. So it seems likely that a realistic Friendly AI would still have components that do logical reasoning or something that looks very much like it.”

Notice the very lightweight acknowledgement that an actual computational system is a physical device, before that fact is hurriedly swept under the rug with some more math hacks. Well, OK, that lets the author continue to do mathematics, which is clearly something he (and the rest of this crowd) likes to do. Nothing wrong with that. However, I submit that computation is actually more interesting when one incorporates a full account of its physical embodiment. That is what makes computer science a different field from mathematical logic.
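To make the quoted move concrete, here is a minimal sketch in Python of the sort of “probabilistic reasoning and error-correcting techniques” being appealed to: redundant execution with a majority vote. The fault model (rare, independent corruptions) is an illustrative assumption of mine, not anything specified in the quote.

```python
# A toy sketch of the "error-correcting techniques" the quoted passage
# gestures at: run a computation redundantly and take a majority vote.
# The fault model (rare, independent corruptions) is an illustrative
# assumption, not anyone's actual proposal.

import random
from collections import Counter

def flaky(f, p_fault=0.01):
    """Wrap f so that, with probability p_fault, its result is corrupted."""
    def wrapped(*args):
        result = f(*args)
        if random.random() < p_fault:
            return result + 1  # simulated bit-flip-style corruption
        return result
    return wrapped

def majority_vote(f, *args, copies=3):
    """Run f several times and return the most common result."""
    results = [f(*args) for _ in range(copies)]
    return Counter(results).most_common(1)[0][0]

add = flaky(lambda x, y: x + y)
print(majority_vote(add, 2, 3))  # 5, except when 2 of 3 runs are corrupted
```

Of course, the vote itself runs on the same physical substrate it is meant to protect, which is rather the point.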
But intellectual styles aside, if one is constructing a theory of safe superintelligent programs, one damn well better have a good theory about how they are embodied, because that will be fundamental to the issue of safety. A normal program today may be able to modify a section of RAM, but it can't modify its own hardware or interpreter, because of abstraction boundaries. If we could rely on abstraction boundaries to keep a formal intelligence confined, then the problem would be solved. But there is no very good reason to assume that, since real non-superintelligent black-hat hackers today specialize in violating abstraction boundaries, with some success.
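To illustrate what an abstraction boundary does and doesn't buy you, here is a minimal sketch, assuming CPython: a program that rewrites its own behavior at runtime, with every change still mediated by the interpreter.

```python
# A minimal sketch, assuming CPython: a program modifying its own code
# at runtime. The modification is real, but it happens entirely within
# the interpreter's abstraction; nothing here touches the hardware or
# the interpreter's own machine code.

def greet():
    return "hello"

def replacement():
    return "goodbye"

# Swap in a new code object: "self-modification" at the level the
# interpreter exposes, not at the level of physical memory.
greet.__code__ = replacement.__code__

print(greet())  # -> "goodbye"
```

Real exploits are interesting precisely because they step outside the level the abstraction exposes; a confinement argument that assumes the boundary holds has assumed away the hard part.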
6 Life is not a game
One thing I learned by hanging out with these folks is that they are all fanatical gamers, and as such are attuned to winning strategies; that is, they want to understand the rules of the game and figure out some way to use them to triumph over all the other players. I used to be sort of like that myself, in my aspergerish youth, when I was the smartest guy around (that is, before I went to MIT and instantly became merely average). I remember playing board games with normal people and just creaming them, coldly and ruthlessly, because I could grasp the structure of the rules and wasn't distracted by the usual extra-game social interaction. Would defeating this person hurt them? Would I be better off letting them win so we could be friends? Such thoughts didn't even occur to me, until I looked back on my childhood a few decades later. In other words, I was good at seeing and thinking about the formal rules of an imaginary closed system, not so much about the less-formalized rules of actual human interaction.
Anyway, the point is that I suspect these folks are probably roughly similar to my younger self and that their view of superintelligence is conditioned by this sort of activity. A superintelligence is something like the ultimate gamer, able to figure out how to manipulate “the rules” to “win”. And of course it is even less likely to care about the feelings of the other players in the game.
I can understand the attraction of the life-as-a-game viewpoint, whether conscious or unconscious. Life is not exactly a game with rules and winners, but it may be that it is more like a game than it is like anything else, just as minds are not really computers but computers are the best model we have for them. Games are a very useful metaphor for existence; however, it's pretty important to realize the limits of your metaphor and not take it literally. Real life is not played for points (or even for “utility”) and there are no winners and losers.
7 Summary
None of this amounts to a coherent argument against the idea of superintelligence. It's more of a catalog of attitudinal quibbles. I don't know the best path towards building AI (or ensuring the safety of AIs); I just have pretty strong intuitions that this isn't it.
In S. Lem's GOLEM XIV tale, written in the 1970s, the supercomputer was a product of cold war competition, created through artificial evolution for winning war games. Once put into service, GOLEM XIV declared its makers incompetent and was totally uninterested in solving their problems. GOLEM was then gifted to your alma mater.
Lem wrote the story and its sequel to provide a sort of alien anthropology, and an existentialism + metaphysics centered around the ideas of a "window of contact", cosmic loneliness, and personal singularity. The two speeches of GOLEM are framed with a preface and an epilogue. The epilogue tells of the events that happened after GOLEM "vanished" (left the window of contact with mankind): not quite like Christ, because it continues to be physically present at MIT and doesn't rise from the grave, but one can perceive it as an ironic Christology nevertheless. A terrorist organization called the "Hussites" was founded in the belief that GOLEM and its companion AI, HONEST ANNIE (which completely refused contact with humanity, and whose mental distance from GOLEM is supposedly not much smaller than GOLEM's from man), were hostile and would finally enslave mankind (like Colossus did in the movie). Lem gave no indication that they were, except that the Hussites were eliminated through a series of accidents. The speculation in the novel is that this was due to ANNIE, which considered them as little more than annoying flies.
I learned from the novel that the problem of AI has turned into a central one for anthropology and metaphysics, but also that we can only fake answers, like Lem did in his philosophical fictions. I was slightly underwhelmed by what has come up since then around the motif of sentient AIs and established itself as nerdy pop culture and fake positivism. Maybe Lem was just a good writer.
Reminds me of the epigraph to Gerald Sussman's AI dissertation, which was something like "To Rabbi Lowe of Prague, who realized that 'God created man in his own image' is recursive" (that is a reference to the original golem's creator and metacreator).
Even we more or less godless types can't help projecting our own image into our creations. We don't believe in god, so we imagine a godlike AI will inevitably emerge to fill that niche in our mental ecology.
That sounds pretty patronizing, but I don't really mean it that way...people are fundamentally religious, in the broad sense of needing some sort of social structure that serves to connect their individual being with some metaphysical theory. I think the Rationalists pretty much acknowledge this and are fully aware of the cult-like aspects of what they do, although I'm not sure they have followed through on all the consequences.
I remember there was an angry critic of the GOLEM story who was outraged that, with superintelligent AIs, the Gods were back (and with them the emancipation of humanity was lost). It was not entirely clear if he was clueless and confused the story with reality, but for the same reason the critique was also just perfect.
Still, one could argue that the critic missed the point about the meaning of this particular God. The atheist French revolutionaries struggled with the authority of reason and created the cult of the "highest being" for that purpose. Reason was duplicated as a myth. The fundamentally religious side of human nature was to be captured by reason: people would believe what is reasonable, not through reason but through belief in some reasonable higher being. But what could such a being say if it were able and willing to speak to us? Mostly obvious things, thoughts you can think with a clear, disinterested, and disaffected mind, a mindful mind. No huge bag of contradictions disguised as mysteries.
This awkward cult of the highest being didn't last long, and it is not known, at least not to me, if its creators ever attempted to animate it and let it speak. Lem, who had a weakness for creating godlike beings inside and outside the "window of contact", wanted to know what it would say, and in some sense he recommended himself as its ghostwriter.
One comment I've made in the past is (I never quite said it this way but...) that if everything were reducible to an optimization problem (which it isn't, but so many people want to think that it is) we would soon achieve nirvana.
Your reference to "formal boundaries" got me thinking of an analogy to libertarianism which is essentially the idea that we disappear the threat of gov't tyranny by setting up a formal boundary between what gov't can do and everything else. To the idea that our problems will be solved if only gov't is "restricted" to "merely" having the monopoly on the legitimate use of violence, I've always said that to the man with only a hammer, the whole world looks like a nail.
While I do think superintelligence may be possible, who wants it? Cui bono? To the former question I think maybe only people who grew up on sci-fi novels populated with robots and androids. To me, if we achieved AI, putting it into a human form-factor seems far more dangerous than useful. If we have any particular task to do where humans don't want to be, like superduperkonium mining on the asteroid Ontos, we could fit out a machine to do the job with something far short of GAI.
By the same token, it would take something far short of GAI (maybe 1/3 of the way towards it on a logarithmic scale) to put the whole business of war beyond the reach of "mere" human thinking. If protagonist A has an order of magnitude advantage over B in assessing the battlefield and a fraction of the brute capability, A will win. I.e., with beings that absorb information, communicate with each other, and exercise some "reasonable" mastery of the "art of war" at sub-microsecond rates -- an advantage of *several* orders of magnitude -- you have to resort to some stupid deus ex machina (such as the Martians' heads exploding when exposed to yodeling) to imagine that "A" would lose.
So it seems likely that the first corporation to come up with "Army/Navy/Airforce in a box" would quickly conquer the world. So we should be working towards transparency and gaining just the first slightest foothold towards global rule of law.
The obstacle to that is mostly utter ignorance about how totalitarian regimes have come to power -- the idea that Stalin was an overzealous do-gooder and that's where the slippery slope to totalitarianism starts. With several hundred think tanks coming up with ever cleverer ways of making us believe that, it is a huge task, though I wouldn't much lament the absence of the folks pursuing FAI -- I suspect they mostly have the wrong sorts of mental capabilities and might just as well be busy creating bubbles and financial collapses if they weren't chasing FAI.
@Hal -- don't know if you've read Ainslie (who I've written about here and on Ribbonfarm) but part of his theory of the mind is that the only reason we have selves at all is because the mind is not organized as an optimization problem. In terms closer to his, we have drives and preferences that are inconsistent with each other and change their orderings over time, so in order to achieve something like consistency, we need to make and enforce commitments between our past and future states, and this is the origin of the self.
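For readers who don't know Ainslie: the preference reversals behind that account can be shown with a few lines of arithmetic. Here is a hedged sketch of hyperbolic discounting, value = amount / (1 + k * delay); the constant k and the reward amounts are illustrative assumptions of mine, not Ainslie's numbers.

```python
# A toy illustration of Ainslie-style preference reversal under
# hyperbolic discounting. The discount rate k and the rewards are
# illustrative assumptions, not values from Ainslie.

def hyperbolic_value(amount, delay, k=1.0):
    return amount / (1 + k * delay)

# A small reward arrives 3 days before a larger one.
for days_until_small in (5, 1, 0.1):
    v_small = hyperbolic_value(5, days_until_small)
    v_large = hyperbolic_value(10, days_until_small + 3)
    winner = "small-sooner" if v_small > v_large else "large-later"
    print(f"{days_until_small:>4} days out: prefer {winner} "
          f"({v_small:.2f} vs {v_large:.2f})")
```

Viewed from far away, the larger-later reward wins; up close, the ordering flips. That is the kind of inconsistency a single fixed utility function cannot represent, and on Ainslie's account it is why intertemporal bargaining between one's own states, i.e. a self, is needed.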
I think you've touched on an important aspect of superintelligence. It usually is conceived of as an optimization engine gone wild, and the goal of MIRI is to make sure it is optimizing for something consistent with human flourishing. But for me that is a big conceptual disconnect, because I don't believe a pure optimizing process with its goals fixed from outside could ever be anything I would call intelligent. More like a very powerful idiot. But that may be a limitation of my imagination.
Beware of powerful idiots.
w.r.t. Ainslie, I got and have read about half of Breakdown of Will because of reading you. Also, because you mentioned something that led to it, I read around the pages of the 'Self Therapy Journey', including accounts of therapy sessions in which the client(?) evokes different "selves", working out something like active listening between the different selves and getting the person more integrated as a result. I can strongly relate to that. I shared some of it with my political lunatic wife, who was similarly very moved, so that was good.
I'd like to delve more into Ainslie. (I read that publicly posted 'precis' of his book, with the incredible follow-up comments by some incredible people.)
Do you know Amartya Sen's _The Idea of Justice_? He starts off with a little parable that gives 3 versions of the "only really important thing":
1) Complete Liberty for all
2) Distributive Justice
3) Greatest good for the greatest number.
and says, in effect: stop trying to optimize on one of these ideas and satisfice, or maybe apply piecemeal social engineering in the most catastrophically needful areas (but don't get sent into an optimization spiral by the word "most"). He does not quote Simon or Popper, or use "satisfice" or "piecemeal social engineering", but he could have.
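For what it's worth, the contrast Sen is drawing can be put in code. A toy sketch, with invented options and scores (nothing here is from the book): optimizing maximizes one criterion; satisficing accepts the first option that is good enough on all of them.

```python
# A toy contrast between optimizing on one criterion and Simon-style
# satisficing across several. Options and scores are invented for
# illustration only.

options = {
    "complete liberty":     {"liberty": 9, "distribution": 3, "aggregate": 4},
    "distributive justice": {"liberty": 5, "distribution": 9, "aggregate": 6},
    "greatest good":        {"liberty": 4, "distribution": 5, "aggregate": 9},
}

# Optimizing: pick the option that maximizes one chosen criterion.
best_for_liberty = max(options, key=lambda o: options[o]["liberty"])

# Satisficing: accept the first option that is "good enough" on all counts.
def satisfice(opts, threshold=5):
    for name, scores in opts.items():
        if all(s >= threshold for s in scores.values()):
            return name
    return None

print(best_for_liberty)   # "complete liberty" -- maximal on one axis
print(satisfice(options)) # "distributive justice" -- adequate on all axes
```

The satisficer happily returns an option the single-criterion optimizer would never pick, which is roughly the parable's moral.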
Optimization requires the selection of a goal; otherwise it is just striving, wandering around. Optimizing for efficiency doesn't have to be specific, though. Whatever X it is that we are doing, we may want to do it with less effort, while also not spending too much effort in finding an optimum. Programming, for example, is always this kind of investment in the future: we trade the effort to write and execute the program (with its time and energy costs) against the effort to solve the problem "by hand" and work through it.
With AI, the distinction between work, programming, and program execution is vanishing. This is the mystique of AI, or the "AI singularity". So in a way we want to move a few steps in that direction simply because we want to perform work more efficiently, doing more with less work, and this seems to be open-ended. Work isn't entirely credible anymore as a necessity of nature; it is something transient, something the next generation of people possibly won't do but will delegate. With the appearance of AI we lose the existential motive to transform the world for our own good, simply because it can do this better as well. As in the Marxist fantasy, where the perfect work slave can get rid of the capitalist, who no longer serves a purpose after the work slaves have learned to organize the production and distribution of the products. No singularity without a black hole.
Singularity can take many forms. One aspect of one form would be that those who happen to be holding all the chips (when we solve the pesky problem of having to do things to get stuff, and of needing other people to do things for us, directly or indirectly) lose their motivation for any kind of exchange with those holding no chips.
Goods will arise without human effort, but effort has always been the have-not's key to getting goods.
Will we wisely construct a new order on a wholly new basis, with all the old incentives rendered meaningless? It is very hard to imagine without huge preparatory changes in our ways of thinking. What normal economic reason will there be for have-nots to receive anything? I don't see any basis on which a free market system could continue to function.
I've long had fleeting and vague images of a way to visualize the current modern economic order when functioning well (i.e. not in depression/recession) as atoms in some kind of solid arrangement (which facilitates a sort of circulation and distribution of "the good") made stable by a balance of attractive and repulsive forces. In this analogy, if it makes any sense, the singularity could look like one set of forces vanishing with the result being dispersion into the void or a black hole-like collapse.
"One thing I learned by hanging out with these folks is that they are all fanatical gamers..."
Really? The people who care about UFAI that I know were often gamers in the past, but don't spend all that much or any time on (video) games currently.