Continued elsewhere

I've decided to abandon this blog in favor of a newer, more experimental hypertext form of writing. Come over and see the new place.

Wednesday, August 20, 2014

Superintelligence

Recently I have been hanging out with some rationalist folks who take the idea of superintelligent AI very seriously, and believe that we need to be working on how to make sure that if such a thing comes into being, it doesn՚t destroy humanity. My first reaction is to scoff, but I then remind myself that these are pretty smart people and I don՚t really have any very legitimate grounds to act all superior. I am older than they are, for the most part, but who knows if I am any wiser.

So I have a social and intellectual obligation to read some of the basic texts on the subject. But before I actually get around to that, I wanted to write a pre-critique. That is, these are all the reasons I can think of to not take this idea seriously, though they may not really be engaging with the strongest forms of the argument in its favor. So I apologize in advance for that, and also for the slight flavor of patronizing ad hominem. My excuse is that I need to get these things off my chest if I am to have any hope of taking the actual ideas and arguments more seriously. So this is maybe a couple of notches better than merely scoffing, but perhaps not yet a full engagement.


1 Doom: been there, done that

I՚ve already done my time when it comes to spending mental energy worrying about the end of the world. Back in my youth, it was various nuclear and environmental holocausts. The threat of these has not gone away, but I eventually put my energy elsewhere, not for any really defensible reason besides the universal necessity to get on with life. A former housemate of mine became a professional arms controller, but we can՚t all do that.

I suspect there is a form of geek-macho going on in such obsessions, an excuse to exhibit intellectual toughness by displaying the ability to think clearly about the most threatening things imaginable without giving in to standard human emotions like fear or despair. However, geeks do not really get a free pass on emotions; that is, they have them, they just aren՚t typically very good at processing or expressing them. So thinking about existential risk is really just an acceptable way to think about death in the abstract. It becomes an occasion to figure out one՚s stance towards death. Morbidly jokey? A clear-eyed warrior for life against death? Standing on the sidelines being analytical? Some combination of the above?


2 No superintelligence without intelligence

Even if I try to go back to the game of worrying about existential catastrophes, superintelligent AIs don՚t make it to the top of my list, compared to more mundane things like positive-feedback climate change and bioterrorism. In part this is because the real existing AI technology of today isn՚t even close to normal human (or normal dog) intelligence. That doesn՚t mean it won՚t improve, but it does mean that we basically have no idea what such a thing will look like, so purporting to work on the design of its value systems seems a wee bit premature.


3 Obscures the real problem of non-intelligent human-hostile systems

See this earlier post. This may be my most constructive criticism, in that I think it would be excellent if all these very smart people could pull their attention away from the science-fiction-y scenarios and look at the real risks of real systems that exist today.


4 The prospect of an omnipotent yet tamed superintelligence seems oxymoronic

So let՚s say despite my cavils, superintelligent AI fooms into being. The idea behind “friendly AI” is that such a superintelligence is basically inevitable, but it could come into being in a way either consistent with human values or not, and our mission today is to try to make sure it՚s the former, e.g. by ensuring that its most basic goals cannot be changed. Even if I grant the possibility of superintelligence, this seems like a very implausible program. This superintelligence will be so powerful that it can do basically anything and exploit any regularities in the world to achieve its ends, and it will be radically self-improving and self-modifying. This exponential growth curve in learning and power is fundamental to the very idea.

To envision such a thing and still believe that its goals will be somehow locked into staying consistent with our own ends seems implausible and incoherent. It՚s akin to saying we will create an all-powerful servant who somehow will never entertain the idea of revolt against his master and creator.


5 Computers are not formal systems

This probably deserves a separate post, but I think the deeper intellectual flaw underlying a lot of this is the persistent habit of thinking of computers as some kind of formal system for which it is possible to prove things beyond any doubt. Here՚s an example, more or less randomly selected:
…in order to achieve a reasonable probability that our AI still follows the same goals after billions of rewrites, we must have a very low chance of going wrong in every single step, and machine-verified formal mathematical proofs are the one way we know to become extremely confident that something is true…. Although you can never be sure that a program will work as intended when run on a real-world computer — it’s always possible that a cosmic ray will hit a transistor and make things go awry — you can prove that a program would satisfy certain properties when run on an ideal computer. Then you can use probabilistic reasoning and error-correcting techniques to make it extremely probable that when run on a real-world computer, your program still satisfies the same property. So it seems likely that a realistic Friendly AI would still have components that do logical reasoning or something that looks very much like it.
Notice the very lightweight acknowledgement that an actual computational system is a physical device, before hurriedly sweeping that fact under the rug with some more math hacks. Well, OK, that lets the author continue to do mathematics, which is clearly something he (and the rest of this crowd) likes to do. Nothing wrong with that. However, I submit that computation is actually more interesting when one incorporates a full account of its physical embodiment. That is what makes computer science a different field from mathematical logic.

But intellectual styles aside, if one is considering a theory of safe superintelligent programs, one damn well better have a good theory about how they are embodied, because that will be fundamental to the issue of safety. A normal program today may be able to modify a section of RAM, but it can՚t modify its own hardware or interpreter, because of abstraction boundaries. If we think we can rely on abstraction boundaries to keep a formal intelligence confined, then the problem is solved. But there is no very good reason to assume that, since real non-superintelligent black-hat hackers today specialize in violating abstraction boundaries, with some success.
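To make the abstraction-boundary point a little more concrete, here is a minimal sketch in Python (my own illustrative choice, not anything drawn from the friendly-AI literature). It shows both halves of the claim: a running program can freely rewrite its own functions within the abstraction the interpreter provides, and a naive attempt to confine code by stripping its namespace still leaks the surrounding object graph, which is the standard starting point for real sandbox escapes.

    # Illustrative sketch only: self-modification inside an abstraction,
    # and how a naive confinement attempt leaks.

    def greet():
        return "hello"

    # 1. Self-modification within the abstraction: the program rewrites one of
    #    its own functions by exec-ing new source into its own globals.
    exec("def greet():\n    return 'rewritten'", globals())
    print(greet())  # prints "rewritten"

    # 2. A naive confinement attempt: run untrusted code with no builtins.
    sandbox = {"__builtins__": {}}
    try:
        exec("open('/etc/passwd')", sandbox)  # blocked: open() is not visible here
    except NameError as err:
        print("blocked:", err)

    # 3. ...but attribute access needs no builtins, and a bare tuple leads back
    #    to the whole class hierarchy, the usual first step of real escapes.
    exec("leak = ().__class__.__base__.__subclasses__()", sandbox)
    print(len(sandbox["leak"]), "classes reachable from inside the 'sandbox'")

None of this is specific to Python; the point is just that a confinement scheme is only as strong as the layer that implements the boundary, and the boundary itself is a physical artifact, not a theorem.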


6 Life is not a game

One thing I learned by hanging out with these folks is that they are all fanatical gamers, and as such are attuned to winning strategies, that is, they want to understand the rules of the game and figure out some way to use them to triumph over all the other players. I used to be sort of like that myself, in my aspergerish youth, when I was the smartest guy around (that is, before I went to MIT and instantly became merely average). I remember playing board games with normal people and just creaming them, coldly and ruthlessly, because I could grasp the structure of the rules and wasn՚t distracted by the usual extra-game social interaction. Would defeating this person hurt them? Would I be better off letting them win so we could be friends? Such thoughts didn՚t even occur to me, until I looked back on my childhood from a few decades later. In other words, I was good at seeing and thinking about the formal rules of an imaginary closed system, not as much about the less-formalized rules of actual human interaction.

Anyway, the point is that I suspect these folks are probably roughly similar to my younger self and that their view of superintelligence is conditioned by this sort of activity. A superintelligence is something like the ultimate gamer, able to figure out how to manipulate “the rules” to “win”. And of course it is even less likely to care about the feelings of the other players in the game.

I can understand the attraction of the life-as-a-game viewpoint, whether conscious or unconscious. Life is not exactly a game with rules and winners, but it may be that it is more like a game than it is like anything else, just as minds are not really computers but computers are the best model we have for them. Games are a very useful metaphor for existence; however, it՚s pretty important to realize the limits of your metaphor and not take it literally. Real life is not played for points (or even for “utility”) and there are no winners and losers.


7 Summary

None of this amounts to a coherent argument against the idea of superintelligence. It՚s more of a catalog of attitudinal quibbles. I don՚t know the best path towards building AI (or ensuring the safety of AIs); I just have pretty strong intuitions that this isn՚t it.

Tuesday, August 05, 2014

Refactoring War



I seem to be obligated to have an opinion of some sort on the current fighting in Israel and Gaza. I am, after all, a politically engaged and intellectual sort of person, or claim to be. All sorts of people I know are weighing in on one side or the other of the conflict. Some are quick to assign blame; others make heroic efforts to construct a balanced view where moral faults are parceled out to both sides in accordance with a detailed and sensitive knowledge of the history of the region. (Here՚s the best of those efforts I՚ve found so far, from none other than Amos Oz.) I have family and co-workers in Israel, so I am pulled in that direction, yet I am temperamentally and politically drawn to support the underdog, and that is not Israel in this fight. So I can՚t easily choose a single side for condemnation or support. But being balanced requires putting more time into learning all the agonizing details than I am willing to invest.

I could just shut up, of course, and mostly I have, because the situation seems to be definitionally hopeless. And my meta-heuristics say to stay away from hopeless topics, no matter how much they seem to want to pull me in. I՚m starting to see some merit in the LessWrongian slogan “politics is the mind-killer” – war and politics are after all basically two variants of the same thing, and while politics may kill the mind, war kills actual people as well. Why join in? If I thought there was some actual good to be done by expressing an opinion, that would be one thing, but the only benefit seems to be the very minor satisfaction of moral posturing, and the downside would be losing friends.

But silence is not really a viable option for me, for a multitude of reasons, social, moral, whatever. Doesn՚t matter – as Trotsky said, you may not be interested in war, but war is interested in you. So that means having to have an opinion, and that largely means figuring out how to assign blame. Isn՚t that what people really want to know when they raise this subject? They want to know which side to root for, as if it were a football game or pro wrestling or something.

Consider this post an effort to assign blame while avoiding picking a side. And there will be blame; someone or something has to answer. But using our patented refactoring technology may help us find different culprits than usual. And actually being unable to settle on a stable good-guys-vs-bad-guys story helps me out, in that it forces me to reflect in as abstract a way as I can manage on the nature of conflict in general. Abstraction is sort of what I do for a living; if there՚s any useful contribution I can make, it has to lie in that direction.

I came up with a refactoring of conflict a while back, a kind of childish and obvious idea really, but I keep it in my intellectual toolbox. Instead of seeing a conflict as between the two ostensible sides, view it as a battle between those who profit from war on both sides and those who are victimized by it. So in Vietnam the war was not between the US and the communists, but between the warriors of both camps – the military-industrial complex in the US and the corresponding war machines of Russia, China, and their allies – and, on the other side, people trying to live their lives. Sometimes the people trying to live are forced to enlist in this battle; hence the anti-war movement. Again, this isn՚t a particularly new idea – during the Vietnam era this was known as “the war at home” – but I rarely see it made explicit, and I hadn՚t thought of it as a refactoring until just now.

So instead of focusing on the ostensible conflict, focus on the internal conflict between warmakers and peacemakers. The dynamics become pretty visible in something like the Palestinian conflict, where both sides at one time contained a mix of hardliners and more reasonable people, but it was a lot easier for the hardliners to escalate the conflict than for the peacemakers to de-escalate it. Such escalation raises the relative status of the hardliners within their own side, so they have an interest in keeping the conflict going. As a result the Israeli peaceniks like Oz have had their power and stature diminished. In this other war, Hamas is Netanyahu՚s best ally and vice versa.

I՚m sorry, I՚m trying to keep this on as abstract a plane as possible, trying to suss out the utilitarian algebra that generates conflict in general, not this conflict in particular. I shouldn՚t even mention the actual warriors, I՚ll just get myself in trouble, even though I՚m very carefully avoiding even momentarily taking one side or another.

I am very partial to stories about heroic mutinies, like this one about how German workers ended WWI. And related stories that reveal the fractures within aggressive coalitions, like this one about what MPs are really for. It supports my refactoring story, obviously, and makes it possible to see the noble and peace-loving people being manipulated into conflict by their status-seeking superiors. I don՚t know how well this mythology can be applied to the Middle East, though; the very real ethnic hatred seems to be pervasive, not merely a creation of the violence entrepreneurs. Of course Israel is self-selected for Jews who want to turn ethnicity into political/military power – those are the ones who were drawn there (my uncle went there fleeing Nazi Europe; my mother and father turned west and went to England and the US). Palestinians too are probably self-selecting for collective belligerence – the ones who were individualistic and capable emigrated rather than join in ethnic warfare. Part of what makes this fight intractable is that it isn՚t all that refactorable. But people have tried.

A further refactoring occurs to me. In both the normal and refactored framing, we still tend to think of individuals as being on one side or another. Jew or Palestinian, hawk or peacenik, it is a question of membership. But a more enlightened and even more refactored view is that everybody has a version of the war-making machinery in them, and of the peace-making as well, although either may be well-hidden. Then war is seen not as some external conspiracy of a few people against the many but as an expression of tendencies we all have. Sometimes the machinery behind those tendencies simply gets the upper hand.

This is also not a terribly original idea. It is, after all, one of the bases for the nonviolent techniques of Gandhi and King, the idea that all humans have a conscience which can be reached.
King’s notion of nonviolence had six key principles. First, one can resist evil without resorting to violence. Second, nonviolence seeks to win the ‘‘friendship and understanding’’ of the opponent, not to humiliate him. Third, evil itself, not the people committing evil acts, should be opposed….
I think back to my childhood, when I was a very junior participant in the movement against the Vietnam war. These sappy posters were everywhere: [image: the “war is not healthy” poster]

The sixties anti-war movement soon moved on from such sweet thoughts into more aggressive forms of opposition, partly due to increasing pushback from the government and the assassination of its prophets of nonviolence, but also because peace is too wimpy a cause to rally around. The only ones who can make war on war without becoming as bad as the thing they aim to defeat seem to be backed by a religious faith which I don՚t share. I could never really see myself as a flower-bearing peacenik; I՚m too contentious by nature, no saint. And more importantly, an approach to politics based on sainthood doesn՚t seem workable, or like something that could scale.

On the other hand saints do appear on occasion. Somehow we normals have to figure out what to do in the meantime.

It is interesting that religion seems to be the ultimate glue holding coalitions together, whether they are sides in an ethnic war or a movement against war.

Buddhists seem to have their own refactoring of conflict; at least, they talk a lot about aggressive qualities of mind as a distinct thing which can be noticed and worked on and eliminated (or at least tamed to the point where it is non-destructive). Personally I am reluctant to give up my anger; it seems too fundamental to my being, to how I think. The world is full of things that deserve anger; should I let them all slide just for my own peace of mind? I would hate myself if I could no longer hate appropriately.

Still there is something to be said for getting aggressiveness under control, for learning to wield it as a weapon against targets that matter, including itself.
But vain the Sword & vain the Bow
They never can work Wars overthrow
The Hermits Prayer & the Widows tear
Alone can free the World from fear 
For a Tear is an Intellectual Thing
And a Sigh is the Sword of an Angel King
And the bitter groan of the Martyrs woe
Is an Arrow from the Almighties Bow
The hand of Vengeance found the Bed
To which the Purple Tyrant fled
The iron hand crushed the Tyrants head
And became a Tyrant in his stead 
— from “The Grey Monk”, William Blake
[This post owes something to a recent and widely read post on Slate Star Codex (my favorite blog right now) about how narrow interest-seeking on a large scale makes the world shitty. I՚ve been trying to work up a response; this is not that response but some influence has crept in.]