Continued elsewhere

I've decided to abandon this blog in favor of a newer, more experimental hypertext form of writing. Come over and see the new place.

Saturday, August 27, 2011

Libertarians for slavery

At this point I should be jaded, but I still get a little chuckle when I find the gods of libertarianism devoting their efforts to defending some of the most brutal enemies of human freedom. Here we see Murray Rothbard musing over Just War theory and deciding that the only ones he approves of are the Revolutionary War and "the War for Southern Independence". That is, he is happy to support the collective rights of slavers over the individual rights of slaves, who barely register in his consciousness.
In 1861, the Southern states, believing correctly that their cherished institutions were under grave threat and assault from the federal government, decided to exercise their natural, contractual, and constitutional right to withdraw, to "secede" from that Union. The separate Southern states then exercised their contractual right as sovereign republics to come together in another confederation, the Confederate States of America. If the American Revolutionary War was just, then it follows as the night the day that the Southern cause, the War for Southern Independence, was just, and for the same reason: casting off the "political bonds" that connected the two peoples. In neither case was this decision made for "light or transient causes." And in both cases, the courageous seceders pledged to each other "their lives, their fortunes, and their sacred honor."
...
we must always remember, we must never forget, we must put in the dock and hang higher than Haman, those who, in modern times, opened the Pandora’s Box of genocide and the extermination of civilians: Sherman, Grant, and Lincoln.

Perhaps, some day, their statues, like Lenin’s in Russia, will be toppled and melted down; their insignias and battle flags will be desecrated, their war songs tossed into the fire. And then Davis and Lee and Jackson and Forrest, and all the heroes of the South, "Dixie" and the Stars and Bars, will once again be truly honored and remembered.
via. He has a grain of a point -- the rise of a strong federal government made it possible for the US to spend the next 150 years building the military empire we have today, which is also anti-freedom. But the cause of southern slavery is the worst argument possible against federalism.

Seeing libertarians like these guys and Bryan Caplan write about war is kind of painful. I presume they own a little chunk of real estate like the rest of the middle class -- are they under the impression that their title doesn't squat over an ocean of blood? That because they obtained their little territory by sitting down in a realtor's office rather than swinging a sword, it doesn't represent a conquest over other people who might have thought they had a right to live there?

Thursday, August 25, 2011

Working toward Steve Jobs

[[updated below]]

I have nothing against Steve Jobs; he's obviously done a lot of good in the world, and I wish him the best of luck dealing with his medical problems.

But the tone of the headlines today really grates on my nerves. Is Apple Doomed? Well, if they are, that sucks, because it means a collectivity of thousands of people and enormous wealth and creativity is nothing more than the manifestation of the will of a single individual. Or more likely, it's just that the press and popular imagination can't envision the nature of a collective, so they have to project everything onto a single person. That sucks in a slightly different way.

Of course the work of many has gone into making Apple's products what they are, from the original inventors of important tools that Apple popularized (eg Doug Engelbart (mouse, hypertext) and Alan Kay (windows UI, object-oriented programming)) to the lead Apple engineers (Bill Atkinson and Jef Raskin are two names that come to mind), through the thousands of lesser engineers who sweated the details to the anonymous Chinese drones who put the stuff together. Everyone knows this, but something in our cognitive structure can't handle large networks, so we fixate on a single person as the metonymic embodiment of the hundreds of thousands, and write glowing articles about him and his quirks rather than the organization he sits on top of.

Maybe this is just how things work. Maybe it's the case that any really great organization has to be led by a single individual who combines exceptional vision, charisma, and organizational capabilities, and can serve as the human embodiment of the organization. Maybe that's what makes "genius" or "leadership" and we should be thankful to have it on occasion. But it pisses me off. I want a more democratic world, where everybody's judgement and talent and contributions matter, not just those of a few dictator/leaders. Even supposedly decentralized, cooperative organizations like Wikipedia seem to coalesce around a leader and take on his personality and preferences. Having spent time in a few groups that tried to work on leaderless principles, I'd say that it very rarely works, people being what they are.

I am genuinely torn, because I find my values in conflict. On the one hand, the dictatorship of Steve Jobs is what elevates Apple above the level of other corporations. On the other hand, I don't like authority. But if you have to work in a hierarchical organization, I guess it's good if the leader is a man of both vision and taste. It is damn rare to have someone who can both lead a large organization and at the same time pursue a great personal vision. More often those who ascend to the apex of the pyramid do so by leaving any socially positive values behind. So until we solve the problem of anarchist organization, we need more Steve Jobs.

[[update: Here's another opinion:
It turns out that it is possible for ad hoc, loosely affiliated, impermanent groups of humans to, without direction or governance, collaborate on extremely complex and sophisticated tasks and achieve exceedingly specific ends.
Well, call me a bourgeois sellout, but (a) I didn't see anything all that objectionable about the NPR reporter's tone -- she's bemused but hardly as befuddled as IOZ paints her, and (b) yes, it is possible for loosely affiliated groups to accomplish things. But the kinds of things that Anonymous does (destructive hacking and espionage) are for the most part not creative endeavors and are parasitical on the complex systems that have been created by others. In other words, not all that "complex and sophisticated". Can a loose affiliation create a computer or a network? Networks are distributed, but their protocols are designed through centralizing processes; that's why the distributed nodes are able to talk to each other. IOZ would have been on better ground if he had cited something like Linux or Apache or Wikipedia as an example, but even those examples draw on energy and ideas from centralized organizations and of course they do have "direction and governance".

The simpleminded opposition of distributed and centralized systems is a plague on the land; these are important issues and it's very rare to see them treated with any degree of critical realism. Speaking of Leviathan, I have Yochai Benkler's new book on order, maybe it does a better job, but I'm afraid it looks a bit too much like a cheerleading business book. We'll see.]]

Tuesday, August 23, 2011

Report from Inconsistency Robustness 2011

This was a small but very interesting and spirited gathering. Like many interdisciplinary events, many of the people weren't quite sure what the meeting was actually about or why they were there, but that just made things more interesting. The instigator, chair, and chief agenda setter was Carl Hewitt, best known for developing the Actor model of computation, which is something that seems deeply important but has never quite set the world on fire the way it promised to. Actors, for the uninitiated, is a radically distributed model of computation at the most basic level, replacing the Turing/von Neumann model that is essentially the basis for everything in the field. The somewhat metaphorical extensions of this approach (eg, in The Scientific Community Metaphor and Offices are Open Systems) strike me as very fruitful ideas whose potential is yet to be realized.
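For the truly uninitiated, here is a minimal sketch of the core idea in TypeScript -- my own toy illustration, nowhere near Hewitt's full formalism, with all names (Actor, makeCounter, and so on) invented for the example: each actor owns a private mailbox, handles one message at a time, and can influence other actors only by sending them more messages.

```typescript
// Toy actor: a private mailbox plus a behavior that handles one message at a time.
type Message = { type: string; payload?: unknown };
type Behavior = (msg: Message) => void;

class Actor {
  private mailbox: Message[] = [];
  private draining = false;

  constructor(private behavior: Behavior) {}

  // Asynchronous send is the only way actors interact -- no shared state.
  send(msg: Message): void {
    this.mailbox.push(msg);
    if (!this.draining) {
      this.draining = true;
      queueMicrotask(() => this.drain());
    }
  }

  private drain(): void {
    while (this.mailbox.length > 0) {
      this.behavior(this.mailbox.shift()!);
    }
    this.draining = false;
  }
}

// Example: a counter actor whose state lives only in its closure,
// and which reports its value by sending a message to another actor.
const printer = new Actor((msg) => console.log("count is", msg.payload));

function makeCounter(reportTo: Actor): Actor {
  let count = 0;
  return new Actor((msg) => {
    if (msg.type === "inc") count += 1;
    if (msg.type === "report") reportTo.send({ type: "count", payload: count });
  });
}

const counter = makeCounter(printer);
counter.send({ type: "inc" });
counter.send({ type: "inc" });
counter.send({ type: "report" }); // eventually logs: count is 2
```

The point of the real model, of course, is that sends can cross machine boundaries and arrive in any order, which is exactly where the conflicts and inconsistencies of the workshop's title come from.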

Carl's recent efforts have been in the direction of remaking logic in much the same sort of way as he tried to remake computation. The linkage is clear; distributed systems that contain representations will naturally and necessarily end up with conflicts and inconsistencies. The real world is both distributed and inconsistent; it is only the useful but ultimately misguided vision of a single centralized processor (with its implied objectivity) that creates the illusion that there can be a formalized, complete, and consistent representation of reality.

However, this foray into logic does not strike me as a very fruitful path. I think logic is simply the wrong approach; it brings along too much baggage; it is something that needs to be discarded along with the rest of the centralized view, rather than reformed. And I have not really been able to make sense of his recent work in direct logic. This may be a prejudice or failing of mine; I studied mathematical logic in my youth and eventually had an allergic reaction to it, to the point where using the kind of typographic symbols logicians use causes me to break out in hives or at least stop reading.

Nonetheless, there seem to be important and good intuitions there. Hewitt cites Latour and some of Latour's followers (Annemarie Mol, John Law), and the Latourian part of his approach (which seems to be underemphasized, but it's there) is to foreground the importance of arguments over deduction. He says: "Since truth is out the window for inconsistent theories, we need a reformulation in terms of argumentation". This is good, but in his formalism arguments are still inferential chains, just like in standard logic. He still wants to achieve "precision and rigor". This seems like a mistake. To capture real-world reasoning in argumentative form, you need to include all sorts of fuzzy and unformalizable forms of evidence and reasoning. In some of his writing Carl seems to say the same thing, but his use of mathematical notation obscures this, I think.

The other attendees included a smattering of people working in AI and law (makes sense, law is an inherently argumentative form of reasoning and has long-settled technologies for being robust to inconsistency), some security people, some Media X people, Hugo Mercier, a bunch of programming language/runtime people, a sociologist, and assorted unclassifiables and luminaries. The most straightforwardly technical talk was by David Ungar, whose earlier work in prototype-based object systems and environments was an influence on my grad-school work. He presented a system for dealing with very large datasets used in business query systems by allowing some inconsistencies to creep in in a controlled fashion.

The most interesting thing I learned there had pretty much nothing to do with inconsistency and was not a formal part of the conference; it's that Mark Miller, who has been working on capability-based security models for 20 years (a promising, Actor-like idea that never seemed to take off), is now at Google Research and has successfully gotten his ideas into the revised JavaScript standard, so that in the very near future it will be part of everyone's everyday computational environment. This is amazing in several dimensions; JavaScript seems like the last language environment on earth capable of being made elegantly secure, but apparently he's pulled it off. And it means that while the desktop OSs will clunk along with their security model that hasn't changed since the sixties, the browser will have one that actually might be capable of dealing with the challenges of 21st century computing.
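To give a flavor of the object-capability discipline -- this is my own rough TypeScript sketch, not Miller's actual SES/Caja code, and the FileStore interface and all names are invented for illustration -- the idea is that authority is nothing more than an object reference: code can only use the references it has been handed, and you attenuate authority by handing out a narrower wrapper. Object.freeze, which the revised standard does provide, keeps the wrapper itself tamper-proof.

```typescript
// Full authority: read and write (stand-in for some real backing store).
interface FileStore {
  read(path: string): string;
  write(path: string, data: string): void;
}

const store: FileStore = {
  read: (p) => `contents of ${p}`,
  write: (p, d) => console.log(`wrote ${d.length} bytes to ${p}`),
};

// Attenuation: a read-only capability carved out of the full one.
// Freezing the wrapper keeps the recipient from tampering with it.
function readOnly(fs: FileStore): Pick<FileStore, "read"> {
  return Object.freeze({ read: (p: string) => fs.read(p) });
}

// Untrusted code receives only the capabilities passed to it; there is
// no ambient authority for it to reach out and grab.
function untrustedViewer(files: Pick<FileStore, "read">) {
  console.log(files.read("report.txt"));
  // files.write(...) -- not available: that capability was never granted.
}

untrustedViewer(readOnly(store));
```

The shift is from asking "who is running this code?" to "what references does this code hold?", which is what makes the scheme plausible for a browser full of mutually suspicious scripts.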

My own talk went pretty well (paper, slides). But outside of letting me get some ideas and complaints out of my system, I'm not sure what it really accomplished now that I'm back at my day job. I would love to work on reinventing computing from the ground up, but unlike Hewitt or Miller, I have neither an obsessive devotion to my ideas nor the patience to navigate the politics required in order to have large-scale effects. Still, if anybody wants to fund me to work on this stuff, please do drop me a line.

I also participated in the panel on the Singularity, and pretty much gave the same quasi-theological argument I did here (slides), having determined that yes in fact I could get away with it. And afterwards, much to my astonishment, somebody said that my presentation was one of the more lucid ones of the day.

I think that the issues raised at the workshop are pretty much the most important things in the world. Some seriously powerful design ideas are going to be needed to manage the computational objects in the increasingly distributed, embedded, always-on world we are creating willy-nilly. Something like a coarse-grained version of Actors would be a good start. "Inconsistency" seems like too weak a word to name the processes of merging and reconciling divergent representations that must necessarily arise once everything is talking to everything else, but it too is at least a step in the right direction.
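To make that last point slightly more concrete, here is a toy sketch (mine, not anything presented at the workshop) of what merging divergent representations might look like at the smallest possible scale: two replicas of a record are combined, fields that agree merge silently, and fields that disagree are recorded as explicit conflicts for some later process -- argumentative or otherwise -- to resolve, rather than being treated as a fatal error.

```typescript
// Two replicas of the same record that diverged while out of contact.
type Replica = Record<string, string>;
type Merged = Record<string, { value: string } | { conflict: string[] }>;

function merge(a: Replica, b: Replica): Merged {
  const out: Merged = {};
  const keys = new Set([...Object.keys(a), ...Object.keys(b)]);
  keys.forEach((key) => {
    const va: string | undefined = a[key];
    const vb: string | undefined = b[key];
    if (va === undefined) {
      out[key] = { value: vb! };            // only one side has an opinion
    } else if (vb === undefined || va === vb) {
      out[key] = { value: va };             // agreement
    } else {
      out[key] = { conflict: [va, vb] };    // disagreement is data, not an error
    }
  });
  return out;
}

const nodeA = { name: "sensor-7", status: "online", owner: "ops" };
const nodeB = { name: "sensor-7", status: "offline" };

console.log(merge(nodeA, nodeB));
// { name: { value: "sensor-7" },
//   status: { conflict: ["online", "offline"] },
//   owner: { value: "ops" } }
```

Real systems (version vectors, CRDTs, and the like) do this with far more care about ordering and causality, but the basic move -- carry the inconsistency along instead of refusing it -- is the same.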

Sunday, August 14, 2011

Working on my own supertheory



Through the maelstrom of the knowledge
Into labyrinth of doubt
Frozen underground ocean
melting - nuking on my mind

Yes give me Everything Theory
Without Nazi uniformity
My brothers are protons
My sisters are neurons
Stir it twice, it's instant family!

Friday, August 12, 2011

Argument as the basis for thought

So I finally got around to reading the Mercier & Sperber paper that was buzzing around the blogosphere recently, Mercier, H., & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34(02), 57-74. I love an argument and so would be predisposed to like a paper that tries to show that argumentation is the basis for human reasoning. And it turns out that Mercier is going to be one of the featured speakers at this conference I'm going to.

In some ways this is a glaringly obvious thesis to me, but only because I've been turning my thinking in this direction for quite a while. It seems very similar to Latour, in some abstract way, although he's not cited (these guys are cognitivists, which means they think about what goes on inside the head, while Latour is a sociologist who thinks primarily about what goes on outside). But the emphasis on the agonistic nature of reasoning is the same; the idea that the purpose of representation and thinking is fundamentally to strengthen a position. Latour and M&S seem to be coming at the same insight from two rather different approaches. Latour is coming at it from a combination of sociology and advanced metaphysics; M&S muster tons of experimental evidence to show that people are better at shoring up existing beliefs with argument than coming up with objective truths.

What's odd to me is that this position seems like a very natural fit for computationalists, yet most AI people are stuck thinking that representations are mere symbols, that squat in the brain and have a magical rapport with things in the world. But the essence of computational thinking (in my version of it anyway) is to be acutely aware of the relationship between representations and the processes that use and generate them. So if there is an argument or other interested process behind thoughts, that should come as no surprise, but apparently it still is.

Agre & Chapman and others analyzed this problem and tried to fix it, ages ago, but it didn't seem to take. In fact I now recall that one of Agre's hacks was a dialectical situated action agent that would argue with itself about how to cook breakfast, or something like that. I wonder if it's time for another run at the problem?

Monday, August 08, 2011

How to avoid the singularity

I have somewhat unaccountably been asked to be on a conference panel on the topic "Inconsistency Robustness and the Singularity". What do I know about the singularity? It is unknowable by definition. So almost anything said about it is guaranteed to be nonsense.

Yet nattering on about the unknowable is a long human tradition. It is increasingly obvious to me that Singularitarianism has the form of a religion, specifically, a religion of the monotheistic, transcendent, and eschatological sort. God is the original singularity, and the technological singularity that is so longed for is just the rapture for nerds.



This is not exactly a criticism -- there's nothing inherently wrong with religion in my view, but there is something wrong in practicing religion while failing to acknowledge it, and instead pretending to "rationalism" (at least when rationalism was young, this was explicit).

Monotheism is the source of much that is good and even more that is bad in our thinking. Whatever good it may be responsible for (science!), it's pretty clear that it died in the modern era and we are in the midst of its death throes. Singularitarianism is just one of the spasms, a church for shallow thinkers, people who thought they'd gotten rid of a theistic mythology only to replace it with a near-replicate. It is not radical enough, because it presumes that while enormous technical changes will happen, we (the nerds) will still be pretty much the same. Consider eg the obsession with cryonics, which is nothing more or less than the effort to sustain the atomic, isolated individual past the point of death.

The cure for singularitarianism lies in the direction of sociology and network thinking in general. Monotheism wants to collapse the universe's locus of control into a single transcendent point, whereas the reality of human life has it distributed all over the place. The real radical changes will come not from hyper-empowered individuals but from the networks that are in the process of being woven, of which the current most visible (Facebook etc) are just a shadow, a hint. The world runs on networks and will be determined by them. Perhaps a different theology is required.

What's this have to do with inconsistency robustness? Well, the implication for computational systems is that they too need to deal with distributed control and divergence of beliefs, goals, and plans. Traditional logic is to monotheism as distributed, inconsistent, argument-based logics are to a network-based metaphysics. In a distributed world, inconsistency is the norm and consistency the exception. The social world has evolved techniques for producing consistency and cooperation; computation needs to learn to do the same.

Hm, well, I have no idea how much of this I can or should shoehorn into a presentation at a technical conference.

[update: my slides]

Monday, August 01, 2011

Introspection and meditation


I blundered into a conversation about introspection at Less Wrong (I have a sort of fondness for that community; although I disagree with their premises, they are smart and earnest, and I go over there every few months to stir up shit).

A couple of thoughts: one, the term "introspection" is misleading. We can't use some magical mechanism to peer into our minds. In some deep sense we are strangers to ourselves and have to cobble together stories about our own goals and behavior in the same way we do for other people.

Two, and here I'm on very shaky ground, but it seems to me the point of Buddhist meditation is not accurately captured by "introspection". In fact, in my own limited experience with it, it is more like a cure for the pathologies of introspection. But maybe that's just me. I really don't know what I'm talking about in this area, so here's someone who perhaps does, making roughly my point. My expert consultant on such matters is off at The Buddhist Geeks conference, but will perhaps chime in.