This was a small but very interesting and spirited gathering. As at many interdisciplinary events, many of the attendees weren't quite sure what the meeting was actually about or why they were there, but that just made things more interesting. The instigator, chair, and chief agenda setter was
Carl Hewitt, best known for developing the Actor model of computation, something that seems deeply important but has never quite set the world on fire the way it promised to. The Actor model, for the uninitiated, is a radically distributed model of computation at the most basic level, replacing the Turing/von Neumann model that is essentially the basis for everything in the field. The somewhat metaphorical extensions of this approach (e.g., in The Scientific Community Metaphor and Offices are Open Systems) strike me as very fruitful ideas whose potential is yet to be realized.
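To give a feel for the idea, here's a minimal toy sketch (mine, in TypeScript; it elides everything that makes real Actor systems interesting, like distribution and failure): each actor owns a private mailbox, handles one message at a time, and in response can send messages, create new actors, and change its own behavior.

```typescript
// A toy illustration of the Actor idea, not Hewitt's formalism: no shared
// state, no locks; all interaction is asynchronous one-way message sends.

type Behavior = (msg: unknown, self: Actor) => Behavior;

class Actor {
  private mailbox: unknown[] = [];
  private running = false;

  constructor(private behavior: Behavior) {}

  // Sends are asynchronous and one-way; the sender never blocks.
  send(msg: unknown): void {
    this.mailbox.push(msg);
    if (!this.running) this.drain();
  }

  private drain(): void {
    this.running = true;
    queueMicrotask(() => {
      while (this.mailbox.length > 0) {
        const msg = this.mailbox.shift();
        // The behavior returns the behavior to use for the NEXT message,
        // i.e. the actor can "become" something else.
        this.behavior = this.behavior(msg, this);
      }
      this.running = false;
    });
  }
}

// Example: a counter actor that replies with its running total.
const counter = (count: number): Behavior => (msg, _self) => {
  const { amount, replyTo } = msg as { amount: number; replyTo: Actor };
  replyTo.send(count + amount);
  return counter(count + amount); // become a counter with the new total
};

const printBehavior: Behavior = (msg) => {
  console.log("total:", msg);
  return printBehavior;
};

const printer = new Actor(printBehavior);
const c = new Actor(counter(0));
c.send({ amount: 5, replyTo: printer });
c.send({ amount: 3, replyTo: printer }); // prints 5, then 8
```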
Carl's
recent efforts have been in the direction of remaking logic in much the same sort of way as he tried to remake computation. The linkage is clear: distributed systems that contain representations will naturally and necessarily end up with conflicts and inconsistencies. The real world is both distributed and inconsistent; it is only the useful but ultimately misguided vision of a single centralized processor (with its implied objectivity) that creates the illusion that there can be a formalized, complete, and consistent representation of reality.
However, this foray into logic does not strike me as a very fruitful path. I think logic is simply the wrong approach; it brings along too much baggage; it is something that needs to be discarded along with the rest of the centralized view, rather than reformed. And I have not really been able to make sense of his recent work in direct logic. This may be a prejudice or failing of mine; I studied mathematical logic in my youth and eventually had an allergic reaction to it, to the point where the kind of typographic symbols logicians use causes me to break out in hives, or at least stop reading.
Nonetheless, there seem to be important and good intuitions there. Hewitt cites
Latour and some of
Latour's followers (Annemarie
Mol, John Law), and the
Latourian part of his approach (which seems to be
underemphasized, but it's there) is to foreground the importance of arguments over deduction. He says: "Since truth is out the window for inconsistent theories, we need a reformulation in terms of argumentation". This is good, but in his formalism arguments are still inferential chains, just like in standard logic. He still wants to achieve "precision and rigor". This seems like a mistake. To capture real-world reasoning in
argumentative form, you need to include all sorts of fuzzy and
unformalizable forms of evidence and reasoning. In some of his writing Carl seems to say the same thing, but his use of mathematical notation obscures this, I think.
The other attendees included a smattering of people working in AI and law (makes sense: law is an inherently argumentative form of reasoning and has long-settled technologies for being robust to inconsistency), some security people, some
Media X people,
Hugo Mercier, a bunch of programming
language/
runtime people, a sociologist, and assorted
unclassifiables and luminaries. The most straightforwardly technical talk was by David
Ungar, whose earlier work on prototype-based object systems and environments was an influence on my grad-school work. He presented a system for dealing with the very large datasets used in business query systems by allowing some inconsistencies to creep in, in a controlled fashion.
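I don't want to misstate his design, but the flavor, as I understood it, is something like the following sketch (my reconstruction in TypeScript, not Ungar's actual system): per-shard summaries refresh on their own schedules, and a query mixes snapshots of different ages rather than blocking for a globally consistent view; bounding the staleness is what keeps the inconsistency controlled.

```typescript
// Hedged sketch of controlled inconsistency in aggregate queries (my
// reconstruction): answers may mix per-shard snapshots taken at different
// times, which is tolerated as long as no snapshot exceeds a staleness bound.

interface ShardSummary {
  sum: number;
  count: number;
  asOf: number; // timestamp of this shard's last refresh
}

class ApproximateAggregator {
  private summaries = new Map<string, ShardSummary>();

  constructor(private maxStalenessMs: number) {}

  // Each shard reports on its own schedule; there is no global lock.
  report(shardId: string, sum: number, count: number): void {
    this.summaries.set(shardId, { sum, count, asOf: Date.now() });
  }

  // The answer is approximate: snapshots of different ages are combined,
  // and shards past the staleness bound are dropped rather than waited for.
  average(): { value: number; oldestSnapshotMs: number } {
    const now = Date.now();
    let sum = 0, count = 0, oldest = 0;
    for (const s of this.summaries.values()) {
      const age = now - s.asOf;
      if (age > this.maxStalenessMs) continue; // too stale: skip, don't block
      sum += s.sum;
      count += s.count;
      oldest = Math.max(oldest, age);
    }
    return { value: count ? sum / count : NaN, oldestSnapshotMs: oldest };
  }
}
```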
The most interesting thing I learned about there had pretty much nothing to do with inconsistency and was not a formal part of the conference; it's that
Mark Miller, who has been working on capability-based security models for 20 years (a promising, Actor-like idea that never seemed to take off), is now at Google Research and has successfully gotten his ideas into the revised JavaScript standard, so that in the very near future they will be part of
everyone's everyday computational environment. This is amazing in several dimensions; JavaScript seems like the last language environment on earth capable of being made
elegantly secure, but apparently he's pulled it off. And it means that while the desktop
OSs will clunk along with a security model that hasn't changed since the sixties, the browser will have one that might actually be capable of dealing with the challenges of 21st-century computing.
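To make the flavor concrete: in the object-capability style Miller advocates, a reference is authority, so you attenuate power by handing out a narrower object rather than by checking identities. A hedged sketch (my illustration in TypeScript, not the standard's actual machinery):

```typescript
// Object-capability sketch (my illustration): whoever holds a reference can
// use it; whoever doesn't, can't. Security becomes a matter of which
// references you hand out, and how narrow they are.

interface FileStore {
  read(path: string): string;
  write(path: string, data: string): void;
}

// Attenuation: wrap a powerful capability in a read-only facade. A holder of
// only the facade has no way to even name the write operation. (ES5's
// Object.freeze is what makes facades like this tamper-proof.)
function readOnly(store: FileStore): Pick<FileStore, "read"> {
  return Object.freeze({ read: (path: string) => store.read(path) });
}

// Revocation: a caretaker that can sever access after the fact.
function revocable(store: FileStore): { proxy: FileStore; revoke: () => void } {
  let target: FileStore | null = store;
  const proxy: FileStore = {
    read: (p) => {
      if (!target) throw new Error("capability revoked");
      return target.read(p);
    },
    write: (p, d) => {
      if (!target) throw new Error("capability revoked");
      target.write(p, d);
    },
  };
  return { proxy, revoke: () => { target = null; } };
}
```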
My own talk went pretty well (
paper,
slides). But outside of letting me get some ideas and complaints out of my system, I'm not sure what it really accomplished now that I'm back at my day job. I would love to work on reinventing computing from the ground up, but unlike Hewitt or Miller, I have neither an obsessive devotion to my ideas nor the patience to navigate the politics required to have large-scale effects. Still, if anybody wants to fund me to work on this stuff, please do drop me a line.
I also
participated in the panel on the Singularity, and pretty much gave the same quasi-theological argument I did
here (
slides), having determined that yes in fact I could get away with it. And afterwards, much to my astonishment, somebody said that my presentation was one of the more lucid ones of the day.
I think that the issues raised at the workshop are pretty much the most important things in the world. Some seriously powerful
design ideas are going to be needed to manage the computational objects in the increasingly distributed, embedded, always-on world we are creating willy-
nilly. Something like a coarse-grained version of Actors would be a good start. "Inconsistency" seems like too weak a word to name the processes of merging and reconciling divergent representations that must necessarily arise once everything is talking to everything else, but it too is at least a step in the right direction.
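For what I mean by merging, a toy example (mine, not a proposal from the workshop): give each write enough metadata that two divergent replicas can reconcile deterministically, with no central arbiter.

```typescript
// Toy reconciliation of divergent replicas (my illustration): each entry
// carries (timestamp, replicaId); merge keeps the newest write per key,
// breaking ties deterministically by replica id, so every replica converges
// to the same state regardless of merge order.

interface Entry { value: string; ts: number; replica: string; }
type Replica = Map<string, Entry>;

function newer(a: Entry, b: Entry): boolean {
  return a.ts !== b.ts ? a.ts > b.ts : a.replica > b.replica;
}

function merge(a: Replica, b: Replica): Replica {
  const out = new Map(a);
  for (const [key, entry] of b) {
    const mine = out.get(key);
    if (!mine || newer(entry, mine)) out.set(key, entry);
  }
  return out;
}

// Two replicas diverge while disconnected...
const left: Replica = new Map([["color", { value: "red", ts: 2, replica: "L" }]]);
const right: Replica = new Map([["color", { value: "blue", ts: 3, replica: "R" }]]);
// ...and reconcile to the same answer no matter who merges first.
console.log(merge(left, right).get("color")?.value); // "blue"
console.log(merge(right, left).get("color")?.value); // "blue"
```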