The evolution of a mind in time has its own rules, which do not violate physics but may be said to transcend them, sort of.
Instead of a person, imagine that it's a computer we are talking about. All that circuitry is obeying the laws of physics, no doubt, but the evolution of the state of the processor from one cycle to the next is not well-described by physics, but by the abstract formalism the computer was designed to implement. You can talk about a computer in terms of physics, theoretically, but it doesn't get you very far.
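A toy picture of the point (my sketch, not the post's): the processor's next state is fixed by an abstract transition rule, not by any particular physics. Here the "machine" is an imaginary 2-bit counter with a reset input.

```python
# One "clock cycle": the formalism the hardware implements.
# Any substrate that implements step() -- silicon, relays, tinkertoys --
# traces the same sequence of states.
def step(state, inp):
    if inp == "reset":
        return 0
    return (state + 1) % 4  # wraps around like a 2-bit register

state = 0
for inp in ["tick", "tick", "tick", "reset", "tick"]:
    state = step(state, inp)
# state is now 1: 0 -> 1 -> 2 -> 3 -> reset to 0 -> 1
```

A physical description of the circuit would mention voltages and transistors; the description that actually predicts the sequence of states is `step()`.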
What is even more confusing is that the computation is the same whether the computer is made out of silicon or tinkertoys. So it doesn't appear to have much to do with physics, does it? Considering that transhumanists seem to think they can upload their selves onto a different physical substrate, they must not consider themselves to be made up of physics, but Something Else.
"Properties of the relationship between things" is not a physical concept, so it indeed appears to be "something else".
Take the idea of the letter "A". It is composed of parts in certain relations -- three lines in a configuration. It's the same letter whether the lines are made up of pixels on a screen or ink on a page. Interestingly, it's the same letter even if some of the lines are curved slightly, or thickened, or enhanced with serifs -- Doug Hofstadter has written about this particular example. All of these cases are composed of physics, and no violation of physical law is going on, but the physics in the various cases have nothing in common. So whatever makes A-ness would appear to be "something else".
Here's my view of computationalism: the computer is a highly imperfect model of human thought. If you look at the historical development of the computer, it evolved as an attempt to mechanize thought. Despite its imperfections, it's the best model we have, and it helps us understand real brains. Various insoluble philosophical problems appear in the computer as engineering problems, which does not exactly solve the real problems but helps get a better handle on them.
For instance, the old problem of mind/body dualism was recreated in the computer and appears as the less mysterious hardware/software dualism. Suddenly we have a model for how physical systems and symbolic systems can depend on and interact with each other. That's very powerful. But I don't believe (as some of the more callow reductionists do) that we have thereby completely solved or gotten rid of the original question.
I never read van Gelder's dynamic cognition papers, but the approach seems rather similar to the critiques of GOFAI (good old-fashioned AI) that were made in the 90s by the neural-net people and the situated action people. There is some validity to these critiques, a lot actually, but in a sense they are attacking a strawman. Nobody really believes the brain is a classic Turing machine; even if it is doing symbol processing, it is doing it in a massively parallel, associative style. But it is doing some sort of computation (a variety of sorts, actually), and nobody has come up with a better way of theorizing about what it is doing than computationalism.
Practically, programming computers usually requires understanding things one or two levels below the level you would like to work at. If I'm coding something, I would like to think in terms of pure algorithms, but I end up having to think about clock speeds, memory locality, and (if you're Google) heating and electrical supply issues. Computers do a better job of separating out levels than biology, because they are designed that way, but in both cases you have different levels of operation built out of underlying levels.
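The memory-locality point can be made concrete with a toy example of my own: the "pure algorithm" is just "sum every element of a grid", but one level down, the order of traversal interacts with how the data is laid out in memory.

```python
# Two traversals of the same grid. Abstractly they are the same algorithm
# (add up every element); one level down they behave differently, because
# row-order traversal touches memory roughly in allocation order while
# column-order traversal jumps between rows.
def sum_row_major(grid):
    total = 0
    for row in grid:
        for x in row:
            total += x
    return total

def sum_col_major(grid):
    total = 0
    for j in range(len(grid[0])):
        for row in grid:
            total += row[j]
    return total

grid = [[i * 100 + j for j in range(100)] for i in range(100)]
# Same result either way; on large arrays in a language like C, the
# row-major version is markedly faster because of cache locality.
assert sum_row_major(grid) == sum_col_major(grid)
```

The algorithm doesn't care about traversal order; the silicon does. That's exactly the kind of lower-level intrusion the paragraph above describes.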
To return to the original issue: what is the ontological status of entities and processes that exist at the higher levels of this stack? They are certainly made of physics, but are they physics? This is a hard question that refuses to go away, except by declaring it away, as some reductionists would like to do.
More, from an earlier post:
Here's a question for reductionists: It is a premise of AI that the mind is computational, and that computations are algorithms that are more or less independent of the physical substrate that is computing them. An algorithm to compute prime numbers is the same algorithm whether it runs on an Intel chip or a bunch of appropriately-configured tinkertoys, and a mind is the same whether it runs on neurons or silicon. The question is, just how is this reductionist? It's one thing to say that any implementation of an algorithm (or mind) has some physical basis, which is pretty obviously true and hence not very interesting, but if those implementations have nothing physical in common, then your reduction hasn't actually accomplished very much.
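The prime-number example can be gestured at in miniature (my illustration, and only an analogy: two representations within one language stand in for two physical substrates). Both functions below pick out the primes, using very different internal machinery.

```python
def primes_trial_division(n):
    """Trial division over a list of previously found primes."""
    primes = []
    for candidate in range(2, n + 1):
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
    return primes

def primes_bitmask(n):
    """Sieve-style marking of composites as bits packed into one integer --
    a crude change of 'substrate': the same extension, computed over a
    completely different representation."""
    composite = 0  # bit i set means i has been marked composite
    primes = []
    for candidate in range(2, n + 1):
        if not (composite >> candidate) & 1:
            primes.append(candidate)
            for multiple in range(candidate * candidate, n + 1, candidate):
                composite |= 1 << multiple
    return primes

assert primes_trial_division(50) == primes_bitmask(50)
```

Nothing in the physics of the machine running either function tells you they are "the same" in the relevant sense; that sameness lives at the level of the mathematical object they both compute.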
In other words: let's grant that any particular mind, or algorithm, is physically instantiated and does not involve any magic non-physical forces. Nonetheless, it is mysterious how physical systems with nothing physical in common can realize the same algorithm. That suggests that the algorithm itself is not a physical thing, but something else. And those something elses have very little to do with the laws of physics.
"Algorithms are made from math" -- indeed, mathematical objects of any kind also have the peculiar properties that I noted. A hexagon is a hexagon no matter what it's made of. A hand is a hand not because its composed of flesh, but because it has certain parts in certain relationships, and is itself attached to a brain. Robotic hands are hands. While there is nothing magically non-physical going on with minds or hands, it does not seem to me that a theory of hands or minds can be expressed in terms of physics. This is the sense in which I am an antireductionist. There are certain phenomena (mathematics most clearly) which, while always grounded n some physical form, seem to float free of physics and follow their own rules.
I wouldn't call my view "vintage Platonic idealism", but maybe it is, I'm not a philosopher. I'm not saying that forms are more primitive or more metaphysically basic than matter, just that higher-level concepts are not derivable in any meaningful way from physical ones. Maybe that makes me an emergentist. But this philosophical labeling game is not very productive, I've found.
Nick said: Yudkowsky is saying that the Schrödinger equation provides a causally complete account of the program's execution.
The Schrödinger equation, let's agree, provides a mechanistic account of the evolution of the physical system of a computer, or brain, or whatever. But it does just as well for a random number generator, or a pile of randomly-connected transistors, or a pile of sand. Whatever makes the execution a sensible mathematical object is not found in the Schrödinger equation.
An algorithm can reduce to any of many very different physical representations. How is this any odder than saying 4 quarks and 4 apples are both 4 of something?
It isn't. Four-ness is also odd, just not as obviously so. Like algorithms, it too is not to be found in the Schrödinger equation. I'm hardly the first person in the world to point out that the nature of mathematical objects is a difficult philosophical question.
I'm not trying to introduce new physical mechanisms, or even metaphysical mechanisms. Let's grant that the universe, including the minds in it, runs by the standard physical laws. But the fact that mechanical laws produce comprehensible structures, and minds capable of comprehending them, is exceedingly strange. Even if we understood brains down to the neural level, and could build minds out of computers, it would still be strange.