[From Bill Powers (960315.0530 MST)]
Martin Taylor 960314 11:10 --
>    You insist on sustaining a conflict situation, whereas I was trying
>    to eliminate it by pointing out that when you define "algorithm" as
>    something in a mind, then it is something in a mind. When the word
>    is used to mean a process that delivers consistent results from
>    consistent data, then it is something in a world that may be
>    perceived by a mind.
I guess what we disagree about is whether, in the real world, there are
any processes that deliver "consistent results from consistent data."
The only one I can think of is the digital computer, which has been
constructed to approximate this ideal as nearly as possible. Even then,
the digital computer creates consistent results only when we view them
in the idealized terms of ones and zeros, ignoring the actual behavior
of the currents and voltages in the device.
But even this doesn't address the point I'm trying to make (insofar as I
have a clear enough idea of my own point to say that). Perhaps it's
significant that you speak of a process that creates a result from
"data," a term that implies some sort of computation. If you had said
that the process creates a consistent result from consistent "initial
conditions," you would have been closer to discussing my point. "Data"
are perceptions; "initial conditions" in a physical process are what
they are, whether all of them are known or not.
Here is an example of the problem:
>    A "chaotic" process is different from a noisy process. A chaotic
>    process will always give the same results, when provided with the
>    same starting state and initial conditions. In that sense, a
>    chaotic process is algorithmic.
The only way to provide a process with the same starting state and
initial conditions is in imagination. This very proposal is an
idealization, a process in the mind, not in nature. There is no way to
reset the world to a previous state. The best we can do is to imagine
that the initial conditions are repeated exactly, and then to imagine
that each step in the process takes place exactly as before. But in that
case the argument becomes a tautology; we're just running the tape of
memory again, not starting a real process anew from a real starting
condition. If we tried to repeat a real process, we would be unable to
set the initial conditions exactly the way they were, and we could not
be sure that each step in the process would take place exactly -- with
limitless accuracy -- as before. And even more to the point, we could
not reset the entire world to the state it was in before: our
"repetition" of the process would take place in a different world.
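The contrast is easy to see in simulation, which is itself an idealization in exactly the sense above: a computer can reset its "world" to a previous state, which nature cannot. A minimal sketch using the standard logistic map (my illustration, not part of the original exchange):

```python
# Logistic map x_{n+1} = r * x * (1 - x), chaotic at r = 4.0.
def iterate(x, r=4.0, steps=60):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Identical initial conditions give identical results -- but only
# because the machine can restore its starting state exactly.
a = iterate(0.3)
b = iterate(0.3)
assert a == b

# A perturbation at the limit of representation destroys agreement;
# after 60 steps the two trajectories are essentially unrelated.
c = iterate(0.3 + 1e-15)
print(abs(a - c))
```

The "same results from the same initial conditions" property holds only inside this idealized arena of exactly repeatable states; the last two lines show how completely it fails once the repetition is even infinitesimally imperfect.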
>    One feature of the kinds of chaotic process usually observed in the
>    world is that they often display a sort of near-periodic behaviour.
This brings up another aspect of what I think is my point. The "process
observed in the world" is already in the mind, simply because you have
said that it is "observed." The concept of periodicity is just that, a
concept, or at best a perception. You are speaking not of processes in
the real world, which I presume take place whether or not they are
perceived, but about processes that take place in the brain, and which
consist of relationships among perceptions, not among entities in the
world. Space and time are perceptions; they are even more abstracted
from reality when they are symbolized and we investigate the rule-driven
behavior of the symbols.
The basic feature of a symbol is that it refers to a range of lower-
level perceptions, this range being the set of states of the lower level
perceptions that is indicated by the same symbol. Inevitably, whatever
we say about the world using symbols is ambiguous, in that many
different states of the world are included in the meaning of each
symbol. A process that we describe symbolically thus necessarily refers
to an infinity of different processes in the world. It is only because
we can ignore those differences that we can even think of a parallel
between a symbol-manipulating process and an actual process external to
the mind.
This applies even in the relationship between symbolic processes and the
behavior of lower-level perceptions in the brain -- we don't have to
bring in the "real world" of which we know nothing directly. We can
speak of a light being "off" or "on", and ignore the fact that one "on"
condition is dimmer or greener than another, and that one "off"
condition actually involves a slight amount of detectable light. When we
symbolize the two states, we no longer care about these differences; we
reason logically about the result and arrive at logical conclusions, in
a mental world where quantitative differences no longer exist.
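The on/off example can be put in concrete form (a sketch of my own, with an arbitrary, hypothetical threshold): many physically distinct states collapse into one symbol, and once we reason with the symbol, the quantitative differences are gone for good.

```python
# Symbolizing light intensity as ON/OFF discards quantitative differences.
# The 0.5 threshold is arbitrary -- chosen only for illustration.
def symbolize(intensity, threshold=0.5):
    return "ON" if intensity >= threshold else "OFF"

# A dim light, a greenish light, a bright light: all the same symbol.
readings = [0.51, 0.73, 0.99]
symbols = [symbolize(x) for x in readings]
print(symbols)  # ['ON', 'ON', 'ON']

# Going back down, the best an "inverse" can do is pick one arbitrary
# representative of the class -- not the reading we started with.
def desymbolize(symbol):
    return 0.75 if symbol == "ON" else 0.25

print([desymbolize(s) for s in symbols])
```

Logical reasoning operates on `symbols`, where the three readings are indistinguishable; no amount of logic applied afterward can recover which member of the class was actually present.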
Once a logical conclusion has been reached, this conclusion can't be
translated back into the lower-level world of perception on which it was
based. All that can be done is to recreate a world in which the same
"on" and "off" classes can be perceived. I meant to make this
observation earlier, relative to your "wasp-waisted" perceptron diagram.
The representation that exists at the narrowest point can't be
translated back into the same (larger) input set from which it was
derived, because in the convergence information has to be lost. The best
you could do would be to create one of many _different_ input states
which would have the identical representation at the wasp-waist.
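The wasp-waist point is the same phenomenon in numerical form. A toy convergence (my sketch, not the perceptron in question) makes it obvious that inversion can only produce *some* preimage:

```python
# A toy "wasp-waist": several inputs converge to a single number.
def waist(inputs):
    return sum(inputs)  # convergence necessarily loses information

x1 = [1.0, 2.0, 3.0]
x2 = [3.0, 2.0, 1.0]

# Different input states, identical representation at the waist.
assert waist(x1) == waist(x2)

# No inverse of waist() can tell x1 from x2; any "reconstruction"
# can only select one member of the set sharing that representation.
```

With three numbers mapped to one, an entire two-dimensional family of input states shares each output value; the narrowest point simply does not contain the information needed to choose among them.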
Going back even farther in our discussions, I have always been somewhat
bothered by your usage of the "CEV," the Complex Environmental Variable.
Inevitably, this term carries the connotation that there must be
_something_ in the external world that corresponds to a perceptual
signal derived by complex processes from input intensities. But is this
true? Is it necessary that in the real world, there exists something
that corresponds to the taste of lemonade?
What we hope, or wish for, is that our abstract perceptions of the world
capture some larger level of organization that is independent of finer
states of the world. This is what I think is behind the way you speak of
the CEV. If we perceive a regularity, then the real world must have some
kind of corresponding regularity. If this assumption could be proven
true, we would have taken a giant step forward in solving the basic
problem of epistemology.
But I remain unconvinced -- unconvinced that the premise is true, and
unconvinced that we can even test it. We do not know that when we
perceive the world to be in the same state as before, at any level, it
is actually in the same state as before. All we can know is that our
perceptual representations are in the same state as before, which is not
the same thing.
If I could put all of this more clearly, I would be smarter than I am.
Chris Kitzke 960314.0850 --
I don't know if this will help, but here is an example I've used before
of "hypersensitivity to initial conditions."
Imagine an idealized infinitely long straight railroad rail in a uniform
gravitational field. The top surface of the rail is perfectly
cylindrical, with a large radius; the center is the highest point. The
problem is to roll a perfect steel sphere along the center of the rail,
and predict which way it will fall off, to the left or the right.
Obviously, if you start the ball exactly at the center of the rail and
roll it exactly down the centerline, the ball will never fall off. But
if there is a slight misplacement of the initial position, or a slight
error in the initial direction, the ball will fall off to one side or
the other.
If we know what the error is, we can predict on which side the ball will
fall. But as we correct this error on successive trials, we find that
the prediction becomes more and more difficult, and we have to wait a
longer and longer time to see which way the ball falls. In the end, when
we establish the initial conditions as accurately as possible using our
best measurement methods, we become totally unable to predict the final
position of the ball: we are wrong half of the time. The actual
centerline and actual initial position are somewhere within the
uncertainty of our measurements, but we can't say whether the initial
conditions lead to one outcome or the other. Paradoxically, the more
carefully we set up the initial conditions, the worse our ability to
predict the outcome becomes.
If the rail had a top surface like a peaked roof, with a perfect
straight ridge, and if the ball were shrunk to a geometrical point, the
point-ball would fall off in a very short time, so we wouldn't have to
wait long to see the outcome. But we would still find that the outcome
is unpredictable, because now any infinitesimal error in initial
placement or direction would lead immediately to falling off the rail.
No matter how small the uncertainty in the initial conditions, we would
still reach a stage where we could no longer predict where the point
would be in, say, 10 seconds.
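The arithmetic behind the example can be sketched under the usual linearization (illustrative units of my own choosing, not a model of a real rail): near the centerline the offset grows exponentially, so each improvement in initial accuracy buys only a fixed extra delay before the fall, while the side is just the sign of an offset that may lie below any measurement resolution.

```python
import math

# Linearized ball-on-rail near the unstable centerline: the offset
# grows as x(t) = x0 * exp(t), in units where the growth rate is 1.
# "Falling off" means |x| reaches 1. (Illustrative units only.)
def time_to_fall(x0):
    return math.log(1.0 / abs(x0))

def side(x0):
    # The outcome is the sign of the initial offset -- an offset that
    # may be smaller than anything we can measure.
    return "left" if x0 < 0 else "right"

# Each thousandfold improvement in placement delays the fall by the
# same constant increment; it never makes the side predictable.
for x0 in (1e-3, 1e-6, 1e-9, 1e-12):
    print(f"offset {x0:g}: falls {side(x0)} after t = {time_to_fall(x0):.2f}")
```

The logarithm is the whole story: driving the initial error from a millimeter down to a nanometer multiplies the waiting time only by a small factor, and the prediction problem, which side, is untouched, since it depends on digits of the offset we never had.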
The centerline of the rail, in either case, embodies a bifurcation in
causality. In the region around the centerline, there is a very simple
kind of hypersensitivity to initial conditions. But as Martin Taylor
says, there is more to chaos than this.
Lars Christian Smith (960314 16:00 CET) --
>    How can we learn more about your models? Are you writing them up?
Everything I have done is available as source code or runnable code on
our Web page, or has appeared on CSGnet. Very little has appeared in the
normal literature; I no longer try to write for that market. Somebody
else is going to have to pick up that torch.
>    Any plans for a CSG meeting in Europe this summer?
There were plans, but you'd have to ask Marcos Rodrigues about them (his
email address as of 1 January was m.a.rodrigues@DCS.HULL.AC.UK). I think
that 1997 is more likely.
Best to all,