Algorithms etc., chaos

[From Bill Powers (960315.0530 MST)]

Martin Taylor 960314 11:10 --

     You insist on sustaining a conflict situation, whereas I was trying
     to eliminate it by pointing out that when you define "algorithm" as
     something in a mind, then it is something in a mind. When the word
     is used to mean a process that delivers consistent results from
     consistent data, then it is something in a world that may be
     perceived by a mind.

I guess what we disagree about is whether, in the real world, there are
any processes that deliver "consistent results from consistent data."
The only one I can think of is the digital computer, which has been
constructed to approximate this ideal as nearly as possible. Even then,
the digital computer creates consistent results only when we view them
in the idealized terms of ones and zeros, ignoring the actual behavior
of the currents and voltages in the device.

But even this doesn't address the point I'm trying to make (insofar as I
have a clear enough idea of my own point to say that). Perhaps it's
significant that you speak of a process that creates a result from
"data," a term that implies some sort of computation. If you had said
that the process creates a consistent result from consistent "initial
conditions," you would have been closer to discussing my point. "Data"
are perceptions; "initial conditions" in a physical process are what
they are, whether all of them are known or not.

Here is an example of the problem:

     A "chaotic" process is different from a noisy process. A chaotic
     process will always give the same results, when provided with the
     same starting state and initial conditions. In that sense, a
     chaotic process is algorithmic.

The only way to provide a process with the same starting state and
initial conditions is in imagination. This very proposal is an
idealization, a process in the mind, not in nature. There is no way to
reset the world to a previous state. The best we can do is to imagine
that the initial conditions are repeated exactly, and then to imagine
that each step in the process takes place exactly as before. But in that
case the argument becomes a tautology; we're just running the tape of
memory again, not starting a real process anew from a real starting
condition. If we tried to repeat a real process, we would be unable to
set the initial conditions exactly the way they were, and we could not
be sure that each step in the process would take place exactly -- with
limitless accuracy -- as before. And even more to the point, we could
not reset the entire world to the state it was in before: our
"repetition" of the process would take place in a different world.

     One feature of the kinds of chaotic process usually observed in the
     world is that they often display a sort of near-periodic behaviour.

This brings up another aspect of what I think is my point. The "process
observed in the world" is already in the mind, simply because you have
said that it is "observed." The concept of periodicity is just that, a
concept, or at best a perception. You are speaking not of processes in
the real world, which I presume take place whether or not they are
perceived, but about processes that take place in the brain, and which
consist of relationships among perceptions, not among entities in the
world. Space and time are perceptions; they are even more abstracted
from reality when they are symbolized and we investigate the rule-driven
behavior of the symbols.

The basic feature of a symbol is that it refers to a range of lower-
level perceptions, this range being the set of states of the lower level
perceptions that is indicated by the same symbol. Inevitably, whatever
we say about the world using symbols is ambiguous, in that many
different states of the world are included in the meaning of each
symbol. A process that we describe symbolically thus necessarily refers
to an infinity of different processes in the world. It is only because
we can ignore those differences that we can even think of a parallel
between a symbol-manipulating process and an actual process external to
us.

This applies even in the relationship between symbolic processes and the
behavior of lower-level perceptions in the brain -- we don't have to
bring in the "real world" of which we know nothing directly. We can
speak of a light being "off" or "on", and ignore the fact that one "on"
condition is dimmer or greener than another, and that one "off"
condition actually involves a slight amount of detectable light. When we
symbolize the two states, we no longer care about these differences; we
reason logically about the result and arrive at logical conclusions, in
a mental world where quantitative differences no longer exist.

Once a logical conclusion has been reached, this conclusion can't be
translated back into the lower-level world of perception on which it was
based. All that can be done is to recreate a world in which the same
"on" and "off" classes can be perceived. I meant to make this
observation earlier, relative to your "wasp-waisted" perceptron diagram.
The representation that exists at the narrowest point can't be
translated back into the same (larger) input set from which it was
derived, because in the convergence information has to be lost. The best
you could do would be to create one of many _different_ input states
which would have the identical representation at the wasp-waist.
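
Here is a trivial sketch of that many-to-one character (my own toy example, not
your perceptron): if the narrow point keeps only the count of active inputs,
decoding can at best return some member of the right class, never the
particular input that produced the code.

def encode(pattern):
    # the wasp waist: many input lines in, one number out
    return sum(pattern)

def decode(code, width=4):
    # the best that can be done: construct one arbitrary member of the class
    return [1] * code + [0] * (width - code)

p1 = [1, 0, 1, 0]
p2 = [0, 1, 0, 1]
print(encode(p1) == encode(p2))   # True: different inputs, identical representation
print(decode(encode(p1)))         # [1, 1, 0, 0] -- a different input with the same code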

-------------------------------
Going back even farther in our discussions, I have always been somewhat
bothered by your usage of the "CEV," the Complex Environmental Variable.
Inevitably, this term carries the connotation that there must be
_something_ in the external world that corresponds to a perceptual
signal derived by complex processes from input intensities. But is this
true? Is it necessary that in the real world, there exists something
that corresponds to the taste of lemonade?

What we hope, or wish for, is that our abstract perceptions of the world
capture some larger level of organization that is independent of finer
states of the world. This is what I think is behind the way you speak of
the CEV. If we perceive a regularity, then the real world must have some
kind of corresponding regularity. If this assumption could be proven
true, we would have taken a giant step forward in solving the basic
problem of epistemology.

But I remain unconvinced -- unconvinced that the premise is true, and
unconvinced that we can even test it. We do not know that when we
perceive the world to be in the same state as before, at any level, it
is actually in the same state as before. All we can know is that our
perceptual representations are in the same state as before, which is not
the same thing.

If I could put all of this more clearly, I would be smarter than I am.
------------------------------------------------------------------------
Chris Kitzke 960314.0850 --

I don't know if this will help, but here is an example I've used before
of "hypersensitivity to initial conditions."

Imagine an idealized infinitely long straight railroad rail in a uniform
gravitational field. The top surface of the rail is perfectly
cylindrical, with a large radius; the center is the highest point. The
problem is to roll a perfect steel sphere along the center of the rail,
and predict which way it will fall off, to the left or the right.

Obviously, if you start the ball exactly at the center of the rail and
roll it exactly down the centerline, the ball will never fall off. But
if there is a slight misplacement of the initial position, or a slight
error in the initial direction, the ball will fall off to one side or
the other.

If we know what the error is, we can predict on which side the ball will
fall. But as we correct this error on successive trials, we find that
the prediction becomes more and more difficult, and we have to wait a
longer and longer time to see which way the ball falls. In the end, when
we establish the initial conditions as accurately as possible using our
best measurement methods, we become totally unable to predict the final
position of the ball: we are wrong half of the time. The actual
centerline and actual initial position are somewhere within the
uncertainty of our measurements, but we can't say whether the initial
conditions lead to one outcome or the other. Paradoxically, the more
carefully we set up the initial conditions, the worse our ability to
predict the outcome becomes.
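
A rough simulation shows the same thing numerically (the crest radius, the
"fallen off" distance, and the simple x'' = (g/R)x approximation near the
centerline are all assumptions made only for illustration): each improvement in
the initial placement buys only a fixed additional stretch of predictability,
while the sign of the residual error still decides the side.

g, R, edge = 9.8, 10.0, 0.05         # gravity, assumed crest radius, "off the rail" at 5 cm

def time_to_fall(x0, dt=1e-4):
    # integrate the lateral offset from rest until the ball leaves the crest
    x, v, t = x0, 0.0, 0.0
    while abs(x) < edge:
        v += (g / R) * x * dt
        x += v * dt
        t += dt
    return t, ("left" if x < 0 else "right")

for x0 in (1e-3, 1e-6, 1e-9, 1e-12):
    t, side = time_to_fall(x0)
    print("offset %g m -> falls %s after %.1f s" % (x0, side, t))
# Each thousandfold reduction of the placement error adds only about seven
# seconds of predictability; at the limit of measurement the side is a coin toss.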

If the rail had a top surface like a peaked roof, with a perfectly
straight ridge, and if the ball were shrunk to a geometrical point, the
point-ball would fall off in a very short time, so we wouldn't have to
wait long to see the outcome. But we would still find that the outcome
is unpredictable, because now any infinitesimal error in initial
placement or direction would lead immediately to falling off the rail.
No matter how small the uncertainty in the initial conditions, we would
still reach a stage where we could no longer predict where the point
would be in, say, 10 seconds.

The centerline of the rail, in either case, embodies a bifurcation in
causality. In the region around the centerline, there is a very simple
kind of hypersensitivity to initial conditions. But as Martin Taylor
says, there is more to chaos than this.
-----------------------------------------------------------------------
Lars Christian Smith (960314 16:00 CET) --

     How can we learn more about your models? Are you writing them up?

Everything I have done is available as source code or runnable code on
our Web page, or has appeared on CSGnet. Very little has appeared in the
normal literature; I no longer try to write for that market. Somebody
else is going to have to pick up that torch.

     Any plans for a CSG meeting in Europe this summer?

There were plans, but you'd have to ask Marcos Rodrigues about them (his
email address as of 1 January was m.a.rodrigues@DCS.HULL.AC.UK). I think
that 1997 is more likely.
-----------------------------------------------------------------------
Best to all,

Bill P.

[Martin Taylor 960315 11:30]

Bill Powers (960315.0530 MST)

I guess what we disagree about is whether, in the real world, there are
any processes that deliver "consistent results from consistent data."

I don't think we disagree as much on that as on some issue of viewpoint,
about which I am unclear. What I am clear about (I think) is that the issue
is unrelated to the problem I was discussing with Peter Cariani on the
nature of algorithms. Incidentally, did you read the part of my posting
that was addressed to him? A good part of your own posting replicates it
in different words, based on the characteristic quote:

A process that we describe symbolically thus necessarily refers
to an infinity of different processes in the world. It is only because
we can ignore those differences that we can even think of a parallel
between a symbol-manipulating process and an actual process external to
us.

Incidentally, the comment does not apply to the wasp-waisted perceptron
in the same way. The issue is related but different.

I meant to make this
observation earlier, relative to your "wasp-waisted" perceptron diagram.
The representation that exists at the narrowest point can't be
translated back into the same (larger) input set from which it was
derived, because in the convergence information has to be lost.

What the wasp-waist does is to take advantage of redundancy in the input
patterns. If the redundancy is insufficient, your comment is correct, as
I believe I noted when describing the system. But if the input patterns
are sufficiently redundant, there is zero loss at the wasp waist. What is
lost is the ability to describe patterns that never occur. Obviously
if the patterns observed after training differ characteristically from those
on which the MLP was trained, it will not translate back correctly. It can't
describe all possible input patterns, but if the "waspiness" is matched to
the world in which the MLP lives, it can describe exactly all the inputs
that ever occur.
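
To make the redundancy point concrete, here is a linear stand-in for the MLP
(the numbers are entirely my own: eight "sensor" channels driven by only two
underlying variables, with a two-unit waist found by a singular-value
decomposition). Patterns that actually occur pass through the waist and back
without loss; only patterns that never occur are misdescribed.

import numpy as np

rng = np.random.default_rng(0)
mixing = rng.normal(size=(8, 2))                  # the world: 8 sensors, 2 real causes
train = (mixing @ rng.normal(size=(2, 500))).T    # 500 redundant 8-channel patterns

_, _, vt = np.linalg.svd(train, full_matrices=False)
waist = vt[:2]                                    # 8 -> 2 encoder; its transpose decodes

occurs = (mixing @ rng.normal(size=(2, 1))).ravel()   # a new pattern from the same world
never = rng.normal(size=8)                             # a pattern the world never produces

for name, x in (("occurs", occurs), ("never occurs", never)):
    x_hat = waist.T @ (waist @ x)                 # through the waist and back again
    print(name, np.linalg.norm(x - x_hat))        # essentially zero, then of order one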

----------------

What is clear from your discussion (with most of which I agree) is that there
is an inconsistency in the way I have been viewing things. I'm not clear,
however, that the inconsistency is removed by shifting to your viewpoint.
Here's the issue, probably stated in a way that I will later deny ever
saying :-)

Everything we talk about can only be perceptions. We think some of those
perceptions relate to a "real world". However, the "real" real world would
be too complex to talk about, even if we could be assured that we were
perceiving it, so we use abstractions. We talk about "isolated systems"
although there can be none such in the "reality" we believe exists. It is
only in an isolated system that there can be a process that delivers
consistent results from consistent data. And there are no isolated systems.
In that sense, you are quite right in saying that "algorithmic processes"
exist only in the mind (so far as we can tell).

The only one I can think of is the digital computer, which has been
constructed to approximate this ideal as nearly as possible. Even then,
the digital computer creates consistent results only when we view them
in the idealized terms of ones and zeros, ignoring the actual behavior
of the currents and voltages in the device.

Yes, that's a type 5 system in the classification I described to Peter
Cariani. It's the only kind of system that can execute "logical" processes,
and that is true whether the logic is done by a computer or by the human mind.
In the computer, each transition of a zero to a one in a particular flip-flop
occurs as a transition across a boundary with a trajectory that approaches
an attractor that represents the "one" state, regardless of the particulars
of the approach. And then the voltage is subject to all sorts of external
disturbances from electric and magnetic fields and charged particle
impingements. But, the regional boundary in a flip flop has changed once
the transition has been made, so that a voltage that had been a marginal
"zero" now corresponds to a secure "one."

If you had said
that the process creates a consistent result from consistent "initial
conditions," you would have been closer to discussing my point. "Data"
are perceptions; "initial conditions" in a physical process are what
they are, whether all of them are known or not.

But we are still perceiving them and analyzing them within a perceived
model of the situation, whether we call them "initial conditions" or "data."
I preferred to separate "initial conditions" within the device from
"initial conditions" that influence the condition of the device from
outside. The latter, I call "data." During the process, the internal
conditions of the device depend, at least in part, on both the "initial
conditions" and the "data," but by using different terms, we allow for
the device to sense novel external "data" without having had to include
the whole universe in the original state. If we use only the term "initial
conditions" we have to add an adjective "internal" or "external" when we
are dealing with a "realistic" non-isolated system.

The essence of my argument has been much like yours, that algorithmic
processes do not occur in the real world. If I may quote from my commentary
on "The Emperor's New Mind" (BBS, December 1990):

  "Thinking always proceeds in an interactive environment, and no train of
  thought can be guaranteed to run to completion unaffected by outside
  influences. Furthermore, the thoughts engendered by an input do not stop
  when the relevant output has been made. In the second sense, then,
  algorithm means only the manipulation of elements of thought according
  to rules, in an environment in which the data, the results, and the
  manipulations can be influenced by unexpected events. [I had earlier said
  that Penrose used the term "algorithmic" in two senses, as a defined method
  for converting inputs of a given type into results of a given type, and also
  as equivalent to "deterministic."] Algorithms in the second sense cannot
  be deterministic in the real world. Only the behaviour of the universe as
  a whole could be deterministic, not that of any isolated subpart, since
  even if the subpart arrives twice in exactly the same state, external
  events may cause its future behaviour to follow two different paths."

I think that this, though not couched in the language of PCT, makes much
the same point as you do. An algorithmic process will not be found to
occur in nature. But when we talk about an algorithmic process _not_ being
found in the real world, we are nevertheless talking about the real world,
not about processes occurring (or not occurring) in our perceptual apparatus.
One has to imagine the existence in the "real world" of something, in order
to argue that this something cannot exist.

Now, I used the example of a neuron, which is said to produce an output
impulse when some function of the input impulses and its chemical environment
exceeds a threshold. This is, of course, an imaginary neuron. But it is
imagined to exist in the real world, and it is imagined that the function
in question is at any moment precise, whether it is knowable or not. The
imagined neuron simply _will_ fire if F(x1, ... ,xn) exceeds its threshold
value.
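
For concreteness, the imagined neuron amounts to nothing more than this sketch
(the weighted-sum form of F and the particular numbers are only stand-ins for
whatever the precise but unknowable function really is):

def imagined_neuron(inputs, weights, threshold):
    # F(x1, ..., xn): some exact function of the inputs -- here, a stand-in weighted sum
    F = sum(w * x for w, x in zip(weights, inputs))
    return F > threshold          # it simply _will_ fire if F exceeds the threshold

print(imagined_neuron([0.2, 0.7, 0.1], weights=[1.0, 0.5, -2.0], threshold=0.3))   # True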

Here there is a dilemma. Is the neuron an algorithmic processor because the
function F is precisely defined (though unknowable), or is it not an
algorithmic processor because the real-world neuron of which it is an
imagined model could never _in principle_ be in the same initial state twice?
Peter Cariani says that it could not be algorithmic on the grounds that its
input data may be continuous-valued. I don't go along with that, and it's
a different issue. But the attribution of unknowable properties to an
imagined object that could not exist exactly as imagined--there's a problem,
indeed.

In that we are talking about imagined objects, it is quite true, as you said
in a different context, that the "process observed in the world" is already
in the mind, simply because you have said that it is "observed" (or
"imagined"). But there seems to me to be more than that in the distinction
you seem to be making. You say:

You are speaking not of processes in
the real world, which I presume take place whether or not they are
perceived, but about processes that take place in the brain,

We are getting into meta-meta- stuff here, and I don't think we should. In
talking about "algorithmic processes" I want to be talking about processes
in the real world. Real world objects have to be a legitimate topic of
discussion, in the sense that if I say "My desk is wood" I know that I am
talking of perceptions in the mind, but assume that those perceptions
correspond to some kind of "reality." In the same sense, I know that the
process I imagine this "neuron" to be executing is indeed in my mind,
but I am talking about the corresponding process that would be occurring
in a real-world neuron if there were such a thing.

It's more convenient, and seldom leads to problems, if we pretend we are
talking about the "real world" objects, except when we are explicitly
dealing with the behaviour of perceptions. And then it is convenient to
pretend that those perceptions are "real" so that we can talk of them.
Otherwise we get into indefinite recursions, and lose track of which
perception of perception of perception of...we are trying to talk about.
-------------

Going back even farther in our discussions, I have always been somewhat
bothered by your usage of the "CEV," the Complex Environmental Variable.
Inevitably, this term carries the connotation that there must be
_something_ in the external world that corresponds to a perceptual
signal derived by complex processes from input intensities. But is this
true? Is it necessary that in the real world, there exists something
that corresponds to the taste of lemonade?

Periodicity is in the mind, you said. We have this discussion periodically,
and periodically seem to agree that we have the same idea about what is
going on. In this case, should I try to dig out the archives and repost
the sequence, or would it be more fun to recycle the arguments in new
wordings? A CEV is some function of variables in the real world. Those
variables may themselves be CEVs. Eventually, every CEV has to be traced
back to perceptual functions, though not necessarily functions that produce
controlled perceptions. A controlled perception defines a controlled CEV,
but, being in the environment, a CEV can be influenced by events in the
environment, specifically by the actions of a different control unit
whose perceptual function defines a related CEV. There's no assertion
that any CEV represents a structure in the environment that has any more
reality than "the number of purple hairs in the roof of St Basil's Cathedral
times the number of seconds since the beginning of the last lunar eclipse".

What we hope, or wish for, is that our abstract perceptions of the world
capture some larger level of organization that is independent of finer
states of the world. This is what I think is behind the way you speak of
the CEV.

I don't go that far. Neither do I think there is anything special about
the "finer states of the world." All we can access is variations in the
outputs of our sense organs, and for all we know, there are infinite
ranges of variation in the states of the "real world" corresponding to
any particular variation in the sensor outputs. (Of course, in "sensor
outputs" we have to include lots of things that are not normally considered
"sensors", such as (perhaps) the effects of magnetic fields on brain cells,
and stuff like that.)

If we perceive a regularity, then the real world must have some
kind of corresponding regularity.

I don't think this comes into it much. It does, to some extent, but it's
not terribly important. It gives rise to the convergence (wasp-waistedness)
of our sensory systems--10^8 retinal receptors converging to 10^6 optic nerve
fibres, for example. And it means that any particular "sensor" variation
corresponds to an even wider range of variability in the "fine detail" of
the real-world. These convergences may well depend on "real" regularities
of the world, in the sense that if they didn't, our ancestors would have
had a harder time surviving variations that affected them but that they
couldn't perceive.

By making this assertion, incidentally, you are saying that you really
believe Hans Blom's view of the modelled perceptual world. I've no objection
to that, but you should probably take note of it in your next round of
discussions with him.

My view is a bit different. It is that if our control actions have reasonably
reliable effects on some of the infinitely many possible perceptions, those
are the perceptions that are likely to be retained. We are likely to lose
perceptual functions that define CEVs on which our actions have no consistent
effect. And we are likely to lose perceptual functions for which the control
has no consistent effect on our intrinsic variables (specifically, those
variables that affect the propagation of our genetic structure to future
generations).

So I don't believe that we perceive regularities of the real world, but
I do believe that what we perceive consistently and stably is strongly
influenced by regularities in the way the world works. What we perceive
during reorganizational transitions has nothing to do with real regularities
in the world, and I would assume that much of the time we are doing some
reorganization (I'm using the term in a slightly broader sense than usual,
to indicate any manner in which control functions are changed).

If this assumption could be proven
true, we would have taken a giant step forward in solving the basic
problem of epistemology.

Any such proof would be circular, wouldn't it?

If I could put all of this more clearly, I would be smarter than I am.

Wouldn't we all?

I'm not at all sure that the above clarifies anything, but I hope it does.

Martin