algorithmic processes; learning goals; question re chaos

[From Bill Powers (960316.0300 MST)]

Martin Taylor 960315 11:30 --

     What the wasp-waist does is to take advantage of redundancy in the
     input patterns. If the redundancy is insufficient, your comment is
     correct, as I believe I noted when describing the system. But if
     the input patterns are sufficiently redundant, there is zero loss
     at the wasp waist.

What is "redundancy?" Later in your post, you illustrate it this way:

     It gives rise to the convergence (wasp-waistedness) of our sensory
     systems--10^8 retinal cones converging to 10^6 optic nerve
     fibres, for example.

What is "redundant" depends on what function is being used to extract a
perceptual signal. For example, an "A" presented anywhere within the
retinal area of recognition will lead to the signal representing "A".
However, what is lost is the information about _where_ the "A" is seen,
how it is oriented, and what its size, color, or typeface are. From the
knowledge that an "A" is present, one could reconstruct an "A" in the
retinal field -- but it would not be the same "A". This is true of any
situation involving "redundant" inputs. The redundancy is always defined
in terms of the level of perception involved, and there is no redundancy
at lower levels. So the reconstruction can be only in terms of the level
(and kind) of perception in question.

To get back into sequence: you say

     But if the input patterns are sufficiently redundant, there is zero
     loss at the wasp waist. What is lost is the ability to describe
     patterns that never occur.

The catch here is in speaking of input _patterns_. A _pattern_ may
repeat (a higher-level perception) without having the same _instance_ of
the pattern repeating (the lower-level components of that pattern). A
different perceptual function given the same lower-level components
would extract a different invariant, treating a different aspect of the
input set as the variable to be recognized and treating all other
aspects, including for example the alphabetic identity of the inputs, as
irrelevant redundancies. If the second perceptual function is looking
for a pattern of a given size, to that function an "A" of a given size
is redundant with a "B" of the same size.
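The point about competing invariants can be sketched concretely. In this toy example (entirely hypothetical -- the triple standing in for low-level retinal detail is my own device, not Powers's), two different perceptual functions read the same low-level input, each extracting a different invariant and treating the other's variable as an irrelevant redundancy:

```python
# Toy low-level input: a (letter, size, position) triple standing in
# for the full retinal detail. Each perceptual function extracts one
# invariant and discards the rest as "redundant".

def identity_percept(inp):
    # Extracts alphabetic identity; size and position are redundancies.
    letter, size, pos = inp
    return letter

def size_percept(inp):
    # Extracts size; here "A" vs "B" is the irrelevant redundancy.
    letter, size, pos = inp
    return size

big_A   = ("A", 12, (3, 4))
big_B   = ("B", 12, (7, 1))
small_A = ("A", 6,  (3, 4))

assert identity_percept(big_A) == identity_percept(small_A)  # both just "A"
assert identity_percept(big_A) != identity_percept(big_B)
assert size_percept(big_A) == size_percept(big_B)  # redundant to this function
```

To the size function, a big "A" and a big "B" are the same percept; to the identity function, a big "A" and a small "A" are. Neither output allows reconstruction of the particular lower-level instance.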

     ... if the "waspiness" is matched to the world in which the MLP
     lives, it can describe exactly all the inputs that ever occur.

The problem is in this "matching." Presumably, the world is more
detailed than the set of all sensors. Therefore each sensor's state is
potentially independent of all other sensors' states. This means that
the space of possibly discriminable inputs has as many dimensions as
there are sensors, at the level of intensities. No true redundancy, therefore,
can exist at the level of intensities. Given any MLP that identifies
every input pattern it is able to detect, there can be another MLP that
distinguishes between input states that are the same to the first.
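A minimal sketch of this pigeonhole point, with an assumed three-sensor world and a two-channel waist (the numbers and function names are illustrative only, not anyone's actual model):

```python
# A wasp waist maps more sensor states to fewer channels, so it must
# merge some distinct states; a second function can still tell them apart.

def waist(sensors):
    # First "MLP": passes only sensors 0 and 1 through the waist.
    return (sensors[0], sensors[1])

def other_function(sensors):
    # A second function attending to the dimension the waist discards.
    return sensors[2]

a = (1.0, 0.5, 0.0)
b = (1.0, 0.5, 9.0)  # differs from a only in the third sensor

assert waist(a) == waist(b)                    # identical to the first function
assert other_function(a) != other_function(b)  # distinguished by the second
```

Whatever the waist preserves, some other function can be built that discriminates states the waist has collapsed, because the merging is forced by the count of channels alone.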

We don't really need to be talking about MLPs. This would be true of any
perceptual function, however it came to exist. You can have a perceptual
function that exhaustively recognizes all possible cylinders -- but it
will not be able to recognize all different orientations of the
cylinders, because orientation is stripped out in the very process of
recognizing "a cylinder" independently of its orientation.

This is the essence, I think, of the concept of a hierarchy of
perceptions with many perceptual functions at each of many levels.

     What is clear from your discussion (with most of which I agree) is
     that there is an inconsistency in the way I have been viewing
     things. I'm not clear, however, that the inconsistency is removed
     by shifting to your viewpoint. Here's the issue, probably stated in
     a way that I will later deny ever saying :-). Everything we talk
     about can only be perceptions. We think some of those perceptions
     relate to a "real world". However, the "real" real world would be
     too complex to talk about, even if we could be assured that we were
     perceiving it, so we use abstractions.

We also use experiments. Sorry to keep hammering at this, but the
experimental approach gives us a probe into reality which we do not get
merely from abstracting. When we emit an act into the real world, what
we get back depends only and exactly on what is actually Out There. No
approximations or abstractions are involved; no algorithms. When you
push on an object, the result affects your senses in a way that is
determined by what is actually involved; to put this in terms of limited
human perceptions, it is determined by the exact resiliency of the area
of skin in contact with the object, the exact composition and structure
of the object's surface at that point, the propagation of pressure waves
through the object, and their reflections at the surface, the molecules
of air bombarding the object at the same time, the influence of the Sun
and Moon on the object, and literally an endless list of similar details
which our senses can't possibly sort out -- and other details which the
senses can't even detect and of which we know nothing. The world itself
determines both what a "push" amounts to and what its effects will be;
the effects will be the only effects that could have occurred at that
instant of time.

ALL levels of perception are abstractions. The higher the level, the
greater the degree of abstraction. What we do not know is whether this
process of successive abstraction is revealing actual properties of the
world, or whether it is _creating_ properties of the (experienced) world.

Incidentally, you missed the context of my sentence

If we perceive a regularity, then the real world must have some
kind of corresponding regularity.

I was still paraphrasing what I took to be connotations of your usage of a
CEV. The quoted sentence is not my view. My view is that we are safer
not assuming that perceived regularities correspond to actual
regularities, but are constructed by perceptual functions. It's hard to
remain skeptical about regularities like ticks of a clock, but it is
conceivable that in the real world Out There, space and time are not
like what we experience of them. The very fabric of the world as we
experience it could be merely the projection of an unimaginably
different world into the world of human perceptions -- like the
Flatlanders perceiving the results of an intervention by a three-
dimensional being.

     My view is a bit different. It is that if our control actions have
     reasonably reliable effects on some of the infinitely many possible
     perceptions, those are the perceptions that are likely to be
     retained. We are likely to lose perceptual functions that define
     CEVs on which our actions have no consistent effect. And we are
     likely to lose perceptual functions for which the control has no
     consistent effect on our intrinsic variables (specifically, those
     variables that affect the propagation of our genetic structure to
     future generations).

As you can see, since the quoted sentence above is not my view but my
representation of what I thought your view was, this is not a difference
from my view. I agree entirely.

If this assumption [of correspondence between perceived and real
regularities] could be proven true, we would have taken a giant step
forward in solving the basic problem of epistemology.

     Any such proof would be circular, wouldn't it?

If so, it would not be a proof. What would be implied would be a way of
knowing about real regularities that did not depend on the senses. I
would accept a mathematical proof that could distinguish between systems
in which internal regularities are necessarily representative of
external regularities, and those in which the internal regularities were
arbitrary. If the proof entailed showing how the system itself could
make this distinction, then we would know how to make it and the problem
would be solved. But I rather suspect that any attempt at such a proof
would come out negative, for reasons similar to those behind Gödel's
Theorem or theorems of decidability (I am parroting words here, since I
couldn't reproduce such theorems).

     This is Phil Runkel replying to Bruce Gregory's posting of
     15 March 96.

Phil, wouldn't it be shorter to say [From Phil Runkel 960315]? And it's
easier to cite as the heading for a section of a post like this.

I agree with your comments about schooling and concentrating on what to
learn. But I still think there is a "how to learn" question that
involves learning control. Whatever you would like students to learn,
there are control processes that they have to acquire before they can
learn it, and contrary to what you imply, these are not always in place
just because the subject is an animal.

Take subtraction, which you mention. What is the goal of subtraction? I
went all the way through grammar and high school before anyone ever
explained to me (while I was listening) what the goal of subtraction is.
If I had known, I would not have made so many mistakes. I suppose that
everyone else in the world was taught the goal of subtraction and I just
missed it, so I will not bore you by repeating it here. But if you know
the goal of subtraction, then you will be able to tell when you have
subtracted right and when there is an error, so you will stop making
subtraction errors.
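Powers leaves the goal itself unstated, so purely as an illustration: one common statement of it is that a - b should be the number which, added back to b, restores a. Anyone holding that goal can check every answer for themselves:

```python
# Illustration only: one common statement of the goal of subtraction
# (the post deliberately leaves it unstated). Under this goal, a - b
# is the number that restores a when b is added back, so any answer
# can be checked by addition.

def subtraction_is_right(a, b, answer):
    return answer + b == a

assert subtraction_is_right(52, 17, 35)       # 35 + 17 == 52
assert not subtraction_is_right(52, 17, 45)   # a classic borrowing error
```

Knowing the goal, the student needs no external authority to detect an error; the check is part of the goal itself.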

This way of teaching doesn't teach procedures or facts; it teaches
goals. If you know what the goal is, and accept it as your own, you will
find some procedure to reach it. Half of the fun of learning is to find
your very own way to achieve a goal, whether or not it's of any basic
interest to you (has any higher-level usefulness). Of course the other
half (there are probably many halves) is seeing someone else come up
with a really neat way to reach a goal, and learning to do it the same
way. This kind of learning, I think, is called "playing games," because
the outcome is less important than how you get there. People who play
football or golf are not really much interested in the ball's position.
The whole game is about how you get it into and out of various
positions. If, like Calvin Peete, you have a gimpy arm, you find your
own way, which can work well enough to earn a pot of money and fame if
you like that sort of thing.

Note to Ken Kitzke: ask Phil.

A general question to the experts about chaos.

Has anyone tried to explain why chaos occurs? In the case of nonlinear
oscillators, one possible explanation would be that the nonlinearities
create different modes of oscillation at different frequencies. If the
different frequencies are harmonically commensurate, we get regular
oscillations of some irregular waveform. But if they are incommensurate,
we get nonrepeating superimposed oscillations which look quite random
but still contain overall orderliness. As the drive to a nonlinear
oscillator is increased, the various modes of oscillation will be
differently affected, and the commensurateness will also be affected. So
we would expect (perhaps) regions of regular oscillation with "chaotic"
oscillations between them, as the drive is gradually increased.
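The conjectured structure -- regions of regular oscillation with chaotic bands between them as a drive parameter increases -- shows up in the simplest textbook system, the logistic map, though that map is an iterated function rather than a driven oscillator. A small sketch (the helper names are mine; the parameter values are standard):

```python
# Logistic map x -> r*x*(1-x): as the "drive" r increases, periodic
# regions and chaotic bands alternate (periodic windows amid chaos).

def logistic_orbit(r, x0=0.5, burn=2000, n=64):
    # Iterate past the transient, then record n successive values.
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(n):
        x = r * x * (1 - x)
        orbit.append(x)
    return orbit

def apparent_period(orbit, tol=1e-6, max_p=16):
    # Smallest p with orbit[i] ~= orbit[i+p] throughout, else None.
    for p in range(1, max_p + 1):
        if all(abs(orbit[i] - orbit[i + p]) < tol
               for i in range(len(orbit) - p)):
            return p
    return None  # no short period: chaotic (or very long-period)

assert apparent_period(logistic_orbit(3.2)) == 2     # regular oscillation
assert apparent_period(logistic_orbit(3.83)) == 3    # periodic window
assert apparent_period(logistic_orbit(3.9)) is None  # chaotic band
```

The period-3 window at r near 3.83 sits inside a chaotic band, which matches the expectation of regular regions interleaved with chaotic ones as the drive grows; whether the commensurate-modes explanation is the right bridge to oscillator physics is exactly the open question being asked.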

The reason I ask is that chaos seems to be an object of study in itself,
with no bridge to normal physical principles. It would be nice to know
what the bridge is.

Best to all,

Bill P.

P.S. Next Wednesday, the 20th, I'm going to Chapel Hill NC to put on a
couple of seminars for New View Publications, who persuaded me somehow
to do this. I will be gone until the following Tuesday. I wish someone
else were doing this. Anyway, I have some getting ready to do and will
not be as responsive as usual (particularly after the 20th).