redundancy; predictability; x-wind landings

[From Bill Powers (960327.0630 MST)]

Martin Taylor 960326 15:00 --
Bruce Gregory 960326.1400 --

Bruce:
     Could I express my concern, not in terms of how predictable I want
     my steering action to be, but in terms of the control I want to be
     able to exercise?

Martin:
     Yes, you would be saying the same thing in different words.

Bruce:
     I foresee (predict) a potential loss of control over the direction
     in which my car is moving.

Martin:
     Yes, that's it.

These statements are somewhat artificial, in that the words are chosen
to have theoretical significance rather than being actual descriptions
of the process and experience. When I drive on icy roads, it's not the
unpredictability or lack of control that I imagine, it's trying to go
around a curve at my present speed and sliding -- quite predictably --
off the road. I slow the car until I can imagine going around the curve
at that speed and staying on the road. I'm simply controlling a
perceived outcome as usual, real or imagined.

Actually, what I really imagine seems even less like planning: I imagine
something like a feeling of the tires "gripping" the road, with ice
considerably reducing my sense of the connection between tires and road.
This is much like Hans Blom's model-based control, but without the
calculations of variance that he uses.

At a higher level of perception, I might characterize "sliding off the
road" as one example of "losing control," but while actualluy driving
I'm not really concerned with the category to which sliding off the road
belongs. I might classify sliding off the road as an example of
"unpredictability" or "uncertainty," except that what really bothers me
about going too fast is the clearly predicted outcome. Also, this kind
of supposed unpredictability is not at all like the kind that arises
from a loose steering linkage, which really would make me uncertain
about how the car will behave when I turn the steering wheel.

The way of speaking illustrated above substitutes a theoretical category
to which a process could be said to belong for a description of the
process itself. This is not a description of the actual process unless
the person is actually perceiving in terms of the stated categories and
trying to make the world into a suitable member of the categories. After
a person finds the car in the ditch, the person might explain to the
State Trooper, "I lost control of my car," but in my experience this is
likely to be a description after the fact, not a report on the event
itself. During the event I simply try to stay on the road.

I don't know if I'm saying this clearly. To exaggerate, suppose I
explained to the trooper, "I was unable to keep the entropy of my car
from increasing." From a certain theoretical standpoint, this might be a
correct way to characterize the result, but entropy is very unlikely to
have been part of any actual intentional processes involved.

Are we talking about a conclusion drawn by an observer of a behaving
system, or about an actual process carried out in a behaving system?

----------------------------------------------------------------------
Bruce Gregory 960326.1642 --

     I guess I was speaking from the point of my idee fixe, learning.
     When you are learning to land in cross winds, you may: (1) look at
     the wind sock, (2) anticipate that you will have to lower your left
     (or right) wing; and (3) anticipate that you will have to hold
     right (or left) rudder. This anticipation or prediction sets you up
     to control your perceptions, since it is not always obvious what you
     must do to get things looking the way you want them to look.

Yes, this is how we are taught things like crosswind landings. But when
you actually try the process, these descriptions suddenly take on
meanings that the words didn't have. Actually, your description applies
only to the moments just before touchdown when you're trying to keep the
fuselage (or more specifically, the undercarriage) aligned with the
runway, which requires a momentary slip. In the final approach, you
don't use crossed controls; you fly coordinated but with the nose aimed
somewhat upwind, so you're approaching the runway in a crabbed
orientation. The way I think of it is that just before the wheels touch
the runway, you kick the plane into alignment. If you actually try to
touch down with one wing low, I think you risk some pretty hairy
consequences. Have you ever actually landed on one wheel? Maybe you can
really do this, but I never could when I was flying. The only time I've
actually seen it done was at an air show, with a stunt pilot at the
controls. I've often thought that the guy who wrote those descriptions
of how to touch down in a crosswind wasn't putting into words what he or
she actually did. It sure wasn't what _I_ actually did!

I think that looking at the wind sock and anticipating your moves is
mainly a way of occupying your cognitive levels and keeping them from
being surprised and interfering when you actually land the plane, doing
what you actually have to do. If you do it the way I said (align with
the runway; stop the lateral drift; kick the plane straight just before
touchdown) you will get down just fine, whatever your cognitive systems
are going on about. Cognitive -- verbal -- learning isn't the best way
to learn nonverbal control tasks.
-----------------------------------------------------------------------
Martin Taylor 960326 16:15 --

     I seem to sense an improvement in your mood from its state before
     the trip, perhaps a release from some tension. Do I? Or is it just
     an illusion?

You're right -- I don't have to think about going off on a five-day trip
now.

     By "pattern" I mean something like { 1, 5, 23, 6, 15, 2, -17 }, a
     set of numbers that represent the values of a variable at many
     points. A pattern doesn't need a pattern recognizer.

An interesting statement. What I see in the brackets are seven numbers.
How do you know there's a pattern there without perceiving it? I have
written down another seven numbers: can you tell me what the pattern in
them is? Or would you need to perceive them before telling me?

     The statement was exactly that the reproduction is _of the values
     at the sensor inputs_.

      ...
     It seems to be very hard for you to believe that I mean this. I
     don't know why that should be. I do mean it.

I do find it hard to believe you can mean this, for a real system. If
you're just talking about a mathematical system I can believe it, but
for a real system, the array of sensor inputs doesn't actually repeat.
The set of numbers above, for a real system, would never repeat: on a
second occurrence of "the same" pattern we might have {1.02, 4.87, 22,
6.15, 15.5, 2.1, -12} -- and the pattern recognizer would report
that the same pattern as before was present. This assumes that the
numbers represent sensor responses to a real environment, not just
mathematical computations in which the idealization is built into
everything.
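
To make that concrete, here is a toy sketch of what I mean (my own
illustration, not anyone's actual model): a recognizer that reports
"same pattern" whenever a fresh set of sensor values lies close enough
to a stored prototype, even though the raw numbers never repeat
exactly.

prototype = [1, 5, 23, 6, 15, 2, -17]

def same_pattern(sample, tolerance=6.0):
    # "Same" here means every value lies within the tolerance of the
    # corresponding prototype value.
    return all(abs(s - p) <= tolerance for s, p in zip(sample, prototype))

print(same_pattern([1.02, 4.87, 22, 6.15, 15.5, 2.1, -12]))  # True: "the same"
print(same_pattern([9, 9, 9, 9, 9, 9, 9]))                   # False: different

The tolerance is doing the perceiving; without some such similarity
judgment there is no sense in which the second set of numbers is "the
same pattern" as the first.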

     The postulated conditions are that the input patterns of
     intensities during the learning phase were all generated by
     presenting intensities derived from mathematically correct stripes
     of various widths, orientations, and placements. The claim is that
     _so long as the characteristics of the world are what they were
     during learning_, the intensities at the outputs of the device will
     be, one for one, exactly what the intensities are at the inputs.

Yes, this I can believe because everything -- the properties of the
world as well as those of the system -- has been idealized. If you can
mathematicize the world as well as the system, you can say that "the
characteristics of the world are what they were during learning." But
this does not apply to the real system, where the characteristics of the
world are never actually "the same" either during or after learning.
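
Just to pin down what I am agreeing to, here is a small numerical
sketch of the idealized case (my own construction, a linear code found
by principal components rather than your trained perceptron): a
four-number "waist" reproduces 32 input intensities essentially
exactly, but only so long as the inputs come from the same
low-dimensional family of stripe-like patterns that was present during
learning.

import numpy as np

rng = np.random.default_rng(0)
n_inputs = 32          # number of "sensor" intensities
n_waist = 4            # the wasp waist: a 4-number code for 32 inputs

# The training-time "world": smooth stripe-like patterns, i.e. sinusoids
# of one or two cycles with random phase. This family is 4-dimensional.
x = np.arange(n_inputs)
def stripe():
    freq = rng.choice([1, 2])
    phase = rng.uniform(0, 2 * np.pi)
    return np.cos(2 * np.pi * freq * x / n_inputs + phase)

train = np.array([stripe() for _ in range(500)])
mean = train.mean(axis=0)

# "Learning": find the best 4-dimensional code (principal components).
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
W = vt[:n_waist]                        # encode 32 -> 4, decode 4 -> 32

def reproduce(p):
    return mean + (p - mean) @ W.T @ W  # through the waist and back out

familiar = stripe()                     # same kind of world as in learning
novel = rng.normal(size=n_inputs)       # a world the device never saw

print("max error, familiar world:", np.abs(familiar - reproduce(familiar)).max())
print("max error, novel world:   ", np.abs(novel - reproduce(novel)).max())

The reproduction succeeds only because the world has been restricted,
by construction, to a family whose dimensionality fits through the
waist; nothing of the sort is guaranteed for a real sensor array facing
a real environment.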

There's another problem. Suppose your system uses monocular vision.
Clearly, there can be many states of the three-dimensional world that
will lead to the same two-dimensional view. No matter how redundant the
inputs may seem to the two-dimensional input function, there is no way
to reproduce the inputs to this perceptual function exactly (or even
approximately) on the basis of the two-dimensional representation. If
the stripes at the input are actually at different distances from the
input function, the reproduction of the _appearance_ of the stripes can
never recapture the distances of the stripes, no matter how compact the
representation.
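
A trivial illustration of that many-to-one mapping (my own example,
assuming nothing more than a pinhole projection): two stripe edges at
different distances can land on exactly the same image coordinate, so
the two-dimensional inputs by themselves cannot say which world
produced them.

def project(x, z, focal=1.0):
    # Pinhole camera: a point at lateral offset x and distance z lands
    # at image coordinate focal * x / z.
    return focal * x / z

near = project(x=1.0, z=2.0)   # an edge 1 unit off-axis, 2 units away
far = project(x=2.0, z=4.0)    # twice as far off-axis and twice as distant
print(near, far, near == far)  # both 0.5: the same two-dimensional view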

When you claim that the wasp-waisted perceptron can be used as the basis
for recreating the _inputs_ to the perceptron, aren't you tacitly
assuming that the dimensionality of the environment is less than or
equal to that of the set of input sensors?

There must be some conditions that you're assuming which aren't being
mentioned.
-----------------------------------------------------------------------
Best to all,

Bill P.