Models and control

[From Bill Powers (980124.1656 MST)]

I want to make a point about model-based control and the question that Jeff
Vancouver raised, of perceptual systems as models of the environment. This
discussion, appropriately enough, will go in a big circle.

What gets overlooked here is the relationship of perceptual signals to the
environment that is basic to PCT. The view that is taken outside of PCT is
that the environment, the world, is just what is out there, and the brain
has to construct some kind of model of it inside the head, so as to be able
to predict or control it. But in PCT, as I conceive it, the world of
perception _is_ the world that seems to be "out there." The problem is not
to understand how the brain forms a model of the world we observe, but how
the world we observe is constructed from the physical reality on which we
assume it's based. We're already looking at the perceptions; the real
problem is to figure out how they're related to the physics.

All this came to mind again as I was working on a refinement of Rick's 3-D
baseball model. This problem gets a bit complicated when you try to get the
details right. The natural thing to do is to split the control systems up
into one that controls in radius, and another that controls laterally.
Relating this to the physics of the "real" baseball diamond gets
complicated because the control is always relative to the fielder, but we
have to express the motions in a fixed physical coordinate system. When the
fielder moves radially or laterally in relation to the line of sight to the
baseball or home plate, that motion is not in the x-y coordinates of the
playing field "out there;" it has to be projected onto the external
Cartesian coordinates. The right fielder, for example, is off to one side,
so when he moves straight away from home plate he is moving in one
dimension relative to himself, radially, but in both x and y in relation to
the external coordinate system.
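
As a concrete illustration, here is a minimal sketch in Python of that
projection, with home plate at the origin. The function name and the
numbers are mine, chosen only to make the geometry explicit:

import math

def fielder_to_field(radial_v, lateral_v, fielder_x, fielder_y):
    # Project a velocity expressed in the fielder's own (radial, lateral)
    # frame onto the fixed x-y coordinates of the field. The radial axis
    # points from home plate (the origin) out through the fielder.
    theta = math.atan2(fielder_y, fielder_x)  # fielder's bearing from home
    vx = radial_v * math.cos(theta) - lateral_v * math.sin(theta)
    vy = radial_v * math.sin(theta) + lateral_v * math.cos(theta)
    return vx, vy

# Illustrative case: a right fielder at (60, 80) running straight away
# from home plate -- purely radial motion in his own frame -- moves in
# both x and y at once:
print(fielder_to_field(1.0, 0.0, 60.0, 80.0))   # (0.6, 0.8)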

This gets more complicated when we realize that the ball player can move
only in one direction at a time, the direction of running. This direction
can change, but from the ball player's point of view he is always running
in a given direction. Of course he can be running sideways, his body
twisted, or even backward, backpedaling relative to the ball. The
orientation of the body and its direction of motion have no fixed
relationship to the direction in which the legs are moving relative to the body.

The fact that there is only one body and one pair of legs also makes it
hard to see how the fielder could have two separate control systems, one
controlling laterally and the other radially. He can't move his legs in two
different ways at once, and his body can move in only one direction at a
time. What the player actually does is to vary his speed of running and his
direction of running, not his x-y coordinates or even his polar coordinates
as we see the situation from outside him.

Finally, we have the problem that the fielder is perceiving the world
through eyes that are mobile in a head that swivels on a neck attached to a
body that can change its orientation relative to the playing field. Yet
what the fielder has to perceive, to catch the ball in the way we think he
does it, is the direction and velocity of the ball in a fixed external
space, even if he's running away from the ball and glimpsing it by looking
back over his shoulder. He has to perceive the ballpark, the ball, and
himself as movable objects in this fixed space, keeping track of horizontal
and vertical.

The only way I can even imagine this to work is to suppose that in the
ball-player's head, there is a common spatial framework into which all
sensory modalities are translated. Touch, kinesthesis, vision, and sound
all are transformed so they are mutually consistent within this common
space. If I reach out to touch something, I see it as having a location in
an external space. When I touch it, the sensation of touch is also located
in this same external space, at the same place where I see the object and
my finger. If the object makes a noise, I hear the sound coming from the
direction in which I feel and see the object. And when I reach toward it, I
feel the location of my arm and hand in that same space, and my direction
of reaching feels as if it is in the same direction as the object,
consistent with where I hear it, see it, and will eventually touch it. If I
walk toward the object, I feel my own position in this space as changing
toward the visual, tactile, kinesthetic, and auditory location of the object.

Of course as I move toward the object, it moves toward me; somehow I sense
my changing position in this common space, yet I am always at the center of
it. When I turn to face in another direction, I sense myself turning, yet
everything in the space is revolving around me the other way, and I always
end up looking straight ahead. I can look left and right, yet if I am
looking left by turning my eyes, I can instantly redefine where I am
looking as being straight ahead and feel my body and head as pointed to the
right. And I can then turn my head and body while keeping my eyes pointed
straight ahead until body and head, too, are oriented straight ahead, in
the direction that was, until a moment ago, off to the left. I turn my head
and body while holding my gaze -- and my eyeballs -- straight ahead.

None of this seems to be happening in a model in my head, yet it obviously
can be happening nowhere else. It is truly all perception.

So what we seem to have happening here is model-based control -- only it's
control _of_ a model, not control _through_ a model. When I turn my head to
look at something, I am making this perceptual model swing around me, which
I also interpret as turning my eyes and head toward it. But I have no idea
how I do that -- that is, the "how" is not modeled or experienced.

What is not experienced is the nervous system that sends signals to my
muscles, or the muscles themselves that make me (or the world) turn, or the
physics involved in converting torques into angular accelerations,
velocities, and positions, or the optical laws involved in generating
images on my retinas. All I experience are the effects that wanting to look
has on the appearance of my perceptual world. I want to look at an object;
the world swings around; and there is the object -- right where it has
always been, with me turned to look at it.

So this is very clearly and definitely not a situation in which the
properties of the world between my actions and my perceptions are
represented as an internal world-model. This is not Hans Blom's model of
how behavior works. The model we are talking about now, this common
framework within which all modalities of perception are adapted to agree
with each other, is a perceptual model of what lies outside us. It is, in
fact, the world that we experience as being objective and outside. But it
is neither objective nor outside.

Let's climb back inside this outfielder, then, and see how the world he is
controlling looks to him. He is in a large space, his body at a particular
location in the outfield, yet still at the center of everything. Home plate
is over there, in a direction he can see, to which he can point. There is
the foul territory to the left of first base, toward which he can walk when
the inning is over, bringing the dugout close enough to step into. He hears
the crack of the bat coming from the place where home base is, and sees a
ball rising into space at some angle from the vertical. He waits to see
whether the ball is coming toward him, and then moves himself in this
space until he has turned the ball's slanted rise into a vertical rise, and
is keeping the rate of rise at some small value. He maintains the ball
rising vertically, even though he has to turn his body and head and eyes
and move himself in this common space, so part of the time he has to
imagine that the rising of the ball is continuing somewhere behind him.
He's not imagining the physics of objects moving through air in a
gravitational field; he's just imagining a rising-ness continuing behind
and above him. He turns, makes any corrections of the ball's place in this
space that are necessary to keep it rising vertically at a slow rate, and
when the ball gets close enough, moves his glove in front of it and catches
it.

At no time does this outfielder have to think about how to get his body
moving in a given direction or with a given orientation. That is left up to
all the lower-level control systems, which vary the directions of movement
of the legs and the body's orientation to maintain the sensed direction and
speed of running in the specified state. All those lower systems at all the
lower levels just produce whatever perceptions are demanded from above.
Obviously, working out the details at the lower levels would be an
incredibly difficult task. But to model the higher-level systems that catch
the ball, we don't have to do that.

At the higher level, all we have to do is decide what state of affairs is
to be controlled in this common perceptual framework. The ball is to rise
steadily in the vertical direction. To make it do this, we have to convert
rate-of-rise and angle-of-rise error signals into appropriate directions of
movement of the body, to keep the angle of rise at vertical and the rate of
rise at a small positive value. We can do this just as if we were measuring
the position and velocity of the ball from the standpoint of the
outfielder. We don't have to consider the details of how the error signals
produce the right motions, any more than the outfielder has to pay
attention to such things. We don't have to know how this common perceptual
framework is generated by the brain, and of course neither does the
outfielder.
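
To make that concrete, here is one possible sketch of the two
higher-level loops in Python. This is not the actual program -- the
gains and sign conventions are illustrative assumptions only:

import math

def run_command(angle_err, rate_err, k_lat=2.0, k_rad=2.0):
    # angle_err: lateral deviation of the ball's rise from vertical, as
    #            the fielder sees it (positive = drifting to his right).
    # rate_err:  perceived rate of rise minus the small positive reference.
    # k_lat, k_rad: illustrative loop gains.
    lateral = k_lat * angle_err  # sidestep to bring the rise back to vertical
    radial = k_rad * rate_err    # back up if rising too fast, run in if too slow
    # The fielder can run in only one direction at a time, so the two
    # outputs merge into a single speed and direction of running:
    speed = math.hypot(lateral, radial)
    direction = math.atan2(lateral, radial)   # relative to the line of sight
    return speed, direction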

So my conclusion is that we _can_ model the ball-catcher as a pair of
control systems working radially and at right angles to the direction to
the ball, and these control systems can produce orthogonal directions of
motion. Since we don't hope to model all the lower-level systems that must
be involved, we can simply propose an organization that produces the right
effect, knowing that however the real systems work, they must produce the
same effect. In this way we will answer the question of how the higher
systems might work, without getting hung up on all the lower levels of
organization.

Well, that leaves me about where I started, but I think with a better grasp
of what we're trying to do. And I think this discussion helps to make clear
the difference between control through an internal model of the
environment's _properties_, and control through an inner perceptual
representation of the _behavior_ of the world.

Best,

Bill P.

[From Bruce Gregory (980124.2317 EST)]

Bill Powers (980124.1656 MST)

Well, that leaves me about where I started, but I think with a better grasp
of what we're trying to do. And I think this discussion helps to make clear
the difference between control through an internal model of the
environment's _properties_, and control through an inner perceptual
representation of the _behavior_ of the world.

Thanks for this very clear post. The way I think about it is that we have a
model of the world inside our heads, but that is not where we look for it. We
look _outside_ ourselves to see the model at work. The world _is_ the model.
We are saved from solipsism by the mysterious fact that we each seem to manage
to build a model that coheres to an amazing degree with the models built by
others.

Bruce

[From Bill Powers (980125.0010 MST)]

Bruce Gregory (980124.2317 EST)--

The way I think about it is that we have a
model of the world inside our heads, but that is not where we look for it.
We look _outside_ ourselves to see the model at work. The world _is_ the
model.

Yes, but this isn't the whole story. The world we build in perception isn't
a model in the same sense that PCT is a model of human behavior, or in the
sense that Hans Blom uses the term world-model. When we look at this
perceptual model, what we see is the behavior of the world but not why that
behavior is as it is.

The world-model in Blom's approach is stated in terms of mathematical
relationships that embody properties of the external "plant." The
adjustable parameters of the world-model, plus the forms of the
mathematical expressions, determine how that plant will behave when
presented with any forms of inputs. This is what we call a simulation. The
mathematical equations by themselves don't describe any behavior; they
describe the organization of a physical system, but that system doesn't
behave until something acts on it.

A simple example is a schedule of reinforcement. The term schedule suggests
a listing of events that will take place at particular times, but if you
turn on the scheduling apparatus it will not produce any events. It will
lie there inertly until the lever gets pressed in a certain way, and then
it will produce food pellets in a pattern determined by the pattern of
lever-pressing, whatever it is, acting on the properties of the apparatus.
Our model of the apparatus describes it not in terms of the pattern of
lever presses or the pattern of resulting reinforcements, but in terms of
properties that remain the same no matter what patterns we see at the input
and output. For a fixed-ratio schedule, the property is that one
reinforcement is delivered for every n presses of the bar. But this in no
way tells us what the pattern of presses and reinforcements will be. If we
press the bar at a rate that gradually increases, the rate at which
reinforcements appear will also gradually increase. If we press so the rate
rises and falls in a sine wave pattern, the rate of reinforcement will also
rise and fall in a sine-wave pattern. If the pressing rate rises to an
asymptote, so will the rate of reinforcement.
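
The point is easy to demonstrate. Here is a toy fixed-ratio apparatus in
Python (the class name and all the numbers are mine): the property -- one
pellet per n presses -- is fixed, while the pattern of pellets is whatever
the pattern of pressing makes of it:

import math

class FixedRatio:
    # The apparatus has a property, not a behavior: it lies inert until
    # pressed, and delivers one pellet for every n-th press.
    def __init__(self, n):
        self.n = n
        self.count = 0

    def press(self):
        self.count += 1
        if self.count == self.n:
            self.count = 0
            return 1
        return 0

fr5 = FixedRatio(5)
pellets = []
for t in range(60):
    presses = 3 + int(2 * math.sin(t / 10.0))  # pressing rate rises and falls
    pellets.append(sum(fr5.press() for _ in range(presses)))
# The pellet rate rises and falls with the pressing rate, yet nothing in
# the apparatus contains a sine wave.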

We must not confuse patterns of behavior with properties of the behaving
system. Patterns change, but properties remain the same.

In a simulation, we have a system made of components with defined
properties, but no defined behavior. A component doesn't behave until we
manipulate its inputs, and we are free to manipulate the inputs any way we
please, thus producing any pattern of outputs we please.

But in a perceptual model, we do not find any properties; we can see the
behavior of the world, but we can't see its properties. The fielder knows
how to act on the world to keep the ball rising at a constant rate, but
there is nothing in the observed behavior of the ball that indicates
Newton's Laws. There is nothing in the perceived world that will explain
_why_ moving in a certain way will affect the ball's appearance. The ball's
_properties_ are not _simulated_ in perception; its _behavior_ is
_represented_ in perception.

"Model as simulation" is not the same thing as "model as perception." The
former is a collection of properties; the latter is a collection of behaviors.

Of course from observing our perceptual models, and applying mental
operations to the observations, we can eventually deduce (or at least
propose) simulations that will explain the observations. But that comes
after perception; perception gives us a world about which to reason. But we
don't have to reason about the world to control its behavior, which is
fortunate for dogs, fish, and bacteria.

Best,

Bill P.

From: David Goldstein
Subject: Re: Models and control; Bill Powers (980124.1656 MST)
Date: 1/25/98

This post is a wow!

The objective world is a system level perception?

Which is not in the adult form right from the beginning?

Playing "catch with infants" consists of rolling a ball towards him/her
and the infant puts his/her hand in front of the ball.

Does this involve only one of the proposed control systems? And the
assumption of a system level objective world system?

[From Bill Powers (980125.0255 MST)]

From: David Goldstein
Subject: Re:Models and control; Bill Powers (980124.1656 MST)
Date: 1/25/98

The objective world is a system level perception?

I would rather say it is a hierarchy of perceptions. We see intensities,
sensations, configurations, and all the rest up to system concepts as
existing out there in the objective world. "The color red is right on that
apple I'm pointing to, over there." That statement refers to three or four
levels of perception out there in the world, and implies more.

Which is not in the adult form right from the beginning?

Yes; I think these levels develop in the first few years (see the Plooijs'
work with children), and then fill out and elaborate as time goes on. And
as this happens, the child experiences an increasingly complex world.

Playing "catch with infants" consists of rolling a ball towards him/her
and the infant puts his/her hand in front of the ball.

Does this involve only one of the proposed control systems? And the
assumption of a system level objective world system?

I think it's how the child learns to stop a rolling ball. System concepts
probably don't play much of a part in it, except perhaps for some
perception of the adult as an adult and the child as a child. Or maybe a
concept of a "game." I don't think a one-year-old has any system concepts,
so they just don't exist in that child's world.

Best,

Bill P.

[From Bruce Gregory (980125.0818 EST)]

Bill Powers (980125.0010 MST)

Bruce Gregory (980124.2317 EST)--

The way I think about it is that we have a
model of the world inside our heads, but that is not where we look for it.
We look _outside_ ourselves to see the model at work. The world _is_ the
model.

Yes, but this isn't the whole story. The world we build in perception isn't
a model in the same sense that PCT is a model of human behavior, or in the
sense that Hans Blom uses the term world-model. When we look at this
perceptual model, what we see is the behavior of the world but not why that
behavior is as it is.

Science would be _sooo_ much easier if this weren't so.

The world-model in Blom's approach is stated in terms of mathematical
relationships that embody properties of the external "plant." The
adjustable parameters of the world-model, plus the forms of the
mathematical expressions, determine how that plant will behave when
presented with any forms of inputs. This is what we call a simulation. The
mathematical equations by themselves don't describe any behavior; they
describe the organization of a physical system, but that system doesn't
behave until something acts on it.

In the same way that Newton's equations describe nothing until we apply them
to a set of "initial conditions."

But in a perceptual model, we do not find any properties; we can see the
behavior of the world, but we can't see its properties. The fielder knows
how to act on the world to keep the ball rising at a constant rate, but
there is nothing in the observed behavior of the ball that indicates
Newton's Laws. There is nothing in the perceived world that will explain
_why_ moving in a certain way will affect the ball's appearance. The ball's
_properties_ are not _simulated_ in perception; its _behavior_ is
_represented_ in perception.

Or, perhaps more neutrally, "given to us" in perception. The notion of
"representation" has a lot of baggage. Another way to say this might be that
the world is not represented, but revealed in perception. Too much baggage
here too. The point, as you say, is that the world is not a simulation of
anything.

"Model as simulation" is not the same thing as "model as perception." The
former is a collection of properties; the latter is a collection of

behaviors.

I know what you are saying, I'm just struggling to find alternative ways to
say it.

Of course from observing our perceptual models, and applying mental
operations to the observations, we can eventually deduce (or at least
propose) simulations that will explain the observations. But that comes
after perception; perception gives us a world about which to reason. But we
don't have to reason about the world to control its behavior, which is
fortunate for dogs, fish, and bacteria.

For all of us, most of the time. We spend very little time building
simulations and a great deal of time controlling perceptions.

Bruce

[From Wolfgang Zocher (980116.1030 MEZ)]

[From Bill Powers (980124.1656 MST)]

.... But in PCT, as I conceive it, the world of
perception _is_ the world that seems to be "out there." The problem is not
to understand how the brain forms a model of the world we observe, but how
the world we observe is constructed from the physical reality on which we
assume it's based. We're already looking at the perceptions; the real
problem is to figure out how they're related to the physics.

Great! That's EXACTLY what I think about PCT. In our brains we don't have models
of the world. Instead we have a construction of the reality around us built by
means of our sensory equipment. And this construction _is_ the world we
perceive ...

On this basis (on the basis of _my_ own reality, which no other human can
share directly) I can build models of the world and I can compare these
models with "the reality".

Best,
Wolfgang

-------------------------------------------------------------------
2nd European Workshop on Perceptual Control Theory 1998
     Infos:
     http://www.unics.uni-hannover.de/rrzn/zocher/ecsg.html
-------------------------------------------------------------------
email: zocher@rrzn.uni-hannover.de (office)
       zocher@apollo.han.de (home)
www: http://www.unics.uni-hannover.de/rrzn/zocher

[From Bruce Abbott (970126.0940 EST)]

Bill Powers (980124.1656 MST) --

The only way I can even imagine this to work is to suppose that in the
ball-player's head, there is a common spatial framework into which all
sensory modalities are translated. Touch, kinesthesis, vision, and sound
all are transformed so they are mutually consistent within this common
space.

The really interesting thing about this is that we somehow infer a stable
frame of reference outside ourselves, one within which we move. As you
sweep your eyes across a parking lot, the image of the parking lot moves
across your retinas in the opposite direction. What seems to be moving is
your angle of looking rather than the lot, although both interpretations of
the visual input are equally valid given only the horizontal translation of
the image on the retina.

But try the following. First, press a finger tip against the skin lying
near the outer corner of one eye, and close the other eye. Now use the
finger to tug at the skin, thus moving the eyeball slightly. What you now
see is the "world" moving, swimming back and forth with the tugs. Thus, the
brain interprets the visual scene differently when you move your eye
actively, via its appropriate muscles, or passively, via external tugs.
Only in the former case does the visual "frame" remain stable against the
motion of the image.
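
One conventional way to state this difference -- my own sketch of the
textbook "efference copy" bookkeeping, not necessarily how PCT would model
it -- is that image motion accounted for by the eye-movement command is
attributed to the self, and any remainder to the world:

def perceived_world_motion(image_motion, commanded_eye_motion):
    # One textbook interpretation, not PCT's model: motion of the retinal
    # image that is explained by the eye-movement command is credited to
    # the self; whatever is left over is seen as motion of the world.
    return image_motion - commanded_eye_motion

perceived_world_motion(5.0, 5.0)   # active movement: 0.0, world holds still
perceived_world_motion(5.0, 0.0)   # passive tug, no command: 5.0, world swims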

Now fixate your vision on some object. While keeping your gaze there, begin
to rotate your head left and right, as if shaking your head "no." Slowly
pick up the tempo. At first, the object (and the rest of the visual world
"out there") seem to stay put -- as your head moves left (for example), the
eyes turn right to compensate, keeping the gaze fixed on the object. The
frame of reference stays fixed. But as your head-shaking gets faster, the
control system fails to keep up with the disturbance produced by
head-motion, and the visual world begins to oscillate.

Then there is the effect of dizziness, which every child learns to produce
by spinning around rapidly for several turns and then stopping suddenly.
Now the eyes can be stationary and yet there is a definite sense that the
world outside is spinning, with you at the axis of rotation. Under normal
conditions we look around us, and what we see is our own motion with respect
to a fixed exterior frame. As we turn our vision toward an object, we see
our gaze come into registry with the object; we do not see the object move
into registry with the center of our gaze. We take this as given, so much
so that we may fail even to notice, and to ask how it works.

Regards,

Bruce

[From Bruce Nevin (970127.1551)]

Bruce Abbott (970126.0940 EST)--

You wrote of our construction of

a stable
frame of reference outside ourselves, one within which we move. ...
the
brain interprets the visual scene differently when you move your eye
actively, via its appropriate muscles, or passively, via external tugs.
Only in the former case does the visual "frame" remain stable against the
motion of the image.

... as your head-shaking gets faster, the
control system fails to keep up with the disturbance produced by
head-motion, and the visual world begins to oscillate.

Then there is the effect of dizziness,
... the eyes can be stationary and yet there is a definite sense that the
world outside is spinning, with you at the axis of rotation.

These are cases where something interferes with control of the perception
"that object in the fovea".

Under normal
conditions we look around us, and what we see is our own motion with respect
to a fixed exterior frame. As we turn our vision toward an object, we see
our gaze come into registry with the object; we do not see the object move
into registry with the center of our gaze. We take this as given, so much
so that we may fail even to notice, and to ask how it works.

By means of peripheral vision probably as well as the "object in the fovea"
perception, I control a perception something like "my spatial relationship
to objects around me" or "my position in this space". To control that
perception, it seems like I use my perceptions of the objects around me to
define a stable space in which I am oriented. Under conditions like those
you described, I can't control that perception. The space (the aggregate of
objects defining it) appears to be moving in my field of vision. But there
is a concurrent kinesthetic perception that my body is in contrary motion
relative to things around me, a sensation of vertigo. This seems to arise
with failure to control my orientation. Perhaps there is some incipient
body movement to prevent a fall, and perhaps this perception and its
control is a primitive one--though it can't be at a very low level, if it
is as I have described it (relationship at least).

Another familiar example that we've talked about before is the seeming
movement of the scene when you stop after looking forward from a moving car
for an extended time. This seems to me mostly in peripheral vision--it
attenuates more rapidly when I foveate a nearby object and ignore
peripheral vision. It appears as though I control the perception of my
orientation in space--by means of a perception of "a stable frame of
reference outside" myself--by somehow reversing the actual advance of
objects in the visual field. That reversal persists after the car stops.
But there was some pretty sophisticated discussion of this when it came up
a few years ago, and probably this interpretation is too simple-minded.

  Bruce Nevin

[From Bruce Abbott (970128.0700 EST)]

Bruce Nevin (970127.1551) --

By means of peripheral vision probably as well as the "object in the fovea"
perception, I control a perception something like "my spatial relationship
to objects around me" or "my position in this space". To control that
perception, it seems like I use my perceptions of the objects around me to
define a stable space in which I am oriented. Under conditions like those
you described, I can't control that perception. The space (the aggregate of
objects defining it) appears to be moving in my field of vision. But there
is a concurrent kinesthetic perception that my body is in contrary motion
relative to things around me, a sensation of vertigo. This seems to arise
with failure to control my orientation. Perhaps there is some incipient
body movement to prevent a fall, and perhaps this perception and its
control is a primitive one--though it can't be at a very low level, if it
is as I have described it (relationship at least).

It would appear that we use a variety of sensory inputs to construct and
maintain this "stable space" or frame of reference. As we turn our vision
to scan our surroundings, it seems likely that the reference signals by
which we move our eyes, or perhaps feedback from the eye muscles themselves,
inform the perceptual system maintaining the frame of reference that motion
of the visual scene is due to the commanded repositioning of the eye and
should not contribute to reorienting the frame. Head movements generating
accelerations within the semicircular canals of the inner ears inform the
brain that the head is moving in a particular way; these signals, too,
evidently allow the brain to compensate for these movements to preserve a
stable frame of reference. But the canal system works well only when
exposed to brief accelerations; the mechanism responds more strongly to
acceleration than to continued rotational motion. A sudden stop following
continued motion briefly produces the sensation that the head is now turning
in the opposite direction, rather than having come to rest, and this is in
conflict with the lack of motion of the visual scene on the retinas of the
eyes. The result is dizziness and the perception that the frame is rotating.
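
The canal part of this story can be caricatured in a few lines of Python,
treating the canal signal as a leaky (high-pass) response to head
velocity; the time constant is an illustrative guess:

class Canal:
    # First-order high-pass response to head velocity: strong for brief
    # accelerations, decaying toward zero during continued rotation.
    # tau is a made-up time constant of a few seconds.
    def __init__(self, tau=4.0, dt=0.01):
        self.tau, self.dt = tau, dt
        self.signal = 0.0
        self.prev_v = 0.0

    def step(self, head_v):
        self.signal += (head_v - self.prev_v) - (self.dt / self.tau) * self.signal
        self.prev_v = head_v
        return self.signal

canal = Canal()
for _ in range(1000):           # 10 s of steady spinning: signal decays to ~0
    canal.step(1.0)
after_stop = canal.step(0.0)    # about -0.9: felt as turning the other way,
                                # while the visual scene reports no motion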

Another familiar example that we've talked about before is the seeming
movement of the scene when you stop after looking forward from a moving car
for an extended time. This seems to me mostly in peripheral vision--it
attenuates more rapidly when I foveate a nearby object and ignore
peripheral vision. It appears as though I control the perception of my
orientation in space--by means of a perception of "a stable frame of
reference outside" myself--by somehow reversing the actual advance of
objects in the visual field. That reversal persists after the car stops.
But there was some pretty sophisticated discussion of this when it came up
a few years ago, and probably this interpretation is too simple-minded.

A similar phenomenon occurs when you stare for some time at a waterfall that
takes up most of the field of view. When you then quickly recenter your
gaze on some stationary (and rather blank) scene, you get the sense that the
scene is flowing upward even though you know that it isn't. It's as if the
null point for the perception of no motion has been moved temporarily by the
prolonged exposure to the motion of the waterfall, so that no actual motion
produces the signal of reverse motion.

If you stare at the moon on a night when the sky is filled with small clouds
that move rapidly and in unison across the sky, at first the clouds appear
to race by the moon. After viewing the scene for a short while, however,
the clouds will seem to suddenly freeze and the moon to take off in the
opposite direction. The clouds have been accepted by the perceptual system
as the stable frame of reference, against which the moon's motions are
judged. Something similar may happen if you are sandwiched between two
trucks at a stoplight and both trucks begin to creep forward. Your visual
system locks onto the trucks as the stable visual frame of reference, and
you suddenly perceive yourself to be rolling backward. You may have an
anxious moment as you discover that your brakes don't seem to be working and
you are in danger of striking the car behind you!

You say this came up a few years ago? What was the conclusion then?

Regards,

Bruce

[From Bruce Nevin (970128.0952)]

Bruce Abbott (970128.0700 EST)--

(Talking about "stable space" or frame of reference and vertiginous
illusions of the scene moving.)

You say this came up a few years ago? What was the conclusion then?

I don't remember. Some of the discussion was beyond my technical
competence. Probably there were different emphases, and the entry point and
conclusions had different tangents to this nexus of issues. I'd have to go
spelunking in the archives. I regret that I don't have the time free for
that. But you could.

Bill and Martin probably remember more.

  Bruce

[From Bill Powers (980128.0702 MST)]

Bruce Abbott (970128.0700 EST)--

It would appear that we use a variety of sensory inputs to construct and
maintain this "stable space" or frame of reference. As we turn our vision
to scan our surroundings, it seems likely that the reference signals by
which we move our eyes, or perhaps feedback from the eye muscles themselves,
inform the perceptual system maintaining the frame of reference that motion
of the visual scene is due to the commanded repositioning of the eye and
should not contribute to reorienting the frame.

That's a lot of "informing" for a mere signal to do. That sort of model is
easier to work out if you say that perceptual signals simply represent
variables, and it's up to some computing circuit to extract meaning from them.

In the stack of lateral-angle control systems I describe in the re-post
today, there are kinesthetic perceptual signals representing angles at the
ankle, hips, torso, shoulder, neck, and eyes. The sum of these angles is
the lateral direction of gaze relative to the feet (or vice versa). There
is a similar set of kinesthetic signals for angles in the vertical plane;
their sum indicates the relative vertical angle of gaze, or of the feet.

When you attend to an object in the visual field that is off to one side of
the angle of gaze, you can select the angle of that object as the
perception to be controlled. By using any of the angle-altering muscles
from the ankles up, in any combination, you can bring that angle to zero,
centering the object in the visual field.

The perceived direction of the object is, of course, always relative. There
is no absolute directional coordinate system. You can see the object at a
bearing of zero degrees, meaning "straight ahead". Or you can see it at a
bearing of 45 degrees from the direction of your feet. Or you can see it at
a bearing of 20 degrees from the direction of some other visual object.
This kind of perceptual function requires inputs from two signals standing
for angles, and it emits a signal proportional to the relative angle, the
difference in directions.
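
In Python shorthand (the joint list and the degree values are mine, for
illustration), the two kinds of input function just described might look
like this:

def gaze_direction(joint_angles):
    # Sum of the kinesthetic angle signals from the ankles up to the
    # eyes: the lateral direction of gaze relative to the feet.
    return sum(joint_angles)

def relative_angle(direction_a, direction_b):
    # Takes two direction signals and emits a signal proportional to
    # the relative angle, the difference in directions.
    return direction_a - direction_b

# ankle, hips, torso, shoulder, neck, eyes (degrees, illustrative):
joints = [0.0, 5.0, 0.0, 0.0, 3.0, 2.0]
error = relative_angle(30.0, gaze_direction(joints))   # object at 30 deg:
                                                       # 20 deg left to go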

Once a relative-angle signal exists, it can be maintained at any reference
level. Maintaining it at zero means you want the two directions to be the
same -- you move your body, or the object, or in the case of the angle
between two objects, one or both objects. But you can also maintain some
non-zero angle between them, as an archer does in aiming at a moving target
or in compensating for estimated wind direction, or as you do when you're
trying to maintain a collision course. You can also control for some
specific rate of change of relative angle, as in the case of the outfielder.

Vestibular signals offer a short-term inertial directional signal relative
to the average position of the head. These signals are accurate only for
very short times, a fraction of a second. In my oculomotor model,
vestibular signals were used as a rough feed-forward compensation for
sudden head movements, adding to the reference input of the
pursuit-tracking control system and tending to move the eye opposite to the
head movement.

These signals die out to zero after a while, even if the body is rotating
continuously. Then, when rotation is stopped, the motion of fluid in the
semicircular canals produces a signal as if the head had turned in the
opposite direction, creating an error in the pursuit tracking systems that
turns the eyes. However, the positional tracking system sees this turning
of the eyes as a position error, and eventually blanks out the pursuit
system and produces a saccade. The eye jumps back to its original position
(i.e., the fixation object is centered again), and the pursuit system
starts turning the eyes again, and so on until the vestibular signal decays
to zero. This is how nystagmus emerges from my oculomotor model.
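
A bare-bones sketch of that sequence in Python -- my own simplification,
with made-up gains, threshold, and canal time constant:

dt = 0.01
eye = 0.0          # eye position, degrees
vestib = -10.0     # post-rotatory canal signal, deg/s: head feels as if
                   # it were turning the other way
trace = []
for step in range(600):
    vestib *= (1.0 - dt / 4.0)   # vestibular signal decays toward zero
    eye += dt * (-vestib)        # pursuit (velocity control) drifts the
                                 # eye: the slow phase of nystagmus
    if abs(eye) > 5.0:           # position error grows until pursuit is
        eye = 0.0                # gated off and a saccade recenters
    trace.append(eye)
# trace shows the sawtooth of nystagmus, fading as vestib decays.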

The vestibular effect on eye position, being open-loop, requires
calibration. The normal eye under- or over-compensates by a few degrees;
however, this compensation is slowly adjusted, so if the visual field is
artificially made to move more or less than the head actually turns, the
gain of the vestibular contribution is gradually raised or lowered, over
about 20 minutes, so it comes close to optimum compensation again for small
head movements. This shows that there is actually a very slow control
system involved with the vestibular compensation, which acts by altering
its gain to keep the visual movement as close to zero as possible. Note
that this is control through changing a parameter, not a reference signal.
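
A sketch of such parameter control in Python; the adaptation rate and the
"world_scale" manipulation are hypothetical:

gain = 1.0   # gain of the vestibular compensation

def head_movement(head_v, world_scale, rate=0.0005):
    # world_scale: how far the visual field actually moves per unit of
    # head turn (1.0 = normal; e.g. 1.5 with magnifying spectacles).
    # rate: hypothetical (very slow) adaptation rate.
    global gain
    slip = world_scale * head_v - gain * head_v   # uncompensated image motion
    gain += rate * slip * head_v                  # slowly adjust the gain --
    return slip                                   # a parameter, not a reference

for _ in range(20000):   # many small head movements over ~20 minutes
    head_movement(head_v=1.0, world_scale=1.5)
# gain has crept from 1.0 to about 1.5: compensation is re-optimized.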

This vestibular compensation is optimized for relatively small head
movements of medium speed. If you move your head fast and by a large
amount, the visual field usually moves a little the other way, showing that
the compensation is too large. But it has to be too large for large fast
movements if it is to be correct for more normal movements that are slower
and less extreme. What it accomplishes is to extend slightly the ability of
the pursuit tracking systems to maintain a lock on the visual field when
the head moves rapidly. The eye, with a retinal averaging time of around a
tenth of a second, is extremely sensitive to relative image movement. A
retinal movement rate of about 10 minutes of arc per second will roughly
halve the visual acuity.

It appears that the pursuit tracking system has high gain, high accuracy,
and a fast response. So tight is the control that in order for the eye to
seek a new fixation point, the pursuit tracking system must be gated off.
However, Wayne Hershberger showed that the retinal signals are not gated
off: a rapidly flashing light actually creates a "phantom array" of dots in
the visual field during a saccade, in the period when vision is supposedly
blanked out. In my model, what is blanked out is the input to the pursuit
system only. A saccade occurs when the position control system, which is
always trying to center some object, is not completely counteracted by the
much more sensitive pursuit system. However, the pursuit system is actually
a velocity control system, not a position control system. Therefore when it
turns back on, it simply reduces image motion to as small a value as
possible, which is roughly a few minutes of arc per second.

This model makes a prediction that I don't have the resources to test, but
which may be testable using existing data. When you select a target for a
saccade that is off the direction of gaze, this model says that in the
position control system an error signal appears immediately, and starts to
move the eyes. However, the pursuit system, which has very strong control
over rate of change of visual direction, is trying to keep the angular
velocity at zero. The result should be that the eye moves very, very slowly
in the direction of the off-axis target as soon as attention is directed to
it. The speed of movement should be proportional to the off-axis angle. I
think this could be detected by having a person attend, without moving the
eyes, to one target after another on a signal, and doing a synchronous
detection of eye movement. The effect, although small, should be visible in
an average of many trials. This would constitute a relatively direct test of
my model.
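
The proposed measurement, sketched in Python; the drift rate, noise level,
and trial count are all hypothetical numbers:

import random

def trial(drift_per_sample=0.002, n=200, noise=0.5):
    # Eye position after attention shifts to an off-axis target: a slow
    # drift toward the target, buried in measurement noise.
    return [i * drift_per_sample + random.gauss(0.0, noise) for i in range(n)]

n_trials = 5000
avg = [0.0] * 200
for _ in range(n_trials):
    for i, x in enumerate(trial()):
        avg[i] += x / n_trials
# avg now shows the slow ramp that no single trial reveals; the model
# predicts its slope should grow with the off-axis angle of the target.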

You say this came up a few years ago? What was the conclusion then?

It's really too detailed a model to describe in a few words. I tried to
incorporate every phenomenon relating to eye movement then in the
literature. Wolfgang Zocher has all my material on this model, including
the references. The model is at least 10 years old. I'm hoping that some
day he will refine it and publish it, together with the model of eye
movement control he is working on as a simulation on an analog computer --
that computer being called SIMPCT, and running on a digital computer!
Perhaps he will unveil it at the European CSG meeting in June.

Best,

Bill P.

[From Bruce Abbott (980128.1550 EST)]

Bill Powers (980128.0702 MST) --

Bruce Abbott (970128.0700 EST)

It would appear that we use a variety of sensory inputs to construct and
maintain this "stable space" or frame of reference. As we turn our vision
to scan our surroundings, it seems likely that the reference signals by
which we move our eyes, or perhaps feedback from the eye muscles themselves,
inform the perceptual system maintaining the frame of reference that motion
of the visual scene is due to the commanded repositioning of the eye and
should not contribute to reorienting the frame.

That's a lot of "informing" for a mere signal to do. That sort of model is
easier to work out if you say that perceptual signals simply represent
variables, and it's up to some computing circuit to extract meaning from them.

I don't of course mean that the signal is a little person telling the
perceptual system things, but only that this signal serves as input to the
perceptual system, and that its effect is to allow the latter to compensate
for the changes in eye position represented by that signal, thus maintaining
the perception of a fixed frame of reference even though the visual scene is
swimming across the retina.

Your analysis in the rest of your post agrees by and large with my own
conception, except that you have gone much farther in working out the
details than I have done as yet. I'd like to pursue it further when I get
more time to spare. One thing you didn't mention is that the retina rapidly
fails to perceive anything if the image on the retina is stabilized. For
this reason the eye is in constant slight motion even when you believe you
are holding it absolutely still. The retinal system apparently reacts to
change and not to steady stimulation.

Saccadic movements allow the eye to gain a clear, unblurred image at each
fixation point as the gaze is shifted, and will occur when the eyes are
turned to track a moving object. However, when you fix your gaze on an
immobile object and rotate your head about, smooth rather than saccadic
movement of the eye occurs, even though the eye must be rotated by the same
muscles in order to keep the eye fixed on the object. Or at least that's my
experience. Does anyone else have that same impression? It's easy enough
to test.

Returning to the perceptual frame, I'd like to know what contributes to our
perception of a direction as being north, west, and so on. Years ago when I
was house-hunting for the first time in Fort Wayne, I got disoriented during
the trip to view the house we eventually bought. I've been here 20 years
now and still can't get my perceptual north to agree with physical north,
even though I know where physical north is intellectually. I keep seeing
the sun rise in the south-west and set in the north-east. I'm very glad to
report that at the new house Steph and I are building in the country,
Polaris appears in the night sky exactly where my perceptual frame tells me
it should be -- due north.

Regards,

Bruce

[From Bill Powers (980128.1455 MST)]

Bruce Abbott (980128.1550 EST)--

That's a lot of "informing" for a mere signal to do. That sort of model is
easier to work out if you say that perceptual signals simply represent
variables, and it's up to some computing circuit to extract meaning from them.

I don't of course mean that the signal is a little person telling the
perceptual system things, but only that this signal serves as input to the
perceptual system, and that its effect is to allow the latter to compensate
for the changes in eye position represented by that signal, thus maintaining
the perception of a fixed frame of reference even though the visual scene is
swimming across the retina.

I find it awkward to talk about "the effect" of a signal. A signal doesn't
have any particular effect; it's just a signal. The same signal may be an
input to many different input functions, contributing to the perceptual
signal in each of them, but determining none of them. And even in a control
system with only a single input, the effect of the signal measured at the
error signal position depends entirely on the setting of the reference signal.

My preference is to consider that signals convey the states of other
variables to the point where the signal is received, and not to attribute
anything that happens after that to the signal itself. As you know, your
usage of "effect" has been a bone of contention between us.

One thing you didn't mention is that the retina rapidly
fails to perceive anything if the image on the retina is stabilized. For
this reason the eye is in constant slight motion even when you believe you
are holding it absolutely still. The retinal system apparently reacts to
change and not to steady stimulation.

This is a strange phenomenon that is surely telling us something -- but I
don't know what. Has anyone established for sure whether this is a
peripheral or a central phenomenon?

Saccadic movements allow the eye to gain a clear, unblurred image at each
fixation point as the gaze is shifted, and will occur when the eyes are
turned to track a moving object.

It isn't the saccadic movement that allows a clear unblurred image; it's
the fact that the image remains stationary between saccades. The pursuit
tracking system is what stabilizes the image (to the level of
microsaccades) between saccades.

If someone moves an object across your field of view and you focus on it
sufficiently to see its details, your eyes will follow it smoothly, with
saccades only during changes in the fixation points within the object. The
background details blur out. When the background has a contrasty texture and
the moving object is very small or dim, the pursuit system will lock onto
the background, and then there will be saccades in following the moving
object.

However, when you fix your gaze on an
immobile object and rotate your head about, smooth rather than saccadic
movement of the eye occurs, even though the eye must be rotated by the same
muscles in order to keep the eye fixed on the object. Or at least that's my
experience. Does anyone else have that same impression? It's easy enough
to test.

The difference isn't in the head movements but in whether there is an
object moving relative to the background. When you move your head, that's
not a problem: the pursuit system stabilizes the whole image. When there's
an object moving against a patterned background, the pursuit system
computes some sort of average movement over the whole field, to which a
small moving object contributes very little. The saccadic system, which
controls position rather than velocity, may be trying to keep the small
object centered, but it's fighting the pursuit system which is trying to
keep the entire visual field stationary. This is why there are saccadic
jumps when the background pattern is strong and the moving object is small.
I suspect that the motion signals from the foveal region are given a
considerably higher weight than those from the periphery.
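
One way such an average might be computed -- the weighting function is my
guess, nothing more:

def pursuit_velocity_signal(motions):
    # motions: (eccentricity_deg, velocity) pairs for features in the
    # visual field. Returns a weighted average velocity, with motion near
    # the fovea counting for more than motion in the periphery.
    num = den = 0.0
    for ecc, v in motions:
        w = 1.0 / (1.0 + ecc)   # assumed fall-off of weight with eccentricity
        num += w * v
        den += w
    return num / den if den else 0.0

# A small foveal object moving at 10 deg/s against fifty stationary
# patches of patterned background at 20 deg eccentricity: the background
# dominates, and pursuit mostly holds the field still.
pursuit_velocity_signal([(0.0, 10.0)] + [(20.0, 0.0)] * 50)   # ~3 deg/s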

Our sense of relative direction is obtained from the saccadic position
system. This is why, when you follow a satellite through a part of the sky
with few stars in it, it ceases to move: the pursuit system stabilizes it
and it remains centered. However, when the satellite moves near a bright
star, it seems to wander in direction and change speed, because the pursuit
system compromises between the two point sources. The centroid is
stabilized, and the satellite appears to move. Since the days of Sputnik,
this has been known to skywatchers as the "satellite illusion."

Returning to the perceptual frame, I'd like to know what contributes to our
perception of a direction as being north, west, and so on.

These are purely arbitrary labels, aren't they? We pick some distant
landmark and say "that's north." Civilization has dictated that in farming
country, the roads go mostly in a grid oriented to north, so as we move
from one place to another our maps can't drift much before they are
recalibrated. However, in New England this is not the case, and in
unfamiliar territory there's nothing to recalibrate the meaning of north --
you can't transfer from one landmark to the next. In New England, the only
people who can tell you where north is are those that know the local
landmarks, like the orientation of houses they know. Of course in the
daytime we have the sun (sometimes) and at night the stars, so there is a
long-term way to reset the calibration. But in New England, on those
winding roads down among all those trees, who can ever see the sky?

Your problem (and I have the same problem in downtown Durango) is that your
calibration got attached to some kind of landmark so effectively that
anything that looks similar restores the old calibration, when it
shouldn't. Driving to Durango from where I live involves some long sweeping
curves among hills, ending up in a curve onto Camino Rio, which is oriented
mostly north and south. But my calibration, left over from the town where I
grew up, says that the main street runs east and west. So I always have to
stop and go through a cognitive phase, which amounts to telling myself that
in order to go east on 6th street, I have to travel south. When you get to
be my age, changing perceptions is hard. What's your excuse?

I'm very glad to
report that at the new house Steph and I are building in the country,
Polaris appears in the night sky exactly where my perceptual frame tells me
it should be -- due north.

The best thing about this is that you can see Polaris. That's getting to be
unusual (and Polaris is second magnitude).

Best,

Bill P.

[From Bruce Abbott (980128.2050 EST)]

Bill Powers (980128.1455 MST) --

Bruce Abbott (980128.1550 EST)

One thing you didn't mention is that the retina rapidly
fails to perceive anything if the image on the retina is stabilized. For
this reason the eye is in constant slight motion even when you believe you
are holding it absolutely still. The retinal system apparently reacts to
change and not to steady stimulation.

This is a strange phenomenon that is surely telling us something -- but I
don't know what. Has anyone established for sure whether this is a
peripheral or a central phenomenon?

That distinction may be a bit misplaced, given that the retina is an
outgrowth of the brain containing four layers of neurons. I believe that
the phenomenon arises in the retina, but I'd have to do a bit of library
research to find out whether that belief is correct. Quite a bit of
preliminary processing of the image takes place in the retina, especially
what might be termed "sharpening" of the image via lateral inhibition. I'd
like to know more about the details as currently understood.

Saccadic movements allow the eye to gain a clear, unblurred image at each
fixation point as the gaze is shifted, and will occur when the eyes are
turned to track a moving object.

It isn't the saccadic movement that allows a clear unblurred image; it's
the fact that the image remains stationary between saccades. The pursuit
tracking system is what stabilizes the image (to the level of
microsaccades) between saccades.

I think you misread me there: I said that the clear view was obtained at
each _fixation_ point, not during the saccade. You've added that this
fixation is accomplished via the pursuit tracking system, an issue I didn't
address. When you are scanning across a scene, I wouldn't think there would
be much need for stabilization between movements, as the gaze rests in a
particular position for such a brief time.

If someone moves an object across your field of view and you focus on it
sufficiently to see its details, your eyes will follow it smoothly, with
saccades only during changes in the fixation points within the object.

You're right. (I just tried it.) Now that I think of it, in principle,
there should be no difference between locking onto a stationary object while
moving the head and locking onto a moving object while holding the head
still (or while moving the head, for that matter).

The
background details blur out. When the background has a contrasty texture and
the moving object is very small or dim, the pursuit system will lock onto
the background, and then there will be saccades in following the moving
object.

This is more difficult to arrange, so I haven't tried it yet.

The difference isn't in the head movements but in whether there is an
object moving relative to the background.

Yes, I see that now.

When you move your head, that's
not a problem: the pursuit system stabilizes the whole image. When there's
an object moving against a patterned background, the pursuit system
computes some sort of average movement over the whole field, to which a
small moving object contributes very little. The saccadic system, which
controls position rather than velocity, may be trying to keep the small
object centered, but it's fighting the pursuit system which is trying to
keep the entire visual field stationary. This is why there are saccadic
jumps when the background pattern is strong and the moving object is small.
I suspect that the motion signals from the foveal region are given a
considerably higher weight than those from the periphery.

Interesting . . .

Our sense of relative direction is obtained from the saccadic position
system. This is why, when you follow a satellite through a part of the sky
with few stars in it, it ceases to move: the pursuit system stabilizes it
and it remains centered. However, when the satellite moves near a bright
star, it seems to wander in direction and change speed, because the pursuit
system compromises between the two point sources. The centroid is
stabilized, and the satellite appears to move. Since the days of Sputnik,
this has been known to skywatchers as the "satellite illusion."

Wouldn't the pursuit system stabilize (track) the image of the satellite in
either case? Motion is best computed when there is something in the visual
field that is taken to be motionless, relative to which the motion of the
moving object can be computed. Does the bright star also seem to wander and
change speed? This would make sense if the system is attempting to lock
onto something as the fixed frame of reference when only the satellite and a
single star are visible in the same field -- the system may choose the wrong
object, or as you suggest, some average of the two, in which case both
objects would seem to wander somewhat.

Returning to the perceptual frame, I'd like to know what contributes to our
perception of a direction as being north, west, and so on.

These are purely arbitrary labels, aren't they? We pick some distant
landmark and say "that's north." Civilization has dictated that in farming
country, the roads go mostly in a grid oriented to north, so as we move
from one place to another our maps can't drift much before they are
recalibrated. However, in New England this is not the case, and in
unfamiliar territory there's nothing to recalibrate the meaning of north --
you can't transfer from one landmark to the next. In New England, the only
people who can tell you where north is are those that know the local
landmarks, like the orientation of houses they know. Of course in the
daytime we have the sun (sometimes) and at night the stars, so there is a
long-term way to reset the calibration. But in New England, on those
winding roads down among all those trees, who can ever see the sky?

Directional _labels_ like "north" are arbitrary, but that doesn't make the
directions themselves arbitrary. The sun rises more-or-less in the east and
sets more-or-less in the west (depending on the season), and south is straight
ahead when one faces with the sunrise to the left. North is opposite to
south. I don't have much trouble keeping my directions properly oriented
when the sun is shining. The generally east-west, north-south orientation
of roads in the midwest probably helps as well, as you suggest. The day I
found my house it was overcast to the extent that the sun's direction was
completely obscured, and we arrived after going through a long sweeping turn
from east to south in which only small segments of the arc were visible at
any given time. It seems that my inertial system underestimated the extent
of the turn (in fact, it practically ignored it), so it had me going
east-southeast when I was actually traveling south. The subsequent left
turn to the east was to me a left turn to the north-northeast, and then we
entered the neighborhood. My brain has been confused about which direction
the house faces ever since.

Your problem (and I have the same problem in downtown Durango) is that your
calibration got attached to some kind of landmark so effectively that
anything that looks similar restores the old calibration, when it
shouldn't. Driving to Durango from where I live involves some long sweeping
curves among hills, ending up in a curve onto Camino Rio, which is oriented
mostly north and south. But my calibration, left over from the town where I
grew up, says that the main street runs east and west. So I always have to
stop and go through a cognitive phase, which amounts to telling myself that
in order to go east on 6th street, I have to travel south. When you get to
be my age, changing perceptions is hard. What's your excuse?

I can do the cognitive thing, too, but after 20 years you'd think something
deeper down would have caught on. It hasn't. I doubt that our problems in
reorienting our directional perceptions have to do with age, even though I,
too, qualify for an AARP card.

By the way, research is revealing a statistical difference between males and
females in how they find their way around. Males are much more likely to
use directions and distances like "go north for about three miles and then
turn east." Females are much more likely to use landmarks. ("Go down Broad
Street until you pass the Pizza Hut and then turn right.") [A female member
of our department has been investigating the problem.] This suggests that
males and females tend to control somewhat different perceptions when
navigating about. Whether this results from differences in socialization or
organizing effects of testosterone on the brain during development (or some
combination) remains to be determined.

I'm very glad to report that at the new house Steph and I are building in
the country,
Polaris appears in the night sky exactly where my perceptual frame tells me
it should be -- due north.

The best thing about this is that you can see Polaris. That's getting to be
unusual (and Polaris is second magnitude).

It's better than that: On a clear night I can see six of the Seven Sisters
through my bifocals. One more reason why I'm moving to the country. (:->

Regards,

Bruce

[From Wolfgang Zocher (980129.0755 MEZ)]

From Bill Powers (980128.0702 MST)

It's really too detailed a model to describe in a few words. I tried to
incorporate every phenomenon relating to eye movement then in the
literature. Wolfgang Zocher has all my material on this model, including
the references. The model is at least 10 years old. I'm hoping that some
day he will refine it and publish it, together with the model of eye
movement control he is working on as a simulation on an analog computer --
that computer being called SIMPCT, and running on a digital computer!
Perhaps he will unveil it at the European CSG meeting in June.

At the meeting in June I will come up with a complete model of vision.
For the last 5 years I have been working on refining your model --
the main thing was to work out a model that fits measured data on
eye movements. The main insight I gained during this work is that I
can't model the eye movements without modeling the retinal processes.
The eye movement machinery is closely coupled to the outputs of
the retina.

The model I presented at the Durango meeting was working for all three
types of eye movement: fixation, saccades, and pursuit. And it was able
to track a light point with good accuracy. At least I thought it was
working correctly. But when I got measured data from pursuit movements,
I had to realize that the model wasn't as good as I thought. The
"real" pursuit of human eyes looks very different from the output
of my model, and there was no way to alter the model to fit these
data. So I decided to incorporate the retina into the model.

My conclusion: modeling as a "thought experiment" doesn't work. These
theoretical models only show very general things -- as my "Durango model"
did. However, what I really want to do is to build a model that
describes the real movement of human eyes. And this is VERY difficult.

It seems the German meeting will be a very interesting one for me :-)
Vision and Bugs and ...

Best,

Wolfgang


-------------------------------------------------------------------
2nd European Workshop on Perceptual Control Theory 1998
Info:
http://www.unics.uni-hannover.de/rrzn/zocher/ecsg.html
-------------------------------------------------------------------
email: zocher@rrzn.uni-hannover.de (office)
       zocher@apollo.han.de (home)
www: http://www.unics.uni-hannover.de/rrzn/zocher

[From Bill Powers (980129.0524 MST)]

Bruce Abbott (980128.2050 EST)--

Saccadic movements allow the eye to gain a clear, unblurred image at each
fixation point as the gaze is shifted, and will occur when the eyes are
turned to track a moving object.

It isn't the saccadic movement that allows a clear unblurred image; it's
the fact that the image remains stationary between saccades. The pursuit
tracking system is what stabilizes the image (to the level of
microsaccades) between saccades.

I think you misread me there: I said that the clear view was obtained at
each _fixation_ point, not during the saccade.

You said "Saccadic movements allow the eye to gain a clear, unblurred
image...". Perhaps you meant to say that the fact that movements are
saccadic rather than continuous allows the eye to see a clear image in the
periods when the eye is stationary. If that's what you meant, we agree.

You've added that this
fixation is accomplished via the pursuit tracking system, an issue I didn't
address. When you are scanning across a scene, I wouldn't think there would
be much need for stabilization between movements, as the gaze rests in a
particular position for such a brief time.

If the eye is literally scanning across a scene, it will be very blurred.
One could pick out only the largest and most contrasty of objects. I
believe that the time between saccades is normally considerably longer than
the duration of the saccade itself.
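
A back-of-the-envelope check makes the point; the numbers below are
assumed round figures, not measured data. If the retina integrates light
over roughly 100 ms, a continuous scan smears the image, while a
saccade-and-fixate pattern leaves it stationary most of the time:

# Hedged arithmetic sketch in Python; the integration window, saccade
# duration, and fixation duration are assumptions, not data.

INTEGRATION_S = 0.100   # assumed retinal integration window, seconds
scan_speed = 50.0       # deg/s, a leisurely continuous scan

smear = scan_speed * INTEGRATION_S
print(f"continuous scan: image smeared over {smear:.0f} deg")

saccade_s, fixation_s = 0.030, 0.250   # assumed typical durations
sharp_fraction = fixation_s / (saccade_s + fixation_s)
print(f"saccadic scan: image stationary {sharp_fraction:.0%} of the time")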

If someone moves an object across your field of view and you focus on it
sufficiently to see its details, your eyes will follow it smoothly, with
saccades only during changes in the fixation points within the object.

You're right. (I just tried it.)

Thank goodness. I'm normal.

Now that I think of it, in principle,
there should be no difference between locking onto a stationary object while
moving the head and locking onto a moving object while holding the head
still (or while moving the head, for that matter).

Agree.

The
background details blur out. When the background has a contrasty texture and
the moving object is very small or dim, the pursuit system will lock onto
the background, and then there will be saccades in following the moving
object.

This is more difficult to arrange, so I haven't tried it yet.

Try moving your finger laterally while fixating on the fingernail. Attend
to the objects next to and behind the finger, but without fixating on them.
The small details disappear. Compare with fixating on the fingernail with
the finger stationary against the background.

Wouldn't the pursuit system stabilize (track) the image of the satellite in
either case? Motion is best computed when there is something in the visual
field that is taken to be motionless, relative to which the motion of the
moving object can be computed.

Yes, but for that you need an extended patterned background. When only a
few separated objects are involved, I think the pursuit system, which
probably draws on motion signals from the whole retina, stabilizes the average
velocity of all contrasts and edges in the field, perhaps with greater
weightings for signals near the center of vision. You can pick any of these
objects for position control (centering on the fovea), but if the _average_
velocity is being controlled at zero by the pursuit system, the eye will
continuously be dragged off the target, calling for a saccade to get back
to it. This will happen whether you pick the star or the satellite to look at.
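
To make that concrete, here is a minimal simulation sketch of the
mechanism I'm proposing; the gain, the equal weighting, and the saccade
threshold are invented for illustration, not fitted to anything:

# Minimal sketch: pursuit controls the *average* retinal slip at zero,
# so fixation on either object drifts and saccades must correct it.

DT = 0.01            # simulation time step, seconds
PURSUIT_GAIN = 0.2   # fraction of average slip corrected per step (assumed)
SACCADE_LIMIT = 1.0  # fixation error (deg) triggering a saccade (assumed)

star_vel = 0.0                   # the star is stationary
sat_pos, sat_vel = 0.0, 2.0      # satellite drifting at 2 deg/s
eye_pos, eye_vel = 0.0, 0.0

for step in range(500):
    sat_pos += sat_vel * DT
    eye_pos += eye_vel * DT

    # Retinal slip of each object = its world velocity minus eye velocity.
    slips = [star_vel - eye_vel, sat_vel - eye_vel]
    avg_slip = sum(slips) / len(slips)     # equal weights, for simplicity

    # Pursuit: control the average slip at zero.
    eye_vel += PURSUIT_GAIN * avg_slip

    # Position control of the chosen target (here, the satellite):
    error = sat_pos - eye_pos
    if abs(error) > SACCADE_LIMIT:
        eye_pos = sat_pos                  # ballistic saccade back onto it
        print(f"t={step * DT:.2f} s: saccade of {error:+.2f} deg")

With these numbers the eye settles at about half the satellite's velocity,
so the fixated satellite drifts off at about 1 deg/s and a corrective
saccade fires roughly once a second; fixating the star gives the mirror
image of the same drift.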

Does the bright star also seem to wander and
change speed?

Oh, yes, especially if the satellite is of the same brightness. Of course
when you fixate on the star, dimmer stars become visible because they're
now stationary on the retina, and you approach the conditions of an
extended patterned background, which the eye wants to hold motionless. Then
you realize that you're not looking at the satellite any more, and can see
it as moving because you have to perform saccades to keep up with it.

This would make sense if the system is attempting to lock
onto something as the fixed frame of reference when only the satellite and a
single star are visible in the same field -- the system may choose the wrong
object, or as you suggest, some average of the two, in which case both
objects would seem to wander somewhat.

Right. This is what seems to happen, although we tend to fixate on one or
the other.

Returning to the perceptual frame, I'd like to know what contributes to our
perception of a direction as being north, west, and so on.

These are purely arbitrary labels, aren't they? We pick some distant
landmark and say "that's north."

Directional _labels_ like "north" are arbitrary, but that doesn't make the
directions themselves arbitrary.

No?

The sun rises more-or-less in the east and
sets more-or-less in the west (depending on the season), and south lies
midway between the two when one faces with the sunrise on the left. North
is opposite to
south. I don't have much trouble keeping my directions properly oriented
when the sun is shining.

That's because you're defining the names of the directions in terms of
where the sun rises and sets, and so on. Landmarks, as I said.

The day I
found my house it was overcast to the extent that the sun's direction was
completely obscured, and we arrived after going through a long sweeping turn
from east to south in which only small segments of the arc were visible at
any given time. It seems that my inertial system underestimated the extent
of the turn (in fact, it practically ignored it), so it had me going
east-southeast when I was actually traveling south. The subsequent left
turn to the east was to me a left turn to the north-northeast, and then we
entered the neighborhood. My brain has been confused about which direction
the house faces ever since.

This is a sobering experience (in case you weren't sober). It argues
against really rapid reorganizations. Once we establish a perception, it is
very hard to change without a complete change of circumstances. It seems to
be relatively easy to acquire a new perceptual function, but hard to get
rid of it once it's in place, and particularly if it's become part of a
control system.

This effect is most pronounced at the higher levels. Once a person acquires
a system concept, and has ordered his life around controlling it, changing
it is next to impossible. I think this is especially true of one's _first_
well-worked-out system concept. Even today, 48 years afterward, I can still
return to the way it was to believe in dianetics; that system concept is
still there, even though it's not used any more. And if I dwell on it too
long, I have to go through some withdrawal pangs to get out of it again. If
I hadn't been somewhat skeptical even then, who knows what I would be doing
now -- Scientology?

When you believe in something at that level, it's only a matter of luck
that you ever stop believing it. Like having a friend hand you a copy of
Wiener's _Cybernetics_.

I imagine that you go through similar experiences in switching between the
PCT system concept and that of behaviorism. Or that you will.

Best,

Bill P.

[From Bruce Gregory (980129.0957 EST)]

Bill Powers (980129.0524 MST)

This effect is most pronounced at the higher levels. Once a person acquires
a system concept, and has ordered his life around controlling it, changing
it is next to impossible. I think this is especially true of one's _first_
well-worked-out system concept. Even today, 48 years afterward, I can still
return to the way it was to believe in dianetics; that system concept is
still there, even though it's not used any more. And if I dwell on it too
long, I have to go through some withdrawal pangs to get out of it again. If
I hadn't been somewhat skeptical even then, who knows what I would be doing
now -- Scientology?

When you believe in something at that level, it's only a matter of luck
that you ever stop believing it. Like having a friend hand you a copy of
Wiener's _Cybernetics_.

My experience seems somewhat more fluid, although I might be
deluding myself. Perhaps there is a supra-system level that
allows one to alter reference signals at the systems level.

Bruce

[From Bruce Abbott (980129.1115 EST)]

Bill Powers (980129.0524 MST) --

Bruce Abbott (980128.2050 EST)

You said "Saccadic movements allow the eye to gain a clear, unblurred
image...".

You left out "at each fixation point." That makes a difference.

Perhaps you meant to say that the fact that movements are
saccadic rather than continuous allows the eye to see a clear image in the
periods when the eye is stationary. If that's what you meant, we agree.

That's what I meant. Want to hire on as my copy editor? My writing would
benefit enormously from your input . . .

You've added that this
fixation is accomplished via the pursuit tracking system, an issue I didn't
address. When you are scanning across a scene, I wouldn't think there would
be much need for stabilization between movements, as the gaze rests in a
particular position for such a brief time.

If the eye is literally scanning across a scene, it will be very blurred.
One could pick out only the largest and most contrasty of objects. I
believe that the time between saccades is normally considerably longer than
the duration of the saccade itself.

Yes, but the question I was raising was whether the time between saccades
_during the scan_ was long enough to either require or allow pursuit
tracking. I think the eye probably just stops for a moment and then moves
again in another saccade. But that is an empirical issue -- do you have data?

Yes, but for that you need an extended patterned background. When only a
few separated objects are involved, I think the pursuit system, which
probably draws on motion signals from the whole retina, stabilizes the average
velocity of all contrasts and edges in the field, perhaps with greater
weightings for signals near the center of vision. You can pick any of these
objects for position control (centering on the fovea), but if the _average_
velocity is being controlled at zero by the pursuit system, the eye will
continuously be dragged off the target, calling for a saccade to get back
to it. This will happen whether you pick the star or the satellite to look at.

That makes good sense. It provides an explanation for the moon illusion and the
effect of the forward-creeping trucks at the light, as I described earlier.
The averaging process must require a bit of integration, as in my experience
it takes a few moments before (in the case of the moon illusion) the array
of moving clouds suddenly halts its motion across the sky and the moon takes
off in the opposite direction. Also, I suspect that input from the
periphery is more strongly weighted than input from the center of vision in
computing this average motion. The illusion of moving backward when the
trucks are creeping forward occurs even though you can see the road ahead
quite clearly. The truck sides are taking up most of the two lateral fields
of view. Or perhaps it is lateral motion (necessarily on the periphery when
you are looking ahead) that contributes most to the perception of forward or
backward motion.
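
Both conjectures are easy to state as a sketch: a leaky integrator over
peripherally weighted motion signals. The weighting function, the time
constant, and the eccentricities below are pure guesses on my part:

import math

DT = 0.05    # seconds per step
TAU = 1.5    # assumed integrator time constant, seconds

# (eccentricity from the fovea in deg, retinal velocity in deg/s):
# the moon fixated at the fovea (stationary on the retina), clouds
# drifting in the periphery.
signals = [(0.0, 0.0), (15.0, 1.0), (20.0, 1.2), (25.0, 0.9)]

def weight(ecc):
    # Assumed weighting: rises with eccentricity, saturating smoothly.
    return 1.0 - math.exp(-ecc / 10.0)

avg = (sum(weight(e) * v for e, v in signals) /
       sum(weight(e) for e, _ in signals))   # weighted mean retinal motion

frame_vel = 0.0   # motion attributed to the frame (leaky integrator state)
for step in range(81):
    frame_vel += (avg - frame_vel) * DT / TAU
    # The moon's apparent velocity is its retinal velocity (zero) minus
    # the motion attributed to the frame: it "takes off" the other way.
    if step % 20 == 0:
        print(f"t={step * DT:.1f} s: frame {frame_vel:+.2f} deg/s, "
              f"moon appears to move at {-frame_vel:+.2f} deg/s")

The attributed frame velocity builds up over a second or two before the
moon appears to take off in the opposite direction, which matches the
delayed onset I described.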

The sun rises more-or-less in the east and
sets more-or-less in the west (depending on the season), and south is midway
between when one is facing with sunrise to the left. North is opposite to
south. I don't have much trouble keeping my directions properly oriented
when the sun is shining.

That's because you're defining the names of the directions in terms of
where the sun rises and sets, and so on. Landmarks, as I said.

I can use these references to establish my direction, but if I've become
confused in the absence of these indicators and settled into some other
orientation, they don't serve to reorient my directional frame when they
become available, although logically it would seem that they should. The
sun keeps coming up in the south-west, and I have to exercise cognitive
effort to swing it around, mentally, to the east. I try to lay a mental
picture of watching the sunrise from my parents' property over the sunrise I
am watching, and for a while I can make it feel like east. But it doesn't
persist.

The day I
found my house it was overcast to the extent that the sun's direction was
completely obscured, and we arrived after going through a long sweeping turn
from east to south in which only small segments of the arc were visible at
any given time. It seems that my inertial system underestimated the extent
of the turn (in fact, it practically ignored it), so it had me going
east-southeast when I was actually traveling south. The subsequent left
turn to the east was to me a left turn to the north-northeast, and then we
entered the neighborhood. My brain has been confused about which direction
the house faces ever since.

This is a sobering experience (in case you weren't sober). It argues
against really rapid reorganizations. Once we establish a perception, it is
very hard to change without a complete change of circumstances. It seems to
be relatively easy to acquire a new perceptual function, but hard to get
rid of it once it's in place, and particularly if it's become part of a
control system.

I wouldn't go so far as to generalize this perceptual experience to
reorganization in non-perceptual systems. Although at one level, perception
is strongly influenced by expectations (which essentially indicate what to
look for in the sensory input), much of the system seems to be largely
hardwired and unresponsive to logical analysis by general-purpose
cognitive mechanisms. In the case of illusions, you can make the
measurements and tell yourself that what you perceive cannot be true, but
the perceptual system pays no attention and dumbly continues to produce the
contradictory perception. A kind of perceptual reorganization can occur in
that you suddenly perceive the sensory input differently (e.g., the
alternative views of the Necker cube), but in these cases there is
invariably an ambiguity in the input, such that alternative constructions
are at least locally self-consistent.

When you believe in something at that level, it's only a matter of luck
that you ever stop believing it. Like having a friend hand you a copy of
Wiener's _Cybernetics_.

I imagine that you go through similar experiences in switching between the
PCT system concept and that of behaviorism. Or that you will.

Yes, the Necker cube again, but with a difference. PCT and behaviorism are
not systems of the same type, although both deal with behavior. They do not
operate at the same level of discourse. The former offers a mechanism that
behaves, whereas the latter offers only descriptions of the mechanism's
behavior as functions of its current and past inputs. It is the difference
between explaining how a radio works by describing its circuits and tracing
signals, versus describing how to operate the radio by noting what happens when
you turn this or that dial.
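
A toy sketch (my construction, far cruder than either a radio or an
organism) of the difference in levels: the first account is a table of
observed input-output relations; the second is a mechanism that, when
run, generates those relations. The gain and settling loop are arbitrary:

# The descriptive account: a table of observed input-output relations.
observed = {+1.0: -1.0, +2.0: -2.0}   # disturbance -> steady-state output
print("description:", observed)

# The mechanistic account: a control loop that generates those relations.
def control_output(perception, reference=0.0, gain=10.0):
    return gain * (reference - perception)   # output opposing the error

for disturbance in observed:
    output = 0.0
    for _ in range(100):                   # let the loop settle
        perception = disturbance + output  # environment sums the two
        output += 0.1 * control_output(perception)
    print(f"mechanism: disturbance {disturbance:+.0f} -> output {output:+.2f}")

The table can only restate what was seen; the mechanism predicts what the
table would contain under conditions not yet observed.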

Regards,

Bruce

[From Rick Marken (980129.0820)]

Bill Powers (980129.0524 MST) --

When you believe in something at that level, it's only a matter
of luck that you ever stop believing it.

Ain't that the truth. Maybe luck will be a PCT reference level
tonight for all those people to whom I am tactless;-)

Bruce Gregory (980129.0957 EST) --

My experience seems somewhat more fluid, although I might be
deluding myself.

Mine too. I'm not sure I've ever really believed in anything
at the level (system concept?) Bill is talking about, other
than PCT. Maybe that's why I get so excited about it; PCT is
my first religion;-)

Best

Rick


--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken