Loss of feedback; old points of view

[From Bill Powers (950603.1550 MDT)]

Martin Taylor (950603.1415) --

Rather than starting with the premise that loss of feedback has to leave
a control system flailing around, why not start from the other end?

In specific cases, it is known that loss or degradation of perception
can lead to a degradation of control, but not always total loss of
control. What kind of perceptual function, using what lower-level
inputs, might show this property?

In the case of visual tracking, for example, we tend to assume that the
sense of cursor position is derived entirely from visual information.
But we also know that while we are tracking, we can feel our arms and
hands moving. What kind of input function could use both the visual and
the kinesthetic feedback information to produce a cursor position
signal? The visual information would be the most accurate component, but
a kinesthetic component could, with practice, become almost as accurate.
The only question remaining is how these two position signals would be
combined in an input function so that when the visual signal is lost,
the kinesthetic signal would still provide a rough indication of cursor
position. Controlling this rough indication would result in errors in
the actual cursor position -- but that is exactly what _does_ happen.
The main thing is that the system would not go open-loop; it would
simply change its apparent characteristics and precision.
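As a rough sketch of that idea (purely illustrative: the weights, loop gain, and noise model below are my own assumptions, not anything specified above), an input function that blends visual and kinesthetic position estimates might behave like this:

```python
import random

def input_function(visual, kinesthetic, w_visual=0.9):
    """Weighted blend of two position estimates. If the visual signal
    drops out (None), fall back entirely on the noisier kinesthetic one."""
    if visual is None:
        return kinesthetic
    return w_visual * visual + (1 - w_visual) * kinesthetic

def track(visual_available, steps=200, gain=0.2, noise=2.0):
    """Simple tracking loop: cursor follows the handle directly.
    Returns the average absolute error over the last 50 steps."""
    random.seed(0)
    target, cursor, handle = 50.0, 0.0, 0.0
    errors = []
    for _ in range(steps):
        kin = cursor + random.gauss(0, noise)   # arm-position sense, noisy
        vis = cursor if visual_available else None
        perceived = input_function(vis, kin)
        handle += gain * (target - perceived)   # control action on the error
        cursor = handle                         # environment: cursor = handle
        errors.append(abs(target - cursor))
    return sum(errors[-50:]) / 50
```

With the visual signal removed, the loop stays closed and still converges near the target; only the precision degrades, as the paragraph above describes.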


----------------------------------------------------------------------
Bruce Abbott (950603.1340 EST) --

That was a beautiful answer to my question about the transition from
traditional behaviorism to PCT. Your analogy to the Necker cube is
particularly powerful. It's been said often that PCT is a "valid
perspective" on behavior, but that there is something to be said for
other approaches, too, such as S-R. This gives the impression that if
you use the PCT perspective, you see one set of interesting phenomena,
while from another perspective you might see other phenomena that the
PCT view misses.

But your discussion makes a strong case that this is not the
relationship between the old behaviorist view and PCT. The points of
view, while each seems to make sense when it is dominant, are not
supplementary or complementary, but _mutually exclusive_. When you are
"in" one of the points of view, the other simply appears wrong,
backward. When you switch viewpoints, you also switch which view seems
right and which wrong. It's as though you're using the same mental
machinery to support either point of view, and it can be used only for
one of them at a time.

I've tried to learn to switch from the PCT view to other views when a
conflict comes up between them. I don't do very well at it, because my
supply of lore is far greater in the control-theory field than in any
other field, so I can't really get into that place where another view
seems to be right and natural. To get into another theorist's shoes, you
have to do more than just pretend to accept different conclusions; you
have to see how the other guy's reasoning ties together a whole web of
observations and ideas in a way that invites belief. I think your
experiences with PCT might support this concept; as your experience with
phenomena seen from the PCT standpoint grew, it became more possible for
you to find a comfortable position in either camp. This, I think, tends
to remove the familiarity factor and makes it possible to look for other
criteria by which to compare the usefulness of the viewpoints.
----------------------------------------
     No, I did not misspeak. In the avoidance experiment the rat
     performed on a Sidman shock-avoidance schedule on one session and
     the actual temporal pattern of shock delivery was recorded (rats on
     this schedule occasionally make mistakes and receive shocks); this
     pattern was "played back" on the next session, in which the rat had
     no control over shock delivery. As with escapable versus
     inescapable shock schedules, the rats failed to resist the
     disturbance when the apparatus switched them from avoidable to
     unavoidable shock schedules or vice versa. The key here is that
     during training the rats had learned that the shock frequency was
     the same whether they controlled shock delivery or let the
     apparatus determine when shocks would be delivered.

Amazing how your concepts have paralleled ours. Many years ago we did a
very similar experiment in which a person did a tracking run while the
computer recorded the cursor position; then, under some pretext, the
person was asked to "repeat" the run, while the recorded cursor
positions were played back. Some people got completely through the second
run without realizing that the handle no longer had any effect on the
cursor. Most others, when they discovered that the loop was broken,
would simply stop tracking, realizing that there was no point to it.

Because the same disturbance was applied in the second run as in the
first, we could compare how well the person "tracked" by generating the
cursor movements that would have resulted from the handle movements in
the second pass. The errors were very large. In some cases, the phantom
cursor would go clear off the screen. Interestingly, it was possible
for experienced controllers to deliberately believe that they were
controlling and go through a whole run, producing results much like
those of people who were fooled.
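A toy version of that reconstruction might look like this (the environment equation cursor = handle + disturbance is the standard compensatory-tracking assumption; the data arrays below are invented purely for illustration):

```python
# Reconstruct the "phantom cursor": the cursor positions that *would*
# have occurred in run 2 had the handle still been connected, given
# that the same disturbance was applied as in run 1.

def phantom_cursor(handle_run2, disturbance):
    """cursor = handle + disturbance, point by point."""
    return [h + d for h, d in zip(handle_run2, disturbance)]

def tracking_error(cursor, target=0.0):
    """Worst-case deviation of the phantom cursor from the target."""
    return max(abs(c - target) for c in cursor)

disturbance = [0, 3, 5, 2, -4, -6, -1, 2]   # recorded in run 1 (invented)
handle_run2 = [0, -1, 0, 1, 1, 2, 0, -1]    # open-loop handle drift (invented)

print(tracking_error(phantom_cursor(handle_run2, disturbance)))  # prints 5.0
```

With realistic data the phantom errors are far larger than normal tracking errors, which is how one sees that the loop was effectively open.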

This "deliberate belief" seemed to have the effect of preventing higher-
level systems from turning off the tracking control system. As a result,
we could look (I think) at the open-loop behavior of the control system,
something that is normally very hard to do because when control is lost,
people generally just stop trying.

Interesting that the rats apparently did not maintain a belief
that they were controlling -- maybe they didn't have one in the first
place. Comparative PCT is going to be an interesting field because we
can do the same nonverbal experiments with people and different animals.
One day, when we call someone a bird-brain, we may know what we are
talking about.

Thanks again for the great post. I hope this stuff will take its place
in your published writings before long.
----------------------------------------------------------------------
Best to all,

Bill P.

[Bill Leach 950604.11:48 U.S. Eastern Time Zone]

[From Bill Powers (950603.1550 MDT)]
in reply to: Martin Taylor (950603.1415) --

The whole discussion is "on the mark" as far as I am concerned, and a
better presentation of the concepts I was trying to discuss.

     The main thing is that the system would not go open-loop; it would
     simply change its apparent characteristics and precision.

This is definitely the heart of what I was trying to say.

My concept of "model based" control in general is that it is an attempt
to "control" a perception not immediately perceived based upon the use of
learned relationships between the desired perception and other
perceptions (that are currently perceived).

Thus the "controlling" is attempted by controlling these current
perceptions to reference values that _should_ result in the desired
(but currently not actually perceived) perception being controlled to
its reference value when, at some future time, it finally becomes a
current perception.

If at that future time the desired perception is not at its reference
value, then other control action will occur and possibly the "model"
will be changed in some way.

In particular, the references for the related controlled perceptions will
now be set by the error output from the now-perceived goal's control
loop. The "experience" of bringing the goal's perception under control is
probably also what is used to "update" the model (in some unspecified
way).
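A minimal sketch of that idea (the class name, the linear model form, and the update rule below are all my own illustrative assumptions; nothing here specifies how the model is actually represented or updated):

```python
# Model-based control sketch: the goal perception is not yet available,
# so we set a reference for a currently perceivable proxy variable,
# using a learned model of how the proxy relates to the goal. When the
# goal perception finally arrives, its error corrects the model.

class ModelBasedController:
    def __init__(self, goal_ref, model_gain=1.0):
        self.goal_ref = goal_ref
        self.model_gain = model_gain  # learned: proxy needed per unit of goal

    def proxy_reference(self):
        """Reference value for the currently perceivable proxy."""
        return self.model_gain * self.goal_ref

    def update_model(self, goal_perceived):
        """When the goal is finally perceived, correct the model."""
        if goal_perceived != 0:
            self.model_gain *= self.goal_ref / goal_perceived

# Suppose the environment's true relation is goal = 0.5 * proxy, but the
# model starts out believing proxy == goal is the right mapping.
ctl = ModelBasedController(goal_ref=10.0)
for _ in range(5):
    proxy = ctl.proxy_reference()   # lower-level loop achieves this proxy
    goal = 0.5 * proxy              # environment's actual relation
    ctl.update_model(goal)          # model gain corrected toward 2.0
```

After the first correction the model gain settles at 2.0 and the goal perception arrives at its reference, which is the "update on later experience" step described above.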

-bill