Bill P. will love this; preventing control

[From Bill Powers (950503.0630 MDT)]

Bill Leach (950502.2241) --

     I'm disappointed -- sniff

     You did not comment upon my "Bill will like this one" remark.

     [And Bill P. will truly _love_ this next one]

     Compensation
     The basic idea of feedback is intuitive and simple. From the
     perspective of a human operator attempting any control action,
     whether that of positioning a lamp on a table, steering an
     automobile, or any of the innumerable actions we take continually
     and instinctively, our action is almost invariably tempered by our
     continuing observation of any discrepancy between intent and status
     thus far. [The next one, however, is in serious error unless the
     author meant "results" when he said "output"] This is negative
     feedback: The control action is a function of the difference
     between the desired output and the actual output.

I commented on the final sentence, but not the first part. You are right
that I love the first part in that it recognizes the role of control in
behavior as a whole. But I hate words like "tempered", especially in a
phrase like "almost invariably tempered". This allows room for actions
that would occur anyway, but are only "tempered" by the error. And of
course this implies that we observe error signals rather than "status."
Words like "tempered," "modified," "modulated," "influenced," and others
like them are used mainly when the author knows there must be some sort
of effect but doesn't have any idea what it is.
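
To make the contrast concrete, here is a minimal sketch in Python (my
own construction -- the gains, the disturbance, and the one-dimensional
"position" are all invented) of a loop in which the action is wholly a
function of the error, not an independent action merely "tempered" by
observation:

# Minimal negative feedback loop: the action is a function of the
# error (reference minus perception) and of nothing else.
# All numeric values here are illustrative assumptions.

reference = 10.0     # internally set reference signal
position = 0.0       # the controlled environmental variable
gain = 0.5           # output gain (assumed)

for step in range(50):
    disturbance = 0.3                 # steady push from the environment (assumed)
    perception = position             # perceptual input function (identity here)
    error = reference - perception    # comparator
    action = gain * error             # output function: wholly error-driven
    position += action + disturbance  # environment sums action and disturbance

print(round(position, 2))  # settles near the reference despite the push

Take the error away and there is no residual action left over to be
"tempered"; the action simply is the transformed error.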

Most of your comments on "Early PCT research" are what I would have
said. Your hands-on experiences with real control systems go a long way
toward explaining our uncanny agreements on most details of control
systems.

One aspect of Bruce's experiment that I didn't comment on (to avoid
spoiling the occasion by nit-picking): I wonder why, when a rat found it
could prevent the shock (when warned) by flipping over on its back, the
experimenters simply ignored this control action and shaved the animal's
back. This is the sort of reaction one expects when the experimenter is
determined to apply a stimulus, and finds that the animal manages to
avoid being stimulated. Overwhelming physical force is then applied to
make sure that unwanted control system loses control. Then the
experimenter proceeds to "take data", having thrown out a good part of
it.

There are other means of avoiding shocks, such as balancing on one foot
or two feet for a moment on a single grid wire or standing with the feet
on different grid wires that have the same polarity, but I presume that
rats were not observed to do this -- that is, were observed not to do
this (there's a big difference). I understand that in some shock-
administration systems the polarity of the floor's grid wires is rapidly
scrambled to keep animals from controlling the shock.

It seems to me that in behavioral experiments there is a lot of
thwarting of the animal's control systems, so the experimenter can get
the effect he or she wanted. It's hard to get a good picture of control
behavior when you start by opening the loop. All this does is prevent
you from noticing the most important control processes.


-----------------------------------------------------------------------
Best,

Bill P.

[Bill Leach 950603.10:10 U.S. Eastern Time Zone]

[From Bill Powers (950503.0630 MDT)]

In retrospect I see that I was "taken in" just because the author used a
human example (which itself implies that the human is recognized to be
controlling) in an otherwise rather dry work.

Being critical about it now, it is almost embarrassing that I was so
enamored of this text.

In the first place he used the phrase "attempting any control action"
which implies the possibility of "uncontrolled actions". There is also
however the "or any of the innumerable actions we take continually and
instinctively" which seems to contradict the first phrase by implying a
recognition that control is not optional.

The "our action is almost invariably tempered by" would have tempted me
to send this guy a copy of B:CP. It is almost like this guy might have
just about failed a psyc class for asserting that the behaviour that he
has observed appeared to be control system action!

I think that I realize now why his "observation of any discrepancy
between intent and status" was not recognized for its error. For some
reason, the "positioning a lamp on a table" reminded me of a specific
instance. The instance involved control of a perception not perceivable
while actually touching the lamp; thus the overall control action
involved stepping back and evaluating the appearance, estimating the
position change that might "make things look right", returning to the
table and controlling the new position-related perception, then
iterating.

Of course I now recognize that I should not have missed what he was
saying - basically that the comparator is in the environment (though
I doubt that is what was intended).

This "observation of the discrepancy" thing is a real problem in living
control system discussions.

In my own example of moving a lamp, "observation of the discrepancy", I
believe, is exactly what was going on. Or at least to describe it that
way is not a gross error. I would control a perception of the lamp's
position while the real goal was not perceived and therefore
uncontrolled. Stepping back and observing allowed me to again perceive
the perception whose control was my ultimate goal. If an error existed,
I was cognizant of the existence of an error (but probably not all of the
detail concerning the nature of the error). In some complex fashion the
error was a reference for a program level (?) perception that resulted in
a new reference for my perception of the lamp's position.

If this description is at all reasonable for what might have actually
been going on in my "lamp moving episode" then it is easy (at least for
me) to see how one could write something like the analogy we are
discussing. His description _is_ wrong as a description of a
closed-loop negative feedback control operation.

It is also probably wrong for describing a complex "iterative" control
action, because the term "continuing observation" implies continuous
observation, which is precisely what cannot exist if one is "observing
the discrepancy ..." of a "controlled" perception.
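
For what it is worth, here is a rough sketch (entirely my own
construction; every number in it is invented) of the two-phase process
I am describing. The "looks right" perception is sampled only when I
step back; each sample yields a new reference for the lower-level
position control that runs while I am at the table:

# Iterative control with intermittent observation (assumed values).
# Higher level: the "looks right" perception, sampled only when
# stepping back. Lower level: lamp position control, run at the table.

looks_right_ref = 7.0   # where the lamp would "look right"
lamp = 0.0              # actual lamp position

def step_back_and_look(lamp_pos):
    """Sample the higher-level perception; return a new position reference."""
    appearance_error = looks_right_ref - lamp_pos
    return lamp_pos + 0.8 * appearance_error   # imperfect estimate of a fix

position_ref = step_back_and_look(lamp)        # first look from across the room
for episode in range(5):
    # At the table: control lamp position against the current reference.
    # The "looks right" perception is NOT available during this phase.
    for _ in range(20):
        lamp += 0.5 * (position_ref - lamp)    # ordinary negative feedback
    position_ref = step_back_and_look(lamp)    # step back, re-sample, new reference

print(round(lamp, 2))   # converges toward the "looks right" position

The observation is intermittent by construction; while the lamp is
actually being moved, the perception that ultimately matters is simply
not present.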

You raise a good point for me with:

     It seems to me that in behavioral experiments there is a lot of
     thwarting of the animal's control systems, so the experimenter can get
     the effect he or she wanted. It's hard to get a good picture of control
     behavior when you start by opening the loop. All this does is prevent
     you from noticing the most important control processes.

In fairness to the designer and implementer of an experiment, this
matter is very difficult to resolve completely (impossible?). My
objection to Bruce about the contrived experiment was the result of my
uneasiness about the issue (but it was also a wholly unacceptable
position for most detailed research).

The causality principle, in combination with "valid" testing of
hypotheses, is the foundational principle of modern science. Applying
the process to perfection necessitates determining all of the
variables, their values, and their relationships to each other. To the
extent that this is _not_ done, the experimental "evidence" is
degraded. We believe that achieving perfection in this regard is
completely out of the question, so we should recognize that all
experimental evidence is limited in its certainty, as are all
conclusions based upon such evidence.

Since the goal nevertheless is to reduce the uncertainty, two of the
possible ways are to limit the number of variables and to control the
others... thus the "controlled" environment experiment.

In the physical sciences this method has had stunning results. Many of
the "failures" in controlled experiments have been the most valuable
learning tools... when the researcher recognized that the failure WAS a
significant challenge to current understanding.

Of course many of the failures have also been due to the experimenter's
failure to identify variables, and therefore to measure or control the
relevant ones.

In behavioural studies, the sheer number of variables potentially
involved or relevant is bound to be staggering. In general the attempt
to simplify the experimental conditions with respect to the study of a
complex control system will always consist of a balance between having
too much data with too many uncontrolled (or at least not understood)
variables and unintentionally overwhelming the control system being
studied.

In ANY experiment, an erroneous assumption or hypothesis concerning the
direction of causality is "fatal" until that particular error is noticed
and corrected.
This particular error results in the experimenter asking the wrong
questions and designing the experiment around manipulation of the wrong
variables.

The experiment then is based upon observer control of an "effect" while
at the same time trying to "track" the causal effects upon the CAUSE. I
cannot think of a real example of this sort of thing in the history of
the physical sciences (at least for a simple "A causes B" relationship).
I don't doubt for a moment that with sufficient research and effort I
could find specific examples for more complex cases.

In the behavioural sciences the situation is almost infinitely more
complex. It is precisely the fact that living beings are complex closed
loop negative feedback control systems with INTERNALLY set references
that makes it possible for an experimenter to "prove" that an "effector"
exists in the environment and "works to produce its effect".

As long as the experimenter has sufficient control over the environment
that the subject encounters it is possible to obtain almost any
"conditioning" behavioural response desired. This is particularly true
if observed behaviour that is not in accordance with the "theory" is
ignored (such as avoiding shock altogether).
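
A toy illustration of that point (again entirely my own construction,
with arbitrary values): let a simulated subject control a variable
against a disturbance that the experimenter manipulates. The recorded
"response" comes out very nearly a mirror image of the "stimulus", so a
correlational analysis would happily "prove" that the environmental
effector works, even though the behaviour is driven by the internal
reference:

import math

# Toy "behavioural illusion" sketch (all values invented): a control
# system with an INTERNALLY set reference opposes a disturbance
# applied by the experimenter, so the action mirrors the disturbance.

reference = 5.0    # internal reference, invisible to the experimenter
action = 0.0
log = []

for t in range(200):
    disturbance = 2.0 * math.sin(t / 10.0)  # the experimenter's "stimulus"
    controlled = action + disturbance       # environment sums action and disturbance
    error = reference - controlled
    action += 0.5 * error                   # integrating output keeps error small
    log.append((disturbance, action))

# The action settles near (reference - disturbance): a mirror image of
# the "stimulus", inviting the inference that the stimulus causes the
# response, when in fact the internal reference is doing the work.
d, a = log[-1]
print(round(d + a, 2))   # approximately 5.0: the controlled variable holds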

While it might sound like I am trying to "pick" on Bruce and/or his work
in this field, I really am not. A rat is "behaving" all the time and
from what I have seen of the critters, except when sleeping they are so
active as to almost make the observer tired just watching.

The accidental lever presses are a good example of the problem, I think.
Behavioural results that are not intended by the subject are difficult to
evaluate. In his experiment they, at least, were looking for and testing
for "intended results" on the part of the subject. The problem then is
that there are thousands of intended results mixed in with, at least
potentially, many thousands of observable unintended results. How and
where do you "draw the lines of distinction"?

I think that this problem holds for real PCT research as well; the
significant difference is that the viewpoint demanded by PCT brings the
problem to the fore.

-bill