Understanding Control Theory, PCT Research

[From Rick Marken (950602.0845)]

Bill Leach (950601.23:46) --

Anyone who actually does understand even Engineering Control Theory well
enough to understand what is controlled could not possibly put the
comparator in the environment.

I agree completely. The fact is, however, that there are a number of
psychologists who can do a convincing and (for most psychologists)
intimidating job of presenting a mathematical analysis of control theory.
These people are considered the "experts" in the application of control
theory in psychology yet they get what seems to be the simplest aspect of
control theory wrong -- the variable controlled by a control system.

So I guess the question is "what constitutes an understanding of control
theory?". Apparently there are many aspects to "understanding control
theory". One can understand the complex (literally) math while not
understanding the basic functional characteristics of a control loop (like
control of perception); this seems to characterize the understanding of many
of the psychologists who are the experts in control theory. On the other
hand, one can understand the basic functional characteristics of control
systems while having only a passing familiarity with the complex math; this
seems to characterize my own understanding of control theory.
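As a minimal sketch of the basic functional characteristic in question (my own illustration, not from the post; all gains and values are invented), here is a simple negative-feedback loop. Notice which quantity ends up near the reference when a disturbance arrives: the perception, not the output.

```python
# Minimal negative-feedback control loop (illustrative values only).
# The environment adds a disturbance to the system's output; the loop
# keeps the PERCEPTION near the reference by varying the output.

def simulate(steps=2000, ref=10.0, gain=50.0, slowing=0.02):
    output = 0.0
    perception = 0.0
    for t in range(steps):
        disturbance = 5.0 if t > steps // 2 else 0.0
        perception = output + disturbance            # environment: p = o + d
        error = ref - perception                     # comparator
        output += slowing * (gain * error - output)  # leaky-integrator output
    return perception, output

p_end, o_end = simulate()
print(round(p_end, 1), round(o_end, 1))   # -> 9.9 4.9
```

The perception sits near the reference (10) while the output has moved to nearly cancel the disturbance -- which is why looking only at "behavior" (the output) misses what is actually controlled.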

I'm glad that Bill Powers (950602.0600 MDT) agrees with my basic evaluation
of Bruce Abbott's (950601.1635 EST) "PCT research from the 1970s". Bruce
said:

I maintain that this experiment performed the Test for the controlled
variable.

and Bill said:

I agree that it did, but it didn't carry it very far.

In response to the same comment I had said:

Yes. I agree, your experiment definitely involves the Test. I do think you
could have spent more time nailing down the controlled variable, though.

I think the fact that this Test was not carried very far (more time was not
spent nailing down the controlled variable) is crucial. I would guess that
the reason this Test was not carried very far is because the experimenters
did not see their goal as identifying a variable that the rat was
controlling. It is not clear that the experimenters really performed the
first (and most crucial) part of the Test: hypothesizing that a variable was
under control. The variable "shock signalling schedule" was not treated as a
_possible_ controlled variable (and, as Bill Leach (950602.00:56) points
out, an extremely unlikely one since it "is just assigning the observer's
understanding of the experimental apparatus to the rat"). It was
probably treated as a variable that has a possible effect on behavior (bar
pressing) and it did have such an effect. Thus, the experimenters never even
considered the many plausible alternative variables that the rat might
actually be controlling.

The goal of The Test differs completely from the goal of conventional
research. The goal of conventional research is to determine what variables
influence the observable behavior of organisms; the goal of the Test is to
see the world from the organism's perspective -- to learn what aspects of the
organism's own experience it is trying to bring under control.

So, while the research Bruce describes can be seen as having several elements
of The Test for controlled variables (mainly, introducing what can be seen
as a disturbance to a possible controlled variable) it really doesn't go
nearly far enough to achieve the basic goal of the Test -- to determine
"beyond a reasonable doubt" the perceptual variables an organism is
trying to control.
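The core logic of the Test can be sketched in code (my own toy illustration, not any actual experiment's data; all parameters are invented): apply a known disturbance to a hypothesized controlled variable, and see whether the variable is stabilized against it. A variable that is merely *affected* by the disturbance shows no such stabilization.

```python
# Sketch of the Test for the controlled variable: the same disturbance
# pattern is applied in both conditions; only the controlled variable
# stays near its hypothesized reference.
import random

def mean_deviation(controlled, steps=3000, ref=0.0, gain=100.0, slowing=0.01):
    random.seed(1)            # identical disturbance in both conditions
    out, d, dev = 0.0, 0.0, 0.0
    for _ in range(steps):
        d += 0.05 * (random.uniform(-1, 1) - 0.01 * d)  # drifting disturbance
        v = out + d if controlled else d                # hypothesized variable
        if controlled:
            out += slowing * (gain * (ref - v) - out)   # system opposes disturbance
        dev += abs(v - ref)
    return dev / steps

stabilized = mean_deviation(controlled=True)
unstabilized = mean_deviation(controlled=False)
print(round(stabilized, 3), round(unstabilized, 3))
```

Passing the Test "beyond a reasonable doubt" means iterating this with many candidate variables and disturbances, not stopping at the first one that shows an effect.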

Best

Rick

[Bill Leach 950602.21:14 U.S. Eastern Time Zone]

[From Rick Marken (950602.0845)]

... systems while having only a passing familiarity with the complex
math; this seems to characterize my own understanding of control theory.

This is also one of the reasons for some of the "heated" discussions
between you and Martin. Martin has several times presented a point that
was absolutely a true statement in control theory but was irrelevant to
the type and design specifics of the control systems that we study.

I almost feel that this is similarly true for a portion of Hans's
presentation concerning the application of "modeled control" systems to
living systems. Hans has several times discussed the outstanding ability
of model-based control to reduce the effects of white (or pink) noise and
to achieve significantly higher accuracy in certain specialized control
situations.

It seems to me that an argument for "Fuzzy Logic" would stand a much
better chance of being won! Living systems explicitly do not exhibit
outstanding control accuracy when compared to all but the shoddiest of
engineered control systems.

Thus hyping a system because of its "optimal" control capability seems to
me to be presenting a case for why such a system might not exist.

Some of the "adaptive" ideas could be valid in the sense that we do have
experimental evidence that some sort of tuning must occur.

We have pretty good evidence that perceptions that are directly perceived
"can be controlled" IF they are rather high-level perceptions. The
evidence for controlling low-level perceptions upon a loss of input is
completely contrary to model operations -- that is, at low levels it seems
pretty conclusive that loss of perception means loss of control. When
low-level control is maintained anyway, the evidence is pretty strong
that what happens is that other perceptions "replace" the desired one, and
"control" of the original perception is an artifact of the physics of the
situation.
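The "loss of perception means loss of control" point can be sketched with a toy integrating controller (my own illustration, invented numbers): cut the perceptual input and the loop goes open-loop, with the output running to an extreme.

```python
# Toy demonstration: a simple loop holds its perception at the reference
# until the perceptual input is cut; then the error is stuck at full value
# and the integrating output runs to its limit (illustrative numbers only).
def track(steps=500, cut_at=250, ref=5.0, rate=0.5, out_max=100.0):
    out = 0.0
    trace = []
    for t in range(steps):
        p = out if t < cut_at else 0.0     # perceptual input lost at cut_at
        out += rate * (ref - p)            # integrating output function
        out = max(-out_max, min(out_max, out))
        trace.append(out)
    return trace

tr = track()
print(round(tr[249], 1), round(tr[-1], 1))   # -> 5.0 100.0
```

Before the cut the output settles at the reference; afterward it slams into its limit -- the "open-loop" behavior a simple model predicts, which is exactly what complex living systems often do NOT do.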

Rather than go back and edit the message I just wrote in reply to Bruce's
message I add this thought here:

Operant conditioning might be at least headed in the right direction.
The major difficulty in any behavioural research, especially with animals,
is that what the researcher observes need have no significance to the
subject. What is observed may be entirely incidental to the subject.

Researchers doing experiments such as the one that Bruce described are
faced with performing the sort of analysis that we often carry on here
when discussing PCT questions with respect to unmodeled hypotheses. The
levels of the hierarchy that are involved and the multitude of unknown CEVs
make objective, testable hypotheses concerning the postulated CEV very
difficult to formulate.

It may well be that for the foreseeable future the only really good
PCT data will come from 1) continued PCT modeling of the kind
currently used, 2) relatively "simple" experimental observations such as
the insect studies, and 3) complex studies with human subjects under PCT
conditions.

The higher animals, lacking methods of communication, may well be too
complex to study until more knowledge is obtained about the nature of
HPCT.

-bill

[Martin Taylor 950603 14:15]

Bill Leach 950602.21:14 U.S. Eastern Time Zone

On model control systems:

at low levels it seems
pretty conclusive that loss of perception means loss of control. When
low-level control is maintained anyway, the evidence is pretty strong
that what happens is that other perceptions "replace" the desired one, and
"control" of the original perception is an artifact of the physics of the
situation.

In the "standard model" control system, there is a perceptual input function
that works on its input variables to produce a scalar output value. There
is no provision for "perceptions replacing" the "desired" one. The nature
of the PIF determines the value of the perceptual signal in the absence
(i.e. zero values?) of input. If the PIF contains some kind of a predictor
such as a sample and hold, or anything else that lets the output be based
on something other than the immediate current input values, then the
perceptual signal might not change dramatically when the input vanishes.
But if the PIF works only on the current values of the input signal, then
when the input values change dramatically (such as by vanishing), the
perceptual signal will change dramatically, probably creating large error
values and ineffective but large changes in output. That's the "loss
of control" you are talking about. But there's a real loss of control
that doesn't depend on any model--if you can't see what's happening, you
can't selectively affect it in the way you want, no matter what modelling
you do, unless "what's happening" is utterly predictable. That "real"
loss of control may happen abruptly or gradually, depending on the kind
of unpredictability that the PIF is tuned for. The "use only the current
value of the inputs" kind of PIF is tuned for all possible unpredictability.
It results in immediate loss of control when the input is cut off. Other
forms of PIF may lose control more slowly, but they are tuned for smaller
ranges of unpredictability and the system controls poorly when unexpected
things happen.
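This tradeoff between the two kinds of PIF can be sketched numerically (my own toy example, invented parameters): a sample-and-hold PIF keeps control after the input is cut so long as nothing unexpected happens, but cannot oppose a disturbance that arrives after the cut, while a current-value-only PIF loses control immediately.

```python
# Compare a current-value-only PIF with a sample-and-hold PIF after the
# perceptual input is cut off, with and without a post-cut disturbance.
# Returns the average deviation of the CEV from the reference after the cut.
def run(hold, disturb_after_cut, steps=400, cut_at=200, ref=5.0):
    out, last_seen, dev = 0.0, 0.0, 0.0
    for t in range(steps):
        d = 3.0 if (disturb_after_cut and t >= cut_at) else 0.0
        cev = out + d                       # actual environmental quantity
        if t < cut_at:
            last_seen = cev
            p = cev                         # intact perceptual input
        else:
            p = last_seen if hold else 0.0  # sample-and-hold vs current-value
        out += 0.3 * (ref - p)
        if t >= cut_at:
            dev += abs(ref - cev)
    return dev / (steps - cut_at)

print(run(hold=True, disturb_after_cut=False) < 0.01)   # hold keeps control...
print(run(hold=True, disturb_after_cut=True) > 2.5)     # ...unless the world changes
print(run(hold=False, disturb_after_cut=False) > 50.0)  # current-value loses at once
```

The sample-and-hold loop is "tuned for a smaller range of unpredictability": it controls perfectly in a world that stays put and not at all against the unseen post-cut disturbance.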

All observables, and all actions of physical entities are "artifacts of
the physics of the situation." What more precise idea were you thinking
of when you used the term?

Martin

[Bill Leach 950603.22:40 U.S. Eastern Time Zone]

[Martin Taylor 950603 14:15]

My discussion referenced previous discussions about how a control system
might deal with a situation where the perceptual input for a CEV was lost.
In the simple model, the system goes "open-loop" and the output runs to
one of the two extremes.

But the discussion dealt with the observed fact that in complex control
systems (human in particular) this is not what happens. The discussion
also included quite a bit of speculation about model based control capped
off with a fine example by Avery.

One of the proposals (actually tested) was that other perceptions are
then used in place of the lost perception in an attempt to continue to
"control" the original perception.

But there's a real loss of control that doesn't depend on any model--if
you can't see what's happening, you can't selectively affect it in the
way you want, no matter what modelling you do, unless "what's happening"
is utterly predictable.

And taken in its precise, exacting meaning, this assertion is completely
correct. You cannot control what you cannot perceive.

However, we do "control" to achieve results that we desire and we often
do this for things that we can not currently perceive.

We basically use what can reasonably be termed "model based control"
(without trying to get into any details about how this model must
"look"). When I have a reference set for perceiving an additional gallon
of milk in my refrigerator, the error is high but the necessary
components for reducing the error are not perceived.

I do, though, have a "model" constructed from experience that will _probably_
allow me to reduce the error to zero. The model of course involves
controlling other perceptions.

In addition, there has been work lately with the tracking task on trying to
continue the tracking with the perceptual input from the computer screen
missing. I don't think that nearly enough work has been done with this
yet, but at least for the moment all of the subjects have reported that
they became "conscious" of something like mouse position (that is, the
mouse itself), hand movement, or the like.

All observables, and all actions of physical entities are "artifacts of
the physics of the situation." What more precise idea were you thinking
of when you used the term?

In the situation where I am controlling a physical object in the
environment (say, its position relative to myself), as a practical matter the
"artifacts of the physics of the situation" will not affect the control
(as long as I am not overwhelmed) but rather will only affect the output
used to effect the control.

OTOH, if I am trying to "control" something that I cannot perceive
directly, then the best that I can do is control something that I can
perceive that hopefully is related in a consistent and known way to the
unperceived variable (thus "control" is an artifact of the physics of the
situation). Which is to say that "model"-based control is only good to
the extent that the model accurately predicts the relationship(s) between
the controlled perception(s) and the desired but not perceived one.
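That last point can be sketched numerically (my own toy example, all numbers invented): control a perceivable proxy whose relation to the hidden variable is supplied by a model. When the model's assumed coupling is wrong, the hidden variable misses its reference by exactly the model's error.

```python
# Toy "model-based" control through a proxy: we can perceive `proxy` but
# want the unperceived `hidden` (= true_coupling * proxy) at ref_hidden.
# The model supplies an assumed coupling used to pick the proxy reference.
def proxy_control(true_coupling, assumed_coupling, steps=300, ref_hidden=10.0):
    out = 0.0
    hidden = 0.0
    for _ in range(steps):
        proxy = out                                 # the perceivable variable
        hidden = true_coupling * proxy              # the variable actually wanted
        proxy_ref = ref_hidden / assumed_coupling   # model: hidden = assumed * proxy
        out += 0.3 * (proxy_ref - proxy)            # ordinary control of the proxy
    return hidden

print(round(proxy_control(2.0, 2.0), 2))   # accurate model -> 10.0
print(round(proxy_control(2.0, 1.6), 2))   # mis-estimated coupling -> 12.5
```

The proxy itself is controlled perfectly in both runs; only the quality of the model decides whether the unperceived variable ends up where it was wanted.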

Does that clear up what I was talking about?

-bill