Basic PCT; IT predictions

<Bill Leach 01 Feb 1994 23:11:11

[From Bill Powers (940102.0925) --

Yes, I can do that. There will be times when I might not show up quite
so consistently as I have, but I still won't miss any messages.

My idea in that regard is that I will collect the submissions into a
file, attempt to correlate the data, request clarification where I think
necessary, and then post the results for critical review and iterate.

Such justification would be in terms of maximizing the ability of ...

I don't suppose that you could be correctly termed an "Objectivist",
could you? <this is not a challenge to what you said, just curiosity>

Compliance is not a term we have used in PCT.

How do you deal with "goodness of control"? That is, what handles the
case where the error signal cannot reach zero but can be "close"? It
seems to me that several "control system concepts" apply to PCT that I
have not seen discussed. These would include (off the top of my head,
and crossing disciplines for terms): compliance, reset rate, low
frequency hunt (under-control), and high frequency hunt (over-control),
among others.


-----------------

CONTROL LOOP

I don't know if this is really useful or not. I suspect that defining
the most simple control loop is useful, if for no one else than those,
such as myself, who are just beginning with PCT. I REALLY "blew it"
with that suggestion, in that I was thinking "elementary" control loop
but failed to mention that "elementary" was how I was viewing the term.

How about:
SIMPLE CONTROL LOOP - Single input signal, single reference signal, a
comparator and a single output signal.

CONTROL LOOP (simple or complex) - Input function, comparator, and output
function.

COMPARATOR - function that produces an output signal based upon the
relationship between an input signal and a reference signal.

and I'll add input/output functions... (it's getting late)
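
Here is a minimal sketch of that SIMPLE CONTROL LOOP in Python, just to
pin the terms down (a toy construction: the gain and slowing values are
arbitrary illustration numbers, not canonical PCT parameters):

def input_function(environment_value):
    # Input function: here a pass-through yielding the perceptual signal.
    return environment_value

def comparator(reference, perception):
    # Comparator: signal based on the relation of reference to input.
    return reference - perception

def output_function(error, output, gain=50.0, slowing=0.02):
    # Leaky-integrator output function, a common choice in control demos.
    return output + slowing * (gain * error - output)

output, disturbance, reference = 0.0, 10.0, 5.0
for step in range(200):
    perception = input_function(output + disturbance)
    error = comparator(reference, perception)
    output = output_function(error, output)

print(perception)  # settles near the reference (5.0) despite the disturbance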

---------------
CONTROLLED VARIABLE -- no problem, that clears up my understanding.

REFERENCE (signal & level) -- OK, I need to cogitate a bit on that :-)

REORGANIZATION and ADAPTATION -- I'll look those over again too but I
admit that while there are vague areas of "discomfort" I generally feel
much better about the sense in which the term "reorganization" is used.

---------------
Survival
Thanks, I need that! Reminded me of one of my favorite expressions:
"If you think that you are having a bad day, just try skipping a couple."
---------------

Jim's reflexes

Thanks, I thought maybe I had that right and even ventured to say so.
---------------
Thanks again and if no one has objections, I'll collect "definitions" (I
do have the initial set already).

-bill

[From Bill Powers (940102.0925) --

Bill Leach (940131.1901, --.2055) --

Glad you're pushing on the dictionary idea. Do you want to be the
official collector? (hand the broom to the new guy).


---------------
PCT -- Perceptual Control Theory. A theory system that attempts
to explain ALL manifestations of human behaviour.

So far we can't explain consciousness or memory as control
phenomena, and _interactions_ among organisms can't be explained
by control theory because they're not constrained to include only
negative feedback loops. Perhaps:

A theory that attempts to explain all control processes within an
individual organism and in that organism's relationship to its
environment.

It may be arguable that PCT "justifies" a particular moral
standard or code; however, that is incidental to PCT. Such
"justification" would be a function of some common behavioral
characteristics identified by PCT, and not of PCT itself.

I would change the last sentence to read:

Such justification would be in terms of maximizing the ability of
the organism to carry out the basic control functions required
for its survival or the survival of its species.

And I would add:

Moral behavior is a control process and can be described within
PCT. Moral codes can be seen as reference conditions, with
perceptions of morally-relevant happenings being compared with
the reference condition and the error resulting in corrections of
behavior to bring the results closer to the moral standard. This
analysis is independent of what the moral standard is. A more
general term for this level of control is control of principles.

COMPLIANCE -- A higher level control system is satisfied with
the degree of match between the PERCEPTION and the REFERENCE
for a lower level control system.

As the model is currently constructed, higher systems do not
perceive the error signals in lower systems. They perceive
variables that are functions of the perceptual signals (only) in
lower systems. The only sign a higher system gets that a lower
one is not controlling successfully is that the higher perception
fails to match the higher-level reference signal. It is possible
that a higher system can control successfully even if one lower
system fails; the reference signals for the remaining lower-level
systems will automatically adjust to make up the difference (this
is easy to demonstrate). Compliance is not a term we have used in
PCT.
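
The parenthetical "easy to demonstrate" can be made concrete with a
little Python (a toy construction; the gains are arbitrary
assumptions): a higher system that perceives only the SUM of two lower
perceptions keeps controlling when one lower system fails, because its
integrating output raises the surviving system's reference.

def lower_step(ref, out, gain=50.0, slowing=0.02):
    perception = out                      # lower environment: p tracks o
    error = ref - perception
    return out + slowing * (gain * error - out), perception

def run(broken, steps=3000, r_high=10.0, k_high=0.02):
    o_high = o1 = o2 = p1 = p2 = 0.0
    for _ in range(steps):
        # The higher system perceives a function of lower PERCEPTIONS only.
        o_high += k_high * (r_high - (p1 + p2))   # integrating output
        o1, p1 = lower_step(o_high, o1)
        if broken:
            o2, p2 = 0.0, 0.0                     # lower system 2 fails
        else:
            o2, p2 = lower_step(o_high, o2)
    return p1, p2

print(run(broken=False))  # roughly (5, 5): the sum matches r_high
print(run(broken=True))   # roughly (10, 0): system 1 makes up the difference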

CONTROL LOOP -- a control system involving only one input
variable, one output variable and one reference variable.

I would prefer to say "one perceptual signal," because the
perceptual signal may be a function of many lower-level (or
external) variables. It might be better to include the functions
in the system, too: input function, comparator, and output
function.

controlled variable -- the variable that is maintained in a
reference state in a control loop.

We generally used the term "controlled variable" to mean the
external counterpart of the perceptual signal. This is handy when
taking the external viewpoint toward another control system, as
in constructing simulations or observing the behavior of others.
The term "controlled perception" is used in speaking from a
viewpoint inside the system; then there is no need to state what,
in the environment (if anything), corresponds to that perception.

In this connection, we need to make a similar distinction about
reference signals:

REFERENCE SIGNAL -- the variable (or signal) that represents
the desired control state. This signal may be an output signal
from another (higher level) control system. Note that the
control system for which this signal is the reference CAN NOT
directly influence this signal. While consequences of this
particular control system's control action may result in
changes to the reference signal, those changes (if any) must
have been derived as the result of some other (higher level)
control system action.

A reference _signal_ is a physical signal inside the organism
against which a perceptual signal is compared. This is the proper
term when we are talking about the control system itself as if we
could see it working inside the organism.

A reference _level_, or state, or condition, is the externally
visible counterpart of the reference signal. We observe it as a
particular state of a controlled variable toward which the
actions of the system continually force the controlled variable.
The reference level of the controlled variable is not separately
observable; it is deduced from seeing the effects of disturbances
on the controlled variable. It can be formally defined as that
state of the controlled variable at which the output actions of
the system have zero effect on the controlled variable (i.e.,
when there is no attempt to change its state). These definitions
are designed to apply in experimental situations. Remember that
in applying PCT to real organisms, we do not start out by knowing
what the controlled variable is or what its reference level is.
We have to deduce that by applying disturbances and observing the
effects of the organism's actions on the disturbed variable. Only
after the fact of control has been established this way, and the
controlled variable and its (current) reference level have been
deduced, can we construct the model that is to explain the
observations.
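
For what it's worth, that experimental logic is easy to sketch in
Python (a toy construction with assumed numbers): the controller's
reference (3.0) is hidden from the "experimenter," who applies
disturbances, sees that the variable does not wander the way the
disturbance alone would make it, and estimates the reference level from
the state in which the variable is held.

import random

def controlled_system(disturbances, reference=3.0, gain=50.0, slowing=0.02):
    out, history = 0.0, []
    for d in disturbances:
        cv = out + d                      # controlled variable, external view
        out += slowing * (gain * (reference - cv) - out)
        history.append(cv)
    return history

random.seed(1)
d, v, disturbances = 0.0, 0.0, []
for _ in range(5000):                     # a smooth, wandering disturbance
    v = 0.95 * v + random.gauss(0.0, 0.2)
    d = 0.99 * d + v
    disturbances.append(d)
cv = controlled_system(disturbances)

# Uncontrolled, the variable would wander many units with the
# disturbance; controlled, it hovers near one state, and that state
# estimates the reference level.
print(sum(cv[100:]) / len(cv[100:]))      # close to 3.0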
------------------------------

REORGANIZATION -- is a term relating to a high level control
system and is the condition where, having lost control, the
system begins trying to find some control methodology that will
regain control. Note that for most human behaviour this is
likely not a completely random search, but if one considers how
an infant appears to behave, then one realizes that possibly it
could be.

This one requires more detail -- perhaps it can be boiled down.

There are several concepts of reorganization currently being
discussed on the net. The original idea was that a "reorganizing
system," separate from the hierarchy of control systems,
monitored certain critical variables in the body, compared them
with inherited reference signals, and converted the resulting
error signals into random changes of organization in the brain.
Unlike a high-level control system in the behavioral hierarchy,
the reorganizing system has direct access to all systems in the
hierarchy without regard to level. This principle turns out to be
very powerful, in that it can correct "intrinsic errors" almost
as efficiently as a systematic control process could, but can do
it without making any assumptions about regularities in the world
outside it.

By linking reorganization to the status of the whole body
(including biochemical systems), this concept accounts for a
number of things, particularly where the "value" of certain signals
comes from. Why does pain hurt? Why is pleasure nice? The answer
can only be that these signals are compared with built-in
reference signals, defining preferred values that are inherited
instead of learned (zero for pain signals, some large amount for
pleasure signals). This built-in value then allows these signals
to be used as criteria for the acceptability of any organization
of behavior. If a particular way of behaving results in a non-
zero pain signal, then the organization of the brain that is
producing that behavior should be altered. If a pleasure signal
fails to match its reference, again reorganization is triggered.

The ideas of pleasure and pain are generalized in the concept of
the reorganizing system. Some set of inherited neural or chemical
signals stands for the states of critical variables inside the
organism. When these variables depart from their specified
reference levels, high or low, reorganization begins at a rate
that depends on the amount of departure. Among these signals are
those we experience as pleasure and pain, but there must also be
many that represent other variables like body temperature, state
of nutrition, blood pH, and so forth -- variables critical to the
life support system which are affected by the organization of
behavior. The reorganizing system has no preference for any
particular organization of behavior -- but whatever organization
exists, it must maintain these intrinsic variables near their
reference states, or that organization will be reorganized away.
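
The rate-depends-on-error idea can be caricatured in a few lines of
Python (a toy construction, not a canonical PCT program; every number
here is an assumption): a single control system starts out organized
the wrong way, and random changes to its organization occur with a
probability that grows with intrinsic error. Once control succeeds, the
changes nearly stop.

import random

random.seed(2)
gain, out, reference = -20.0, 0.0, 5.0   # organized the WRONG way at first

for step in range(20000):
    error = reference - out              # perception = out, for simplicity
    out += 0.02 * (gain * error - out)
    out = max(-100.0, min(100.0, out))   # crude physiological limits
    # The larger the chronic error, the likelier a random reorganization:
    if random.random() < min(1.0, abs(error) / 200.0):
        gain += random.gauss(0.0, 5.0)

# Typically the gain has drifted to a value that controls, and with the
# error now small, further reorganization is rare.
print(round(gain, 1), round(out, 2))     # gain positive, out near 5.0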

Martin Taylor has noted that reorganization must also occur in
highly localized ways, for instance in establishing the stability
of control systems. I offer another term for this process, which
is "adaptation." This is a specific adjustment of a specific
control system to the properties of the environment with which it
interacts. This can occur through simple circuits contained
within a given control system. The operation of those circuits is
not part of the behavioral hierarchy; they simply take care of
housekeeping details and make the control systems perform better.

There are also _systematic_ ways of reorganizing, but these are
more properly considered as the behavior of learned higher-level
control systems, as most systematic methods must be learned. I
think of this as _algorithmic_ control, where the application of
some systematic behavior like a search strategy or elimination of
alternatives can result in control. That kind of behavior has to
be acquired under the influence of a true reorganizing system.

One of the main functions of the original concept of a
reorganizing system was to account for the initial growth of the
control hierarchy, before any systematic behavior at all was
possible.
----------------------------------------------

How does PCT deal with such basic issues as the fact that most
people have a drive to survive?

The drive to survive, I think, is a learned concept, and mostly
verbal at that. Survival is not a goal, but an outcome. If the
organism maintains all its critical intrinsic variables at their
inherited reference levels, it will probably survive at least as
long as the basic design is good for, which is far from forever.
It would be hard to imagine a control system with the goal of
survival. The reference signal would be "survive!". But what
would the perceptual signal be, if it did not match the reference
signal?

More likely is that we acquire goals which result in avoiding
pain, illness, starvation, suffocation, and so on. When such
things start to happen, the reorganizing system immediately goes
to work to reconstruct behavior until those experiences are
removed. They have built-in values. Intellectually, we might
classify all those experiences as threats to survival, but we
don't need that verbal classification to know that we don't like
those experiences. Nature has not given us any reference signal
defining survival; it simply defines the conditions that must be
maintained if survival is to result. Individual organisms can't
learn from failures to survive. Only a species can.

As to the urge to learn, that can also be learned -- consider the
perpetual graduate student. It can be nothing more impressive
than the choice of an occupation. At another level, learning is
simply part of how we are organized: if we could not learn, we
would not make it to adulthood.
-------------------------------------------------------------
Jim Dundon (940301.0026) --

Rick said:

When there are disturbances too large or too fast for the
system to deal with. If you are at ground zero during
a 6.6, you won't be standing for long.

And you said:

Right I would have lost control of that reference signal
but the automatic reflexes would take over, and set a new
reference perception of emergency reflexes.

Not so fast. There is no such thing as a "reflex." What were once
called reflexes are now called control systems. They are all part
of one hierarchy of control, or so sez PCT.

There is no way an emergency reaction can do any good if the
muscles are already generating their maximum possible output.
Consider two arm wrestlers. Each has a different goal for where
the two interlocked hands should be. It is impossible for the
hands to be at both goal positions at once. Both control systems
are experiencing large -- maximum -- errors. Those errors are
producing the maximum possible muscle forces.

If the two wrestlers are evenly matched, then both of them have
lost control of their hand positions. An onlooker could reach out
and push the two hands either way, with hardly any resistance.
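
A quick Python rendition of the arm wrestlers (illustrative numbers
only; the saturation limit stands in for maximum muscle force):

def simulate(push, refs, gain=50.0, slowing=0.015, max_force=50.0,
             steps=2000):
    outs = [0.0 for _ in refs]
    for _ in range(steps):
        cv = sum(outs) + push            # the shared "hand position"
        for i, r in enumerate(refs):
            o = outs[i] + slowing * (gain * (r - cv) - outs[i])
            outs[i] = max(-max_force, min(max_force, o))  # muscles saturate
    return cv

print(simulate(push=0.0, refs=[5.0]))        # ~5: one system alone controls
print(simulate(push=2.0, refs=[5.0]))        # ~5 still: the push is resisted
print(simulate(push=0.0, refs=[5.0, -5.0]))  # ~0: maximum outputs cancel
print(simulate(push=2.0, refs=[5.0, -5.0]))  # ~2: the onlooker moves it freely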

Inside a single person, there can be conflict between different
control systems that are active at the same time. One goal is
"don't be a wimp," the other is "don't be a boor." So should you
tap the girl on the shoulder and ask her to shut up until the
movie is over? One control system sets a positive reference level
for the act, and the other, at the same time, would consider
doing the same act a large error. So which do you do? Neither.
The reference signals cancel. You can't satisfy either goal, so
you have lost control. Read BCP on conflict.
--------------------------------

IS THERE AN ULTIMATE REFERENCE SIGNAL AND IF SO, WHAT IS IT?

The highest level in the currently-proposed hierarchy is control
of "system concepts." These are perceptions of whole organized
entities like a person, a self, a government, a society, a
science, a religion, and so forth. There is not just one of them;
there are many. I do not know where the reference signals come
from. They might arise strictly through reorganization. They
might be like an average of experience with perceived system
concepts. They might be selected at random. There might be
another level of organization that sets them, to serve goals of a
kind I haven't been able to identify or imagine.

As to whether they include "survival", see my answer to Bill
Leach's comments above. The answer is no. Human goals are not
imposed from outside, nor are they objective factors. They are
simply states of previously-experienced perceptions that have
been selected to be maintained or brought about again, for the
individual's own benefit. Nobody fears death and seeks survival.
We fear the _details_ of dying, and seek the _details_ of living.
We can't fear something we have never experienced and can't
imagine. We can't control for something which, by definition, we
can experience only in one state.

Others of your questions might be answered in the above post to
Bill Leach.
---------------------------------------------------------------
Martin Taylor (940131.1320) --

I think what Rick is kvetching about is that your prediction is
being made under unnecessarily restricted conditions. In PCT we
don't try to predict the parameters of a model; all we predict is
that for some set of parameters we will get the model to behave
like the real system, and that the same model with the same
parameters will then continue to predict behavior over repeated
trials and under certain changes in conditions. To apply IT, you
would be permitted to state some general parameters, and propose
that a measure of behavior in informational terms would remain
stable over trials given the optimal set of parameters found
through previous trials.

Also, you should really state what outcome would show that your
IT analysis is wrong. If we could find NO set of parameters that
would allow the same model to predict behavior, or if we changed
conditions in a way that does not affect the model but turns out
to have an effect on real behavior (or vice versa), we would
conclude that the control model is wrong. The criterion for
prediction is that the model should reproduce the real behavior
with an RMS error of no more than, say, 10%, although we might
accept a little more error or require a little less (depending on
what we had for breakfast). What is your criterion for
prediction?
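
For concreteness, here is the whole procedure in Python with made-up
"subject" data standing in for a real tracking record (everything here
is an assumption for illustration): fit the model's two parameters on
one run, then demand that the SAME parameters predict a fresh run, and
score the prediction as RMS model-subject error relative to the RMS
excursion of the subject's handle.

import math, random

def model_run(disturbance, gain, slowing):
    out, trace = 0.0, []
    for d in disturbance:
        out += slowing * (gain * (0.0 - (out + d)) - out)   # reference = 0
        trace.append(out)
    return trace

def rms(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def smooth_disturbance(n, rng):
    d, v, series = 0.0, 0.0, []
    for _ in range(n):
        v = 0.95 * v + rng.gauss(0.0, 0.3)
        d = 0.99 * d + v
        series.append(d)
    return series

rng = random.Random(3)
d1 = smooth_disturbance(3000, rng)
real1 = [h + rng.gauss(0, 0.1) for h in model_run(d1, 40.0, 0.025)]

# Grid-search for the parameters that best reproduce run 1:
_, g_fit, s_fit = min((rms(model_run(d1, g, s), real1), g, s)
                      for g in range(10, 61, 10)
                      for s in (0.01, 0.015, 0.02, 0.025))

# The test: same parameters, new disturbance, new run.
d2 = smooth_disturbance(3000, rng)
real2 = [h + rng.gauss(0, 0.1) for h in model_run(d2, 40.0, 0.025)]
print(rms(model_run(d2, g_fit, s_fit), real2)
      / rms(real2, [0.0] * len(real2)))      # typically well under 0.10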

By the way, you and Chuck Tucker are really escalating the bets:
Chuck Tucker by demanding that we predict within 0.9995, and you
by adding a few more nines. I have said that I consider a
_correlation_ of 0.95 to be a reasonable sort of number, although
not ideal, and 0.8 to be unacceptably low for establishing a fact
that we're supposed to make future use of. I have pointed out
that facts with a probability of truth of 0.95 are not useful in
extended reasoning that depends on many facts being
simultaneously true. I don't see what there is to complain about
there, because that's just a calculation. If the probability of
truth of n facts is 0.95, what is the maximum n for which a
conclusion that requires the truth of all the facts has a chance
of being right that is greater than 0.5? Don't bitch at _me_
about the answer, or ask me to make an exception for a good
cause. That's just how it works.
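
(For the record, the arithmetic, assuming the n facts are independent:
the conjunction beats even odds only while 0.95^n > 0.5.)

import math
print(math.log(0.5) / math.log(0.95))    # about 13.5, so n = 13 at most
print(0.95 ** 13, 0.95 ** 14)            # about 0.513 and 0.488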

I have three papers to finish before the end of June (for Joslyn,
ECSG, and you), and one by the end of July (CSG) so I will have
limited free time until they're done. HOWEVER, if I can find the
time I will program the first four of your cases using the same
format as for the "sleep" program, so you can use the Gaussian
disturbances you already have. From there on it's up to you.
--------------------------------------------------------------
Best to all,

Bill P.

<Martin Taylor 940202 12:30>

Expect only sporadic communication from me over the next several weeks.
There is an ongoing major sleep deprivation study with which I am involved
(including running some experiments programmed by Bill Powers and Tom
Bourbon). I was on duty as an experimenter until a bit after midnight
last night, and am on again starting at midnight tonight. I don't feel
up to much intellectual discourse now. These sessions happen on alternate
weeks, with the intermediate weeks being taken up by experimenter recovery
and analysis of the data to show whether everything is still running
properly. So there will be little time or energy left for fun things.

Having said that...

Bill Powers (940102.0925) (I think 940202)

Bill Leach (940131.1901, --.2055) --

Glad you're pushing on the dictionary idea. Do you want to be the
official collector? (hand the broom to the new guy).

Whoever does the collection, I'd really like to get the result formalized
for the IJMMS issue. I'll try to keep track as it goes on, but for reasons
stated above, I won't be reliable.


=================================

Martin Taylor (940131.1320)

I think what Rick is kvetching about is that your prediction is
being made under unnecessarily restricted conditions.

I see it differently (as you might imagine). Rick is asking for a
reconstruction of the disturbance waveform, and treating that as the
ONLY satisfactory demonstration that the perceptual signal contains
information about the disturbance. Since Rick has said that he would
accept a reconstruction as a suitable demonstration, I proposed conditions
that would allow it to be done, which he accepted. When I did it, these
conditions no longer were satisfactory. Fine. I have said what is
necessary for a reconstruction to be exact, which is a finite amount
of prior information, plus a perceptual signal that goes on indefinitely.
If the feedback relation (including the output function) changes, then
reconstruction is not exact, and any reconstruction needs ongoing
information input relating to the changing form of the output->CEV
transform.

To me, this reconstruction theme has always been something of a public
relations exercise, far divorced from the issue of the information flows
in the control system. It's another of the many red herrings, but of some
value because some people don't seem to recognize that there is a difference
between knowing exactly what something is and knowing something about it.
It should be clear that if you know exactly what something is, you do
know something about it, but the reverse need not hold. I participated in
the reconstruction game because it seemed likely to be convincing, and I
think that the original demonstration should have shown the point
incontrovertibly. It didn't, because of some preconceptions that I still
haven't fathomed properly, so I kind of lost interest in continuing the game.

In PCT we
don't try to predict the parameters of a model; all we predict is
that for some set of parameters we will get the model to behave
like the real system, and that the same model with the same
parameters will then continue to predict behavior over repeated
trials and under certain changes in conditions.

Yes, it would be nice if you could post what those changes in conditions
are. We know now, for example, that they don't include changes in the
statistics of the disturbance waveform, whereas if the parameters are
a function of the control system itself, the disturbance waveform
shouldn't matter. The IT approach suggests why it does matter. A
statement as to what changes in conditions will not lead to changes
in model parameters would be a valuable document.

To apply IT, you
would be permitted to state some general parameters, and propose
that a measure of behavior in informational terms would remain
stable over trials given the optimal set of parameters found
through previous trials.

Also, you should really state what outcome would show that your
IT analysis is wrong.

Those are two very fair comments. Let's consider the second one.
One of the notions is that there is a loop, right? Within that loop,
there will be an informational bottleneck, which will determine the
quality of control that can be achieved--i.e. the uncertainty of the
perceptual signal value given the reference value. If the quality
of control remains unchanged when the information rate at the bottleneck
is changed, that would indicate a wrong analysis.

Where could the bottleneck be? I think it would normally be in the
output side, perhaps not in the output function, but more probably in
the effector-to-CEV link. But if the disturbance had a low enough
bandwidth, or the perceptual function a low enough resolution (such as
in moonlight, for example), that's where the bottleneck would be.
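
One illustrative sketch in Python (a toy construction, not a worked-out
IT analysis): throttle the output side of a tracking loop, as a crude
stand-in for an informational bottleneck in the effector-to-CEV link,
and the quality of control falls as the bottleneck tightens.

import math, random

def control_error(output_every, steps=5000, gain=50.0, slowing=0.02):
    rng = random.Random(4)               # same disturbance for every run
    out = d = v = sq = 0.0
    for t in range(steps):
        v = 0.95 * v + rng.gauss(0.0, 0.2)
        d = 0.99 * d + v                 # smooth disturbance
        cv = out + d
        if t % output_every == 0:        # the bottleneck: fewer corrections
            out += slowing * (gain * (0.0 - cv) - out)
        sq += cv * cv
    return math.sqrt(sq / steps)

for k in (1, 5, 25, 125):
    print(k, control_error(output_every=k))   # RMS error grows with k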

The first of your two comments comes in here. If the IT analysis is
right, one should be able to set up conditions for two very different
perceptual control tasks in such a way that the information rates in
the two tasks had known relationships, and from them one should be able
to relate the relative quality of control in the two tasks. I have
not done that, and it would be a neat thing to try to do.

I've made a few predictions. If they fail, then at least my intuitive
analysis is wrong. I'll try to firm them up by working through the
equations. Whatever predictions emerge from that would be a stronger
test of the approach.

What is your criterion for prediction?

Depends on the circumstances. For one set of predictions, based on your
model and a sawtooth disturbance, the predictions at present are only
qualitative--this parameter or that of the model will increase or decrease.
For the other set, the prediction is a fan of linear trends, though as I
said, I anticipate a steady deviation from the linear trend because of
harmonic generation. That deviation should not be too bad. As to whether
a 1% or a 10% error is acceptable, the point right now is whether there is
any such effect at all, because in both cases either the straight and
simple one-ECS model predicts no effect, or we have had strong claims that
"PCT says there is no effect."

What I see as the usefulness of information theory is not so much an
ability to make more precise quantitative predictions than a signal
processing approach does. That should rarely be possible, as I have
argued many times before. Its usefulness is to see factors that are
important and that are not easy to see when you use the more detailed
analysis procedures.

By the way, you and Chuck Tucker are really escalating the bets:
Chuck Tucker by demanding that we predict within 0.9995, and you
by adding a few more nines. I have said that I consider a
_correlation_ of 0.95 to be a reasonable sort of number, although
not ideal, and 0.8 to be unacceptably low for establishing a fact
that we're supposed to make future use of.

Yes, you are much more reasonable. If I did not communicate with Rick,
I presumably wouldn't indulge in the kind of rhetorical escalation that
I do. But you know well the reasons behind it--conflicted control systems
escalate their output. Claims of utter perfection tend to lead to equally
ridiculous counter-statements, to nobody's benefit. Somehow or other, you
usually seem able to stand aside from that kind of conflict. Luckily for
the rest of us.

Martin

<Bill Leach 02 Feb 1994 19:25:46

Martin Taylor 940202 12:30>

Whoever does the collection, ...

OK, I suspect that there should not be much of a problem seeing to
that, but I'll put a note at the beginning of my file to specifically
direct a "final" version to you once it has "passed" the net critique.

-bill