[From Rick Marken (930917.1100)]
Michael Fehling (930915 4:44 PM PDT) said--
In fact, p _seldom_ equals r, _dynamically_. If it did, one
wouldn't need the control loop in the first place.
I replied --
The dynamic variations of p about r are typically orders of magnitude
smaller than the potential dynamic range of either p or r. The control
loop keeps p virtually equal to r; the fact that p varies around r at
all represents failure of control; in a good control system the size of
the variations of p around r can be so small that they are undetectable
by the instruments used to measure p.
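To put a number on "virtually equal", here is a minimal Python sketch (mine, not from the original exchange; the gain, step size, and disturbance are assumed values) of an integrating control loop whose perception p stays within a few hundredths of r while the disturbance sweeps over 100 units:

```python
# Minimal control-loop sketch (hypothetical parameter values).
dt = 0.01    # integration step
k = 50.0     # loop gain (assumed)
r = 10.0     # reference signal
o = 0.0      # output quantity
max_err = 0.0
for step in range(10_000):
    d = 100.0 * step / 10_000      # disturbance sweeps from 0 to 100
    p = o + d                      # perception = output + disturbance
    o += k * (r - p) * dt          # integrating output function
    if step > 100:                 # ignore the start-up transient
        max_err = max(max_err, abs(r - p))
print(max_err)   # a few hundredths, vs. a 100-unit disturbance sweep
```

The dynamic variation of p about r (max_err) is orders of magnitude smaller than the range over which the disturbance pushes p's potential value.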
Dag Forssell (930916.2100) adds:
As usual, Rick, you immediately go to the idealized condition of (almost)
infinite loop gain, by drawing your conclusions from the simplified math
where all the quantities divided by the loop gain have been dropped. Many
of the heated arguments on this net have been based on this extreme and
oversimplified position.
I don't understand. Do you think that dynamic variations of p about r
are not orders of magnitude smaller than the potential range of p or r?
What makes you think that I "assumed" an idealized control system in my
comment? I just pointed out that the variations in p can be very, very
small and still there will be control.
A more balanced approach would be to point to an agreement with Michael, as
shown so clearly with the rubber bands, followed by an every-day example of
what you try to say.
Perhaps you are right, here. I certainly agree with Michael's statement
that "p _seldom_ equals r" -- and I didn't disagree with it -- but
maybe I should have emphasized how much I did agree with it. Yes, in
ANY control system, "p _seldom_ equals r". Upon arrival at work today
I found that I was privately scolded by Bill Powers also (all this
criticism would be quite crushing were I not a megalomaniac) for
always saying that in a control loop p = r. In fact, I write this
equation only because I can't type those little squiggly parallel
lines that mean "approximately equal" -- but this is always my intent.
So I apologize for any misunderstanding. From now on I will try to
remember to use a tilde (~) to mean "approximately equal"; so,
in a control loop p ~ r, with the closeness of the approximation
varying with loop gain.
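For a concrete sense of how the approximation depends on loop gain, here is a bit of my own algebra (consistent with the point above) for the simplest case, a purely proportional loop with o = k*(r-p) and p = o + d: solving gives p = (k*r + d)/(1+k), so the residual error r - p = (r - d)/(1+k) shrinks as the gain k grows.

```python
# Residual error of a purely proportional loop (my own worked example;
# the r and d values are arbitrary).  o = k*(r - p), p = o + d.
def residual_error(k, r=10.0, d=-40.0):
    p = (k * r + d) / (1 + k)   # closed-form solution of p = k*(r - p) + d
    return r - p                # equals (r - d) / (1 + k)

for k in (1, 10, 100, 1000):
    print(k, residual_error(k))  # 25.0, ~4.55, ~0.495, ~0.04995
```

With a gain of 1000 the error is under 0.1% of what it would be with no control at all; an integrating output function (as in the simulations below) drives the static error toward zero.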
The reason I responded to Michael's statement above as I did (aside
from the fact that I'm a high gain control system) is because, as I
said in my post to Michael:
Your statement above suggests that you believe that the dynamic deviations
of p from the reference level, r, are what drive (cause?) the outputs of
the control system. That is, dynamic (temporal) variations in (r-p) are the
cause of the temporal variations in the outputs that affect p, keeping p
near r. Is this what you think is going on in a control loop?
I don't know if this is what Michael meant. Perhaps he didn't -- and
I should have ignored it. But it caught my attention because this
is the basic mistake made by the "other" control theorists -- the
psychologists who apply control theory to manual control. They assume
that the observed relationship between output and perceptual input
(or r-p) is a reflection of causal mechanisms in the organism that
transform input into output. PCT shows that this is an illusion --
the "behavioral illusion". The relationship between variations in
o and r-p depends on the feedback function (outside of the organism)
that relates o to p, not on the organism function that relates r-p to o.
Apparently this behavioral illusion is quite seductive because you
yourself (Dag) seem to have fallen for it. You say (in response to
my comment above):
This certainly is what I think. The temporal deviations of p from r create
an error signal e, which _contains information_ used by the output function
(again, see video script p. 11).
This part is basically true -- the error signal does tell the output
how much to change -- but these changes are being produced in a loop;
so the cause of the changes in o (e) is itself caused by the changes
it caused. In our simulations, these causes are propagated around the
loop by time integration(s). The result is that, in a functioning
control loop, o ~ g^-1(r-p) rather than o ~ f(r-p), where g^-1 is the
inverse of the feedback function g and f is the "organism function".
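The inverse-of-g claim can be checked directly: make the feedback function nonlinear and watch where the output settles. This is my own sketch, not from the post; the cubic feedback function and all parameter values are assumptions.

```python
# Behavioral-illusion sketch with a nonlinear feedback function
# g(o) = o**3 (my choice).  The organism side is a plain integrator,
# yet the settled output traces the inverse of g (a cube root of the
# disturbance), revealing nothing about the organism function f.
def settle(d, k=10.0, dt=0.01, steps=50_000, r=0.0):
    o = 0.0
    for _ in range(steps):
        p = o ** 3 + d          # feedback function g plus disturbance
        o += k * (r - p) * dt   # integrating output function
    return o

for d in (-8.0, -1.0, 1.0, 8.0):
    print(d, settle(d))   # approx. 2, 1, -1, -2: the cube root of -d
```

Whatever form the output function takes (so long as the loop is stable), the observed d-to-o relationship is set by g, outside the organism.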
One way to reveal the behavioral illusion is by creating a situation
where there is NO relationship between p (or r-p) and o, even though
p is controlled (p~r). I just did this with my little Hypercard control
simulator. A scatter plot of temporal variations in (r-p) against
temporal variations in o looks like this:
      |           x
      |        x     x
    o |     x     x
      |  x     x     x
      |________________
             r-p
Not all points are plotted but this gives a representative picture
of the shape of the whole plot. The x's represent paired values of p
and o at different times. The relationship between r-p and o is a
cloud, not a function -- ie. there is no causal relationship between
r-p and o. This result does not depend on making any special assumptions
about the control system -- it can be high or low gain, for example.
Nor do things look any better if you plot r-p values at time t against
o values at time t+tau (under the assumption that the lack of
relationship is due to lags or slowing). The important point about
the graph is that the SAME (r-p) value leads to quite different o
values at different times; for example, sometimes when (r-p) equals,
say, 10, o equals 20, and at other times when (r-p) equals 10, o
equals 50.
I did this simulation in response to your comment:
Rick, as I read your further argument in
this post, the best I can figure is that you write about some idealized
conception of a control system which takes an error signal as an instruction
to output any which way (which after trial and error proves successful). You
deny the obvious existence of a real, demonstrable control system in the
here and now, arguing instead for an ivory tower, unspecified function f()
with unreal properties, including the full effect of reorganization over a
long time period
In fact, I write about real control systems that really work -- in
ivory towers or park benches. Note that there was no trial and error
in this simulation; this was just a plain vanilla, non-ivory tower
control system. Nevertheless, it produced the results above; in fact,
the cloud of points is an accurate representation of the varying
environmental (feedback) function relating o to p. There was no magic,
no mystery. This result was obtained from a model that computed o
deterministically from a single program statement:
o := o + k * (r-p) * dt
The only variables in this statement are o and p. Clearly, it is
the integration that makes it possible to have different o's on
different occasions with the same p. o is "caused" by (r-p) as
you suggest but it is also caused by "itself" -- that is, the
integrated effects of previous errors (r-p). The result of the
simulation shows that, when you measure temporal variations in
(r-p) and o as they vary dynamically in the control loop the
relationship between these variables approximates the inverse of
the feedback function that transforms o into effects on p
rather than the system function that actually transforms (r-p)
into effects on o.
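The scatter-plot result can be reproduced with a short sketch along these lines (mine, not the original Hypercard code; the two-sinusoid disturbance and all parameter values are assumptions). The error (r-p) stays under one unit while o swings over tens of units, and the correlation between the two comes out near zero:

```python
import math

# The same single-statement control loop, driven by a smoothly varying
# disturbance, with (r - p) and o logged so they can be correlated.
dt, k, r = 0.01, 50.0, 10.0
o = 0.0
errs, outs = [], []
for step in range(20_000):
    t = step * dt
    d = 30.0 * math.sin(0.5 * t) + 15.0 * math.sin(1.3 * t)
    p = o + d
    e = r - p
    o += k * e * dt             # o := o + k * (r-p) * dt
    if step > 500:              # drop the start-up transient
        errs.append(e)
        outs.append(o)

# Pearson correlation between (r - p) and o, computed by hand.
n = len(errs)
me, mo = sum(errs) / n, sum(outs) / n
cov = sum((a - me) * (b - mo) for a, b in zip(errs, outs))
sd = math.sqrt(sum((a - me) ** 2 for a in errs) *
               sum((b - mo) ** 2 for b in outs))
corr = cov / sd
print(corr)   # typically close to zero: a cloud, not a function
```

The temporal relationship between (r-p) and o is a cloud even though p is under tight control throughout the run.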
What does this all mean? It means that, philosophy aside, the
observed relationship between p or r-p and o reveals nothing
about the nature of whatever causal processes lead from p
(or r-p) to o. It is important to me to try to get this across
because it is the basis for my claim (based on PCT) that traditional
methodology in psychology cannot possibly reveal (except by chance)
anything about the properties of the organism that are responsible
for the organism's observed behavior (behavior being actions
or results of action -- o or q.i). I think that psychologists
will continue to pursue this fruitless methodological course
(even if they end up liking PCT) until they finally grasp
"the behavioral illusion". As long as there is a thread of hope
that there is some degree of lineal causal dependence of
organismic outputs on perceptual inputs (even if it is only
thought to occur or to be noticeable when the deviations of
p from r are large) there will be an inclination to continue on
the hopeless course of traditional psychological methodology
(because it is familiar and institutionalized).
It also agrees with the PCT lesson that people resist disturbances.
Mea culpa.
Please consider applying your understanding of PCT and reconsider your reply
to Michael.
My reply to Michael was a question: "Is this what you think is going
on in a control loop?" I was (and am) seeking understanding; I'm
trying to understand what Michael is saying.
I am kind of surprised by your reaction to my post (Marken [920915.2330])
to Michael. All I was trying to do was discuss the "behavioral illusion".
I took the liberty of doing this because I think it is a rather
significant component of the PCT approach to understanding behavior.
The behavioral illusion was discovered mainly because Bill P. got the
relationship between a control architecture and a living system's
architecture correct. The behavioral illusion could have been discovered
by the "other" manual control theorists (and rescued psychology long
ago from its deathly addiction to IV-DV research). It wasn't, simply
because they got the mapping of control theory to living systems wrong
(they had the control theory part right but they put r in the environment;
small difference, big consequence). So discovery of the "behavioral
illusion" distinguishes PCT from other applications of control theory
to behavior -- and I thought it was worth discussing it again on
the net.
Best from the ivory tower
Rick