Behavioral Illusion

[From Rick Marken (930917.1100)]

Michael Fehling (930915 4:44 PM PDT) said--

In fact, p _seldom_ equals r, _dynamically_. If it did, one
wouldn't need the control loop in the first place.

I replied --

The dynamic variations of p about r are typically orders of magnitude
smaller than the potential dynamic range of either p or r. The control
loop keeps p virtually equal to r; the fact that p varies around r at
all represents failure of control; in a good control system the size of
the variations of p around r can be so small that they are undetectable
by the instruments used to measure p.

Dag Forssell (930916.2100) adds:

As usual, Rick, you immediately go to the idealized condition of (almost)
infinite loop gain, by drawing your conclusions from the simplified math
where all the quantities divided by the loop gain have been dropped. Many
of the heated arguments on this net have been based on this extreme and
oversimplified position.

I don't understand. Do you think that dynamic variations of p about r
are not orders of magnitude smaller than the potential range of p or r?
What makes you think that I "assumed" an idealized control system in my
comment? I just pointed out that the variations in p can be very, very
small and still there will be control.

A more balanced approach would be to point to an agreement with Michael, as
shown so clearly with the rubber bands, followed by an every-day example of
what you try to say.

Perhaps you are right, here. I certainly agree with Michael's statement
that "p _seldom_ equals r" -- and I didn't disagree with it -- but
maybe I should have emphasized how much I did agree with it. Yes, in
ANY control system, "p _seldom_ equals r". Upon arrival at work today
I found that I was privately scolded by Bill Powers also (all this
criticism would be quite crushing were I not a megalomaniac) for
always saying that in a control loop p = r. In fact, I write this
equation only because I can't type those little squiggly parallel
lines that mean "approximately equal" -- but this is always my intent.
So I apologize for any misunderstanding. From now on I will try to
remember to use a tilde (~) to mean "approximately equal"; so,
in a control loop p~r, with the approximation varying with loop gain.
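The point about loop gain can be illustrated with a small simulation. This is my own sketch, not from the original post: it assumes a trivial feedback function p = o, a sinusoidal reference, and arbitrary gain values. It just shows that the worst-case deviation of p from r shrinks as loop gain rises, while p still seldom equals r exactly.

```python
# Sketch (not from the original post): an integrating controller with
# the trivial feedback function p = o, tracking a sinusoidal reference.
# The gain values and step size here are arbitrary assumptions.
import math

def worst_error(gain, steps=2000, dt=0.01):
    """Largest |r - p| seen after the loop has settled."""
    o = 0.0
    worst = 0.0
    for i in range(steps):
        r = math.sin(i * dt)      # slowly varying reference
        p = o                     # perception = output (no disturbance)
        e = r - p
        o += gain * e * dt        # integrating output function
        if i > steps // 2:        # ignore the initial transient
            worst = max(worst, abs(e))
    return worst

low_gain = worst_error(5.0)
high_gain = worst_error(100.0)
print(low_gain, high_gain)  # deviations shrink as loop gain rises
```

In both runs p tracks r, but the size of the residual wobble is set by the loop gain, which is all the "idealized" claim amounts to.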

The reason I responded to Michael's statement above as I did (aside
from the fact that I'm a high gain control system) is because, as I
said in my post to Michael:

Your statement above suggests that you believe that the dynamic deviations
of p from the reference level, r, are what drive (cause?) the outputs of
the control system. That is, dynamic (temporal) variations in (r-p) are the
cause of the temporal variations in the outputs that affect p, keeping p
near r. Is this what you think is going on in a control loop?

I don't know if this is what Michael meant. Perhaps he didn't -- and
I should have ignored it. But it caught my attention because this
is the basic mistake made by the "other" control theorists -- the
psychologists who apply control theory to manual control. They assume
that the observed relationship between output and perceptual input
(or r-p) is a reflection of causal mechanisms in the organism that
transform input into output. PCT shows that this is an illusion --
the "behavioral illusion". The relationship between variations in
o and r-p depends on the feedback function (outside of the organism)
that relates o to p, not on the organism function that relates r-p to o.

Apparently this behavioral illusion is quite seductive because you
yourself (Dag) seem to have fallen for it. You say (in response to
my comment above):

This certainly is what I think. The temporal deviations of p from r create
an error signal e, which _contains information_ used by the output function
(again, see video script p. 11).

This part is basically true -- the error signal tells the output how
much to change, that's true, but these changes are being produced in
a loop; so the cause of the changes in o (e) is itself caused by the
changes it caused. In our simulations, these causes are propagated
around the loop by time integration(s). The result is that, in a
functioning control loop, o ~ g^-1(r-p) rather than o ~ f(r-p),
where g is the feedback function and f is the "organism function".

One way to reveal the behavioral illusion is by creating a situation
where there is NO relationship between p (or r-p) and o, even though
p is controlled (p~r). I just did this with my little Hypercard control
simulator. A scatter plot of temporal variations in (r-p) against
temporal variations in o looks like this:

      | x
      | x x
    o | x x
      | x x x
      |________
         r-p

Not all points are plotted, but this gives a representative picture
of the shape of the whole plot. The x's represent paired values of
(r-p) and o at different times. The relationship between r-p and o is
a cloud, not a function -- i.e., there is no causal relationship between
r-p and o. This result does not depend on making any special assumptions
about the control system -- it can be high or low gain, for example.
Nor do things look any better if you plot r-p values at time t against
o values at time t+tau (under the assumption that the lack of
relationship is due to lags or slowing). The important point about
the graph is that the SAME r-p value leads to quite different o
values at different times; for example, sometimes when (r-p) equals,
say, 10, o equals 20, and at other times when (r-p) equals 10, o equals 50.
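The cloud is easy to reproduce. The following is an assumed reconstruction, not Marken's Hypercard model: the disturbance waveform, gain, and step size are invented. It records paired (r-p, o) values from a simple integrating loop and shows that essentially identical error values occur with widely different outputs, even though control stays good throughout.

```python
# Assumed reconstruction (not the actual Hypercard model): disturbance,
# gain, and step size are invented for illustration.
import math

r = 0.0              # constant reference
o = 0.0              # output
k = 50.0             # loop gain
dt = 0.002
pairs = []           # recorded (r - p, o) samples
for i in range(20000):
    t = i * dt
    # two-frequency disturbance acting on the controlled perception
    d = 30.0 * math.sin(0.7 * t) + 10.0 * math.sin(1.9 * t)
    p = o + d        # feedback function: p depends on o and d
    e = r - p
    o += k * e * dt  # o := o + k*(r-p)*dt
    if i > 2500:     # discard the initial transient
        pairs.append((e, o))

errors = [abs(e) for e, _ in pairs]
# outputs observed while the error is essentially the same (near zero)
same_e_outputs = [o for e, o in pairs if abs(e) < 0.1]
print(max(errors))                                # error stays small
print(max(same_e_outputs) - min(same_e_outputs))  # yet o varies widely
```

The error never exceeds a small fraction of the output's range, yet the outputs recorded at nearly identical error values are spread across tens of units: a cloud, not a function.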

I did this simulation in response to your comment:

Rick, as I read your further argument in
this post, the best I can figure is that you write about some idealized
conception of a control system which takes an error signal as an instruction
to output any which way (which after trial and error proves successful). You
deny the obvious existence of a real, demonstrable control system in the
here and now, arguing instead for an ivory tower, unspecified function f()
with unreal properties, including the full effect of reorganization over a
long time period.

In fact, I write about real control systems that really work -- in
ivory towers or park benches. Note that there was no trial and error
in this simulation; this was just a plain vanilla, non-ivory tower
control system. Nevertheless, it produced the results above; in fact,
the cloud of points is an accurate representation of the varying
environmental (feedback) function relating o to p. There was no magic;
no mystery. This result was obtained from a model that computed o
deterministically from a single program statement:

o := o + k * (r-p) * dt

The only variables in this statement are o and p. Clearly, it is
the integration that makes it possible to have different o's on
different occasions with the same p. o is "caused" by (r-p) as
you suggest but it is also caused by "itself" -- that is, the
integrated effects of previous errors (r-p). The result of the
simulation shows that, when you measure temporal variations in
(r-p) and o as they vary dynamically in the control loop, the
relationship between these variables approximates the inverse of
the feedback function that transforms o into effects on p
rather than the system function that actually transforms (r-p)
into effects on o.
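This inverse-feedback-function result can be checked directly with the one-line control law above. In this sketch the parameters are assumed, and the feedback function is taken to be linear, p = g*o + d. The steady-state output plotted against the disturbance then has slope -1/g -- the inverse of the feedback function -- and reveals nothing about the integrator that actually generates o.

```python
# Sketch with assumed parameters: the one-line control law run against
# fixed disturbance values, with a linear feedback function p = g*o + d.
def settle(d, g=0.5, r=0.0, k=10.0, dt=0.01, steps=5000):
    """Run o := o + k*(r-p)*dt to steady state for a fixed disturbance d."""
    o = 0.0
    for _ in range(steps):
        p = g * o + d          # environmental feedback function
        o += k * (r - p) * dt  # the single program statement
    return o

outputs = [settle(d) for d in (-10.0, 0.0, 10.0)]
slope = (outputs[2] - outputs[0]) / 20.0
print(slope)  # about -1/g = -2: the inverse of the feedback function
```

Change g and the observed "S-R law" changes with it, while the organism function (the integrator and its gain k) stays fixed and invisible.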

What does this all mean? It means that, philosophy aside, the
observed relationship between p or r-p and o reveals nothing
about the nature of whatever causal processes lead from p
(or r-p) to o. It is important to me to try to get this across
because it is the basis for my claim (based on PCT) that traditional
methodology in psychology cannot possibly reveal (except by chance)
anything about the properties of the organism that are responsible
for the organism's observed behavior (behavior being actions
or results of action -- o or q.i). I think that psychologists
will continue to pursue this fruitless methodological course
(even if they end up liking PCT) until they finally grasp
"the behavioral illusion". As long as there is a thread of hope
that there is some degree of lineal causal dependence of
organismic outputs on perceptual inputs (even if it is only
thought to occur or to be noticeable when the deviations of
p from r are large) there will be an inclination to continue on
the hopeless course of traditional psychological methodology
(because it is familiar and institutionalized).

It also agrees with the PCT lesson that people resist disturbances.

Mea culpa.

Please consider applying your understanding of PCT and reconsider your reply
to Michael.

My reply to Michael was a question: "Is this what you think is going
on in a control loop?" I was (and am) seeking understanding; I'm
trying to understand what Michael is saying.

I am kind of surprised by your reaction to my post (Marken [920915.2330])
to Michael. All I was trying to do was discuss the "behavioral illusion".
I took the liberty of doing this because I think it is a rather
significant component of the PCT approach to understanding behavior.
The behavioral illusion was discovered mainly because Bill P. got the
relationship between a control architecture and a living system's
architecture correct. The behavioral illusion could have been discovered
by the "other" manual control theorists (and rescued psychology long
ago from its deathly addiction to IV-DV research). It wasn't, simply
because they got the mapping of control theory to living systems wrong
(they had the control theory part right but they put r in the environment;
small difference, big consequence). So discovery of the "behavioral
illusion" distinguishes PCT from other applications of control theory
to behavior -- and I thought it was worth discussing it again on
the net.

Best from the ivory tower

Rick

[From Rick Marken (930920.1300)]

Bill Powers (930920.1130 MDT)--

The "behavioral illusion" that I have talked about is the
relationship between the disturbing variable and the output
action of the control system, not the one between perception and
output (the perception is invisible to the onlooker).

I know. See my (930919.2100) post to Dag. In the model, however,
you can observe the relationship between perception and output --
and it is also "illusory".

The more S-R connections we can show are really illusions, the
more justification there is for assuming that the next apparent
S-R connection we see will also be an illusion. But that sort of
extrapolation from past experience doesn't justify saying that
ALL apparent S-R connections are illusions.

No one said that "ALL apparent S-R connections are illusions". I said that,
in a negative feedback system (where a perception is known to be
controlled), the relationship between disturbance and output (and between
perception and output, for that matter) reflects the feedback function,
not the system function.

What we want, I think, is not to convince believers in S-R
explanations that they are full of hogwash, but only that there
is a viable alternative and a way of choosing between the
alternatives.

I have NEVER suggested that either S-R explanations or those who believe
in them are full of hogwash. Dag (and you, for that matter) seem to be
taking me to task for saying that an S-R explanation of CONTROL would
be hogwash. If you think an S-R explanation of CONTROL would not be
hogwash then I would really appreciate an explanation of why not.

If the method for determining the presence of a
controlled variable can be taught and accepted as a valid method,
that is all we need.

That sounds great but how are you going to get people to accept
"the test" as a valid method if you don't explain why it might not
be a waste of time? I don't see how you can make a case for "the test"
without explaining how control works and why the behavioral illusion
might exist. My experience is that people will accept the idea that
organisms control their perceptions but they will continue to see
control as an input-output (perception causes behavior) process; thus,
they see no need to go to the trouble of adding "the test" to their
existing repertoire of approaches to understanding behavior.

If, as we have reason to suppose, nearly all
apparent S-R connections actually involve controlled variables,
then even an S-R enthusiast can apply the test to see whose
convictions are the more firmly grounded, in any particular
instance of behavior.

An S-R enthusiast who doesn't feel, in his or her bones, the POSSIBILITY
that control is NOT an S-R (input-output) process is an S-R enthusiast
who will never waste time with "the test" -- at least, that is my belief
(and my experience). Just look at the manual control theorists; they're
not doing "the test" even though they are looking at controlled variables
day in and day out. Why? I think (based on my conversations with these
people) that they are firmly convinced that the input variable (the objective
discrepancy between target and cursor) is what causes the outputs they
measure. Even if you point out the possibility of adjustable references
or the fact that what subjects perceive is always influenced by what they
do, the input-output view of the process wins out; back to IV-DV research.

I don't think you'll convince anyone to invest the time and care needed
to perform "the test" properly (or take seriously someone else's results
from "the test") until you convince them that observed input MAY REALLY
NOT be the cause of output -- even when it looks that way (the behavioral
illusion). That's why I keep hammering on this point. In fact, the
ONLY people who are doing PCT research ("the test") are the small number
of people in the world who understand that input (in a control system) is
NOT a cause of output. This is why I think that, if you really want
some S-R enthusiasts to start trying "the test" (if only for fun) you've
got to convince them that there is the REAL possibility that the
input-output relationships they are observing using conventional
methodology are illusory.

Best

Rick

[From Bill Powers (971202.1535 MST)]

Bruce Abbott says that he is fully familiar with the behavioral illusion,
but there may be a few others who have not seen it demonstrated in its full
glory. So I've written a little program that shows how the relationship
between the disturbance and the output of the control system can change
when the environmental feedback function changes. The executable program
(on PCs) is attached, along with the source code, as ILLUSION.EXE and
ILLUSION.PAS. The "mouse" and "grutils" units are used; if anyone needs the
source code for those units (I've posted them several times but things get
lost) I'll post them again.

There are five environmental feedback functions used in this demo. In each
case the control system is exactly the same, with no changes in parameters.
The five feedback functions are

case fb of
  '1': v := d + o;
  '2': if o > 0 then v := d + 0.3*o else v := d + 3*o;
  '3': v := d + 1e-4*o*o*o - 0.0001* o;
  '4': v := d + 5e-5*o*o*o - o;
  '5': v := d + 40.0*sin(o/20.0) + 1.0*o;
end;

"v" is the controlled variable, and "o" is the output. As you can see, the
output affects the controlled variable in five different ways.

The disturbance is adjusted by moving the mouse from side to side. The
feedback function is selected by typing '1' through '5'. The program plots
output against disturbance, and also shows the error signal on the same
scale, in red, so you can see that the control system is still controlling.

Numbers 3 and 4 are interesting in that they both use a cubic form in the
feedback function. In number 4, the cubic actually reverses direction when
the output is near zero, so the sign of the feedback reverses (as described
in the "Spadework" article). The control system simply skips past the
region of positive feedback until the feedback is negative again, and goes
right on controlling. The result is a two-valued relationship between
disturbance and output.

In number 5, a sine function is involved so there are six double-valued
regions observable in the relationship.

I emphasize that the control system is EXACTLY the same in all five cases.
Yet you get five obviously different relationships when you treat the
disturbance as the independent variable and the output as the dependent
variable -- none of which resemble the control system's forward function,
which is a pure time integrator.
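For readers without a Pascal compiler, the core of the demo can be sketched in Python. This is a loose port under assumed scaling and step counts, not the original program: the identical integrating controller is run to steady state against feedback functions 1 and 2, and the apparent disturbance-output "law" changes with the environment, not with the controller.

```python
# Loose Python port of the demo's core loop (scaling and step counts
# are assumptions, not taken from ILLUSION.PAS).
def steady_output(d, fb, r=0.0, steps=4000):
    """Steady-state output for disturbance d under feedback function fb."""
    o = 0.0
    v = d                            # controlled variable
    for _ in range(steps):
        p = v                        # perception = controlled variable
        e = r - p
        o += 0.3 * e                 # identical output function in all cases
        if fb == 1:
            v = d + o                # feedback function 1
        else:                        # feedback function 2 (kinked)
            v = d + (0.3 * o if o > 0 else 3.0 * o)
    return o

# Same controller, two different apparent disturbance-output "laws":
for d in (-6.0, 6.0):
    print(d, steady_output(d, 1), steady_output(d, 2))
```

Under function 1 the output simply mirrors the disturbance; under function 2 the very same controller produces a kinked, asymmetric relationship, because what the plot reveals is the (inverted) feedback function.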

By the way, another of those wonderful mythical generalizations about
control systems is, "control systems work only in linear environments,
while real environments are nonlinear." The nonsense people will spout to
make it seem that they know more than they do!

Have fun.

Best,

Bill P.
program illusion;

uses dos,crt,graph, grutils,mouse;

var d,v,p,e,r,o: real;
    fb: char;
    first: boolean;

procedure controlsys;
begin
{ one iteration of the control loop: perceive, compare, act }
p := v;                        { perception = controlled variable }
e := r - p;                    { error = reference - perception }
o := o + 0.3*e;                { integrating output function }
case fb of                     { environmental feedback function }
  '1': v := d + o;
  '2': if o > 0 then v := d + 0.3*o else v := d + 3*o;
  '3': v := d + 1e-4*o*o*o - 0.0001*o;
  '4': v := d + 5e-5*o*o*o - o;
  '5': v := d + 40.0*sin(o/20.0) + 1.0*o;
end;
end;

procedure axes;
begin
line(0,240,639,240);
line(320,0,320,479);
end;

procedure legends;
begin
setcolor(white);
outtextxy(0,0,'FEEDBACK FUNCTION ' + fb);
setcolor(lightred);
outtextxy(300,0,'ERROR');
setcolor(lightgreen);
outtextxy(300,15,'OUTPUT');
setcolor(white);
outtextxy(0,220,'DISTURBANCE <--->');
outtextxy(0,420,'MOVE MOUSE SIDEWAYS TO ADJUST DISTURBANCE');
outtextxy(0,435,'TYPE 1 TO 5 TO SELECT FEEDBACK FUNCTION');
outtextxy(0,450,'TYPE "q" to quit');
end;

procedure showline;
const oldd: integer = 0;
begin
line(320 + oldd,220,320+oldd,235);
oldd := round(d);
end;

begin
initgraphics;
clearviewport;
setwritemode(XORPut);
fb := '1';
legends;
axes;
d := 0.0;
first := true;
repeat
  readmouse;
  d := d + 0.03*(mousex div 3 - d);
  if keypressed then
   begin
    fb := readkey;
    clearviewport;
    axes;
    legends;
    showline;
    d := 0.1;
   end;

   if first then first := false
   else showline;
   controlsys;
   putpixel(320 + round(d), 240 - round(o),lightgreen);
   putpixel(320 + round(d), 240 - round(e),lightred);
   showline;
   delay(1);
until fb = 'q';
end.
