Alternate descriptions

[From Bruce Abbott (980929.0820 EST)]

Bill Powers (980928.1532 MDT) --

Bruce Abbott (980927 or so)

My assertion concerned only Ramachandran and Blakeslee, who in my judgment
were quite correct in their description. As I suggested in my post, the
emphasis on controlled outputs probably has something to do with the fact
that these are what are observed, whereas a person's perceptions can only be
inferred.

Bruce, you surprise me. Are you still adhering to this false behaviorist
doctrine, that a scientist can only deal with what a scientist observes?

No. And never did. [Skinner didn't either, by the way.]

Are not a scientist's observations his own perceptions? Do you believe that
somehow a scientist is granted the privilege of knowing what is actually
out there in the environment, without having to observe his own fallible
human perceptual signals?

No. I believe that scientific theories are tested through systematic
observation and measurement, and that some things (e.g., the trajectory of
someone's arm) are easier to observe and measure than others (the person's
perceptions, neural currents). In tracking studies, you have always
measured where the person puts the cursor (observable behavior), not the
person's perceptual signals.

But that's not even the main mistake here. In fact, control systems do not
control their own outputs; they control only their own perceptual signals.
Their outputs vary with every disturbance, and so reflect primarily those
disturbances, not the actual controlled variables. Even if a scientist had
some magical way of observing the other system's true outputs, he would
still be observing the wrong aspect of behavior for understanding it.
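
A minimal simulation makes the point concrete (this sketch and all of its
constants are illustrative, not taken from any post; the leaky-integrator
output is just one simple way to close the loop). The perceptual signal
stays near the fixed reference while the output varies to mirror the
disturbance:

import math

gain = 50.0        # loop gain of the output function (illustrative)
slowing = 0.02     # fraction of the computed change applied per step
reference = 10.0   # fixed reference signal
output = 0.0

for t in range(2001):
    disturbance = 5.0 * math.sin(t / 100.0)
    cev = output + disturbance       # controlled environmental variable
    perception = cev                 # unity sensor function
    error = reference - perception
    output += slowing * (gain * error - output)   # leaky-integrator output
    if t % 400 == 0:
        print(f"t={t:4d}  disturbance={disturbance:+6.2f}  "
              f"output={output:+6.2f}  perception={perception:6.2f}")

At high loop gain the steady state is output = G(r - d)/(1 + G): roughly
the reference minus the disturbance, so the output waveform is essentially
the disturbance inverted, while the perception stays within a few percent
of the reference.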

Perhaps this isn't what you meant. But if it wasn't, your real point remains
obscure.

I thought I stated it plainly: the arm's movement follows (more or less,
depending on the magnitude of disturbances acting on the arm and the quality
of control) the person's intended arm movement (time-varying reference).
What's obscure about that?

Problems arise when different parties define their terms differently, and
these differences are not taken into account when one party interprets the
other party's words. I suspect that this is the case with Rick's
interpretation of Ramachandran and Blakeslee's writing, quoted by Bruce Gregory.

Here is essentially the standard PCT control-system diagram:

        PCT DIAGRAM
                                 reference
                                     |
                    perception       v        error
           [sensor]------------>[comparator]---------->[actuator]
              ^                                            |
              |                                            |
              +---------------------[CEV]<-----------------+
                     input            ^     output or action
                                      |
                                      |
                                 disturbance

In this diagram, the output (also called action) is what the actuator
produces; it tends to oppose the effects of the disturbance on the
environmental correlate of the controlled perception. The input represents
the current value of the CEV that affects the sensor, which transduces the
input signal into a neural signal, the perception. The intended state of
the perception is given by the reference.

A very different-looking diagram is usually presented by control-system
engineers and those who have adopted their terminology:

        CONVENTIONAL DIAGRAM (CLOSED LOOP)
                                                 disturbance
                                                      |
                           error            action    v
   input---->[comparator]-------->[actuator]------->[CEV]---+----> output
                  ^                                          |
                  |                                          |
                  +------------------------------[sensor]---+
                              feedback

In this diagram, the "input" to the system is PCT's reference signal; by
varying the input, one can "control" (determine, within limits) the value of
the output. What is labeled "output" on the diagram corresponds to PCT's
input: the environmental correlate of the PCT perceptual signal. In this
diagram, the PCT perceptual signal is labeled "feedback."

The system diagrammed above actually maintains its _feedback signal_ near
the input value, but the feedback signal is supposed to indicate the
measured state of the output, and so long as this measurement is good, the
output will track variations in the input. That is, the "output" is controlled.
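
A companion sketch in the conventional vocabulary (again with arbitrary,
illustrative constants): stepping the "input" makes the sensed CEV -- the
variable this diagram labels "output" -- track it, despite a constant
disturbance.

gain = 50.0
slowing = 0.02
action = 0.0
disturbance = 3.0                  # held constant for simplicity

for t in range(1501):
    inp = 0.0 if t < 500 else 8.0  # step change in the "input" (reference)
    cev = action + disturbance     # the variable labeled "output" here
    feedback = cev                 # unity sensor function
    error = inp - feedback
    action += slowing * (gain * error - action)
    if t % 250 == 0:
        print(f"t={t:4d}  input={inp:4.1f}  'output' (CEV)={cev:6.2f}")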

The phrase "control systems control their inputs, not their outputs" is
incorrect for this diagram. And I think that Ramachandran and Blakeslee
probably have this sort of diagram in mind when they refer to "control of
movements." They mean making the arm move as intended via control action.

The conventional control-system diagram is presented as it is to facilitate
comparison to what is called "open loop control":

        CONVENTIONAL DIAGRAM (OPEN LOOP)
                                               disturbance
                                                    |
                                          action    v
    input--------------------->[actuator]-------->[CEV]-------> output

Here the output follows the input, but only to the extent that disturbances
are small or are predictable and thus can be compensated for through
calibration. For example, if you want a motor to run at 2000 rpm, you can
set the input (reference) to 2000 rpm on the calibrated dial, and the motor
will run at 2000 rpm, so long as the load, supply voltage, etc. remain as
they were during calibration.
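
An illustrative sketch of the calibration argument (the motor law and all
constants below are invented for the example):

def motor_rpm(drive, load):
    # Invented motor law: speed falls off as the load rises.
    return drive / (1.0 + load)

calibration_load = 0.5
drive_per_rpm = 1.0 + calibration_load   # dial calibrated at this load

dial_setting = 2000.0                    # rpm requested on the dial
drive = dial_setting * drive_per_rpm     # open-loop actuator signal

for load in (0.5, 0.7, 1.0):
    print(f"load={load:.1f}  dial={dial_setting:.0f} rpm  "
          f"actual={motor_rpm(drive, load):7.1f} rpm")

The dial is exact at the calibration load and increasingly wrong as the
load departs from it; nothing in the loop corrects the drift.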

These diagrams make it easy to see the similarities and differences between
the open-loop and closed-loop systems, which may account for the popularity
of these ways of representing the systems.

Regards,

Bruce

[From Rick Marken (980929.0840)]

Bruce Abbott (980929.0820 EST) to Bill Powers:

I thought I stated it plainly: the arm's movement follows...the
person's intended arm movement (time-varying reference). What's
obscure about that?

It's not clear whether the intention (reference) of which you
speak is a specification for input or a command for output. In
the motor control literature, intention is always viewed as a
command for output, not a specification for input. Your statement
would have been less obscure if you had said "the perceptual
representation of the arm's movement follows the person's
reference for the state of that perceptual variable".

Problems arise when different parties define their terms
differently

If it were just a difference in how parties define "output" and
"input" then that would mean that PCT poses no substantive threat
to conventional models of behavior. But I think the difference is
much deeper than a difference in definition. Let's look at your
"conventional" diagram of a closed loop system:

        CONVENTIONAL DIAGRAM (CLOSED LOOP)
                                              disturbance
                                                   |
                        error            action    v
input---->[comparator]-------->[actuator]------->[CEV]---+----> output
               ^                                          |
               |                                          |
               +------------------------------[sensor]---+
                           feedback

In this diagram, the "input" to the system is PCT's reference
signal; by varying the input, one can "control" (determine, within
limits) the value of the output. What is labeled "output" on the
diagram corresponds to PCT's input: the environmental correlate
of the PCT perceptual signal. In this diagram, the PCT perceptual
signal is labeled "feedback."

The first thing to do when analyzing this diagram is to determine what
is system and what is environment. In the original version of your
diagram (Sheridan and Ferrell, 1974, p. 177) all variables and
functions are in the environment of the system (called the "human
operator" by Sheridan and Ferrell); the system itself is represented
only in the box you call "actuator". So the conventional view is
that the human operator is a transfer function that converts an
input variable (error) into an output variable (output).

The input variable in the conventional diagram is _not_ equivalent
to the reference signal in PCT. The conventional input variable
combines (via the comparator function) with the feedback effects of
the output variable to produce the variable that is the actual input
to the system, the error signal. The conventional input variable
is equivalent to the PCT disturbance variable; the conventional
feedback signal is equivalent to the PCT output variable (note that
in the conventional diagram there is no "sensor" between output and
comparator; you may have inserted this sensor function in an effort
to make it seem to yourself like the conventional output is
equivalent to the PCT controlled variable).

The equivalent of the PCT controlled variable in the conventional
diagram is the _error_ variable. Your "sensor" function is really
the environmental feedback function connecting output to controlled
input (error in your diagram). The disturbance added to the
controlled variable in your version of the conventional diagram is
not quite the same as the PCT disturbance; the disturbance in your
diagram has a direct effect on the output (what you call the CEV)
that affects the actual CEV in your diagram, the error variable.
So the disturbance variable in your conventional diagram functions
more like a change in the feedback function in the PCT diagram;
variations in this "disturbance" change the effect that the output
variable has on the controlled variable ("error" in your diagram).

Note that the correct location for the "sensor" function in the
conventional diagram is _inside_ the actuator. In fact, the whole
PCT control model exists inside the actuator box of your conventional
model. It is inside the actuator box that we find the "sensor"
function that determines what aspect of the environment is actually
under control; it is also inside the actuator box that we find the
reference signal that determines the level at which the sensed aspect
of the environment is to be maintained.

It's interesting to note that there is nothing in the conventional
diagram that functions as the PCT reference signal. In fact, in
conventional control models, the reference signal is always implicitly
zero. There is nothing in the conventional model that can account for
the fact that the system may decide to keep the controlled ("error")
input variable at a value other than zero. The conventional model
has no way to explain what happens, for example, when a subject in
a tracking task decides to keep the cursor 1/2 inch to the left of,
instead of directly aligned with, the target.
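
A sketch of that case (illustrative constants; the loop is the same simple
form as the earlier sketches): setting the reference for the
cursor-minus-target perception to -0.5 holds the cursor half an inch to
the left of a moving target.

import math

gain, slowing = 50.0, 0.02
reference = -0.5           # intended cursor-minus-target separation
cursor = 0.0

for t in range(3001):
    target = 2.0 * math.sin(t / 300.0)   # moving target
    perception = cursor - target         # perceived separation
    error = reference - perception
    cursor += slowing * (gain * error - cursor)
    if t % 600 == 0:
        print(f"t={t:4d}  target={target:+5.2f}  cursor={cursor:+5.2f}  "
              f"separation={perception:+5.2f}")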

The phrase "control systems control their inputs, not their
outputs" is incorrect for this diagram.

Sorry, I don't buy it.

And I think that Ramachandran and Blakeslee probably have this
sort of diagram in mind when they refer to "control of
movements."

I'm sure they do.

Best

Rick

···

--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

[From Bruce Abbott (980929.1145 EST)]

Rick Marken (980929.0840) --

The first thing to do when analyzing this diagram is to determine what
is system and what is environment. In the original version of your
diagram (Sheridan and Ferrell, 1974, p. 177) all variables and
functions are in the environment of the system (called the "human
operator" by Sheridan and Ferrell); the system itself is represented
only in the box you call "actuator". So the conventional view is
that the human operator is a transfer function that converts an
input variable (error) into an output variable (output).

The first thing to do when analyzing the diagram I presented is to analyze
the diagram I presented and not some other one. I did not present Sheridan
and Ferrell's (1974) diagram, and your analysis, however appropriate it may
be to Sheridan and Ferrell's (1974) diagram (or not -- I haven't seen that
diagram), is not appropriate to the diagram I presented.

I'm not sure who presented the original version of the conventional block
diagram of a control system, but I am sure it was not Sheridan and Ferrell
(1974). It may have been Wiener (1948), but perhaps by 1948 such diagrams
were already in common use in control engineering and Wiener simply borrowed
them from there. Block diagrams similar to mine are presented in Fig. 2.1 of
McFarland (1971) as well as in many other sources that predate Sheridan and
Ferrell (1974). Sheridan and Ferrell's (1974) diagram (and/or their
interpretation of it) may be incorrect, but if so it is not the diagram
and/or not the interpretation I have been talking about, and your criticism
of Sheridan and Ferrell (1974) has no relevance to the standard diagram and
interpretation of it that I presented in my post.

From your description it sounds as though Sheridan and Ferrell (1974)
didn't know what they were talking about. That doesn't mean that everyone
else (or _anyone else_, for that matter) suffers from a similar
misconception. The diagram and interpretation I presented are the commonly
accepted ones, not Sheridan and Ferrell's. So -- nice job kicking
Sheridan and Ferrell's butts, but they have no relevance to the diagram and
interpretation I presented, which are the standard, accepted ones in
engineering and biology.

Regards,

Bruce

[From Chris Cherpas (980929.1100 PT)]

Bill Powers (980928.1532 MDT)--

Bruce, you surprise me. Are you still adhering to this false behaviorist
doctrine, that a scientist can only deal with what a scientist observes?

Bruce Abbott (980929.0820 EST)--

No. And never did. [Skinner didn't either, by the way.]

Interesting post, Bruce, but Skinner never got his story coherent,
one way or the other. In the beginning of Verbal Behavior, Skinner's
argument is for an "objective" analysis. Likewise, the parts of
Science and Human Behavior where philosophy of science is discussed,
are mostly objectivist, deterministic. In his "radical" persona,
he would say the scientist can only deal with what the scientist
can deal with, so let's find out what that is...it's a problem
for psychology, not methodological dogma.

Skinner said a few things about the -isms we talk about: they are
not self-evident truths, but are to be explained by a science
of behavior (including the -ism called behaviorism). However, he
didn't stop anyone from calling him a behaviorist (including himself).
He advertised it. I'd have to say that being inconsistent
about X is not the same as never having done X, and this is the
situation with Skinner.

The term, "behaviorism," I imagine you would agree, is taken to
mean that we already know what "behavior" is, and that it's
largely a matter of focusing on "behavior" (instead of the "mind"
or whatever), that is the concern of behaviorism. Radical moments
aside, Skinner couldn't let go of movements (~observed outputs)
as a fall-back position in case his operant functionalism started
sounding too theoretical, imaginary. Skinner's epistemology is
a ping pong match with himself.

Best regards,
cc

[From Bill Powers (980929.1244 MDT)]

Bruce Abbott (980929.0820 EST)--

Perhaps this isn't what you meant. But if it wasn't, your real point remains
obscure.

I thought I stated it plainly: the arm's movement follows (more or less,
depending on the magnitude of disturbances acting on the arm and the quality
of control) the person's intended arm movement (time-varying reference).
What's obscure about that?

This says that the perceptual variable being controlled corresponds to arm
position or velocity, which means the action variable consists of the
muscle tensions that tend, along with disturbing forces, to alter the arm's
position or velocity.

A very different-looking diagram is usually presented by control-system
engineers and those who have adopted their terminology:

       CONVENTIONAL DIAGRAM (CLOSED LOOP)
                                                disturbance
                                                     |
                          error            action    v
  input---->[comparator]-------->[actuator]------->[CEV]---+----> output
                 ^                                          |
                 |                                          |
                 +------------------------------[sensor]---+
                             feedback

In this diagram, the "input" to the system is PCT's reference signal; by
varying the input, one can "control" (determine, within limits) the value of
the output.

No, by varying the "input", one can control the variable labeled
"feedback." You have also mislabeled the "output", which in this diagram is
shown as an irrelevant side effect (not sensed) of the CEV. Properly, that
"output" variable should also be subject to disturbances, representing all
the variables other than the CEV on which it depends. The CEV is what is
sensed and thus what is indirectly controlled. The actual output of this
control system comes out of the actuator, via an unlabelled arrow that,
along with the disturbance, affects the CEV. The CEV is the _input_ quantity.

Perhaps you can see why I elected to abandon the standard engineering
diagram of a control system. It's not only ambiguous, its connection with a
description of the various aspects of the real system is misleading and
even wrong. It's no wonder some control engineers find PCT hard to understand.

What is labeled "output" on the diagram corresponds to PCT's
input: the environmental correlate of the PCT perceptual signal. In this
diagram, the PCT perceptual signal is labeled "feedback."

No, what is labeled "output" is an irrelevant side-effect. The CEV
corresponds to PCT's input quantity.

The system diagrammed above actually maintains its _feedback signal_ near
the input value, but the feedback signal is supposed to indicate the
measured state of the output, and so long as this measurement is good, the
output will track variations in the input. That is, the "output" is
controlled.

No. The feedback signal represents the CEV, not the "output," as you have
drawn it. And you're assuming a simple linear multiplier as the sensor
function.

The phrase "control systems control their inputs, not their outputs" is
incorrect for this diagram.

No, it is correct. The "output" is not protected by the actions of the
system against disturbances of the "output" (which are omitted from the
diagram).

And I think that Ramachandran and Blakeslee
probably have this sort of diagram in mind when they refer to "control of
movements." They mean making the arm move as intended via control action.

I am quite sure they are referring to this misleading and confusing
diagram, as are most people who try to make sense of engineering control
theory.

The conventional control-system diagram is presented as it is to facilitate
comparison to what is called "open loop control":

       CONVENTIONAL DIAGRAM (OPEN LOOP)
                                              disturbance
                                                   |
                                         action    v
   input--------------------->[actuator]-------->[CEV]-------> output

Here the output follows the input, but only to the extent that disturbances
are small or are predictable and thus can be compensated for through
calibration.

That is a very risky assumption that is drastically wrong in most natural
situations. Disturbances, for the most part, are neither predictable nor
compensatable. Compensation-based models have an entirely different
structure from the above -- the _cause_ of the disturbance, for one thing,
must be sensed, and its effects on the "output" have to be calculated to
allow producing the required compensating output effect.

For example, if you want a motor to run at 2000 rpm, you can
set the input (reference) to 2000 rpm on the calibrated dial, and the motor
will run at 2000 rpm, so long as the load, supply voltage, etc. remain as
they were during calibration.

That is how all machines were designed before control theory -- say, before
Watt. Such systems require constant attention to recalibration and of
course are helpless before any unanticipated disturbance.

These diagrams make it easy to see the similarities and differences between
the open-loop and closed-loop systems, which may account for the popularity
of these ways of representing the systems.

And the resulting sloppiness of analysis also accounts for the fact that
such systems are ever considered as remotely feasible models of behaving
systems.

Best,

Bill P.

[From Rick Marken (980929.1315)]

Bruce Abbott (980929.1145 EST)--

The first thing to do when analyzing the diagram I presented is
to analyze the diagram I presented and not some other one.

I did.

While reading Bill Powers' (980929.1244 MDT) attempt to make sense
of your "conventional control diagram" I realized that the important
point of this discussion is that the diagram does _not_ support
your claim that

The phrase "control systems control their inputs, not their
outputs" is incorrect for this diagram.

As Bill says:

No, it is correct. The "output" is not protected by the actions
of the system against disturbances of the "output" (which are
omitted from the diagram).

Best

Rick

···

--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

[From Bruce Abbott (980929.1800 EST)]

Bill Powers (980929.1244 MDT) --

Bruce Abbott (980929.0820 EST)

I thought I stated it plainly: the arm's movement follows (more or less,
depending on the magnitude of disturbances acting on the arm and the quality
of control) the person's intended arm movement (time-varying reference).
What's obscure about that?

This says that the perceptual variable being controlled corresponds to arm
position or velocity, which means the action variable consists of the
muscle tensions that tend, along with disturbing forces, to alter the arm's
position or velocity.

Yep. The behavior of the arm -- where it goes, and how quickly -- is under
control.

       CONVENTIONAL DIAGRAM (CLOSED LOOP)
                                                disturbance
                                                     |
                          error            action    v
  input---->[comparator]-------->[actuator]------->[CEV]---+----> output
                 ^                                          |
                 |                                          |
                 +------------------------------[sensor]---+
                             feedback

In this diagram, the "input" to the system is PCT's reference signal; by
varying the input, one can "control" (determine, within limits) the value of
the output.

No, by varying the "input", one can control the variable labeled
"feedback."

You must have missed the unquoted paragraph in which I made that
clarification, to wit:

The system diagrammed above actually maintains its _feedback signal_ near
the input value, but the feedback signal is supposed to indicate the
measured state of the output, and so long as this measurement is good, the
output will track variations in the input. That is, the "output" is
controlled.

You have also mislabeled the "output", which in this diagram is
shown as an irrelevant side effect (not sensed) of the CEV.

I debated whether to use "output" rather than CEV to label the box as well
as the output arrow. They are one and the same; the box simply represents
the confluence of the influences of action and disturbance on the output. I
decided to retain CEV because that's often been distinguished from "input"
in the PCT diagram, and I wanted to maintain consistency with that diagram.
Output in this diagram is the same as input in the PCT diagram; note that
the sensor is receiving its input from the same source as the arrow labeled
"output." This would correspond, for example, to the temperature of the
water in a temperature controlled bath.

Properly, that
"output" variable should also be subject to disturbances, representing all
the variables other than the CEV on which it depends. The CEV is what is
sensed and thus what is indirectly controlled.

I think you can see now that this conclusion is based on a misunderstanding
of the diagram.

The actual output of this
control system comes out of the actuator, via an unlabelled arrow that,
along with the disturbance, affects the CEV. The CEV is the _input_ quantity.

"The actual output?" You mean "the output as Bill Powers defines it." As
control-systems engineers define it, "the actual output" is what I've
labeled "output" in the conventional diagram. Also, take a look at the
diagram. The so-called unlabeled arrow carries the label "action."
Finally, in _your_ diagram the CEV is the input quantity. In the
conventional diagram it is the output quantity.

It's no wonder some control engineers find PCT hard to understand.

Well, yeah. The same familiar control-system terms mean entirely different
things in PCT.

What is labeled "output" on the diagram corresponds to PCT's
input: the environmental correlate of the PCT perceptual signal. In this
diagram, the PCT perceptual signal is labeled "feedback."

No, what is labeled "output" is an irrelevant side-effect. The CEV
corresponds to PCT's input quantity.

The CEV corresponds to the output quantity in standard engineering parlance,
as shown on the diagram.

The system diagrammed above actually maintains its _feedback signal_ near
the input value, but the feedback signal is supposed to indicate the
measured state of the output, and so long as this measurement is good, the
output will track variations in the input. That is, the "output" is
controlled.

No. The feedback signal represents the CEV, not the "output," as you have
drawn it. And you're assuming a simple linear multiplier as the sensor
function.

Ah, so you did see that paragraph. The output splits off from the sensor
arrow, so what is sensed is identical to the output. But we've already
covered this ground.

The phrase "control systems control their inputs, not their outputs" is
incorrect for this diagram.

No, it is correct. The "output" is not protected by the actions of the
system against disturbances of the "output" (which are omitted from the
diagram).

No, it is incorrect. See above.

       CONVENTIONAL DIAGRAM (OPEN LOOP)
                                              disturbance
                                                   |
                                         action    v
   input--------------------->[actuator]-------->[CEV]-------> output

Here the output follows the input, but only to the extent that disturbances
are small or are predictable and thus can be compensated for through
calibration.

That is a very risky assumption that is drastically wrong in most natural
situations. Disturbances, for the most part, are neither predictable nor
compensatable. Compensation-based models have an entirely different
structure from the above -- the _cause_ of the disturbance, for one thing,
must be sensed, and its effects on the "output" have to be calculated to
allow producing the required compensating output effect.

The sort of compensation I discussed involved measuring the actual relation
between the input and output (e.g., knob position and motor rpm); whatever
predictable disturbances arise from load and frictional forces are then
compensated for in the dial calibration. You're bringing in another sort of
compensation mechanism, which is fine, but it doesn't invalidate my
description of the open-loop system presented in the diagram.

For example, if you want a motor to run at 2000 rpm, you can
set the input (reference) to 2000 rpm on the calibrated dial, and the motor
will run at 2000 rpm, so long as the load, supply voltage, etc. remain as
they were during calibration.

That is how all machines were designed before control theory -- say, before
Watt. Such systems require constant attention to recalibration and of
course are helpless before any unanticipated disturbance.

True.

These diagrams make it easy to see the similarities and differences between
the open-loop and closed-loop systems, which may account for the popularity
of these ways of representing the systems.

And the resulting sloppiness of analysis also accounts for the fact that
such systems are ever considered as remotely feasible models of behaving
systems.

You're begging the question: You haven't shown that "sloppiness of
analysis" results from the use of these diagrams. But let's not lose sight
of my point: if someone (following conventional engineering usage) uses
terms like "input" and "output" differently than these term are used in PCT,
and this is not taken into account, then that person will be taken by the
PCTer to be stating absurdities (e.g., control systems control their
outputs) when in fact they are providing an accurate description, given
their definitions of the terms.

Regards,

Bruce

[From Rick Marken (980929.1820)]

Bruce Abbott (980929.1800 EST) --

I can see now that your "conventional diagram" of a control system
can be viewed as the PCT diagram turned on its side with some
new names for the PCT variables. I've copied your diagram with a
double line added to indicate where I think you assume the separation
between system and environment is:

                                ORGANISM || ENVIRONMENT
                                         ||
                                         ||   disturbance
                                         ||        |
                        error            || action v
input---->[comparator]-------->[actuator]||----->[CEV]---+----> output
               ^                         ||               |
               |                         ||===========||  |
               +------------------------------[sensor]||-+
                           feedback                   ||
                                                      ||

This diagram is indeed equivalent to the PCT diagram with some
word changes: "input" is "reference" in PCT; "output" is "irrelevant
side effects" in PCT; "feedback" is "perception" in PCT; "actuator"
is "output function" in PCT; "sensor" is "input function" in PCT.
So I think that what you are saying in your reply to Bill [Bruce Abbott
(980929.1800 EST)] is that the word "output" is actually being used
by Ramachandran and Blakeslee to refer to a controlled variable.

This is a wonderfully optimistic view of the situation. It suggests
that there are many people out there who know about controlled
variables since (I presume) Ramachandran and Blakeslee are writing
to communicate to an audience. That audience must know about
controlled variables, but they must know them as "outputs" rather
than as "controlled variables". So there must be other people who
have written about controlled variables; we just haven't known
about it because they are calling controlled variables "outputs"
instead of "controlled variables". Indeed, there must be a
considerable amount of research aimed at determining what perceptual
variables are actually controlled by (are the output of) living
systems. So there should be many research articles aimed at
identifying the variables that are the actual outputs (controlled
variables) the system is producing.

Anyway, this is a great discovery. There are apparently many
people out there who know that behavior is the control of
perceptual variables; we just haven't noticed them because they
would say that behavior is the control of output. And they probably
haven't noticed us because we keep talking about outputs as the
_means_ by which systems control perceptual inputs.

Perhaps you could post a reference list of papers that describe
research on "outputs" which is really research on what we would
call "controlled variables". I would appreciate it because I
was going to write a paper explaining the notion of controlled
variables; but apparently I don't have to do that any more. What
I really need to do is write a paper explaining how the word
"output" corresponds to the unnecessarily complex phrase
"controlled variable".

Best

Rick

···

--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

[From Marc Abrams (980929.2350)]

Nice insights, great post cc, thanks

Marc

···

[From Chris Cherpas (980929.1100 PT)]

[From Bruce Abbott (980930.1020 EST)]

Rick Marken (980929.2130) --

Bruce Abbott (980929.2200 EST)

The rest of your post reminds me of the scene in Monty Python's
"Search for the Holy Grail" where the villagers are trying to
figure out, logically, how to prove that someone is a witch.
As I recall, the chain of reasoning involved the fact that a
duck floats on water.

Why? What was wrong with it?

Well, you'll just have to judge for yourself:

···

---------------------------------------------------------------------------
BEDEVERE: Quiet! Quiet! There are ways of telling whether she is a witch.

ARTHUR and PATSY ride up at this point and watch what follows with interest

ALL: There are? Tell us. What are they, wise Sir Bedevere?

BEDEVERE: Tell me ... what do you do with witches?

ALL: Burn them.

BEDEVERE: And what do you burn, apart from witches?

FOURTH VILLAGER: ... Wood?

BEDEVERE: So why do witches burn?

SECOND VILLAGER: (pianissimo) ... Because they're made of wood...?

BEDEVERE: Good.

PEASANTS stir uneasily then come round to this conclusion.

ALL: I see. Yes, of course.

BEDEVERE: So how can we tell if she is made of wood?

FIRST VILLAGER: Make a bridge out of her.

BEDEVERE: Ah ... but can you not also make bridges out of stone?

ALL: Ah. Yes, of course ... um ... err ...

BEDEVERE: Does wood sink in water?

ALL: No, no. It floats. Throw her in the pond. Tie weights on her.
           To the pond.

BEDEVERE: Wait. Wait ... tell me, what also floats on water?

ALL: Bread? No, no, no. Apples .... gravy ... very small rocks ...

ARTHUR: A duck.

   They all turn and look at ARTHUR. BEDEVERE looks up very impressed.

BEDEVERE: Exactly. So... logically ...

FIRST VILLAGER: (beginning to pick up the thread)

           If she ... weighs the same as a duck ... she's made of wood.

BEDEVERE: And therefore?

ALL: A witch! ... A duck! A duck! Fetch a duck.
------------------------------------------------------------------------
[From _Monty Python and the Holy Grail_]

Let me know if you spot any flaws in the logic. (;->

Regards,

Bruce

[From Rick Marken (980930.1100)]

Bruce Abbott (980930.1020 EST)

Well, you'll just have to judge for yourself:
...

[From _Monty Python and the Holy Grail_]

Let me know if you spot any flaws in the logic. (;->

What logic?

In the post [Rick Marken (980929.1820)] that apparently set you
off on your quest for the Holy Grail of avoidance I was just
asking for information. I was asking for "...a reference list of
papers that describe research on "outputs" which is really research
on what we would call "controlled variables". I based this request
on my guess that:

...what you are saying in your reply to Bill [Bruce Abbott
(980929.1800 EST)] is that the word "output" is actually
being used by Ramachandran and Blakeslee to refer to a
controlled variable.

I think this guess was correct because, in a subsequent post
to Bill, you said it is correct, given Ramachandran and Blakeslee's
use of the word "output", to say that "output" is protected by
the actions of the system against disturbances of the "output".
That is, Ramachandran and Blakeslee use "output" to refer to what
we call a "controlled variable". If they are using "output" this
way then there must be others (like you) who understand the term
"output" this way too. There should also be research based on
this understanding of the nature of "output". That's all I want
to know about.

I'm not trying to see if you are a "witch" or not; I don't really
care about your bona fides any more, frankly. But you have made a
very important claim; at least it's important to me because much of
my research work over the last 20 years or so has been aimed at
showing that the existence of controlled perceptual variables has
been ignored by the conventional behavioral science/neurophysiology
community. Now you casually imply that the existence and nature of
"controlled variables" is well known to a large segment (possibly
all?) of this community (the segment that reads Ramachandran and
Blakeslee and the segment that came up with the version of the
conventional diagram of a control system that you posted to the
net and which I have never seen before). "Controlled variables" have
just been called "outputs" instead of "controlled variables". If you
are right -- if the existence and nature of "controlled variables"
is already well understood in the behavioral science/neurophysiology
community -- then that would go a long way towards explaining
why my research in particular and PCT in general has been ignored
by this community for so long.

My reading of the conventional behavioral science/neurophysiology
literature has given me the _strong_ impression that there is,
indeed, no awareness at all of the existence or nature of controlled
variables. But maybe I've missed it because of language problems.
So I am asking you for references to any behavioral science/
neurophysiology researchers who clearly understand the nature
of what in PCT we call "controlled variables" but simply talk
about these variables as "outputs".

Thanks

Rick

···

--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

[From Bill Powers (980930.0851 MDT)]

Bruce Abbott (980929.1800 EST)--

Me:

This says that the perceptual variable being controlled corresponds to arm
position or velocity, which means the action variable consists of the
muscle tensions that tend, along with disturbing forces, to alter the arm's
position or velocity.

Ye:

Yep. The behavior of the arm -- where it goes, and how quickly -- is under
control.

If you're defining behavior in this way, then the action or output consists
of muscle tensions; the controlled perception or input is the perception of
movement. What is controlled, in that case, is what the controlling system
perceives as the movement, not what the observer perceives. If the
controlling person is wearing distorting goggles or viewing his own
movements via a distorting TV picture, the observer will see one movement
while the controlling person is controlling a different one (there will be
a similar disparity if it is the observer who is wearing the distorting
goggles, etc.). Generally speaking, the observer is always wearing
"distorting goggles," because the naive observer does not know how the
supposed behavior looks to the other person, and thus does not know what
variable is being controlled.

       CONVENTIONAL DIAGRAM (CLOSED LOOP)
                                                disturbance
                                                     |
                          error            action    v
  input---->[comparator]-------->[actuator]------->[CEV]---+----> output
                 ^                                          |
                 |                                          |
                 +------------------------------[sensor]---+
                             feedback

In this diagram, the "input" to the system is PCT's reference signal; by
varying the input, one can "control" (determine, within limits) the value of
the output.

No, by varying the "input", one can control the variable labeled
"feedback."

You must have missed the unquoted paragraph in which I made that
clarification, to wit:

The system diagrammed above actually maintains its _feedback signal_ near
the input value, but the feedback signal is supposed to indicate the
measured state of the output, and so long as this measurement is good, the
output will track variations in the input. That is, the "output" is
controlled.

But your diagram shows the sensor receiving information from the CEV, not
from the "output". There is an arrow from the CEV to the "output",
indicating a physical relationship (if it doesn't indicate a physical
relationship, then you haven't drawn a proper system diagram). The form of
that physical effect might be anything (you haven't said what it is) -- for
example, the CEV might be a fan blade velocity, and the "output" might be a
wind force proportional to the square of that velocity (plus any
disturbances that might be acting). There are always disturbances.

You have also mislabeled the "output", which in this diagram is
shown as an irrelevant side effect (not sensed) of the CEV.

I debated whether to use "output" rather than CEV to label the box as well
as the output arrow. They are one and the same; the box simply represents
the confluence of the influences of action and disturbance on the output.

But for every physical variable, the number of influences on it is
indefinite unless you're prepared to characterize the entire environment.
That is why every PCT diagram contains a disturbing variable, which stands
for the sum of ALL influences (other than the output of the system) on the
controlled variable. These influences are not, in general, minor or
negligible. In fact, most of the action of a system might well be employed
in resisting disturbances, the amount of output needed in the absence of
disturbances being minor.

I decided to retain CEV because that's often been distinguished from "input"
in the PCT diagram, and I wanted to maintain consistency with that diagram.
Output in this diagram is the same as input in the PCT diagram; note that
the sensor is receiving its input from the same source as the arrow labeled
"output."

Stop wiggling so. Your diagram is wrong. If you want the "output" to be
controlled, then connect the sensor to it, not to the so-called "CEV". If
you connect the sensor to the variable you label "output", then the one you
label "CEV" is not, despite your label, the controlled environmental
variable. In that case, the "output" would actually be the controlled
variable, and the "CEV" would be the output quantity, with the form of the
function implied by the arrow from "CEV" to "output" being the
environmental feedback function. And the variable labeled "output" would
actually be an input to the control system, as is the CEV in the diagram
you have drawn (the CEV is connected by an unmediated arrow to the sensor,
which makes it an input).

This would correspond, for example, to the temperature of the
water in a temperature controlled bath.

God, you can be stubborn. The "output" of a temperature controlled bath
system would be the heat output from the electric heater or cooler (or both
combined). The CEV would be the temperature of the bath where the sensor is
located, or even more precisely, the temperature of the sensor. The
environmental feedback function would be the connection from the heater to
the sensor temperature, which would include such things as the heat
capacity of the bath and wasted heat. This is a clear, simple, and
self-consistent way to deal with control. The diagram you have drawn is
none of the above. I don't care if it was drawn by engineers. Engineers can
get just as sloppy as anyone else.

Properly, that
"output" variable should also be subject to disturbances, representing all
the variables other than the CEV on which it depends. The CEV is what is
sensed and thus what is indirectly controlled.

I think you can see now that this conclusion is based on a misunderstanding
of the diagram.

No, it is based on the fact that you have two different variables with a
causal arrow connecting them: "CEV" and "output". If you had meant that the
two variables were identical, you should have designated only one variable,
labeling it "CEV or output." And if you had done that, you would have been
calling a variable an output that is actually an input to the control system.

The actual output of this
control system comes out of the actuator, via an unlabelled arrow that,
along with the disturbance, affects the CEV. The CEV is the _input_
quantity.

"The actual output?" You mean "the output as Bill Powers defines it."

No, I mean the actual physical effect that comes out of the control system
and causes things in its environment to change. That is what an "actuator"
does. The actuator defines the output boundary of the controlling system
just as the sensor defines its input boundary. To be sure, Bill Powers is
defining it that way, but this is only to say that I agree with the general
usage in the fields of electronics and control engineering. Another name
for "actuator" is "output transducer," which is symmetrical with "input
transducer," meaning sensor.

As control-systems engineers define it, "the actual output" is what I've
labeled "output" in the conventional diagram.

No. If you sit down with a conventional control engineer and manage to
engage him or her in enough conversation to get the logical systems
functioning, he or she will eventually, if begrudgingly, admit that the
_real_ output is what comes out of the actuator, and that what is
conventionally called the output is just an _effect_ of the real output. It
happens to be the effect that the customer is concerned with, but
engineering-wise, it's not really the output of the control system. A
temperature control system does not produce an output of temperature, but
of heat. Only people who don't know the difference between heat and
temperature (i.e., your typical customer) would call the temperature the
"output" of the control system. Heat is the output; temperature is the
perceived and controlled consequence of the output: an input to the control
system.

Note that if the controlled variable -- temperature -- were actually the
output of a control system, you'd only need to set the actuator to produce
a fixed output, and the output would remain the same forever. You wouldn't
need a control system. The output of the actuator depends ONLY on the
signal driving the actuator. That's how you _define_ an actuator or output
transducer.
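
An illustrative sketch of this distinction (the heater law, ambient
temperatures, and loss coefficients are invented numbers): the heat output
is a pure function of the drive signal, while the temperature that results
depends also on the environment, so a fixed drive cannot hold the
temperature fixed.

def heater_output(drive):
    return 10.0 * drive    # watts: a pure function of the drive signal

def bath_temperature(heat_watts, ambient, loss_coeff):
    # Steady state: heat in = loss_coeff * (T - ambient)
    return ambient + heat_watts / loss_coeff

drive = 5.0                # fixed actuator drive -> fixed 50 W heat output
for ambient, loss in ((20.0, 2.0), (10.0, 2.0), (20.0, 4.0)):
    temp = bath_temperature(heater_output(drive), ambient, loss)
    print(f"heat={heater_output(drive):.0f} W  ambient={ambient:.0f} C  "
          f"loss={loss:.1f} W/C  temperature={temp:.1f} C")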

Also, take a look at the
diagram. The so-called unlabeled arrow carries the label "action."

So it does. I meant to refer to the arrow from "CEV" to "output", which is
unlabeled.

Notice that you are NOT calling the action of the controlling system its
output. Doesn't that strike you as a little strange?

Finally, in _your_ diagram the CEV is the input quantity. In the
conventional diagram it is the output quantity.

No, in PCT "qo" corresponds to "action" in your diagram. The state of the
CEV is not due to the control system alone; it is affected by the system's
action through some function (not shown); it is also affected by the
disturbance (and therefore so is the so-called "output").

It's no wonder some control engineers find PCT hard to understand.

Well, yeah. The same familiar control-system terms mean entirely different
things in PCT.

I'd rather focus on the meanings than the terms. There is a place where the
control system produces its first physical effect on its environment. There
are functions like physical laws that connect this physical effect to other
physical variables in the environment. There are sensors that represent the
states of some of these physical variables as signals inside the control
system (often involving multiple sensors and computations of functions of
multiple sensor signals). You can attach any labels you like to these
critical variables and functions, like x1, x2, x3 and f1, f2, f3 ...., and
solve the system equations (or simulate the control process) to
characterize the behavior of the whole schmear. If the model fits, you have
a correct description of the relation between system and environment. Then
you can argue about the terms, to connect the model to common language.
It's in connecting the arbitrary symbols to the terms of common language
that the most mistakes are made, where a word can be chosen that misleads
the listener into an incorrect understanding of how the system works.
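
For the linear case, with the environment and sensor functions taken as
identities, the exercise just described can be carried out in a few lines
(my arithmetic, with arbitrary values of r, d, and G): solving
qo = G(r - p), qi = qo + d, and p = qi gives qo = G(r - d)/(1 + G) and
qi = (G r + d)/(1 + G), so qi approaches r and qo approaches r - d as the
loop gain G grows.

for G in (1.0, 10.0, 100.0, 1000.0):
    r, d = 10.0, 4.0
    qo = G * (r - d) / (1 + G)   # steady-state output quantity
    qi = (G * r + d) / (1 + G)   # steady-state controlled input quantity
    print(f"G={G:7.1f}  qi={qi:7.3f} (r={r})  qo={qo:7.3f} (r-d={r - d})")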

It's at this stage, of putting formal symbols into correspondence with
informal language, that people come to call variables corresponding to
controlled sensor signals "outputs." Any control engineer knows that there
is a variable which is a remote consequence of the actuator's physical
effect, a consequence that is subject to major disturbances and changes in
the connecting function, and a consequence that nevertheless is maintained
in a desired state through feedback action. When the engineer says "output"
to mean this remote consequence of action, the engineer has in mind the
correct model and an understanding of _how_ this remote disturbance-prone
variable can be stabilized at any wanted value even though the actions of
the actuator must vary to do so. But the non-engineering listener whose
only grasp of the possibilities is the lineal cause-effect view will not
hear "output" the same way. That listener will think in terms of the
brain's issuing commands that will produce the desired "output" or
end-result by taking into account the inverses of all processes that lie
between the command and the final result or output. As, indeed, practically
every conventional life scientist now does.

The system diagrammed above actually maintains its _feedback signal_ near
the input value, but the feedback signal is supposed to indicate the
measured state of the output, and so long as this measurement is good, the
output will track variations in the input. That is, the "output" is
controlled.

No. The feedback signal represents the CEV, not the "output," as you have
drawn it. And you're assuming a simple linear multiplier as the sensor
function.

Ah, so you did see that paragraph. The output splits off from the sensor
arrow, so what is sensed is identical to the output. But we've already
covered this ground.

Yes, we've covered it, but you still don't see what is wrong. What is
represented, physically, by the arrow from "action" to "CEV"? It is some
kind of lawful relationship through which the action contributes, along
with the disturbance, to the state of the CEV. And what is represented,
physically, by an arrow from CEV to the sensor? Again, it is some set of
physical laws or relationships that determines how the CEV affects the
state of the sensor.

So what does it mean when there is an unlabeled arrow from the CEV to the
variable labeled "output"? It means that there is some unspecified set of
physical laws or relationships that make "output" depend on the state of
"CEV". And finally, what does it mean to have an arrow branching off the
arrow from CEV to Output, and running to the sensor? Your guess is as good
as mine; this notation violates the conventions of this diagram. The
correct way to draw this would be either

                                     f1()
                              [CEV] -------> output
                                |
                                |
perceptual signal<--------------
               sensor function = f2()

or it would be

                                     f1()
                              [CEV] -------> output
                                                |
                                                |
perceptual signal<------------------------------
                      sensor function = f2()

depending on what relationships you mean to propose. The word "sensor" does
not represent a variable, but a function that transforms one variable into
another. And the line can't be connected to another line, because lines
represent functions, not variables. The labels on lines indicate what
function is being performed, in this convention.

Part of the difficulty with your diagram is that you don't maintain a
consistent distinction between a variable and a function in the
environment. For example, you put the variable called CEV in a box, but you
label the arrow entering the box as "action", which is itself a variable.

The phrase "control systems control their inputs, not their outputs" is
incorrect for this diagram.

No, it is correct. The "output" is not protected by the actions of the
system against disturbances of the "output" (which are omitted from the
diagram).

No, it is incorrect. See above.

"Above" does not show it is incorrect. "Above" is a bastard diagram in
which the formal conventions of system diagramming are not followed. It's a
doodle, not a diagram.

       CONVENTIONAL DIAGRAM (OPEN LOOP)
                                              disturbance
                                                   |
                                         action    v
   input--------------------->[actuator]-------->[CEV]-------> output

Here the output follows the input, but only to the extent that disturbances
are small or are predictable and thus can be compensated for through
calibration.

The sort of compensation I discussed involved measuring the actual relation
between the input and output (e.g., knob position and motor rpm); whatever
predictable disturbances arise from load and frictional forces are then
compensated for in the dial calibration. You're bringing in another sort of
compensation mechanism, which is fine, but it doesn't invalidate my
description of the open-loop system presented in the diagram.

You're deliberately limiting disturbances to those that are regular, mild,
and predictable. That is not a model of a natural system. The whole point
of a control system is to maintain a variable in a specified state when
disturbances are NOT predictable or negligible, and when even the
controlling system's own "actuator" can vary its properties over some range
in a way that can't be forecast. The above diagram applies only when there
are special circumstances that protect the system against normal
disturbances, and that prevent changes in the system's own output
properties (for example, the calibration is done only after a period of
rest during which muscles can recover their sensitivity to driving signals,
and the predictions are limited to the time before fatigue or boredom sets
in).

For example, if you want a motor to run at 2000 rpm, you can
set the input (reference) to 2000 rpm on the calibrated dial, and the motor
will run at 2000 rpm, so long as the load, supply voltage, etc. remain as
they were during calibration.

And why do you think engineers went to all the trouble of inventing
tachometer feedback and variable power amplifiers and building control
systems to _sense_ and _control_ the speed? Because the above design,
typical of the 18th century, is only marginally useful. James Watt invented
the governor exactly because in real systems, the load, the steam supply
pressure, etc., do NOT remain as they were during calibration. Ceteris is
NEVER paribus, or not often enough to make an open-loop model satisfactory.

That is how all machines were designed before control theory -- say, before
Watt. Such systems require constant attention to recalibration and of
course are helpless before any unanticipated disturbance.

True.

Thank you.

You're begging the question: You haven't shown that "sloppiness of
analysis" results from the use of these diagrams.

Well, what about the analysis that says the input is converted into
commands that are then executed by the output, after which there is
feedback to tell somebody, or something, how close the output came to
matching the input? That is pretty goddamned sloppy in my book.

There is a basic problem here in that we use different conventions inside
and outside the control system. Just consider the control-system part:

     input                error              qo
   -------->[comparator]-------->[actuator]----->
                 ^
                 |
                 +-----------------[sensor]<-----
                   feedback                   qi

Notice that the arrows are the variables, and the "boxes" (square brackets)
are the functions relating the variables. So "error" is the comparison
function of "input" and "feedback", "qo" is the actuator function of error,
and feedback is the sensor function of qi. But once we get into the
environment, look what happens in your original diagram:

                        action
            [Actuator]--------->[CEV] ------------> "output"
                                        |
                                        |
           <---------- sensor <----------
             feedback

Now CEV, a variable, is in a box, and action, another variable, labels
an arrow, while Actuator, a device which makes action a function of an
error signal, is in a box. "Output" is presumably a variable just as CEV
is, so the arrow from CEV to output must signify a function making "output"
depend on "CEV". The sensor is, of course, a function converting an input
to a signal inside the controller. But what is its input? It seems to be
another arrow that must represent a function. So this diagram has a sensor
converting a function into a signal.

Norbert Wiener made this same mistake, by drawing the feedback arrow
starting with another arrow at a point he labeled "feedback pickoff." He
should have shown an explicit function or variable at that point.

In PCT we use two conventions, one inside the system and the other in its
environment. Inside the system, the variables are signals in neural
pathways that connect one neuron or cluster of neurons to another. So it's
appropriate to think of the variables as the directional arrows, and the
functions as the localized clusters of neurons where computations take
place -- the boxes. But outside the nervous system, the variables are
physical quantities, which are typically measured in a specific place
(boxes), while the laws or relationships that make one variable depend on
another are typically distributed properties of objects and spaces that lie
between the measuring points and are themselves invisible (arrows). So in
the environment it's appropriate to associate variables with localized
places, and the relationships connecting them with arrows going from one
place to another (as in Vensim). Heat going in here affects temperature
measured over there through a distributed property of matter called heat
capacity.
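
As a toy numeric illustration of that convention (the numbers are assumed,
not from the post): the two temperatures are variables measured at places,
and the conduction law between them plays the part of the arrow.

    dt = 1.0                        # seconds per step
    k = 0.05                        # W/K: conductance of the intervening material
    C = 100.0                       # J/K: heat capacity at the far measuring point
    T_here = 90.0                   # treated as a large reservoir, held fixed
    T_there = 20.0

    for _ in range(600):
        q = k * (T_here - T_there)  # the arrow: flow given by a distributed law
        T_there += q * dt / C       # the box: the variable measured "over there"

    print(round(T_there, 1))        # about 38 after 600 s: warming toward T_here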

The transition from one convention to the other takes place at the input
and output boundaries. At the output, we have an arrow labeled "error"
entering a box called the output function or actuator, or some name for a
transducer. Out of this box comes an arrow, which, instead of being labeled
as the output quantity qo, _terminates_ on the symbol qo, meaning that now
the terminus of an arrow is not a function-box but a variable. Now if we
draw an arrow from qo to, say, qi, we label that _arrow_ with the name of
a _function_, and we consider qo and qi to be _variables_ rather than
functions. Throughout the environment, arrows denote functions and names at
locations (in boxes or shown by little circles) denote variables. The
Vensim conventions come closest to being appropriate here.

At the input boundary, we show an arrow starting at qi (or CEV or whatever
you want to call it) and terminating not on another variable, but on a
_function_ called the Input Function. So that short arrow from qi to the
input function is neither fish nor fowl, and we don't label it. It makes qi
the input variable to the Input function, now a box, which converts it,
since we are now inside the control system, to another arrow labeled
"perceptual signal" or "feedback" or whatever you like. A similar
transition takes place at the output of the output function.

These are the formal rules of system diagramming that we follow in PCT.
Nowhere is it meaningful to start one arrow in the middle of another.

In Vensim, it is possible for a flow to branch at a point, but special care
must be taken to make sure that all the flows _leaving_ that point add up
to the flow _entering_ it. In other words, there is a special rule in
Vensim to handle that case, and it would not apply to your diagram. Vensim
uses the same convention for all cases, whether inside a controller or
outside it. Variables are _always_ represented by boxes, and the arrows
connecting them are _always_ associated with functions, except in the case
of stocks and flows, where stocks imply the integral function as well as
its value, and flows are labeled as variables. I find these conventions
somewhat confused, since they don't clearly distinguish between variables
and functions, and they switch from one rule to another for no apparent
reason beyond well-established custom.

Anyway, I'm sticking to my guns until you come up with some valid reason
for me to surrender my position.

Best,

Bill P.

[From Bruce Abbott (980930.1945 EST)]

Rick Marken (980930.1600) --

Gorgeous! This one goes up on the refrigerator for sure!

No, put this one up instead, it's better:

        PCT ARRANGEMENT with CONVENTIONAL LABELS

                                   input
                                     |
                     feedback        v           error
           [f=g(o)]-------------->[e=f-i]-------------->[a=f(e)]
              ^                                             |
              |                                             |
              +-----------------[o=h(a,d)]<-----------------+
                     output          ^          action
                                     |
                                     |
                                disturbance

As you see, I've replaced the comparator and CEV labels with functions.
That way Bill won't get confused and think that comparator and CEV are
additional variables in the system, instead of functions or hardware.
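
For anyone who wants to watch these functions run, here is a minimal
sketch with linear choices for g and h (the particular forms, gains, and
the integrating actuator are assumptions; and since the diagram overloads
f as both the feedback variable and the actuator function, the error is
computed here as i - f, which is the same loop with the sign convention
moved into the actuator):

    def g(o):          # sensor: feedback as a function of "output"
        return o

    def h(a, d):       # environment: "output" as a function of action and disturbance
        return a + d

    i = 10.0           # the "input" (in PCT terms, the reference)
    a = 0.0            # action
    for _ in range(100):
        d = 4.0        # a steady disturbance
        o = h(a, d)    # the variable labeled "output"
        f = g(o)       # feedback
        e = i - f      # error (the diagram writes e = f - i; same loop, sign flipped)
        a += 0.5 * e   # actuator: a simple integrating output function

    print(round(o, 3), round(a, 3))   # o is driven to i = 10.0; a settles at 6.0 = i - d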

Bruce

[From Bill Powers (981001.1207 MDT)]

Bruce Abbott (980930.1945 EST)

        PCT ARRANGEMENT with CONVENTIONAL LABELS

                                   input
                                     |
                     feedback        v           error
           [f=g(o)]-------------->[e=f-i]-------------->[a=f(e)]
              ^                                             |
              |                                             |
              +-----------------[o=h(a,d)]<-----------------+
                     output          ^          action
                                     |
                                     |
                                disturbance

As you see, I've replaced the comparator and CEV labels with functions.
That way Bill won't get confused and think that comparator and CEV are
additional variables in the system, instead of functions or hardware.

The CEV (complex environmental variable, according to Martin Taylor who
made up that label) is a function or hardware? Where did you get that idea?

I'm afraid Old Bill is still suffering from confusion. Inside the control
system I can see that "feedback" is a variable that has the reference
signal, another variable, subtracted from it to produce the error signal, a
third variable. The subtraction function is shown in a box, [].

I can see another box at the output labeled [a=f(e)]. Following the same
interpretation I would expect to see _a_ as a variable, so this function
box would be converting e into a. Sure enough, there is an arrow labeled
"action," so evidently in the environment, just as inside the control
system, variables are represented by arrows. This arrow enters a function
box, [o=h(a,d)], so we have a variable called output that is a function of
an action variable and a disturbing variable. The variable named output is
clearly the input to a function called [f=g(o)], which converts its input
variable (called "output") into the perceptual signal called "feedback."

I would think that the variable shown entering the input function of the
control system (the variable you label "output") would be called the
"input" variable, and that the variable that leaves the output function
(the variable you label "action") would be called the "output" variable.
But if it makes sense to you to call the variable entering the input
function the "output," why don't you label the variable leaving the output
function the "input?" Whatever the logic behind calling an input "output,"
it seems to me that the same logic, or simple justice, should require us to
call outputs "inputs." Why is the term "input" reserved for a signal that,
in PCT, is not either an environmental input _or_ output?

The problem I see here is that engineers were brainwashed by psychologists
(or somebody) into believing that they had to represent a control system as
an input-output system. The "standard" engineering diagram is drawn so you
can put a box around the whole control system, with an "input" coming into
it from the left, and an "output" leaving it to the right. BOTH THE INPUT
AND THE OUTPUT ARE THOUGHT OF AS EXISTING IN THE ENVIRONMENT. This, of
course, is a misconception of an organism: the reference input does NOT
come from the environment, but from higher systems, and their reference
inputs come from still higher systems, and ultimately all reference inputs
are determined by genetically set reference signals. This is my basic
objection to the "standard" engineering diagram, if such a thing can be
said to exist. It gives entirely the wrong impression about the origin of
reference signals.
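
A two-level sketch of that point (every gain and form here is an
assumption made for illustration): the lower loop's reference is simply
the upper loop's output, not anything arriving from the environment.

    # Upper loop controls perceived position and emits a velocity reference;
    # lower loop controls perceived velocity and emits a force.
    r_top = 5.0                     # top-level reference (ultimately intrinsic)
    x, v, dt = 0.0, 0.0, 0.01      # position, velocity (assumed unit mass)
    for _ in range(2000):
        r_vel = 2.0 * (r_top - x)   # higher system's OUTPUT = lower system's reference
        force = 10.0 * (r_vel - v)  # lower system's output
        v += force * dt
        x += v * dt

    print(round(x, 2))              # position settles at r_top = 5.0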

Best,

Bill P.

[From Bruce Abbott (981002.1240 EST)]

Bill Powers (981001.0538 MDT) --

Bruce Abbott (980930.1715 EST)

I'm not going to be drawn into an argument over which set of labels is best.

That is not what the argument is at this point. The argument is over the
nature of what each element of the diagram is supposed to represent in the
real system.

. . .

Here is how I would rearrange it:

                PCT ARRANGEMENT with CONVENTIONAL LABELS

                               input
                                 |
               feedback          v         error
          [Fi]------------>[comparator]---------->[Fo]---+
             ^                                 actuator  |
             |                                           v
       CEV or "output"<-----------------------------qo or action
                      environmental feedback function    ^
                                                         |
                                                         |
                                                     disturbance

Now if you will supply the equivalent PCT-labeled diagram, my day will be
complete. Would the part above labeled CEV or "output" become CEV or "input"?

In the block diagrams I've seen in texts, boxes generally represent
functions, arrows variables. Where one variable "splits off" and goes to
two places, the arrow simply branches:

     ------+------>
           |
           v

Where two variables merge, their arrows join at a circle which has been
divided into quadrants by an X, as you have sometimes done to represent the
comparator. The sign of the influence may be represented by a plus or minus
inside the circle within the quadrant where the arrow ends:

     ------->(-X+)<---------

Or in some diagrams a negative quadrant is colored in.

These conventions evidently differ from those you recommend in the diagram
above.

1. The system works the same no matter how you label it.

This is true. However, when you measure the variables, they may not behave
the way the labels might lead you to expect. If you labeled the output
quantity the "controlled variable," for example, you would find that it is
not controlled: every environmental disturbance causes it to change. It is
not a controlled variable just because you attach that label to it.

There is the vague implication here that I have done that. I have not.

2. This way of labeling the system emphasizes the relationship, enforced
   by the system, between the "input" (time-varying reference) and "output"
   (time-varying changes in the output which follow the reference in spite
   of disturbances acting on the output variable). For example, the
   position of the joystick of a jet (back or forward) is the input, and
   the angle of the elevators on the jet's tail is the controlled output.
   The system automatically varies the force applied to the elevators as
   necessary to overcome disturbances to their angle, so that the actual
   angle agrees with that called for by the joystick, within the limits
   of error of the system.

I think this is, indeed, why engineers have often drawn the diagram as you
present it. It's one of the most confusing aspects of PCT for people who
have learned control theory this way.

Yes, I agree. The problem is that these two ways of representing the system
were developed to serve different purposes. For the control engineer
wishing to generate an output that is stabilized against certain
disturbances, the conventional drawing makes good sense. The "input" is
where a human operator or other device (e.g., the autopilot of an aircraft)
tells the system what output to produce (e.g., a given position of the
servomotor driving the elevator position) and the "output" is the position,
protected against disturbances produced by varying loads. The elevator is
coupled rigidly to the servo, so it is assumed that its position will
faithfully follow the servo's; if it does not, the errors permitted by the
loose coupling will go uncorrected.

For the biologist or psychologist wanting to understand the human or animal
system, the usual analysis into "stimuli" and "responses," representing
inputs to sensory receptors and outputs from the muscles, makes more sense,
and this of course is the way you have chosen to represent the system. It
provides a natural line between the part of the system within the skin and
the part of the system that resides in the external environment (for
first-order muscle systems, at least). In introducing the engineer's
diagram, I was not _advocating_ this diagram over yours. Rather, I have
merely noted that these differences in labeling exist and that one is well
advised to determine how someone defines the terms before attempting
criticism. Maybe they are saying what you think they are, but then again,
maybe they aren't.

Regards,

Bruce

[From Bruce Abbott (981002.1345 EST)]

Rick Marken (981001.1305) --

I think what is most depressing about this diagram, aside from
the fact that it shows that you are really never going to stop
trying to see PCT as "nothing but" conventional psychology, is
that it seems like an attempt to represent the insights of PCT
as nothing more than re-labelings of a diagram.

That isn't what I intend. My purpose was to show how control systems are
conventionally presented by engineers and note that terms apparently shared
by PCT and control engineers often refer to different parts of the system.
Bill's relabeling and rearranging certainly emphasizes the fact that
perceptions are controlled, a point not easy to extract from the
conventional diagram.

I think what is most depressing about your comment is that you are really
never going to stop trying to characterize me as holding that PCT is
"nothing but" conventional psychology.

I think it's _highly_ unlikely that Ramachandran and Blakeslee
used the word "output" to refer to a controlled perceptual
representation of the environment.

Ramachandran and Blakeslee didn't use the word "output" in the quote supplied
by Bruce Gregory. Instead, they spoke of a "command" being sent to the
muscles. You have taken this term "command" to mean a signal sent via the
spinal cord to the muscles to make them contract (an S-R system), and
perhaps that's how Ramachandran and Sandra Blakeslee view it. However, if
they are referring to the conventional diagram, the "command" is a change in
input (reference) signal. Here's what Ramachandran and Sandra Blakeslee
said next:

"Once these command signals are sent to the muscles, a feedback loop is set
in motion. Having received a command to move, the muscles execute the
movement. In turn, signals from the spinal muscles and joints are sent back
up to the brain, via the spinal cord, informing the cerebellum and the
parietal that "yes, the command is being performed correctly." These two
structures help you compare your intention with your actual performance,
behaving like a thermostat in a servo-loop, and modifying the motor commands
as needed (applying brakes if they are too fast and increasing the motor
outflow if it is too slow). Thus intentions are transformed into smoothly
coordinated movements."

I wouldn't have stated it quite that way, but they are writing to a lay
audience and perhaps they were taking some license in the interest of
brevity. The feedback loop is always there, of course; by "set in motion" I
think they mean that the feedback signal changes value dynamically as the
muscles contract. From their description, the comparators of the system
they describe lie in the cerebellum and the parietal lobes. The "modified"
motor commands would be changes in the references to the first-order systems
at the spinal level, varied as necessary to keep the movements (as perceived
via the joint and muscle receptors) matching the specified trajectories
despite any disturbances to the system.

It is also possible to construct a different and less charitable
interpretation of Ramachandran and Blakeslee's words (as you have), but it
is my view that it is at least possible that these authors do have a better
understanding of how a control system works than you are willing to grant.
[Aside: That doesn't mean that they "know PCT" as you assert; PCT isn't
isomorphic with control theory, it is a specific application of it.]

Apparently most behavioral scientists -- at least
those in the audience of the premier journal of scientific
psychology, Journal of Experimental Psychology: Human Perception
and Performance (JEP:HPP) -- seem to think that it is _actions_ that
are controlled, not "outputs". I infer this from the description
of JEP:HPP found at http://www.apa.org/journals/jephumde.html:

"The Journal of Experimental Psychology: Human Perception and
Performance publishes studies on perception, formulation and
control of action...".

Yes, but what do they mean by "action"? Behavior? Movement? Muscle
twitches? And "control"? Control to most psychologists can be open-loop as
well as closed-loop, as it is to most engineers. For example, the rate of
input to a leaky bucket "controls" the level of water in the bucket, in this
usage. That doesn't mean that anyone thinks that the leaky bucket per se is
a control system as defined in PCT (closed loop only).
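
A sketch of the bucket in that open-loop sense (numbers assumed): the level
is a lawful function of the inflow, but nothing senses or corrects it, so a
change in the leak moves the "controlled" level with no opposition.

    def settled_level(inflow, leak_coeff, steps=5000, dt=0.1):
        level = 0.0
        for _ in range(steps):
            level += (inflow - leak_coeff * level) * dt
        return round(level, 2)

    print(settled_level(2.0, 0.5))   # 4.0: level settles at inflow / leak_coeff
    print(settled_level(2.0, 1.0))   # 2.0: disturb the leak and the level just drifts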

Thanks for illustrating my point. Until you know what people mean by those
terms, you can't really draw the conclusions you do, which are based on
interpreting their words as _you_ define them and not necessarily as _they_ do.

But then, you pay those words handsomely to mean what _you_ want them to
mean, don't you? (;-> Even when they're coming out of someone else's mouth.

Regards,

Bruce

From [ Marc Abrams (980210.1520) ]

[From Bruce Abbott (981002.1240 EST)]

Great post. Some important points were made.

Sometimes we just don't spend enough time _really_
trying to understand where the other guy is coming from, and
since we _all_ have come from some other place, many of us
are probably walking around with a lot of erroneous ideas
about what other people actually do or don't understand.
Bruce Nevin once brought this up (I don't know if he was
the first), and I think it's worth repeating every so
often. We need to _ask_ more questions. I will ask one now.

Which is more valuable and why? A good question, or a good
answer? In my view we sometimes seem to try and _give_ the
right answer when we don't really understand the question.

Marc

[From Bill Powers (981002.1304 MDT)]

Bruce Abbott (981002.1240 EST) --

                PCT ARRANGEMENT with CONVENTIONAL LABELS

                               input
                                 |
               feedback          v         error
          [Fi]------------>[comparator]---------->[Fo]---+
             ^                                 actuator  |
             |                                           v
       CEV or "output"<-----------------------------qo or action
                      environmental feedback function    ^
                                                         |
                                                         |
                                                     disturbance

Now if you will supply the equivalent PCT-labeled diagram, my day will be
complete. Would the part above labeled CEV or "output" become CEV or
"input"?

No, it would become CEV or input, because both CEV and input are meant
literally, whereas the quotes around output indicate "so-called". The
disturbance could also be drawn entering the CEV|output position, with
suitable change of scaling. At the output we would write <action or output>.

In the block diagrams I've seen in texts, boxes generally represent
functions, arrows variables.

That is one way. Vensim, however, does not use that convention except for
stocks and flows.

Clearly CEV, under that convention, would be a label on an arrow, not a box
where an arrow terminates. As I explained, I have used both conventions in
PCT, according to which is most isomorphic to the part of the whole system
being represented. Inside the nervous system, arrows are variables because
neural signals are, a few milliseconds' delay aside, the same everywhere
along the axon that carries them from one place to another. Note that when
a neural signal branches, the SAME signal appears on both branches; the
signal is not apportioned between the branches. Functions are boxes,
because neural functions are carried out in localized neurons or nuclei.

In the environment, however, variables are usually localized, being
measured in a particular place, and so I use boxes or little circles to
indicate them. Functions (that is, physical laws) lie between the variables
and, not being imposed by localized pieces of matter, are drawn as labeled
arrows.

Whichever convention you use, it would never be appropriate to draw this:

              action
            ----------->[CEV]

because both action and CEV are variables, and you can't have one
represented by a box and the other by an adjacent arrow. One variable
cannot affect another without going through a function.

By the same token, it would never be appropriate to draw this:

---------------->
        |
        |
        v

... because of the same reason. If both lines represent functions, you
can't have a function the input to which is another function: the input
must be a variable. Or if they both represent variables, you must have a
function between them.

Where one variable "splits off" and goes to
two places, the arrow simply branches:

     ------+------>
           |
           v

If the arrows represent variables, it is far better to draw it this way:

               source --------------->
                 |
                 |
                 v

Otherwise, you have to specify how the signal is apportioned between the
two branches. With the above, you can include different weighting factors
at the two destinations.
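
In code form (the names are mine), the point is that a branch duplicates
the signal and any apportioning lives at the destinations:

    def fan_out(signal, weights):
        # a neural-style branch: the SAME signal reaches every destination,
        # scaled only by the weight at each receiving function
        return [w * signal for w in weights]

    print(fan_out(1.0, [0.3, 0.9]))   # [0.3, 0.9]: each branch saw the full signal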

Where two variables merge, their arrows join at a circle which has been
divided into quadrants by an X, as you have sometimes done to represent the
comparator. The sign of the influence may be represented by a plus or minus
inside the circle within the quadrant where the arrow ends:

     ------->(-X+)<---------

Or in some diagrams a negative quadrant is colored in.

That's fine: the crossed circle, in this case, is a conventionalized
function symbol. You could attach weights to each quadrant. In analog
computing, different symbols are used, and space is provided for weights at
each possible input (of which there could be one, two, or many).

These conventions evidently differ from those you recommend in the diagram
above.

Yes. There are many conventions.

1. The system works the same no matter how you label it.

This is true. However, when you measure the variables, they may not behave
the way the labels might lead you to expect. If you labeled the output
quantity the "controlled variable," for example, you would find that it is
not controlled: every environmental disturbance causes it to change. It is
not a controlled variable just because you attach that label to it.

There is the vague implication here that I have done that. I have not.

You have. When you draw

[CEV] ------- output
         |
         v

you are (a) not following the convention that arrows indicate variables,
and (b) assuming that the output would be controlled if the CEV is
controlled, which is not true unless nothing else in the universe
contributes to the state of the variable named output.
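
A sketch of the measurement being described (linear forms and gains are
assumed): put whatever label you like on qo; once the CEV is under
control, it is the CEV that holds still while qo mirrors the disturbance.

    import math

    r = 0.0                        # reference for the controlled perception
    qo, cev = 0.0, 0.0
    for t in range(1000):
        d = math.sin(t * 0.01)     # a slowly varying disturbance
        cev = qo + d               # assumed environment: CEV = qo + d
        e = r - cev                # perception taken as the CEV itself, for brevity
        qo += 0.2 * e              # integrating output function

    print(round(cev, 2))           # stays near 0: the CEV is what is controlled
    print(round(qo, 2))            # roughly -d: the "output" tracks the disturbance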

2. This way of labeling the system emphasizes the relationship, enforced
   by the system, between the "input" (time-varying reference) and "output"
   (time-varying changes in the output which follow the reference in spite
   of disturbances acting on the output variable). For example, the
   position of the joystick of a jet (back or forward) is the input, and
   the angle of the elevators on the jet's tail is the controlled output.
   The system automatically varies the force applied to the elevators as
   necessary to overcome disturbances to their angle, so that the actual
   angle agrees with that called for by the joystick, within the limits
   of error of the system.

Yes, this is why engineers think of the reference signal as an input -- it
IS an input to the servo box between the joystick and the elevators. But
carrying that image over to living systems leads to serious blunders.

I think this is, indeed, why engineers have often drawn the diagram as you
present it. It's one of the most confusing aspects of PCT for people who
have learned control theory this way.

Yes, I agree. The problem is that these two ways of representing the system
were developed to serve different purposes. For the control engineer
wishing to generate an output that is stabilized against certain
disturbances, the conventional drawing makes good sense. The "input" is
where a human operator or other device (e.g., the autopilot of an aircraft)
tells the system what output to produce (e.g., a given position of the
servomotor driving the elevator position) and the "output" is the position,
protected against disturbances produced by varying loads. The elevator is
coupled rigidly to the servo, so it is assumed that its position will
faithfully follow the servo's; if it does not, the errors permitted by the
loose coupling will go uncorrected.

More of the same. But the problem here is that the terms "input" and
"output" are floating free between two systems. The human being varies an
output (joystick position, an output of the human being) which is a
reference input (to the servo), and the servo, as a result, makes its own
input (the signal representing elevator position) match its given reference
signal. A side-effect of doing this is to create a force that causes the
airplane to begin pitching up or down, and that pitching is the input that
the pilot (or the autopilot) is controlling by varying the joystick output.

The same variable can be both an input and an output, when it enters one
system from another system, or when one system sends an output to the input
of another system. Output and input have to be defined relative to a
particular system before they have any meaning. With respect to a control
system, its action is its output, and the resulting effects that it senses
are its inputs. However, at the same time, the variable that is an input to
the control system may create effects on some other system, so someone
concerned with those effects might see that input as an output directed to
the other system. And from the standpoint of that other system, the same
effect is an input.

It's all very simple if you just ask whether the influence is leaving or
entering the system. Outputs leave, inputs enter. To make sense of this you
have to anchor your point of view somewhere. The problem with the so-called
conventional diagram is that it lets the point of view drift.

For the biologist or psychologist wanting to understand the human or animal
system, the usual analysis into "stimuli" and "responses," representing
inputs to sensory receptors and outputs from the muscles, makes more sense,
and this of course is the way you have chosen to represent the system.

Be damned careful here, because you're about to screw up. The element of
the PCT diagram that corresponds to "stimulus" is not a sensory input, but
a disturbance. The sensory input corresponds to the controlled variable,
and that does not correspond either to the "response" or to the "stimulus",
when there is a control system involved. The state of the actual sensory
stimulus is the _difference_ between the feedback effects from the action
and the other effects from the disturbing variable, the stimulus.

This is not just a matter of labeling. It involves a phenomenon that is new
to psychology.

Best,

Bill P.

[from Jeff Vancouver 981002.1645 EST]

[From Bruce Abbott (980929.1800 EST)]

I just wanted to say I found this interesting. Can I use it in my paper on
semantic misunderstandings?

Sincerely,

Jeff

[From Bruce Abbott (981002.1630 EST)]

Bill Powers (981002.1304 MDT) --

For the biologist or psychologist wanting to understand the human or animal
system, the usual analysis into "stimuli" and "responses," representing
inputs to sensory receptors and outputs from the muscles, makes more sense,
and this of course is the way you have chosen to represent the system.

Be damned careful here, because you're about to screw up. The element of
the PCT diagram that corresponds to "stimulus" is not a sensory input, but
a disturbance. The sensory input corresponds to the controlled variable,
and that does not correspond either to the "response" or to the "stimulus",
when there is a control system involved. The state of the actual sensory
stimulus is the _difference_ between the feedback effects from the action
and the other effects from the disturbing variable, the stimulus.

That depends on -- guess what -- your definition of "stimulus." "Stimulus"
need not refer to some event that elicits a response. A stimulus can be
that which stimulates a sensory receptor; thus acid is the "adequate
stimulus" to the receptors in the mouth, which when stimulated give rise to
the sensation of sourness.

So there.

Damned carefully,

Bruce