[From Bill Powers (980930.0851 MDT)]
Bruce Abbott (980929.1800 EST)--
Me:
This says that the perceptual variable being controlled corresponds to arm
position or velocity, which means the action variable consists of the
muscle tensions that tend, along with disturbing forces, to alter the arm's
position or velocity.
Ye:
Yep. The behavior of the arm -- where it goes, and how quickly -- is under
control.
If you're defining behavior in this way, then the action or output consists
of muscle tensions; the controlled perception or input is the perception of
movement. What is controlled, in that case, is what the controlling system
perceives as the movement, not what the observer perceives. If the
controlling person is wearing distorting goggles or viewing his own
movements via a distorting TV picture, the observer will see one movement
while the controlling person is controlling a different one (there will be
a similar disparity if it is the observer who is wearing the distorting
goggles, etc.). Generally speaking, the observer is always wearing
"distorting goggles," because the naive observer does not know how the
supposed behavior looks to the other person, and thus does not know what
variable is being controlled.
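To put numbers on this point, here is a toy sketch (the perceptual functions, gain, and disturbance are all invented for illustration): the controller and the observer apply different perceptual functions to the same environmental variable, and control stabilizes only the controller's perception.

```python
# Toy illustration: controller and observer apply DIFFERENT perceptual
# functions to the same environmental variable x, so what the controller
# stabilizes is not what the observer sees. All numbers invented.
controller_view = lambda x: x               # controller's perceptual function
observer_view = lambda x: -0.5 * x + 3.0    # observer's "distorting goggles"

ref, qo = 5.0, 0.0                          # reference and output quantity
for _ in range(500):
    x = qo + 2.0                            # environment: action plus a steady disturbance
    qo += 0.1 * (ref - controller_view(x))  # act to reduce the CONTROLLER'S error

x = qo + 2.0
print(round(controller_view(x), 3), round(observer_view(x), 3))
```

The controller's perception settles at its reference (5.0) while the observer, looking through a different function, reports a quite different stabilized value.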
CONVENTIONAL DIAGRAM (CLOSED LOOP)

                                          disturbance
                                              |
                       error          action  V
input---->[comparator]-------->[actuator]------->[CEV]---+----> output
   ^                                                     |
   |                                                     |
   +------------------------------[sensor]<--------------+
                   feedback
In this diagram, the "input" to the system is PCT's reference signal; by
varying the input, one can "control" (determine, within limits) the
value of
the output.
No, by varying the "input", one can control the variable labeled
"feedback."
You must have missed the unquoted paragraph in which I made that
clarification, to wit:
The system diagrammed above actually maintains its _feedback signal_ near
the input value, but the feedback signal is supposed to indicate the
measured state of the output, and so long as this measurement is good, the
output will track variations in the input. That is, the "output" is
controlled.
But your diagram shows the sensor receiving information from the CEV, not
from the "output". There is an arrow from the CEV to the "output",
indicating a physical relationship (if it doesn't indicate a physical
relationship, then you haven't drawn a proper system diagram). The form of
that physical effect might be anything (you haven't said what it is) -- for
example, the CEV might be a fan blade velocity, and the "output" might be a
wind force proportional to the square of that velocity (plus any
disturbances that might be acting). There are always disturbances.
You have also mislabeled the "output", which in this diagram is
shown as an irrelevant side effect (not sensed) of the CEV.
I debated whether to use "output" rather than CEV to label the box as well
as the output arrow. They are one and the same; the box simply represents
the confluence of the influences of action and disturbance on the output.
But for every physical variable, the number of influences on it is
indefinite unless you're prepared to characterize the entire environment.
That is why every PCT diagram contains a disturbing variable, which stands
for the sum of ALL influences (other than the output of the system) on the
controlled variable. These influences are not, in general, minor or
negligible. In fact, most of the action of a system might well be employed
in resisting disturbances, the amount of output needed in the absence of
disturbances being minor.
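That last claim can be checked numerically. In the toy loop below (gain and disturbance invented for illustration), an integrating controller with a zero reference ends up producing an output that is almost exactly the mirror image of the disturbance -- essentially all of its action goes into disturbance resistance:

```python
# Toy loop: controlled variable cv = output + disturbance (simplest
# environment). With reference 0, the integrating controller's output
# converges to roughly minus the disturbance. All constants invented.
import math

dt, gain, ref = 0.01, 50.0, 0.0
out = 0.0
for step in range(10000):
    t = step * dt
    dist = 10.0 * math.sin(0.5 * t)   # large, slowly varying disturbance
    cv = out + dist                   # controlled environmental variable
    out += gain * (ref - cv) * dt     # integrate the error into the output

print(round(out, 2), round(-dist, 2))  # output is ~= minus the disturbance
```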
I decided to retain CEV because that's often been distinguished from "input"
in the PCT diagram, and I wanted to maintain consistency with that diagram.
Output in this diagram is the same as input in the PCT diagram; note that
the sensor is receiving its input from the same source as the arrow labeled
"output."
Stop wiggling so. Your diagram is wrong. If you want the "output" to be
controlled, then connect the sensor to it, not to the so-called "CEV". If
you connect the sensor to the variable you label "output", then the one you
label "CEV" is not, despite your label, the controlled environmental
variable. In that case, the "output" would actually be the controlled
variable, and the "CEV" would be the output quantity, with the form of the
function implied by the arrow from "CEV" to "output" being the
environmental feedback function. And the variable labeled "output" would
actually be an input to the control system, as is the CEV in the diagram
you have drawn (the CEV is connected by an unmediated arrow to the sensor,
which makes it an input).
This would correspond, for example, to the temperature of the
water in a temperature controlled bath.
God, you can be stubborn. The "output" of a temperature controlled bath
system would be the heat output from the electric heater or cooler (or both
combined). The CEV would be the temperature of the bath where the sensor is
located, or even more precisely, the temperature of the sensor. The
environmental feedback function would be the connection from the heater to
the sensor temperature, which would include such things as the heat
capacity of the bath and wasted heat. This is a clear, simple, and
self-consistent way to deal with control. The diagram you have drawn is
none of the above. I don't care if it was drawn by engineers. Engineers can
get just as sloppy as anyone else.
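A toy simulation of the bath (all thermal constants invented for illustration) makes the heat/temperature distinction explicit: the actuator puts out watts, and the temperature is the sensed, controlled consequence of that output.

```python
# Toy temperature-controlled bath: the system's OUTPUT is heat (watts);
# the CEV is the sensed temperature. All constants invented.
heat_capacity = 4000.0   # J/degC, bath plus vessel
loss_coeff = 20.0        # W/degC, heat leak to ambient
ambient = 20.0           # degC
ref = 37.0               # degC, reference signal
gain = 500.0             # W per degC of error (proportional controller)

temp, dt = ambient, 1.0
for _ in range(5000):
    error = ref - temp
    heat_out = max(0.0, gain * error)      # actuator output: heat, not temperature
    leak = loss_coeff * (temp - ambient)   # environmental heat loss
    temp += (heat_out - leak) * dt / heat_capacity

print(round(temp, 2), round(heat_out, 1))
```

The temperature settles near (not exactly at -- proportional control) the reference, and the heater's steady output is whatever number of watts it takes to balance the leak through the environmental feedback function.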
Properly, that
"output" variable should also be subject to disturbances, representing all
the variables other than the CEV on which it depends. The CEV is what is
sensed and thus what is indirectly controlled.
I think you can see now that this conclusion is based on a misunderstanding
of the diagram.
No, it is based on the fact that you have two different variables with a
causal arrow connecting them: "CEV" and "output". If you had meant that the
two variables were identical, you should have designated only one variable,
labeling it "CEV or output." And if you had done that, you would have been
calling a variable an output that is actually an input to the control system.
The actual output of this
control system comes out of the actuator, via an unlabeled arrow that,
along with the disturbance, affects the CEV. The CEV is the _input_
quantity.
"The actual output?" You mean "the output as Bill Powers defines it."
No, I mean the actual physical effect that comes out of the control system
and causes things in its environment to change. That is what an "actuator"
does. The actuator defines the output boundary of the controlling system
just as the sensor defines its input boundary. To be sure, Bill Powers is
defining it that way, but this is only to say that I agree with the general
usage in the fields of electronics and control engineering. Another name
for "actuator" is "output transducer," which is symmetrical with "input
transducer," meaning sensor.
As control-systems engineers define it, "the actual output" is what I've
labeled "output" in the conventional diagram.
No. If you sit down with a conventional control engineer and manage to
engage him or her in enough conversation to get the logical systems
functioning, he or she will eventually, if begrudgingly, admit that the
_real_ output is what comes out of the actuator, and that what is
conventionally called the output is just an _effect_ of the real output. It
happens to be the effect that the customer is concerned with, but
engineering-wise, it's not really the output of the control system. A
temperature control system does not produce an output of temperature, but
of heat. Only people who don't know the difference between heat and
temperature (i.e., your typical customer) would call the temperature the
"output" of the control system. Heat is the output; temperature is the
perceived and controlled consequence of the output: an input to the control
system.
Note that if the controlled variable -- temperature -- were actually the
output of a control system, you'd only need to set the actuator to produce
a fixed output, and the output would remain the same forever. You wouldn't
need a control system. The output of the actuator depends ONLY on the
signal driving the actuator. That's how you _define_ an actuator or output
transducer.
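The defining property of an actuator can be shown in a few lines of sketch (the transducer law and the numbers are invented): fix the drive signal and the actuator's output never changes, yet the downstream variable still moves with every disturbance.

```python
# Toy sketch: an actuator's output depends ONLY on its drive signal.
# Fixed drive -> fixed qo forever; but the downstream variable is at
# the disturbance's mercy. Law and numbers invented.
def actuator(drive):
    return 10.0 * drive        # output transducer: a fixed law of the drive alone

drive = 3.0                    # constant drive signal, no feedback anywhere
readings = []
for disturbance in (0.0, -5.0, 8.0):
    qo = actuator(drive)       # 30.0 every time
    cev = qo + disturbance     # the variable one would want controlled
    readings.append((qo, cev))

print(readings)                # qo constant; cev wanders with the disturbance
```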
Also, take a look at the
diagram. The so-called unlabeled arrow carries the label "action."
So it does. I meant to refer to the arrow from "CEV" to "output", which is
unlabeled.
Notice that you are NOT calling the action of the controlling system its
output. Doesn't that strike you as a little strange?
Finally, in _your_ diagram the CEV is the input quantity. In the
conventional diagram it is the output quantity.
No, in PCT "qo" corresponds to "action" in your diagram. The state of the
CEV is not due to the control system alone: it is affected by the action
through some function (not shown), and it is also affected by the
disturbance (and therefore so is the so-called "output").
It's no wonder some control engineers find PCT hard to understand.
Well, yeah. The same familiar control-system terms mean entirely different
things in PCT.
I'd rather focus on the meanings than the terms. There is a place where the
control system produces its first physical effect on its environment. There
are functions like physical laws that connect this physical effect to other
physical variables in the environment. There are sensors that represent the
states of some of these physical variables as signals inside the control
system (often involving multiple sensors and computations of functions of
multiple sensor signals). You can attach any labels you like to these
critical variables and functions, like x1, x2, x3 and f1, f2, f3 ...., and
solve the system equations (or simulate the control process) to
characterize the behavior of the whole schmear. If the model fits, you have
a correct description of the relation between system and environment. Then
you can argue about the terms, to connect the model to common language.
It's in connecting the arbitrary symbols to the terms of common language
that the most mistakes are made, where a word can be chosen that misleads
the listener into an incorrect understanding of how the system works.
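That procedure can be carried out literally. In this sketch the forms of f1, f2, f3 and all the gains are invented; the point is only that iterating the loop equations shows which variable the loop actually stabilizes:

```python
# The neutral-symbol procedure: name the variables, write the functions,
# iterate the closed loop, and see what gets stabilized. All forms and
# gains are invented for illustration.
import math

f1 = lambda qo, d: qo + d           # environment: action + disturbance -> x1 (CEV)
f2 = lambda x: 2.0 * x              # sensor: x1 -> perceptual signal
f3 = lambda qo, e: qo + 20.0 * e    # actuator: integrate error into output

ref, qo, dt = 10.0, 0.0, 0.01
for step in range(20000):
    d = 6.0 * math.sin(0.3 * step * dt)   # irregular disturbance
    x1 = f1(qo, d)                         # CEV
    p = f2(x1)                             # perceptual signal
    e = ref - p                            # error
    qo = f3(qo, e * dt)                    # new output

print(round(p, 2), round(qo, 2))  # p sits near ref; qo varies as it must
```

Whatever labels one later attaches, the simulation shows that the quantity held near the reference is the perceptual signal, while the output quantity varies continuously to oppose the disturbance.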
It's at this stage, of putting formal symbols into correspondence with
informal language, that people come to call variables corresponding to
controlled sensor signals "outputs." Any control engineer knows that there
is a variable which is a remote consequence of the actuator's physical
effect, a consequence that is subject to major disturbances and changes in
the connecting function, and a consequence that nevertheless is maintained
in a desired state through feedback action. When the engineer says "output"
to mean this remote consequence of action, the engineer has in mind the
correct model and an understanding of _how_ this remote disturbance-prone
variable can be stabilized at any wanted value even though the actions of
the actuator must vary to do so. But the non-engineering listener whose
only grasp of the possibilities is the lineal cause-effect view will not
hear "output" the same way. That listener will think in terms of the
brain's issuing commands that will produce the desired "output" or
end-result by taking into account the inverses of all processes that lie
between the command and the final result or output. As, indeed, practically
every conventional life scientist now does.
The system diagrammed above actually maintains its _feedback signal_ near
the input value, but the feedback signal is supposed to indicate the
measured state of the output, and so long as this measurement is good, the
output will track variations in the input. That is, the "output" is
controlled.
No. The feedback signal represents the CEV, not the "output," as you have
drawn it. And you're assuming a simple linear multiplier as the sensor
function.
Ah, so you did see that paragraph. The output splits off from the sensor
arrow, so what is sensed is identical to the output. But we've already
covered this ground.
Yes, we've covered it, but you still don't see what is wrong. What is
represented, physically, by the arrow from "action" to "CEV"? It is some
kind of lawful relationship through which the action contributes, along
with the disturbance, to the state of the CEV. And what is represented,
physically, by an arrow from CEV to the sensor? Again, it is some set of
physical laws or relationships that determines how the CEV affects the
state of the sensor.
So what does it mean when there is an unlabeled arrow from the CEV to the
variable labeled "output"? It means that there is some unspecified set of
physical laws or relationships that make "output" depend on the state of
"CEV". And finally, what does it mean to have an arrow branching off the
arrow from CEV to Output, and running to the sensor? Your guess is as good
as mine; this notation violates the conventions of this diagram. The
correct way to draw this would be either
                      f1()
               [CEV] -------> output
                 |
                 |
perceptual signal<
          sensor function = f2()
or it would be
                      f1()
               [CEV] -------> output
                                |
                                |
perceptual signal<--------------+
          sensor function = f2()
depending on what relationships you mean to propose. The word "sensor" does
not represent a variable, but a function that transforms one variable into
another. And the line can't be connected to another line, because lines
represent functions, not variables. The labels on lines indicate what
function is being performed, in this convention.
Part of the difficulty with your diagram is that you don't maintain a
consistent distinction between a variable and a function in the
environment. For example, you put the variable called CEV in a box, but you
label the arrow entering the box as "action", which is a variable.
The phrase "control systems control their inputs, not their outputs" is
incorrect for this diagram.
No, it is correct. The "output" is not protected by the actions of the
system against disturbances of the "output" (which are omitted from the
diagram).
No, it is incorrect. See above.
"Above" does not show it is incorrect. "Above" is a bastard diagram in
which the formal conventions of system diagramming are not followed. It's a
doodle, not a diagram.
CONVENTIONAL DIAGRAM (OPEN LOOP)

                                   disturbance
                                        |
                               action   V
input--------------------->[actuator]-------->[CEV]-------> output
Here the output follows the input, but only to the extent that disturbances
are small or are predictable and thus can be compensated for through
calibration.
The sort of compensation I discussed involved measuring the actual relation
between the input and output (e.g., knob position and motor rpm); whatever
predictable disturbances arise from load and frictional forces are then
compensated for in the dial calibration. You're bringing in another sort of
compensation mechanism, which is fine, but it doesn't invalidate my
description of the open-loop system presented in the diagram.
You're deliberately limiting disturbances to those that are regular, mild,
and predictable. That is not a model of a natural system. The whole point
of a control system is to maintain a variable in a specified state when
disturbances are NOT predictable or negligible, and when even the
controlling system's own "actuator" can vary its properties over some range
in a way that can't be forecast. The above diagram applies only when there
are special circumstances that protect the system against normal
disturbances, and that prevent changes in the system's own output
properties (for example, the calibration is done only after a period of
rest during which muscles can recover their sensitivity to driving signals,
and the predictions are limited to the time before fatigue or boredom sets
in).
For example, if you want a motor to run at 2000 rpm, you can
set the input (reference) to 2000 rpm on the calibrated dial, and the motor
will run at 2000 rpm, so long as the load, supply voltage, etc. remain as
they were during calibration.
And why do you think engineers went to all the trouble of inventing
tachometer feedback and variable power amplifiers and building control
systems to _sense_ and _control_ the speed? Because the above design,
typical of the 18th century, is only marginally useful. James Watt invented
the governor precisely because in real systems, the load, the steam supply
pressure, etc., do NOT remain as they were during calibration. Ceteris is
NEVER paribus, or not often enough to make an open-loop model satisfactory.
That is how all machines were designed before control theory -- say, before
Watt. Such systems require constant attention to recalibration and of
course are helpless before any unanticipated disturbance.
True.
Thank you.
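A toy version of the rpm example (motor law and constants invented for illustration) shows both halves of the argument: the calibrated dial is exact at the calibration load and wrong at any other, while even a crude integrating feedback loop restores the speed after the load changes.

```python
# Toy motor: rpm = 400 * volts - load. Open-loop dial calibration vs.
# tachometer feedback when the load changes after calibration.
# All constants invented.
def motor_rpm(volts, load):
    return 400.0 * volts - load

CAL_LOAD = 100.0
dial_volts = (2000.0 + CAL_LOAD) / 400.0    # dial calibrated to give 2000 rpm

open_cal = motor_rpm(dial_volts, CAL_LOAD)  # exact at the calibration load
open_new = motor_rpm(dial_volts, 500.0)     # wrong once the load changes

# Closed loop: sense the rpm and adjust the volts; same load change.
volts, ref = dial_volts, 2000.0
for _ in range(200):
    volts += 0.001 * (ref - motor_rpm(volts, 500.0))  # integrate the speed error
closed_new = motor_rpm(volts, 500.0)

print(open_cal, open_new, round(closed_new, 1))
```

Open loop: 2000 rpm at the calibration load, 1600 rpm when the load quintuples. Closed loop: back to 2000 rpm under the same changed load, with no recalibration.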
You're begging the question: You haven't shown that "sloppiness of
analysis" results from the use of these diagrams.
Well, what about the analysis that says the input is converted into
commands that are then executed by the output, after which there is
feedback to tell somebody, or something, how close the output came to
matching the input? That is pretty goddamned sloppy in my book.
There is a basic problem here in that we use different conventions inside
and outside the control system: Just consider the control-system part:
  input                error             qo
-------->[comparator]-------->[actuator]----->
             ^
             |
             +-----------------[sensor]<-----
                  feedback               qi
Notice that the arrows are the variables, and the "boxes" (square brackets)
are the functions relating the variables. So "error" is the comparison
function of "input" and "feedback", "qo" is the actuator function of error,
and feedback is the sensor function of qi. But once we get into the
environment, look what happens in your original diagram:
              action
  [Actuator]--------->[CEV] ------------> "output"
                                |
                                |
  <---------- sensor <----------+
      feedback
Now CEV, a variable, is in a box, and action, another variable, labels
an arrow, while Actuator, a device which makes action a function of an
error signal, is in a box. "Output" is presumably a variable just as CEV
is, so the arrow from CEV to output must signify a function making "output"
depend on "CEV". The sensor is, of course, a function converting an input
to a signal inside the controller. But what is its input? It seems to be
another arrow that must represent a function. So this diagram has a sensor
converting a function into a signal.
Norbert Wiener made this same mistake by drawing the feedback arrow
starting with another arrow at a point he labeled "feedback pickoff." He
should have shown an explicit function or variable at that point.
In PCT we use two conventions, one inside the system and the other in its
environment. Inside the system, the variables are signals in neural
pathways that connect one neuron or cluster of neurons to another. So it's
appropriate to think of the variables as the directional arrows, and the
functions as the localized clusters of neurons where computations take
place -- the boxes. But outside the nervous system, the variables are
physical quantities, which are typically measured in a specific place
(boxes), while the laws or relationships that make one variable depend on
another are typically distributed properties of objects and spaces that lie
between the measuring points and are themselves invisible (arrows). So in
the environment it's appropriate to associate variables with localized
places, and the relationships connecting them with arrows going from one
place to another (as in Vensim). Heat going in here affects temperature
measured over there through a distributed property of matter called heat
capacity.
The transition from one convention to the other takes place at the input
and output boundaries. At the output, we have an arrow labeled "error"
entering a box called the output function or actuator, or some name for a
transducer. Out of this box comes an arrow, which, instead of being labeled
as the output quantity qo, _terminates_ on the symbol qo, meaning that now
the terminus of an arrow is not a function-box but a variable. Now if we
draw an arrow from qo to, say, qi, we label that _arrow_ with the name of
a _function_, and we consider qo and qi to be _variables_ rather than
functions. Throughout the environment, arrows denote functions and names at
locations (in boxes or shown by little circles) denote variables. The
Vensim conventions are closest to appropriate here.
At the input boundary, we show an arrow starting at qi (or CEV or whatever
you want to call it) and terminating not on another variable, but on a
_function_ called the Input Function. So that short arrow from qi to the
input function is neither fish nor fowl, and we don't label it. It makes qi
the input variable to the Input Function, now a box, which -- since we are
now inside the control system -- converts it to another arrow labeled
"perceptual signal" or "feedback" or whatever you like. A similar
transition takes place at the output of the output function.
These are the formal rules of system diagramming that we follow in PCT.
Nowhere is it meaningful to start one arrow in the middle of another.
In Vensim, it is possible for a flow to branch at a point, but special care
must be taken to make sure that all the flows _leaving_ that point add up
to the flow _entering_ it. In other words, there is a special rule in
Vensim to handle that case, and it would not apply to your diagram. Vensim
uses the same convention for all cases, whether inside a controller or
outside it. Variables are _always_ represented by boxes, and the arrows
connecting them are _always_ associated with functions, except in the case
of stocks and flows, where stocks imply the integral function as well as
its value, and flows are labeled as variables. I find these conventions
somewhat confused, since they don't clearly distinguish between variables
and functions, and they switch from one rule to another without any
apparent reason but a well-established custom.
Anyway, I'm sticking to my guns until you come up with some valid reason
for me to surrender my position.
Best,
Bill P.