[From Bill Powers (940413.1800 MDT)]
Martin Taylor (940412.1800) --
RE: integrals
The second equation above could equally well be written
do/dt = k*(r - p)
No it couldn't. The integral equation entails the differential
one, but the reverse is not true. It misses precisely the
constant of integration, which is where the effects of past
values of p are found.
The constant of integration has to do with the _initial_ value of
the output of the integral, not with past values of the input. The
output value is a consequence of past values of the integrand, but
being a single number it can't be decomposed into effects of
specific inputs at specific times in the past.
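The point that an integral erases the timing of specific past inputs can be illustrated numerically. In this sketch (all names and values are illustrative, not from the original discussion), two different error histories with the same cumulative sum drive the same integrating output function and leave it in exactly the same final state:

```python
# do/dt = k*e, Euler-integrated: two different pasts, same present.
k, dt = 2.0, 0.01

e1 = [1.0] * 100 + [0.0] * 100   # error early, then none
e2 = [0.0] * 100 + [1.0] * 100   # same total error, arriving later

def integrate(e_history, o0=0.0):
    """Integrating output function: o accumulates k*e*dt starting from o0,
    the constant of integration."""
    o = o0
    for e in e_history:
        o += k * e * dt
    return o

# Both histories leave the integrator at the same value; nothing about
# the timing of past inputs can be recovered from the output.
print(integrate(e1), integrate(e2))
```

Only the cumulative sum, plus the initial value o0, survives in the output.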
----------------------------------------
Let's change the reference signal by a step. What does e do? It
goes up by a step, instantaneously.
....
The rates are all based on current values, but the levels are not.
They are based on what has happened in the past.
What does "based on" mean? This is not a mathematical or scientific
term, but a term from informal verbal reasoning. Are you trying to
say that from the present state of the output, it is possible to
deduce what the input was at any particular time in the past? Are
you saying that given the input at a particular time, it is possible
to predict the state of the integral at some future time? If you're
saying either of these things, you are wrong. If you're not, I can't
imagine what you mean by "based on," which implies some particular
relationship between the integral's output and a previous input. An
integration erases all information about specific values of the
input in the past, and preserves only the cumulative sum of effects.
All information about particular occurrences in the past is lost.
If the output function is an integrator, it can take ANY value,
given only the current values of p and r.... Do you really intend
to deny this?
No. It is also true that the output of the integrator can take ANY
value, given only the values of p and r at ANY one time, present or
past.
----------------------------------------
RE: transport lags
What goes into the input now may not affect what comes out of
the output until much later, or it may affect the output almost
immediately.
That would be a very peculiar transport lag, one in which the delay
for any given input signal is unpredictable. I don't think that such
a device is physically realizable. In all real processes involving a
transport lag, the delay between an input and its effects at the
output is one fixed number set by the physical properties of the
device, such as the length of an axon. What goes into the input will
come out of the output exactly tau seconds later, neither more nor
less. In the type of transport lag you seem to be imagining, you
couldn't even predict the order in which input impulses would appear
at the output.
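A fixed transport lag is easy to model as a first-in, first-out buffer. The sketch below (the class and method names are my own, not from the post) shows that each input emerges exactly lag_steps samples later, in order:

```python
from collections import deque

class TransportLag:
    """Pure transport lag: input reappears at the output a fixed number
    of steps later, order preserved."""
    def __init__(self, lag_steps):
        # buffer pre-filled with zeros represents the line's initial state
        self.buf = deque([0.0] * lag_steps, maxlen=lag_steps)

    def step(self, x):
        y = self.buf[0]      # oldest sample comes out
        self.buf.append(x)   # newest sample goes in
        return y

lag = TransportLag(3)
outputs = [lag.step(x) for x in [1, 2, 3, 4, 5, 6]]
print(outputs)   # [0.0, 0.0, 0.0, 1, 2, 3] -- fixed delay, order preserved
```

Every sample is delayed by the same tau; there is no physically realizable version in which the delay varies unpredictably per sample.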
Is this why you have been speaking about "convolutions" (actually,
transport lags -- you haven't described any convolutions) as if they
introduced some sort of randomness into a process? If so, you've
been laboring under a misconception.
We are each trying to get the other to see that the variables
that are effective NOW are those that exist NOW at the various
places in the loop. I think we agree on that, but not on its
implications. To me, the implication is that what exists NOW
in the loop can in no way be affected by what exists NOW at any
other place in the loop, and may be most strongly affected by
what existed some time ago at a different place in the loop.
If you have defined the functions correctly, this will take care of
itself. Trying to analyze how one variable depends on past values of
other variables is guaranteed to lead to confusion; the sequential
view is simply not viable. After you have seen how the whole loop
looks in continuous operation, you can go back and look at time or
phase relationships. If you try to start with the time or phase
relationships, you will start chasing cause and effect around and
around the loop, which is a blind alley first explored by the
ancient Greeks or earlier peoples, Zeno among them.
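A minimal sketch of "analyzing the whole loop in continuous operation": all loop variables are updated together at each small time step, and time or phase relationships can be read off afterward from the recorded traces. The parameter values here are arbitrary assumptions:

```python
# Whole-loop simulation: integrating output function, unit feedback,
# step disturbance added halfway through. No sequential cause-chasing.
k, dt, r = 50.0, 0.001, 1.0
o = 0.0
trace_p, trace_o = [], []
for step in range(2000):
    d = 0.5 if step > 1000 else 0.0   # step disturbance on the CV
    p = o + d                          # perception of controlled variable
    o += k * (r - p) * dt              # integrating output function
    trace_p.append(p)
    trace_o.append(o)

# After the transient, p settles near r and o near r - d: the loop as a
# whole determines the outcome, not any single link in it.
print(round(trace_p[-1], 3), round(trace_o[-1], 3))
```

Once the traces exist, one can go back and examine lags between d, o, and p; starting from those lags instead would be the blind alley described above.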
-----------------------------------------------
RE: defining the disturbance
(I'll call it the "wydhawf." The wydhawf is the value of the
output of the disturbance function.)
...
I think we need a word for the output of the disturbance
function, which is in the dimensions of the CEV, just as is the
output of the feedback function.
...
What I want a word for is the value of H(d). It isn't the
fluctuation of the CEV, because that is dC/dt, which is the sum
of the derivatives of the other two, if the effect is linear.
Above, I used "wydhawf," but since there is already a word for
"disturbing variable," why not use the more familiar
"disturbance," as I had (apparently wrongly) thought we
previously agreed?
Because the disturbing variable and the word "disturbance" have been
defined since the beginning of PCT as d, not H(d). Except where I
have accidentally been slipshod, I have used the same term in the
same way consistently.
But if you really want to use two words for one thing, I'll try to
remember to use "wydhawf" in future--if I can remember it
tomorrow.
They are two words for two different things. H(d) is not d. You have
allowed yourself to be misled by a specific case, the case seen in
our tracking experiments where H is a multiplier of 1. In that case
there is a trivially simple relationship between the value of the
disturbance and its contribution to the state of the controlled
variable. But the general case is the one in which H is a general
function, not a constant of proportionality. H(d) could be, for
example, d^2 - 200 + dd/dt (the last term being the time
derivative of d), in which case it would be obvious that
the value of H(d) is not d. In the case of a crosswind, d would be
the velocity of the wind, while H(d) would be a complex nonlinear
function of the velocity of the car, the velocity of the wind, and
the angle between them.
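To make "H(d) is not d" concrete, here is a hypothetical disturbance function in the spirit of the crosswind example. The particular form of H is invented for illustration only, not a real aerodynamic model:

```python
import math

def H(d, v=30.0, angle=math.pi / 4):
    """Hypothetical lateral-force disturbance function: nonlinear in the
    wind speed d, and dependent on car speed v and the wind angle."""
    return 0.1 * (d * math.sin(angle)) ** 2 / (1 + v / 50.0)

for d in [0.0, 5.0, 10.0]:
    print(d, H(d))   # the value of H(d) is plainly not d
```

Doubling d does not double H(d), and nothing about d or the form of H can be read back off the output value alone.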
If you decide to use "disturbance" for H(d) despite my steadfast
objections, which have never changed, you will be no better off than
before, because as you point out, H(d) is NOT the observed state of
the controlled variable. The actual state is produced by the sum of
H(d) and G(o). So "the disturbance" would still not be a fluctuation
in the controlled variable, and would still not be directly
detectable by the input function.
If you refer to H(d) as "the disturbance," you will leave the
impression that the disturbance is the same as the change in the
controlled variable, which it is not. So you gain nothing with
respect to popular usages.
But I do not accept this usage of "the disturbance" because to do so
would require going back and rewriting everything I have written
about the disturbance for 40 years (which you seem not to have
read). The term "disturbance" has been defined in PCT; it already
has a meaning; it has been used up. You can say "wydhawf" for the
quantity you mean, but H(d) is much simpler and conveys its meaning
directly. H(d) represents the _effect_ of d, the _contribution_ of d
to the state of the controlled variable, as I have put it many times
before. You can't waltz into this game and arbitrarily start
changing the rules without obtaining the consent of the other
players.
You follow this with a discussion of why the wydhawf is an
unnecessary construct, a computational fiction. I don't think
it is unnecessary, any more than is the error signal, which may
not exist as a measurable quantity in a real control system,
such as one in which the output function is based on a
differential input Op-Amp.
I said it was a computational fiction, which it is, but not that it
is unnecessary. I use H(d) (or more often, D(d)) to indicate one of
two converging effects on the controlled variable. In our equations,
you will quite often find H(d) -- Rick employs that notation
consistently. It is necessary to indicate H(d) somehow. But this
does not mean that there is a physically separate H(d)
distinguishable from G(o).
In the case of the massive controlled position variable, the
position is given, simplified, by
cv = int(int((o+d)/m)),
with m being the mass. This can be decomposed into
cv = int(int(o/m)) + int(int(d/m)).
Then H(d) is int(int(()/m)) where () is the placeholder where the
disturbance goes. But in fact, the two forces add to produce a net
force, and that force produces a single acceleration of a single
object, and the acceleration of that object is integrated twice to
produce the position that the input function senses. The first
representation above reflects, in the grammar of algebra, the actual
physical process. The second creates an impression of two apparent
processes where only one exists. The two representations are
conceptually, but not physically, equivalent. Given only the second
expression, one could make the mistake of thinking that in general a
different mass could be involved in the two terms.
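The conceptual (but not physical) equivalence of the two forms can be checked numerically. In this sketch the force histories, mass, and step size are arbitrary assumptions; by the linearity of integration, the combined and decomposed forms give the same position:

```python
# Double integration of force/mass to position, Euler-style.
m, dt = 2.0, 0.01
o_hist = [1.0] * 500   # output force history
d_hist = [0.5] * 500   # disturbance force history

def double_integrate(f_hist):
    """Integrate f/m twice: force -> velocity -> position."""
    v = x = 0.0
    for f in f_hist:
        v += (f / m) * dt
        x += v * dt
    return x

combined = double_integrate([o + d for o, d in zip(o_hist, d_hist)])
separate = double_integrate(o_hist) + double_integrate(d_hist)
print(combined, separate)   # equal, by linearity -- yet only one
                            # physical acceleration ever occurs
```

The same single mass appears in both terms of the decomposed form; the algebra permits a split that the physics does not.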
As to the error signal, if the reference signal enters a physically
distinct comparator, an explicit error signal is required. If
instead it enters an output function, no error signal is required
but we would still want to identify the important quantity r - p,
which we could do simply by writing
(r - p). H(d) has the same kind of existence as the error signal,
except that there are hardly any cases, perhaps no cases, where H(d)
could be examined by itself, whereas there are known control systems
where the error signal can be identified.
The wydhawf, as I once before described it, is the effect the
disturbance would have, were it not opposed by the output
effect (which is exactly as unobservable).
I have long defined H(d) in exactly the same way. So what? The
effect that the disturbance would have if it were not opposed by the
output effect is not what we see in an intact control system, nor
what the input function senses.
Even knowing the effect that the disturbance would have in the
absence of an opposing output does not permit you to deduce the
disturbance from observing the now-uncontrolled variable. That is
because H(d) is not d. Neither is it H. You can't deduce either the
form of a function or the values of its arguments by looking only at
the value of the function.
Let me put that another way:
YOU CAN'T DEDUCE EITHER THE FORM OF A FUNCTION OR THE VALUES OF ITS
ARGUMENTS BY LOOKING ONLY AT THE VALUE OF THE FUNCTION.
--------------------------------------------
RE: indeterminacy
I said
just as obviously, it DOES NOT contain information about the
latter because the relationship is indeterminate.
You said
It is quite wrong to say that an indeterminate relationship
eliminates information. It may, but more commonly it reduces
rather than eliminates information. I quote from my previous
posting:
This is an example of a function of the
form X = F(S+N). It is wrong to say of such a function that X
_inherently_ is independent of S or of N.
...
The same holds if N is partitioned into N1, N2, N3, ..., Nk.
But this says that you know that there are k N's, and that there are
none in addition that you don't know about. I am talking about the
kind of mathematical indeterminacy that is involved in being asked
to calculate the area of a rectangle, and being given only the
length of one side. Knowledge of that one side does not in the
slightest reduce your uncertainty about the area. That is what I
mean by "indeterminate" as opposed to "uncertain."
As far as a control system is concerned, the "most common" case is
the one where the system has no indication of the form of H or the
value of d, and where even the analyst is incapable of anticipating
all possible causes of disturbances and the paths through which they
will act. Fortunately, the control system does not have to know
either H or d to control successfully, because its actions work
directly on the controlled variable, and are based entirely on
perception of the controlled variable and the setting of the
reference signal. Identical control actions can follow upon an
infinite variety of different H's and d's, so knowledge of H and d
can't be important in achieving control. It is impossible for the
perceptual signal to contain information about which set of H's and
d's happen to be operating at a given moment, because that set can
change without any change in the effects on the controlled variable
and hence on the perception.
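The claim that identical control actions can follow from different H's and d's can be sketched directly. Below, two different disturbance functions paired with two different disturbance values produce the same net effect H(d) on the controlled variable, so the system's outputs, computed only from p and r, are identical. All functions and values are illustrative:

```python
def run(h_of_d, d_hist, k=10.0, dt=0.01, r=1.0):
    """Simulate a simple integrating controller. The system sees only
    p and r, never H or d themselves."""
    o, outputs = 0.0, []
    for d in d_hist:
        p = o + h_of_d(d)        # perception: output effect plus H(d)
        o += k * (r - p) * dt    # integrating output function
        outputs.append(o)
    return outputs

case1 = run(lambda d: d, [0.25] * 300)       # H is identity, d = 0.25
case2 = run(lambda d: 2 * d, [0.125] * 300)  # different H and d, same H(d)
print(case1 == case2)   # identical actions from different (H, d) pairs
```

Since the perceptual signal is the same in both cases at every instant, no analysis of that signal could distinguish which (H, d) pair was operating.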
A control system can be designed strictly on the basis of the
properties of the available components, without taking into
consideration the kind of disturbances that might arise. The
properties of the system, if utilized to the maximum, will determine
which disturbances can be resisted and which can't. Disturbance
waveforms, spectra, distributions, average values, or statistical
properties are irrelevant to the design, if the design is already
the best that can be obtained from the available materials.
-------------------------------------------------------------
Best,
Bill P.