[from Wayne Hershberger 930324]
I just got around to reading my E-mail or I would
have had my oar in the disturbance controversy before this.
I hope the dialectic will continue until thesis and antithesis
beget a constructive synthesis. Imagine what Claude
Shannon and Harold Black could have discovered about the
nature of information and control by arguing with each
other in the way M & M have been doing here on the net--
it reminds me of Wilbur and Orville Wright's heated
arguments about the nature of flight: very productive.
Recognizing the fact that a control system's
disturbance can be mirrored in its output without being
represented in its input is a matter of the first importance
in understanding the nature of closed-loop control. Rick
Marken's steadfast defense of this fact, both as fact (no
loose cannon, this) and as the essence of HPCT has been
marvelous.
As Bill Powers has noted from time to time, a control
system may be viewed as an analog computer that
determines the magnitude of a variable, d, not by sensing it
directly, but rather by controlling the value of an
alternate, sensed variable, p, that is disturbed by variable
d whose magnitude is being computed; the system's output,
o, is the system's estimate of d. This is the fact that Rick
is insisting that we not fudge: control systems compute an
estimate of d while sensing only p. It is magical, but true.
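To see the fact in miniature, here is a little simulation sketch (the gain and slowing factor are illustrative numbers of my own, not anyone's canonical model). The program senses only p, never d, yet its output settles on a good estimate of d:

    # A minimal negative-feedback loop that senses only p, never d
    # (a sketch with illustrative numbers, not a canonical PCT model).
    r = 0.0                       # reference signal
    d = 10.0                      # disturbance, unknown to the system
    o = 0.0                       # output
    gain, slowing = 100.0, 0.005  # loop gain and output slowing factor

    for _ in range(500):
        p = d + o                         # environment: p reflects d and o together
        e = r - p                         # comparator: error signal
        o = o + slowing * (gain * e - o)  # leaky-integrator output function

    p = d + o
    print(round(p, 3), round(o, 3))       # ~0.099 and ~-9.901: o mirrors -d

The output converges on very nearly the negative of d even though d never appears at the input; the small residual left in p is the gain-limited error taken up below.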
However, since this magical fact is natural, not
supernatural, it should be possible to explain the "trick."
The trick, of course, is negative feedback--feeding the
system's output back on itself so that it is self-limiting.
That is to say, the control loop's output is at once error
driven and error reducing. Yet, saying this, it seems to me
that the question Martin is asking remains unanswered.
That is, how exactly does the control system compute its
estimate of d from p?
I believe the answer is implicit in M & M's observation
that the better the control, the less p is attributable to d,
with the limit being zero--not >>0 as Ashby supposed. The
control system's estimate of d depends upon this limit. That
is, the control system's irreducible error, at the limit of
control, is attributable exclusively to d, so the estimate of d
is a function of this error and the system's gain--which
determines the error's limit.
Martin, I believe you overstated the case when you
acknowledged (Martin Taylor 930319 14:30) that:
With infinite precision perceptual signals and zero
transport lag around the loop, the perceptual signal is
always completely under control and the disturbance
is never represented there.
Only if the gain is infinite and the bandwidth of the
disturbance is not too great. Suppose that the system is
perfectly stable but the gain is not infinite. Suppose that
the forward gain is 990, and the reference value is a
constant .11 units; further,
(Bill Powers 930320.2100) Suppose the disturbing
variable is a constant 10 units, and the output is a
constant -9.9 units, both measured in terms of effect
on the CEV when acting alone. The perception,
referred to the environment, is 0.1 units.
The irreducible error is .01 units. Since this error is
irreducible at the limit of control, it is an amount of p that
is attributable exclusively to d. Therefore, o = .01 * 990 = 9.9
is a good estimate of d.
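Spelled out as a short calculation (a sketch of the same example, with the output's effect taken to oppose the disturbance, as in Bill's quoted numbers):

    # Arithmetic check of the example above (gain 990, r = .11, d = 10).
    gain = 990.0
    r = 0.11                      # reference value
    d = 10.0                      # disturbing variable, never sensed directly
    p = 0.1                       # perception referred to the environment (10 - 9.9)
    error = r - p                 # irreducible error: .01
    estimate_of_d = gain * error  # .01 * 990 = 9.9, close to the actual d of 10

    # The error's limit is set by the gain: roughly (d - r) / gain here,
    # so gain * error recovers d (less r) to within the limit of control.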
---------------------------------------
(Gary Cziko 930323)
When I'm driving, I control the acceleration of the car
and so I seem to use this advance knowledge of
impending accelerations to minimize my head bobbing.
This phenomenon is what I'm having some trouble
understanding in PCT terms since it appears to be
a good example of what a "normal" psychologist would
probably refer to as FEEDFORWARD.
Yes, or classical conditioning, or both, as I did in my
chapter " Control theory and learning theory" in Rick's
special issue of American Behavioral Scientist. You may
think of any such feedforward (or conditional reflex) as an
endogenous disturbance added to the output and timed so
that it coincides with an anticipated exogenous disturbance,
the two canceling each other. You may also think
of it as a pulse added to the error signal (i.e., added to the
output before it is amplified), in which case the pulse is
effectively being added to the reference signal; that is, r -
p + pulse = r + pulse - p (this is what Tom Bourbon was
describing several months ago on the net). This bumps the
matter up a level in the hierarchy where the pulse may be
either anticipatory (feedforward; i.e., output added after
amplification) or error driven (feedback; i.e., added before
amplification). If it is the latter, then the anticipatory
pulse is added to the reference signal at level 2--which
bumps the question of whether it is ultimately feedforward
or feedback up to the 3rd level, etc., etc. Bill Powers
insists that ultimately it is feedback. I'm not convinced.
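To make the bookkeeping concrete, here is a toy sketch (illustrative numbers only, not a model of any particular level). The same pulse can be booked against the error signal, against the reference signal, or as an output-side addition timed to cancel an anticipated exogenous disturbance:

    # Toy sketch of the readings of an anticipatory pulse
    # (illustrative numbers, chosen to be exact in floating point).
    gain = 8.0
    r, p = 5.0, 4.5          # reference and perceptual signals
    pulse = 0.25             # anticipatory pulse

    # (1) Pulse added to the error signal, before amplification ...
    o_error_side = gain * ((r - p) + pulse)
    # (2) ... gives the same output as a pulse added to the reference signal:
    o_reference_side = gain * ((r + pulse) - p)
    assert o_error_side == o_reference_side   # r - p + pulse = r + pulse - p

    # (3) Read instead as an endogenous addition to the output, timed to
    # meet an anticipated exogenous disturbance of equal and opposite
    # effect on the controlled quantity:
    exogenous_d = -gain * pulse
    net_effect = gain * (r - p) + gain * pulse + exogenous_d
    assert net_effect == gain * (r - p)       # the two pulses cancel, leaving
                                              # the ordinary error-driven output

The arithmetic is indifferent; it is the origin of the pulse that keeps getting bumped up a level.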
Gary, the paper you sent me to read was incomplete,
comprising only the odd-numbered pages. Or was it the
even-numbered ones? I forget. I apologize for not telling
you this earlier, but the matter is academic because I don't
know when I'll be able to get around to it. It seems, these
days, that the hurrieder I go, the behinder I get.
Warm regards, Wayne
Wayne A. Hershberger Work: (815) 753-7097
Professor of Psychology
Department of Psychology Home: (815) 758-3747
Northern Illinois University
DeKalb IL 60115 Bitnet: tj0wah1@niu