[from Wayne Hershberger 930324]

      I just got around to reading my E-mail or I would
have had my oar in the disturbance controversy before this.
I hope the dialectic will continue until thesis and antithesis
beget a constructive synthesis. Imagine what Claude
Shannon and Harold Black could have discovered about the
nature of information and control by arguing with each
other in the way M & M have been doing here on the net--
it reminds me of Wilbur and Orville Wright's heated
arguments about the nature of flight: very productive.

      Recognizing the fact that a control system's
disturbance can be mirrored in its output without being
represented in its input is a matter of the first importance
in understanding the nature of closed-loop control. Rick
Marken's steadfast defense of this fact, both as fact (no
loose cannon, this) and as the essence of HPCT, has been
exemplary.

      As Bill Powers has noted from time to time, a control
system may be viewed as an analog computer that
determines the magnitude of a variable, d, not by sensing it
directly, but, rather, by controlling the value of an
alternate, sensed variable, p, that is disturbed by variable
d whose magnitude is being computed; the system's output,
o, is the system's estimate of d. This is the fact that Rick
is insisting that we not fudge: control systems compute an
estimate of d while sensing only p. It is magical, but true.
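The point can be put in a few lines of simulation (a sketch of my own, not code from any of the posts; the parameters are arbitrary): the system below senses only p, yet its output ends up mirroring the disturbance.

```python
import math

# Minimal negative-feedback loop: the system senses only p = d + o;
# the disturbance d itself is never represented in the input.
r = 0.0                      # reference signal
o = 0.0                      # output quantity
gain, slowing = 50.0, 0.01   # arbitrary illustrative parameters
worst_mismatch = 0.0
for t in range(2000):
    d = math.sin(t / 100.0)        # slowly varying disturbance
    p = d + o                      # percept: joint effect of d and o
    e = r - p                      # error signal
    o += slowing * (gain * e - o)  # leaky-integrator output function
    if t > 200:                    # after the transient settles...
        worst_mismatch = max(worst_mismatch, abs(d + o))

# ...o mirrors -d, so -o serves as the system's running estimate of d.
print(worst_mismatch)   # a few percent of the disturbance amplitude
```

The percept stays pinned near the reference while the output tracks the (never-sensed) disturbance almost exactly.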

      However, since this magical fact is natural, not
supernatural, it should be possible to explain the "trick."
The trick, of course, is negative feedback--feeding the
system's output back on itself so that it is self-limiting.
That is to say, the control loop's output is at once error
driven and error reducing. Yet, saying this, it seems to me
that the question Martin is asking remains unanswered.
That is, how exactly does the control system compute its
estimate of d from p?

      I believe the answer is implicit in M & M's observation
that the better the control, the less p is attributable to d,
with the limit being zero--not >>0 as Ashby supposed. The
control system's estimate of d depends upon this limit. That
is, the control system's irreducible error, at the limit of
control, is attributable exclusively to d, so the estimate of d
is a function of this error and the system's gain--which
determines the error's limit.
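Under the usual steady-state algebra (p = d + o, o = G(r - p); again a sketch of my own), the fraction of d surviving in p is 1/(1 + G), which approaches but never reaches zero at finite gain:

```python
# Steady state of p = d + o, o = G*(r - p), with r = 0 for simplicity:
# p = d/(1 + G), so the share of d that survives in p shrinks with gain
# but reaches zero only in the infinite-gain limit.
d = 10.0
for G in (1, 10, 100, 1000):
    p = d / (1 + G)   # residual effect of the disturbance on the percept
    o = G * (0 - p)   # output; -o is the system's estimate of d
    print(G, round(p, 4), round(-o, 4))
```

As G grows, -o converges on d while p converges on zero, which is exactly the limit described above.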

      Martin, I believe you overstated the case when you
acknowledged (Martin Taylor 930319 14:30) that:

      With infinite precision perceptual signals and zero
      transport lag around the loop, the perceptual signal is
      always completely under control and the disturbance
      is never represented there.

      Only if the gain is infinite and the bandwidth of the
disturbance is not too great. Suppose that the system is
perfectly stable but the gain is not infinite. Suppose that
the forward gain is 990, and the reference value is a
constant .11 units; further,

      (Bill Powers 930320.2100) Suppose the disturbing
      variable is a constant 10 units, and the output is a
      constant -9.9 units, both measured in terms of effect
on the CEV when acting alone. The perception,
      referred to the environment, is 0.1 units.

      The irreducible error is .01 units. Since this error is
irreducible at the limit of control it is an amount of p that
is attributable exclusively to d. Therefore, o = .01 * 990 =
9.9, which is a good estimate of d.
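Wayne's arithmetic can be checked against the standard loop equations (p = d + o, o = G(r - p); my sketch). The figures come out marginally different from those quoted (p nearer 0.12 than 0.1) because of the familiar G versus G + 1 bookkeeping, but the logic is identical: inverting the irreducible error recovers d.

```python
G, r, d = 990.0, 0.11, 10.0

# Closed-form steady state of p = d + o, o = G*(r - p):
p = (d + G * r) / (1 + G)    # ~0.12: percept pinned near the reference
e = r - p                    # irreducible error, ~0.01 in magnitude
o = G * e                    # ~ -9.88: output opposing the disturbance
estimate = r - (1 + G) * e   # inverting e = (r - d)/(1 + G) recovers d
print(round(p, 3), round(e, 4), round(o, 2), round(estimate, 2))
# -> 0.12 -0.01 -9.88 10.0
```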



(Gary Cziko 930323)
      When I'm driving, I control the acceleration of the car
      and so I seem to use this advance knowledge of
      impending accelerations to minimize my head bobbing.
      This phenomenon is what I'm having some trouble
understanding in PCT terms since it appears to be
      a good example of what a "normal" psychologist would
      probably refer to as FEEDFORWARD.

      Yes, or classical conditioning, or both, as I did in my
chapter "Control theory and learning theory" in Rick's
special issue of American Behavioral Scientist. You may
think of any such feedforward (or conditional reflex) as an
endogenous disturbance added to the output and timed so
that it coincides with an anticipated exogenous disturbance,
the two thereby canceling each other. You may also think
of it as a pulse added to the error signal (i.e., added to the
output before it is amplified), in which case the pulse is
effectively being added to the reference signal; that is, r -
p + pulse = r + pulse - p (this is what Tom Bourbon was
describing several months ago on the net). This bumps the
matter up a level in the hierarchy where the pulse may be
either anticipatory (feedforward; i.e., output added after
amplification) or error driven (feedback; i.e., added before
amplification). If it is the latter, then the anticipatory
pulse is added to the reference signal at level 2--which
bumps the question of whether it is ultimately feedforward
or feedback up to the 3rd level, etc., etc. Bill Powers
insists that ultimately it is feedback. I'm not convinced.
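The identity r - p + pulse = r + pulse - p is easy to verify in a toy loop (a sketch of my own; the parameters are arbitrary): summing the pulse into the error signal before amplification produces exactly the same trajectory as summing it into the reference signal.

```python
def run(pulse_on_reference, steps=300):
    """Simulate p = d + o; return the trajectory of p.

    The pulse is summed either into the reference signal or into the
    error signal before amplification (arbitrary illustrative values).
    """
    gain, slowing, r, d = 50.0, 0.01, 0.0, 5.0
    o, trace = 0.0, []
    for t in range(steps):
        pulse = 1.0 if 100 <= t < 120 else 0.0
        p = d + o
        if pulse_on_reference:
            e = (r + pulse) - p   # r + pulse - p
        else:
            e = r - p + pulse     # r - p + pulse
        o += slowing * (gain * e - o)
        trace.append(p)
    return trace

a = run(pulse_on_reference=True)
b = run(pulse_on_reference=False)
print(max(abs(x - y) for x, y in zip(a, b)))   # -> 0.0: identical runs
```

From outside the loop the two arrangements are indistinguishable, which is why the feedforward-or-feedback question keeps getting bumped up a level.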

      Gary, the paper you sent me to read was incomplete,
comprising only the odd-numbered pages. Or was it the
even-numbered ones? I forget. I apologize for not telling
you this earlier, but the matter is academic because I don't
know when I'll be able to get around to it. It seems, these
days, that the hurrieder I go, the behinder I get.

Warm regards, Wayne

Wayne A. Hershberger Work: (815) 753-7097
Professor of Psychology
Department of Psychology Home: (815) 758-3747
Northern Illinois University
DeKalb IL 60115 Bitnet: tj0wah1@niu

[From Bruce Abbott (951206.1415 EST)]

Bill Powers (951206.0530 MST) --

Bruce Abbott (951205.1745 EST)

    If you read just a little further (past the semicolon that denotes
    the start of the subordinate sentence that served to explain my
    assertion), you will see that I am referring to the steady state,
    after the system has come to equilibrium and the system is
    generating a constant output that just offsets the tendency of the
    constant disturbance to increase the error. Under this condition
    the constant disturbance is having only one effect and one effect
    only on the error: it is pushing in a direction that would increase
    the error if the control system did nothing to resist it.

Look at the table of numbers in the referenced post. These are steady-
state conditions. Consider just one line:

     d      qc       qo    error, r-p

    10    91.8    409.1       8.2

Here the constant disturbance is "pushing" qc in the positive direction,
and so is the action, the output quantity. The actual value of qc is
still below the reference level of 100, so the combined effect of the
disturbance (10) and the action (409.1/5) still leaves an error of 8.2
units. BOTH the disturbance and the action are pushing in the direction
that would REDUCE error. This is the equilibrium state of the system for
that constant value of disturbance.
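For the record, that line is generated by a reference of 100, an output gain of 50, and a feedback factor of 1/5 (parameter values inferred from the numbers, not taken from the original post): solving qc = d + qo/5 and qo = 50(r - qc) simultaneously gives qc = (1000 + d)/11.

```python
# Inferred parameters: reference 100, output gain 50, feedback factor 1/5.
r, gain, feedback = 100.0, 50.0, 0.2

def steady_state(d):
    # Simultaneous solution of qc = d + feedback*qo and qo = gain*(r - qc).
    qc = (d + feedback * gain * r) / (1 + feedback * gain)
    qo = gain * (r - qc)
    return round(qc, 1), round(qo, 1), round(r - qc, 1)

print(steady_state(10))   # -> (91.8, 409.1, 8.2), matching the table line
```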

Now at least I understand where the problem lies. The situation I have been
considering is one in which the disturbance is tending to "push" the
controlled variable away from its reference, as in the typical operant
experiment when food deprivation is imposed to bring body weight below
setpoint. The situation you have been considering is one in which the
disturbance is tending to "push" the controlled variable toward its
reference, as when rate of food delivery is being controlled and the
disturbance consists of extra food deliveries. I'm sorry, apparently I
didn't make that explicit. My only reason for considering the effect of a
constant disturbance was to provide an example in which the reinforcer's
effect (if unopposed) would be to reduce error, yet no error-reduction is
actually observed. In this situation the reinforcer is effective because it
tends to prevent the error from increasing.

In the situation you modeled, the extra food deliveries reduce error, and so
does the delivery of the reinforcer: the effects on the controlled variable
summate, rather than subtract. To the extent that there is still error,
however, the reinforcer still tends to reduce it and therefore would still
function as a reinforcer. However, the lower the error, the lower the
"drive," so to the extent that the disturbance independently reduced error,
responding would be correspondingly reduced.
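Using the same steady-state equations inferred from Bill's table, the two cases can be put side by side (a sketch under those assumed parameters): a disturbance pushing qc away from the reference enlarges both error and responding, while one pushing it toward the reference shrinks both.

```python
# Parameters inferred from Bill's table: reference 100, output gain 50,
# feedback factor 1/5.
r, gain, feedback = 100.0, 50.0, 0.2

def steady_state(d):
    qc = (d + feedback * gain * r) / (1 + feedback * gain)
    return round(r - qc, 1), round(gain * (r - qc), 1)   # (error, output)

# d = -10 pushes qc away from the reference; d = +10 pushes it toward it.
print(steady_state(-10))   # -> (10.0, 500.0): larger error, more responding
print(steady_state(10))    # -> (8.2, 409.1): smaller error, less responding
```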

My use of the term "drive" in this case is not quite appropriate, as this
term is usually reserved for cases such as hunger, thirst, etc. where a
physiological perception is involved. In thinking about how reinforcement
would be treated within PCT, I was assuming that responding produced an
event that reduced error in such a system, and for simplicity ignored
the fact that lever-pressing doesn't usually directly inject the food into
the rat's gut. I assumed that, once the pellet appeared, appropriate
references would be set so as to produce the series of acts necessary to
ingest the food. For the purpose of my analysis it doesn't really matter
whether the action of the control system directly results in food in the gut
or consists in the setting of references that achieve this result.

I think that part of the difficulty, maybe the main difficulty, is that
you're trying to separate the individual jolts of reinforcer from the
mean value of the reinforcer/controlled-quantity. Thus you come up with
the idea that each brief reduction of error due to a single
reinforcement-event creates an increment in the probability of the
behavior, which results in an increased mean rate of behavior and an
increase in the mean reinforcement rate.

A detailed model would include the "individual jolts" provided by the
reinforcer, but they are not central to my analysis. What is central is
that these jolts tend to move the error toward zero. I don't recall
anything about each single reinforcement-event creating an increment in the
probability of the behavior in THIS discussion, wherein I am NOT attempting
to offer a reinforcement explanation for what is going on.

In this analysis, you're using the structure of the control-system model
but ignoring the control-system analysis and trying to explain the
operation of a control system in terms of reinforcement.

No, you've got it backward. What I'm trying to do is use a control-system
analysis to identify what such things in reinforcement theory as reinforcer
delivery, "establishing operations," "drive," and so on correspond to in a
control-system model.

If you use the control-system structure and the control-system analysis,
there is simply no need for the concept of behavior-maintaining
reinforcement. None at all. And if you try to introduce that concept, it
predicts the WRONG SIGN OF EFFECT.

I agree that there is no NEED for the concept of behavior-maintaining
reinforcement. I do not agree that it predicts the wrong sign of effect;
your assertion that it does comes about from a misunderstanding of the
concept as it is understood today, and I have been hoping that by showing to
what control-system analog the event termed a "reinforcer" corresponds, we
might both have a better handle on the subject.

Bruce, my impression of EABers has been and is that they simply don't
know how to do rigorous system analysis. I say this because editors have
let papers through that contain ridiculous conceptual errors and even
mathematical errors. Myerson and Miezin, for example, published a paper
in which one "system of equations" consisted of two linearly dependent
expressions, which any beginning engineer would realize indicates that
they really had only one equation, not two. And that was far from the
only blunder in this paper.

The biggest problem here is not that M&M made this elementary blunder,
but that the editors and reviewers of the journal let it be printed. If
the editors and reviewers, the arbiters of good science, can't pick up
elementary blunders in system analysis, what are we to think of the
field as a whole?

I absolutely agree with you here. The root of the problem, to my mind, is
that for a very long time in experimental psychology it has been deemed
nearly impossible to do the kind of systems analysis that PCT shows can be
done. It has been held that the system is of daunting complexity, and that
at this early (yes, early) stage in the game the best an experimentalist
could hope for is to discover lawful empirical relationships that a future
knowledge of the underlying physiological mechanisms might one day explain.
Meanwhile, these relationships would provide at least some degree of
prediction and control and would offer some basis for theory development and
testing. The consequence of this belief (reinforced by some bitter failures
early on to produce mathematical models that worked) has been to emphasize
the importance of empirical studies over mathematical analysis, and
therefore, unfortunately, not to require graduate students in the field to
become proficient in these techniques. (This is not to say that there are
no good mathematicians in experimental psychology.)

What needs to be done to change this is to demonstrate to experimental
psychologists what systems analysis can do. They have not been unwilling to
learn techniques if they thought the effort would pay off. For example,
there has been a fair amount of effort recently to apply the mathematical
models of neural networks and chaos theory to psychological questions.
There are certainly experimental psychologists with better mathematical
training than I have who are ready, willing, and able to use systems
analysis and other mathematical procedures if the application seems warranted.

PCTers can go only so far in building a bridge to EAB. At some point,
EABers who want to understand PCT are going to have to complete their
educations, and learn how to apply mathematical analysis to real
physical systems. That, I claim, they do not now know how to do.

I agree; the question is, can we convince them that it will be worth the
effort? I think we can.