[From Bill Powers (971205.1111 MST)]
Bruce Abbott (971205.1040 EST)--
Let's talk about that. A control system is in a state of equilibrium. When
the magnitude of the disturbance to the CV begins to change so as to push
the CV farther away from its reference value, the CV, which at this moment
is being affected both by the changing disturbance and the constant system
output, begins to move further away from the system's current reference value for
the CV. This change is picked up by the perceptual input device, whose
output now also begins to change as a function of the changing input. The
change in perceptual signal propagates to the comparator, where it is
combined with the current reference signal to produce an output change in
error signal. This change in signal propagates to the output device, which
produces a change in output according to the output function, and this
change in output now begins to affect the CV.
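In code, one pass around this loop might be sketched roughly as follows
(the function names and the simple forms used here are only illustrations,
not part of the model as stated above):

def perceive(cv):
    # Perceptual input function: here the perceptual signal simply equals the CV.
    return cv

def one_pass(cv, reference, output, disturbance, gain=8.0, dt=0.01):
    perception = perceive(cv)        # input device picks up the CV
    error = reference - perception   # comparator
    output += gain * error * dt      # integrating output function
    cv = output + disturbance        # output and disturbance jointly set the CV
    return cv, output, error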
As Bruce Gregory reminds us, causation itself is an aspect of a model. If
you have a model in mind, you naturally tend to see things in terms of that
model. The lineal causation model emphasizes temporal lags, which allow you
to think of causal events followed by their effects in the form of later
events. Even if the lag is very small, it allows enough conceptual space
between cause and effect to keep them separated in one's mind.
The PCT concept of behavior is not based on events, but on continuous
variations in variables. Let's take the same example you use, but look at
it in terms of continuous variations. We begin the same way:
A control system is in a state of equilibrium. When the magnitude of the
disturbance to the CV begins to change so as to push the CV farther away
from its reference value, the CV, which at this moment is being affected
both by the changing disturbance and the constant system output, begins to
move further away from the system's current reference value for the CV.
Now let's follow in some detail. As the disturbance rises from zero, there
is a change in the error signal which produces a change in the output which
produces a change in the effect of output on the input, with some delay
representing the total propagation time around the loop, from input back to
the input. During
the loop delay time, the error keeps increasing. After the initial delay,
the output begins to rise in a direction opposite to the disturbance. This
rising output slows the increase of the error signal, until finally the
output is rising at nearly the same rate as the disturbance (in the
opposite direction); the error signal now is rising at a much lower rate,
heading toward an asymptote. If we plot the disturbance on the same scale
as the output, with the sign of one reversed, we will see that as the
disturbance rises there is a brief delay and then the output begins to
increase along a parallel curve with the same shape but slightly delayed.
When the disturbance reaches its maximum value, the output keeps rising for
one or two delay times, and then levels out at the equilibrium
(steady-state) value. When the disturbance begins to level out, the error
signal declines from the steady value it had during most of the transition,
until it drops to the equilibrium value.
So the error signal follows a particular waveform, something like this:
         *******************************
        *<----------------------------->*
       * output lagging dist by delay-time *
   ****                                     ****************
       ^                                    ^
 start of change                        end of change
This whole plot should have a slight positive slope if the integrating
output function is leaky, but it's hard to indicate with asterisks.
I recommend looking at all the variables in a simulation containing a
delay. The overall picture is one of a continuous relationship with a
slight lag between the changes in the variables. The lag is visible only
when the disturbance is changing; when the disturbance reaches a steady
state, the effect of the lag disappears.
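Such a simulation can be sketched in a few lines; the parameter values
below, and placing the whole loop delay on the error signal, are just
illustrative assumptions:

from collections import deque

dt = 0.01                      # time step, seconds
gain = 8.0                     # integration rate of the output function
leak = 0.05                    # small leak, giving the slight slope noted above
delay_steps = 10               # total loop transport delay = 0.1 s
reference = 0.0
output = 0.0
in_transit = deque([0.0] * delay_steps)   # error values still propagating

for step in range(1000):
    t = step * dt
    disturbance = 5.0 * min(t, 2.0)       # ramp to 10 over 2 s, then level out
    cv = output + disturbance             # CV = effect of output plus disturbance
    perception = cv                       # perceptual input function
    error = reference - perception        # comparator
    in_transit.append(error)
    delayed_error = in_transit.popleft()  # error as it reaches the output function
    output += (gain * delayed_error - leak * output) * dt   # leaky integrator
    if step % 100 == 0:
        print(f"t={t:4.1f}  dist={disturbance:6.2f}  output={output:7.2f}  "
              f"cv={cv:6.3f}  error={error:7.3f}")

Plotting disturbance, output, cv, and error from a run like this shows the
lag appearing only while the disturbance is changing.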
Clearly, in this way of seeing the behavior there are no significant
events, except those we may mark off at arbitrary points in the plot.
Instead we have continuous variations with time lags. The only hint of
normal cause-effect sequences appears when the disturbance is varying;
otherwise the behavior is just as if there were no delays.
So how could we _force_ this system to behave in a way that was more
clearly causal? The most popular way is to think of disturbances as on-off
events, with instantaneous rise times. And it helps to think of the delays
as being quite long:
            dist
           ********************************************************
                            ################################
                            #   output
           <---- delay ---->#
                            #
***********#################
Now we clearly have a stimulus and a response, with a definite onset time
for the disturbance and a somewhat less definite onset time for the
response. The error signal would look like this:
           ****************
                           *
                            *
***********                  **********************************
Now we can see that the disturbance causes an error, which, after a delay,
is corrected by the rise in the output action. That's the description we
most often hear from people who want to create definite events in a
cause-effect sequence.
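In the same sketch, the stimulus-response picture appears if the disturbance
is made an on-off step and the loop delay is made deliberately long (with
the gain lowered so the loop remains stable at that delay); again, every
value here is an arbitrary illustration:

from collections import deque

dt, gain, delay_steps = 0.01, 0.8, 100   # a 1.0-second loop delay, exaggerated
reference, output = 0.0, 0.0
in_transit = deque([0.0] * delay_steps)

for step in range(600):
    t = step * dt
    disturbance = 10.0 if t >= 1.0 else 0.0   # on-off "event" at t = 1 s
    cv = output + disturbance
    error = reference - cv
    in_transit.append(error)
    output += gain * in_transit.popleft() * dt   # pure integrating output
    if step % 50 == 0:
        print(f"t={t:4.2f}  dist={disturbance:5.1f}  "
              f"output={output:7.2f}  error={error:7.2f}")

The printout shows a definite disturbance onset at t = 1 s, no change in the
output until a full second later, and the error being corrected only after
that, which is just the cause-effect story told above.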
There's clearly a continuum here, with a sequence-of-events concept at one
end and a continuous-variation-with-lags concept at the other. If you want
to think in terms of discrete cause-effect sequences, you pick one end as a
source of examples. If you want to consider continuous variations, you pick
the other.
Best,
Bill P.