# Slowing factor, delays

[From Bill Powers (980818.1530 MDT)]

Here are some posts from the "MODELING" thread. This one relates to Rupert
Young's queries about some modeling details. The first post is from Rupert,
with comments from me indicated by >.


------------------------------------------------------------------

The slowing factor "slow" appears in this program step for the output
function (which is a leaky integrator):

o := o + (gain*e - o)/slow

Gain*e is the next value of the output for a proportional output function.
The parenthesis (gain*e - o) is the difference between that computed next
value and the present value of the output variable. So the right-hand term,
which is (gain*e - o)/slow, is a fraction of the difference between the
current output and the computed next value, "slow" being a number greater
than or equal to 1. Note that if slow = 1, the line of code reduces to

o := gain*e

The whole line of code says that the next value of o (left side of
replacement statement) is equal to the current value (o on the right) plus
a fraction of the difference between the current value and the calculated
next value for a proportional system. The smaller the fraction 1/slow is,
the more slowly the value of o approaches its final state for constant e.
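The behavior of this program step is easy to check directly. Here is a minimal Python sketch (the function name and the particular numbers are illustrative assumptions, not from the original):

```python
def settle(gain, slow, e, steps):
    """Iterate the output step o := o + (gain*e - o)/slow with a
    constant error e, and return the history of output values."""
    o = 0.0
    history = []
    for _ in range(steps):
        o = o + (gain * e - o) / slow   # move a fraction 1/slow toward gain*e
        history.append(o)
    return history

# With slow = 1 the step collapses to o := gain*e in a single iteration.
one_step = settle(gain=10.0, slow=1.0, e=0.5, steps=1)
# With slow = 20 the output creeps toward the same final value, gain*e = 5.0.
slow_run = settle(gain=10.0, slow=20.0, e=0.5, steps=200)
```

With slow = 1 the output lands on gain*e immediately; with the larger slowing factor it approaches the same value gradually, which is the leaky-integrator behavior described above.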

[Rupert:]
Ok, I was using

o := o + (gain*e)/slow

which I've now changed to your equation.

>I've tried for a randomly changing disturbance and
>the error is low but never zero.

You can reduce the error by increasing the gain, up to a point (which needs
a separate discussion).

As long as the gain isn't less than the slowing factor?

So a change in the disturbance of 100 units has 0.2 units of effect on the
perceptual signal, and the error remains within 0.1 to 0.2 percent of the
reference signal.

If you've programmed this control system, try a sine-wave disturbance that
starts at a very low frequency and gradually increases in frequency.
You'll see that at low frequencies the errors stay small and p is nearly
equal to r. As frequency rises, the errors will start getting larger. All
real control systems (including human ones) have this property. By
convention, the frequency limit of control is taken to be the frequency
where the output has become 0.7071 of the zero-frequency (or very low
frequency) output, for a sine-wave disturbance. This is known as the
"corner frequency."

What do you mean by zero-frequency?

I haven't tried a square wave yet, but have just tried the sine wave. With a
constant-frequency sine wave the input oscillates around the reference,
following the sine wave's frequency and phase, but with very low amplitude.

And yes, with a sine wave of increasing frequency the input oscillations keep
increasing, i.e. the error gets larger and larger.

I think it would be best to digest what we have so far before going on.
Thanks. I think this clears up most of my other questions as well. Though I'm
still not clear about the relationship between the gain and the slowing
factor. It seems as if they cancel each other out or could be combined into
one variable.

With slow = 501, this control system will come to a steady state in 1 or 2
iterations. But we need to discuss the effect of time lags in a control
system to understand what this means.

Did you want to say something about time lags?

To: bugs
From: Bill Powers <powers_w@frontier.net>
Subject: Re: Some control queries

Hi, Rupert --

You can reduce the error by increasing the gain, up to a point (which needs a
separate discussion).

As long as the gain isn't less than the slowing factor?

I think you have it backward, but the idea's right. The slowing factor must
be at least gain + 1, for error correction without artificial oscillations
(generated because of using a digital computer). So if the condition is

slow >= gain + 1, then

slow - 1 >= gain, and

gain <= slow - 1.

If the slowing factor is 100, the gain can be as large as 99. The derivation
is in that Psych Rev article, which is reprinted in Living Control Systems,
the first of my two volumes by that name.
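The condition slow >= gain + 1 can be demonstrated with a bare-bones simulation of the loop (perception p = o + d, error e = r - p, then the output step from the text; the particular numbers are illustrative assumptions):

```python
def run_loop(gain, slow, r=1.0, d=0.0, steps=500):
    """Discrete control loop: p = o + d, e = r - p,
    o := o + (gain*e - o)/slow. Returns the error history."""
    o = 0.0
    errors = []
    for _ in range(steps):
        p = o + d
        e = r - p
        o = o + (gain * e - o) / slow
        errors.append(e)
    return errors

gain = 100.0
stable = run_loop(gain, slow=gain + 1)          # slow >= gain + 1: settles
unstable = run_loop(gain, slow=40.0, steps=50)  # slow too small: runs away
```

With slow = gain + 1 the error settles at about r/(gain + 1) almost immediately; with slow well below (gain + 1)/2 the correction overshoots on every iteration, so the error alternates in sign and grows -- the artificial oscillation described above.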

What do you mean by zero-frequency?

The disturbance or reference level is constant. If you keep reducing the
frequency of oscillation of a variable, you'll eventually get to one cycle
per second, one cycle per minute, one cycle per hour, one cycle per year, one
cycle per millennium --- at some point you'll decide that the variable might
as well be constant.

I haven't tried a square wave yet, but have just tried the sine wave. With a
constant-frequency sine wave the input oscillates around the reference,
following the sine wave's frequency and phase, but with very low amplitude.

Right, that's what you should see.

Did you want to say something about time lags?

OK, I guess we might as well get into that, if you're satisfied with all the
preceding.
-------------------------------------------------------------------
There are two kinds of lags, integration lag and transport lag. We'll
consider integration lag first.

Integration lag is what you see in a leaky integrator (or a pure one) when
you plot output against input. If you apply a sine wave at some frequency to
the input, the output will also be a sine wave of the same frequency, but
running a little behind the input as measured by the time of zero crossing.
When the slowing factor in the leaky integrator is 1, this lag will be
minimum. As you increase the slowing factor, the integral lag -- the time
between zero crossings of the input and the output -- will become greater and
greater in proportion to the period. At the same time, the output amplitude
will become smaller and smaller -- increasing lag relative to period goes
with a decrease in amplitude.

In a control system you can compare a signal to itself after one trip around
the loop (breaking the loop is the easiest way). If there is just one
integrator in the loop, the maximum phase shift between the original signal
and the final signal is 90 degrees, at any frequency -- in terms of time
lags, that is one quarter of the time between successive zero crossings. You
can verify this by putting sine waves of constant amplitude and variable
frequency into a leaky integrator and plotting input and output on the same
axis. At low frequencies there will be almost zero delay between input and
output, and at high frequencies the delay will come close to 1/4 of the
period of the oscillation -- delays being expressed as a fraction of the
period of oscillation.

[Note: Rupert later pointed out an error here: I should have said
"positive-going zero crossings," or referred to half of the period of the
sine wave.]

If you use a pure integrator, the delay between input and output zero
crossings will be 1/4 of the period at all frequencies.
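The approach of the lag toward a quarter period can be measured by simulation. This Python sketch (a test harness of my own; the function name and numbers are assumptions) finds the delay between positive-going zero crossings of a unit sine input and the leaky integrator's output, expressed as a fraction of the period:

```python
import math

def lag_fraction(slow, freq):
    """Drive o := o + (e - o)/slow with a unit sine of 'freq' cycles per
    iteration; return the delay of the output's first positive-going zero
    crossing (after the transient) as a fraction of the period."""
    period = 1.0 / freq
    o = 0.0
    transient = int(30 * slow)
    for n in range(1, transient + int(3 * period)):
        e = math.sin(2 * math.pi * freq * n)
        prev_o = o
        o = o + (e - o) / slow
        if n > transient and prev_o < 0.0 <= o:
            # the input's positive-going crossings fall at multiples of the period
            return (n % period) / period
    return None

slow = 200.0
low = lag_fraction(slow, freq=1e-4)    # well below the corner: little lag
high = lag_fraction(slow, freq=0.016)  # well above it: lag nears 1/4 period
```

Well below the corner frequency the lag is only a few percent of the period; well above it (but still well below one cycle per two iterations), the measured lag approaches 1/4 of the period.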

This is an important fact in achieving stable control. If the overall
characteristic of a closed loop is equivalent to a single integrator, leaky
or perfect, the loop will be stable at any loop gain (but see later, because
transport lag must also be considered) and at all input frequencies. When we
design control systems, therefore, we want to achieve this overall loop
characteristic, known in servo circles as a "one-stage Tschebychev filter". I
think "leaky integrator" is simpler. It can be achieved if all the functions
inside the control system are simple constants of proportionality, and the
environmental feedback function is a leaky or perfect integrator.

My inverted pendulum model is designed that way.

One last fact before we go on to transport lags. If we have _two_
integrations in a row, each one, at a high enough frequency, will create a
lag of 1/4 the period of the disturbance, making a total of 1/2 the period.
If the loop gain is high enough at this frequency, the negative feedback
around the loop will be converted to positive feedback: the effect of an
oscillating disturbance will be fed back late enough to add to the
disturbance instead of opposing it, and the loop will run away.

In a system with integral lags only, the amplitude of the response to an
oscillating disturbance falls at the same time that the lag is becoming a
greater and greater fraction of the period of the oscillation. Therefore it
is possible to choose a loop gain so that just as the lag approaches half of
the period, the amplitude response drops to unity. This is possible in a
control system containing _two_ leaky integrators, but not two pure
integrators. When this is achieved, the system is _just_ stable. It may
oscillate but the oscillations will die out with time when the inputs are
constant. At low frequencies the system will act like a high-gain control
system, and at high frequencies it will respond to disturbances so little
that we can say control doesn't exist. This is what engineers try to achieve
when they are trying to make a control system stable.

Now let's consider transport lags. A transport lag is a lag induced by the
time it takes a signal to travel from one location to another. A signal of
any frequency subject to a transport lag has exactly the same waveform at the
output as at the input; the ONLY difference is the time-delay. The output
amplitude does NOT decrease as the input frequency increases.

Obviously it's much easier to achieve the condition in which the lag is half
the period of the input waveform and the loop gain is still greater than 1.
The gain doesn't fall off with frequency at all, so when the transport lag
equals half the period of a disturbance, the feedback becomes positive and
the system runs away if the loop gain at _any_ frequency is greater than 1.

The first conclusion one is likely to reach about transport lags is that no
system with a transport lag can have a loop gain greater than 1. Warren
McCulloch, reporting to his group on the first 5 Macy conferences in
cybernetics, reached just such a conclusion. If that were true, it would be
impossible to build control systems with very large loop gains, because all
physical systems contain transport lags of at least a few nanoseconds.

There is, however, a way to compensate for transport lags so that very large
loop gains can be used. What you have to give up is the ability to control at
very high frequencies, frequencies comparable to 1/lag. Actually, you have to
give up a little more, because the cure is to add an integral lag to the
system. The integral lag, which has the property of making its output
amplitude fall as frequency increases, can be adjusted so that the loop gain
falls below 1 at the frequency where the transport lag inherent in the
system, plus the integral lag, is just equal to half the period.

That is why we have to use a slowing factor in all digital representations of
a control system. In a digital representation, the transport lag is always
present: it is whatever physical time is represented by one iteration of the
program loop. This iteration time is always there, although it's seldom
mentioned in digital simulations. If no special provision is put into a
program for specifying how much physical time is to be represented by one
iteration of the program, the actual time is simply the time taken by the
computer to do one complete iteration. As a result, many simulations are
unknowingly showing physical systems changing at rates of many megahertz.

In our control system simulations, we specify the time taken by one iteration
as an explicit variable, dt. This is used anywhere in the loop that there are
processes which depend on physical time. Consider our output function,
written previously as

o := o + (gain*e - o)/slow

This equation represents an increment in the output that depends on the
amplified error signal and the slowing factor. But it also represents
something unspoken: the length of time over which the influence of (gain*e -
o) is acting. The longer that period of time, the greater will be the change
in o for given values of gain, e, o, and slow. There is a dt hidden in this
program step; it is hidden because it is implicitly defined as 1 second, or 1
unit of whatever time period we have unwittingly selected to go with one
program iteration.

If we put dt in explicitly, as "dt", the equation looks like this:

o := o + (gain*e - o)*(dt/slow)

If we want to use time units of seconds for everything, which is generally
good practice in simulations, then we want dt to represent the actual
physical time that one program iteration is to represent. In much of our
tracking work, the data are sampled 30 or 60 times per second, so the natural
size of dt would be 1/30 or 1/60 second: 0.0333 or 0.0167 sec. If our
previous value of "slow" had been 5 units when we weren't using dt, it should
now be adjusted to dt*5 -- for dt = 0.033, "slow" should become 0.167.
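The rescaling can be verified by running both forms side by side. In this Python sketch (the input pattern and numbers are illustrative assumptions), the original step with its implicit unit of time and the explicit-dt step with slow rescaled to dt*slow produce identical trajectories:

```python
def step_implicit(o, e, gain, slow):
    """Original form: one program iteration is the implicit time unit."""
    return o + (gain * e - o) / slow

def step_explicit(o, e, gain, slow, dt):
    """Explicit-dt form from the text."""
    return o + (gain * e - o) * (dt / slow)

dt = 1.0 / 30.0
o1 = o2 = 0.0
for n in range(100):
    e = 1.0 if n < 50 else -1.0     # arbitrary test input (an assumption)
    o1 = step_implicit(o1, e, gain=10.0, slow=5.0)
    o2 = step_explicit(o2, e, gain=10.0, slow=dt * 5.0, dt=dt)
```

Since dt/(dt*slow) = 1/slow, the two update rules are algebraically the same step.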

The slowing factor itself (in the dt-adjusted form, expressed in seconds) is
the time-constant tau of a leaky integrator that responds to a step input
according to the formula

y(t) = y0*[1 - exp(-t/tau)]

I mentioned "artificial" oscillations that can happen when "slow" is made too
small, or the gain is too high for the selected value of "slow." They will
occur when "slow" is smaller than dt. We know that computations done when
slow is smaller than dt can't be taken literally because in effect we're
trying to calculate the values of variables that change in LESS than one
iteration of the program -- which is impossible. The oscillations that we see
are an artifact which is due chiefly to picking too long a time for dt. The
cure is clearly to change dt to a smaller number.
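A minimal demonstration of the artifact (my harness, with illustrative numbers): with a constant input, the explicit-dt step converges smoothly when slow comfortably exceeds dt, but when slow is made smaller than dt the correction overshoots on each iteration and the computed output rings or runs away:

```python
def settle_explicit(gain, slow, dt, e=1.0, steps=60):
    """Iterate o := o + (gain*e - o)*(dt/slow) with a constant error
    input and return the output history."""
    o, hist = 0.0, []
    for _ in range(steps):
        o = o + (gain * e - o) * (dt / slow)
        hist.append(o)
    return hist

dt = 1.0 / 30.0
smooth = settle_explicit(gain=1.0, slow=5 * dt, dt=dt)     # slow > dt: clean
ringing = settle_explicit(gain=1.0, slow=0.4 * dt, dt=dt)  # slow < dt/2: blows up
```

Shrinking dt, the cure named above, is equivalent here to restoring dt/slow below 1.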

If the data were taken at a certain rate (usually 30 times per second in our
tracking experiments), there is a lower limit to dt. If it turned out that we
got artificial oscillations when fitting our model to the data using a dt of
1/30 sec, the only fix would be to re-do our experiments while we sampled the
data at a higher rate. Fortunately, that has never happened so far. What we
find is that the value of "slow" needed to obtain the best match of model to
data is 5 to 10 times the minimum permissible value where we would start
seeing artifacts of computation.

A last remark, before pausing for discussion and digestion. We begin with a
(human) system that is known, from its behavior, to be stable. Therefore it
is not up to us to calculate the integration lag or the gain that is needed
to obtain stability. When we measure the lag in the human control system, and
fit a model which has both transport and integral lags in it to the data, we
find that the actual integral lag, as measured, is large enough to prevent
instabilities in a system with the measured transport lag. In other words,
nature has done the job for us. It would be very bothersome if the measured
integration times were not consistent with the measured transport lags, but
this has never been the case.

Best,

Bill P.

Hi, Rupert & watchers --

At 07:22 PM 8/16/98 +0100, you wrote:

When the slowing factor in the leaky integrator is 1, this lag
will be minimum.

Do you mean when gain=slow?

I don't think so. Consider our program step:

o := o + (gain*e - o)/slow

we can expand the right-hand side:

o := o + gain*e/slow - o/slow

o := o*(1 - 1/slow) + gain*e/slow

When slow = 1, we have

o := o*(1 - 1/1) + gain*e/1, or

o := gain*e

This says that when slow = 1.0, the output is proportional to the error,
which involves the least possible amount of delay (1 iteration).
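The algebra above is easy to spot-check numerically (the values below are arbitrary illustrations):

```python
# Verify that o + (gain*e - o)/slow == o*(1 - 1/slow) + gain*e/slow,
# and that the step collapses to gain*e when slow = 1.
gain, slow, e, o = 10.0, 7.0, 0.3, 2.0
lhs = o + (gain * e - o) / slow
rhs = o * (1 - 1 / slow) + gain * e / slow
collapsed = o + (gain * e - o) / 1.0       # the slow = 1 case
```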

As you increase the slowing factor, the integral lag --
the time between zero crossings of the input and the output -- will become
greater and greater in proportion to the period. At the same time, the
output amplitude will become smaller and smaller -- increasing lag
relative to period goes with a decrease in amplitude.

And with larger oscillations of the input?

For this kind of analysis you assume a constant input amplitude.

In a control system you can compare a signal to itself after one trip
around the loop (breaking the loop is the easiest way). If there is just
one integrator in the loop, the maximum phase shift between the original
signal and the final signal is 90 degrees, at any frequency -- in terms of
time lags, that is one quarter of the time between successive zero
crossings.

Aren't zero crossings every 180 degrees?

Yes, you're right. I should have said "positive-going zero crossings" or else
used half the period instead of the full period.

You can verify this by putting sine waves of constant amplitude
and variable frequency into a leaky integrator and plotting input and
output on the same axis.

Not really with you here. Do you mean something like this?

O(n) = sin(f*x(n)) * slow

where slow = 0.99, say, and f is a function of n.

No, like this:

e := sin(2*pi*f0*0.001*n) {n = 1,2,3 ... }

o := o + (gain*e - o)/slow

The first statement generates a sine-wave input; as written, with f0 fixed,
its frequency is a constant f0*0.001 cycles per iteration. To make the
frequency rise gradually, increase f0 slightly on each iteration
(accumulating the phase explicitly avoids phase jumps when f0 changes). The
input, e, is then used in the next statement to generate the output.
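As a cross-check on the falling amplitude described earlier, here is a Python sketch of the same experiment (the harness, function name, and numbers are my assumptions; the phase is accumulated explicitly so the sweep has no phase jumps):

```python
import math

def sweep_response(slow, f_start, f_stop, steps):
    """Drive o := o + (e - o)/slow with a unit sine whose frequency sweeps
    from f_start to f_stop (cycles per iteration); return the peak |o| in
    the first and last ten percent of the run."""
    o, phase = 0.0, 0.0
    early, late = 0.0, 0.0
    for n in range(steps):
        f = f_start + (f_stop - f_start) * n / steps
        phase += 2 * math.pi * f      # accumulate phase: no jumps as f changes
        e = math.sin(phase)
        o = o + (e - o) / slow
        if n < steps // 10:
            early = max(early, abs(o))
        elif n > steps - steps // 10:
            late = max(late, abs(o))
    return early, late

early, late = sweep_response(slow=50.0, f_start=1e-4, f_stop=0.05, steps=50000)
```

Early in the sweep, while the frequency is below the corner, the output nearly matches the input amplitude; by the end of the sweep the output amplitude has fallen to a small fraction of it.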

Best,