[Martin Taylor 2005.11.02.09.18]
[From Bjorn Simonsen (2005.11.02.11:45 EUST)]
[Martin Taylor 2005.11.01.15.50]
"We talk of "Gain" as if it were a simple multiplier, but really it
isn't".
I understand a multiplier as an analog circuit that makes a product of two
voltage inputs. I have never thought of "Gain" as a multiplier (with inputs).
But you talk about it as if it is.
I am sorry. I have never used the concept "a simple multiplier" on this
list. My first sentence above says what I understand a multiplier to be in an
amplifier. The way I understood the concept "Gain" in Output functions
was as simultaneous arrivals of impulses on two (or more) paths and one output
impulse.
But the diagram of a control loop has only two places where a function has two inputs. One is the comparator function, where the two inputs are the reference (R) and the perceptual signal (P). The other is the node in the "outer world" where the two inputs are the output (O) and the disturbance (D). Neither of these is ordinarily considered to be a multiplier (though there's no reason in principle why they should not be).
When you presented the "simple multiplier" concept, I thought it also
expressed the way I explained the concept "Gain".
When you say that "Gain" is not a _simple_ multiplier, I understand that it is OK
to say multiplier, but not a _simple_ multiplier. Rather, a complex
multiplier.
Is this thinking in harmony with "But you talk about it as if it is"?
What is a complex multiplier? The only kind I know is one in which complex numbers are multiplied. But I don't think that's what you mean. What I meant by "not a simple multiplier" should perhaps have been written "not simply a multiplier".
The concept of "Gain" appears several times in any analysis of a control loop, once as "loop gain", once for each functional component and path in the loop. Usually, any analysis is done using a gain of 1.0 for most of the paths and functions, except for the output function. If that assumption is made, the loop gain is numerically equal to the gain of the output function. The loop gain itself refers to what would happen if you were to break the loop at any point and inject a signal at the break point. A signal would quickly appear on the other side of the break, and the loop gain would be the relationship between the signal you injected and the one you measured at the other end.
In a physical system, it takes time for the signal to go around the loop, so you also have to consider that. Also, if you inject an impulse as your signal, what you get at the other end is likely not to be an impulse, but to be extended in time. If the loop's output function is an integrator and you inject an impulse, what you get out is a step function. If the output function is a leaky integrator, what you get out is a step that declines exponentially to zero. If the integrator also has a slowing factor, you get a slower step rise before the exponential decay. That's the canonical case in Bill's simulations.
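To make those impulse responses concrete, here is a minimal Python sketch (my own illustration; the gain and leak values are arbitrary, not taken from any of Bill's programs):

   # Impulse responses of isolated output functions, one time sample per step.
   ke, leak = 3.0, 0.1                 # arbitrary illustrative values

   o_int, o_leaky = 0.0, 0.0
   for t in range(8):
       e = 1.0 if t == 0 else 0.0      # unit impulse at t0
       o_int = o_int + ke * e                         # pure integrator
       o_leaky = o_leaky + ke * e - leak * o_leaky    # leaky integrator
       print(t, o_int, round(o_leaky, 4))

   # o_int:   3.0, 3.0, 3.0, ...   -- a step of height ke
   # o_leaky: 3.0, 2.7, 2.43, ...  -- a step declining exponentially to zero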
I have thought upon "Gain" as a necessary parameter to achieve good feedback in a loop.
It is. That's loop gain, but it is improper to think of it as a
scalar number. It's a time function, and if the system is linear, you
can use the same equations you have been using, but recognize that
the different symbols actually represent the Laplace transforms of
the waveforms of these time functions.
Yes, that is OK. I have problems with your last subordinate clause. I
understand that the different symbols are not constant but change in
time. I am sorry, today I can't calculate Laplace transformations.
That doesn't matter. Forget it. It's just a way of making the analysis of functions extended over time much easier than considering the waveform sample by sample -- at least it is easier if the waveforms of the functions concerned are suitable, which is the case for the canonical control loop. If it doesn't make sense to you, don't worry about it.
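(For the record, in the continuous idealization the transforms are simple: a pure integrator with gain ke has transfer function ke/s, and a leaky integrator with leak rate L has transfer function ke/(s + L). That is really all that "Laplace transform" amounts to here.)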
I have thought upon the product ki*ke*ko (ki has units of signal per physical unit, ke has units of input per unit of output, and ko has units of (other) physical units per signal unit) (from BCP). This is effective for my understanding. But now I will try to understand your wording.
If I understand you, ke is the multiplier that would be the gain of a
non-integrating output function. If the Reference signal were zero, P
would be S*ki, E = -P, o = ke*E, and S = ko*o. Each k is a simple multiplier.
Or, in a linear system, a Laplace transform of some waveform.
Here I think differently. ke is the multiplier that converts the error signal
to the Output quantity.
That's what I said that you said, isn't it? But I'm trying to point out that this "multiplier" ke isn't a scalar multiplier. It's extended in time.
Let's consider a discrete (time-sampled) simulation. If the output function is an integrator, and the input signal (call it the "error signal" for this special case) is zero except for the time sample at time t0, when it is 1.0, what is the output signal? It is zero until time t0, and ke ever afterwards.
When you say "the gain of a _non-integrating_ output
function, I think upon a quantity that is not directed back to the Input
function.
Not at all what I mean. By a non-integrating output function I mean an output function for which, if the input is zero for all time samples except that it is 1.0 at time t0, the output is zero for all time samples except that it is ke at time t0. Feedback has nothing to do with it. This is the isolated output function alone that we are talking about.
I think upon a negative feedback loop where the output quantity is directed
to the Input function.
I understand PCT as if:
p = (ki*ke*ko*r + ki*kd*d)/(1+ ki*ke*ko) (most often ki =1).
Working with simulations as Bill and Rick do, they say "Gain" = ki*ke*ko.
And they contract the different k values into one value called Gain.
Yes, but all these k factors represent functions of time. They aren't simple multipliers. In this context "Gain" means "loop gain".
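(Still, treating the k's as static numbers for a moment shows why high loop gain gives good control: with ki = kd = 1 and ke*ko = 100, your equation gives p = (100*r + d)/101, so p stays within about 1% of r, and the effect of the disturbance is divided by 101. What the static algebra cannot tell you is whether the loop is stable enough to use that gain, which is where the time functions come in.)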
When you say "It represents how fast the integrator's output changes for a
given value of error. It's a rate multiplier rather than an error
multiplier. I think I understand this OK. But this sentence switch my
thoughts to the 'slowing factor'. And in the output function I think upon
the output signal, o(t+1) = o(t)+ s[g(r-p) - o(t)].
Forget the slowing factor. It's the "leak" in the "leaky integrator",
and its function is to allow the output to forget long-past values of
the input. It smooths out past errors, and in discrete simulations it
ensures that you don't see oscillations that are due purely to the
method of computation (or not much).
OK, I think. In my words, the "leaky integrator" and the non-linear
function compensate for the slowing factor.
Non-linear or linear has nothing to do with the slowing factor. The leaky integrator is a linear function. Nothing "compensates for" it. The slowing factor is just there to smooth out the system response. Actually, it isn't itself the leak. I was wrong to say that, but the leak is usually introduced in the same formula as the slowing factor, which tends to get them confused. In your formula, o(t+1) = o(t) + s[g(r-p) - o(t)], the s applied to the first term in the bracket, g(r-p), does the slowing; the second term, -o(t), is the leak.
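Rewriting your formula as o(t+1) = (1-s)*o(t) + s*g*(r-p) may make the separation clearer: the s multiplying g*(r-p) slows the response to the error, while the (1-s) multiplying o(t) makes the integrator's memory of past output leak away.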
Let's try to clarify this all with a series of examples: a loop with, first, a non-integrating output function, second, an integrating output function, and third, an integrating output function with a slowing factor.
Think what would happen in a time-sampled simulation of a loop that had no integration and no slowing factor. Imagine that everything is nice and stable, with a reference value of zero. Now, for only one time sample, at t0, add a disturbance of +1.0. Then S(t0) = 1.0, P(t0) = 1.0, E(t0) = -1.0, O(t0) = ke*-1.0, and ... but now we have a problem, because we have asserted that O(t0) is zero. That's the starting condition we supposed to be the case at t0.
You can't, even in a spreadsheet, have a cell take on two different values at the same moment. We have to put in at least a one-sample delay somewhere in the loop (anywhere will do). It's probably most convenient to put the one-sample delay into the output function, and change what I said above to O(t0) = 0 (the initial condition) and O(t1) = ke*-1.0.
Now what's the situation?
S(t1) = O(t1) + D(t1) = ke*-1.0 + 0.0
P(t1) = ke*-1.0
E(t1) = ke*1.0 (changing sign again, because E = R-P)
O(t2) = ke*ke*1.0 = ke^2 * 1.0
S(t2) = ke^2 * 1.0
P(t2) = ke^2 * 1.0
E(t2) = ke^2 * -1.0
O(t3) = ke*ke^2 * -1.0 = ke^3 * -1.0
Do you see what's happening? P doesn't return to the reference value at all, even though the loop gain is negative and could be quite large. P shows an ever-increasing series of impulses up and down. Assuming ke = 3, successive time samples of P would be (starting with the disturbance at t0), 1, -3, 9, -27, 81, -243 ...
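A few lines of Python (my own sketch, with the one-sample delay in the output as described above) reproduce that series:

   ke = 3.0                            # output function gain
   O = 0.0                             # output, applied with a one-sample delay
   for t in range(6):
       D = 1.0 if t == 0 else 0.0      # disturbance impulse at t0
       P = O + D                       # S = O + D, and P = S (unity input gain)
       E = 0.0 - P                     # R = 0, so E = R - P
       print(t, P)                     # P = 1, -3, 9, -27, 81, -243
       O = ke * E                      # takes effect at the next sample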
Now let's try the same thing, but with an integrating output function O(tn) = O(tn-1) + ke*E(tn). Again we start with everything at zero, and apply a disturbance of 1 that lasts for one sample. I'll assume ke = 3 as before, and just use numeric values. I'll also ignore the S stage after the initial disturbance impulse, since S and P are both equal to the output after the disturbance impulse has returned to zero, assuming they both have unity gain.
S(t0) = 1
P(t0) = 1
E(t0) = -1
O(t1) = -3
P(t1) = -3
E(t1) = +3
O(t2) = +9 -3 = +6
P(t2) = +6
E(t2) = -6
O(t3) = -18 + 6 = -12.
We still have a series of impulses going up and down, but they don't escalate as fast. The series goes -3, 6, -12, 24, -48 ...
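In the same sketch, changing the last line to make the output an integrator reproduces this series:

   ke = 3.0
   O = 0.0
   for t in range(6):
       D = 1.0 if t == 0 else 0.0
       P = O + D
       E = 0.0 - P
       print(t, P)                     # P = 1, -3, 6, -12, 24, -48
       O = O + ke * E                  # integrating output function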
Now incorporate a slowing factor of, say, .1 in the output integration, meaning that O(tn)=O(tn-1)+.1*ke*E(tn). There's still no leak.
S(t0) = 1
P(t0) = 1
E(t0) = -1
O(t1) = 0 + 0.1*3*-1 = -0.3
P(t1) = -0.3
E(t1) = 0.3
O(t2) = -0.3 + 0.1*3*0.3 = -.21
P(t2) = -.21
E(t2) = .21
O(t3) = -.21 + 0.1*3*.21 = -.147
P(t3) = -.147
E(t3) = .147
O(t4) = -.147 + 0.1*3*.147 = -.1029
and so on.
This time, the perceptual signal is moving smoothly toward the reference value, which is zero. The slowing factor has smoothed out the effects of the time sampling that is inherent in the discrete simulation.
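And the corresponding one-line change, with s = 0.1, shows that smooth convergence:

   ke, s = 3.0, 0.1
   O = 0.0
   for t in range(6):
       D = 1.0 if t == 0 else 0.0
       P = O + D
       E = 0.0 - P
       print(t, round(P, 4))           # P = 1, -0.3, -0.21, -0.147, -0.1029, ...
       O = O + s * ke * E              # integrator with slowing factor s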
If it were a really continuous system, like the one being simulated, there would still have to be some time lag around the loop. Nothing physical happens in zero time, and the integrator would still take some time to come to its final value. In the continuous case, we have to deal with filter bandwidths and phase shifts, but you can still imagine what would happen after an impulse, with its effects running round and round the loop until they finally either die away or explode to infinity. The continuous equivalent of the slowing factor smears the impulse into something flatter each time around the loop. It's a "low-pass filter".
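(In transfer-function terms, taking the simplest first-order case: a slowing stage that obeys tau*do/dt = g*e - o has transfer function g/(1 + tau*s), which passes slow variations through unchanged and attenuates fast ones -- a first-order low-pass filter.)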
I think I have seen your comments about "Gain" and "non-linear" functions
earlier. I think I remember Bill saying that Martin uses a RIF.
That's true, but it's irrelevant. The RIF is the function that combines outputs from higher-level control systems and produces a single value that is the reference input to the comparator. It doesn't enter this discussion, because it is outside the loop.
Today I don't think you substitute the "Gain" with some non-linear functions. You
substitute the slowing factor and prevent unexpected shocks in the
comparator (or RIF) with a "tol" and a non-linear function.
No. I don't alter the slowing factor, and the RIF is irrelevant. The tolerance band doesn't "prevent unexpected shocks" at all. All it does is ensure that if the difference between the reference value and the perceptual signal value is less than "tol", the error signal sent to the output function is zero.
I have not seen that either Rick or Bill has changed their simulations in
your way. I would appreciate it if both Bill (when he is back) and Rick
commented on why.
I think Bill says that he has tried it, or something similar, on occasion, and found that a small tolerance band does improve the model fit. I looked for the relevant post, but couldn't find it, so maybe I'm wrong. A tolerance band certainly improves the model fit to my tracking data from the "Sleepy Teams" study.
To your last sentence.
In the comparator function we talk about signals valued as frequencies. They
differ depending on which level we are at. (Higher frequencies higher
up.) (Now I am thinking as I write.)
Let us expect frequencies near 10 per second. Now we have to develop a "tol".
This "tol" should be different if the errors are near and "over" (+/-)10 or
the errors are near zero. Shall we estimate "tol":
IF((r-p)>7; tol=0.1; IF((r-p)<-7; tol=-0.1; IF(-0.5<(r-p)<0.5; tol=0.05; tol=0)))
"tol" can't depend on (r-p).
In principle, I suppose it could be a function of r or of p, or of any other signal in the whole system. But the one thing it CANNOT be a function of is (r-p), because "tol" is what (r-p) is compared with.
Shall we estimate the comparator function:
error= r-p+tol
That's one possibility, if r - p is guaranteed to be more negative than tol. It wouldn't work under any other circumstances. As I said and showed in graphs, I've tried
E = if(|r-p| < tol, 0) else (r-p), and
E = if(|r-p| < tol, 0) else sgn(r-p)*(|rip| - tol)
where sgn(x) = +1 if x >0, and -1 if x < 0.
The first form seems to work better than the second for my data.
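In Python terms, a minimal sketch of just those two forms (the function names are mine):

   def error_form1(r, p, tol):
       # E = 0 inside the tolerance band, otherwise the raw error
       return 0.0 if abs(r - p) < tol else (r - p)

   def error_form2(r, p, tol):
       # E = 0 inside the band, otherwise the error shrunk by tol
       if abs(r - p) < tol:
           return 0.0
       return (1.0 if r - p > 0 else -1.0) * (abs(r - p) - tol)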
Comments?
Enough?
Martin