Good Corporate Citizen (was Interesting law)

[From Rick Marken (2005.11.01.0920)]

Bjorn Simonsen (2005.11.01,15:45 EUST)--

[Martin Taylor 2005.11.01.09.14]
It's a _linguistic_ difference in which element is considered to be
fixed and which to be variable, not, if I read Rick correctly, a
difference in the relation between "tolerance" and PCT mechanism.

The main point is that we agree.
I still have some problems with Rick's Gain. But I stop here.

I think the only difference between what Martin and I are saying is that
Martin has looked at tolerance in terms of a non-linear gain function, one
with a band of zeros that defines the "tolerance" zone while I have been
looking at tolerance in terms of a linear gain function, the slope of which
determines the level of tolerance (low slope = high tolerance). I think
Martin himself pointed this out. Let me try to make this clear using the
equation of the output function of a simple, proportional controller. The
basic output function equation for the static case is:

o = k (r - p)

Output (o) at any instant is proportional (k) to the size of the error
signal (reference - perceptual signal). Assuming that all other multipliers
around the loop remain constant, k determines the loop gain. So k can be
called the gain. The larger k, the higher the loop gain and (within the
dynamic constraints required for stability) the better is control (that is,
the closer p is kept to r).

I think of a tolerant system as one that will put up with an absolute level
of error (r - p) greater than zero. Such a system will not produce much output
when (r - p) > 0. The way to get a system to be tolerant in this sense is to
lower k. In fact, if k is set to 0, the system won't react regardless of the
size of r - p. The larger k, the more output per error and, in general, the
smaller the error. That is my approach to conceptualizing tolerance: a
tolerance control system could just increase its tolerance for error (r -
p) by reducing k. Reducing k will increase tolerance in the sense that it
will increase the prevailing level of error (r - p) experienced by the
system.
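To make the arithmetic concrete, here is a minimal sketch (Python, assuming unit feedback so that p = o + d, where d is a constant disturbance; the function name is mine, not Rick's) of how the steady-state error shrinks as k grows:

```python
# Static proportional loop from the text: o = k*(r - p), with an
# assumed environment p = o + d.  Solving the two equations gives
#   p = (k*r + d) / (1 + k),  so  r - p = (r - d) / (1 + k)
def steady_state_error(k, r, d):
    p = (k * r + d) / (1 + k)
    return r - p

# Lower k = more tolerance: the system lives with more error (r - p).
for k in (0.0, 1.0, 10.0, 100.0):
    print(k, steady_state_error(k, r=10.0, d=5.0))
```

With k = 0 the full disturbance-induced error (5.0) persists; with k = 100 it drops below 0.05, matching the claim that high gain keeps p close to r.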

Martin points out that another way to be tolerant is to make the gain
dependent on the size of r - p. If r - p is between a lower cutoff value
(lc) and an upper cutoff value (uc), then k = 0; otherwise k becomes some
large number. In this case the tolerance control system would determine how
much error it will put up with by varying the values of lc and uc. So gain
is zero when error is between lc and uc -- the system does nothing to
correct error when it's in this range; it _tolerates_ the error -- but the
gain is high when the error is outside this range. So the error, when it
gets outside the lc, uc range, is immediately brought back into the lc, uc
range.
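A sketch of this error-dependent gain (Python; the values of lc, uc, and k_big are illustrative assumptions, not numbers from the thread):

```python
# Gain is zero while the error r - p sits inside the (lc, uc) tolerance
# band, and jumps to a large value outside it.
def tolerant_output(r, p, lc=-1.0, uc=1.0, k_big=50.0):
    e = r - p
    k = 0.0 if lc < e < uc else k_big
    return k * e

print(tolerant_output(10.0, 9.5))  # error 0.5, inside the band -> 0.0
print(tolerant_output(10.0, 5.0))  # error 5.0, outside -> strong correction
```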

So the difference between Martin and me is actually no difference at all; we
both see tolerance as adjusting the gain of the control system. A tolerant
control system is one with low gain and an intolerant control system is one
with high gain. The gain adjustment could be done as I suggest, so that the
gain is changed for error in general, or as Martin suggests, so that the
gain goes to zero only in a specified range of error. These are just two
different models of the details of how a tolerance control system might
work. Which is actually the better model would have to be tested by doing
experiments.

Does this help?

Best

Rick


--
Richard S. Marken
MindReadings.com
Home: 310 474 0313
Cell: 310 729 1400


[From Bjorn Simonsen (2005.11.01,20:55 EUST)]
[Martin Taylor 2005.11.01.10.11]

If we take the canonical form of the control loop as it is
often drawn, the top and right sides of the drawing consist
of a comparator and an output function. The output
function is usually an integrator (probably leaky). We talk
of "Gain" as if it were a simple multiplier, but really it isn't.
It represents how fast the integrator's output changes for
a given value of the error. It's a rate multiplier rather than
an error multiplier.

You must excuse me, but I must put it into my own words and ask whether you
understand me and whether I am on the right track.
"We talk of "Gain" as if it were a simple multiplier, but really it isn't."
I understand a multiplier as an analog circuit that forms the product of two
voltage inputs. I have never thought of "Gain" as if it were a
multiplier (with inputs).
I have thought of "Gain" as a necessary parameter to achieve good feedback
in a loop. I have thought of the product ki*ke*ko (ki has units of signal
per physical unit, ke has units of input per unit of output, and ko has units
of (other) physical units per signal unit) (from BCP). This is effective for
my understanding. But now I will try to understand your wording.

When you say "It represents how fast the integrator's output changes for a
given value of error. It's a rate multiplier rather than an error
multiplier," I think I understand this OK. But this sentence switches my
thoughts to the 'slowing factor'. And in the output function I think of
the output signal, o(t+1) = o(t) + s[g(r-p) - o(t)].

The comparator, likewise, is ordinarily taken to be a simple
subtractor. That's a linear function (as is a leaky integrator).
When I talk about non-linear gain, I really mean that the
comparator is a non-linear function. The output function
may still be the (linear) leaky integrator. So I've been
unnecessarily confusing.

Your first and second sentences are OK. When you talk about non-linear gain,
you describe something in the comparator. I shall forget the Output function
and think of it as if there were no gain and no slowing factor. Here I must
be misunderstanding, because the error value is really converted into a
greater output quantity. Here I am uncertain.

In the "tolerance" discussion, if I wanted to make a simulation
I would describe the comparator function something like:
E = If(abs(R-P)<tol, 0, (some function of R-P, tol)), where
"tol" is the tolerance limit, and the portion after the second
comma is the "else" clause.

OK. Normally in Rick's Spreadsheet, the E (error) is just (r-p) and if r=p,
E=0.
This is OK.

One possibility is to make the "some function" be: if(R>P, R-P-tol, R-P+tol).

OK

The lower the slope of the line from the origin to the
point {(R-P), E}, the lower the effective gain rate of the
whole system, and if |(R-P)|< tol, the effective gain is zero.

OK

As a side note, in my tracking simulations, I've found a different
function to fit better: E = if(|R-P| < tol, 0, (R-P)).

OK

Probably there are lots of better functions, but the one graphed
first above is easy to talk about.

Still OK

The output function could also be nonlinear. But that's a different
matter. It has to be nonlinear in that any physical system will have
a maximum possible output value. But most control systems don't
get into that range very often.

OK

When they do, it's often because of conflict. As many demonstrations
have shown, conflict between two control systems escalates until
one has reached its maximum output.

Do I think correctly when I say that an Input function forms two perceptual
signals? One goes to the one comparator, the other goes to the other
comparator. The two comparators get different reference signals. From each
comparator an error goes to an output function (two output functions). From
each output function two different output signals follow each feedback loop
to the input function. NO.
Is it possible to give me a sketch?

However, if either of the two control systems has a tolerance
zone such that the reference value of the other is within the
tolerance zone, the conflict will not be manifest, and both will
maintain their effective error signal "E" at zero.

OK

Do you have your "tol" in the comparator, and a gain and slowing factor
in the output function, at the same time as you also have a "tol" there?

Bjorn

[Martin Taylor 2005.11.01.15.50]

[From Bjorn Simonsen (2005.11.01,20:55 EUST)]
[Martin Taylor 2005.11.01.10.11]

If we take the canonical form of the control loop as it is
often drawn, the top and right sides of the drawing consist
of a comparator and an output function. The output
function is usually an integrator (probably leaky). We talk
of "Gain" as if it were a simple multiplier, but really it isn't.
It represents how fast the integrator's output changes for
a given value of the error. It's a rate multiplier rather than
an error multiplier.

You must excuse me, but I must put it into my own words and ask whether you
understand me and whether I am on the right track.

OK

"We talk of "Gain" as if it were a simple multiplier, but really it isn't."
I understand a multiplier as an analog circuit that forms the product of two
voltage inputs. I have never thought of "Gain" as if it were a
multiplier (with inputs).

But you talk about it as if it is.

I have thought upon "Gain" as a necessary parameter to achieve good feedback
in a loop.

It is. That's loop gain, but it is improper to think of it as a scalar number. It's a time function, and if the system is linear, you can use the same equations you have been using, but recognize that the different symbols actually represent the Laplace transforms of the waveforms of these time functions.

I have thought upon the product ki*ke*ko (ki has units of signal
per physical unit, ke has units of input per unit of output and ko has units
of (other) physical units per signal unit)(from BCP). This is effective for
my understanding. But now I will understand your wording.

If I understand you, ke is the multiplier that would be the gain of a non-integrating output function. If the Reference signal were zero, P would be S*ki, E=P, o=ke*P and S=ko*o. Each k is a simple multiplier. Or, in a linear system, a Laplace transform of some waveform.

When you say "It represents how fast the integrator's output changes for a
given value of error. It's a rate multiplier rather than an error
multiplier," I think I understand this OK. But this sentence switches my
thoughts to the 'slowing factor'. And in the output function I think of
the output signal, o(t+1) = o(t) + s[g(r-p) - o(t)].

Forget the slowing factor. It's the "leak" in the "leaky integrator", and its function is to allow the output to forget long-past values of the input. It smooths out past errors, and in discrete simulations it ensures that you don't see oscillations that are due purely to the method of computation (or not much).

The pure integrator output function without the slowing factor is
O(t) = g*integral(E(t)dt), or in discrete notation, O(t) = O(t-1) + g*E(t-1).
Here, "g" is the gain per unit time, whatever you choose that unit of time to be.
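In discrete form that integrator can be sketched in a few lines (Python; the step input is just an illustration, and the function name is mine):

```python
# O(t) = O(t-1) + g*E(t-1): the output accumulates the gained error,
# so g sets how fast the output changes per unit of error -- a rate
# multiplier, not an error multiplier.
def integrator(errors, g=0.5, o0=0.0):
    o, out = o0, []
    for e in errors:
        o = o + g * e
        out.append(o)
    return out

# A brief error leaves the output permanently changed by g per sample;
# the pure integrator "remembers" past error even after it returns to 0.
print(integrator([1, 1, 1, 0, 0]))  # -> [0.5, 1.0, 1.5, 1.5, 1.5]
```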

The comparator, likewise, is ordinarily taken to be a simple
subtractor. That's a linear function (as is a leaky integrator).
When I talk about non-linear gain, I really mean that the
comparator is a non-linear function. The output function
may still be the (linear) leaky integrator. So I've been
unnecessarily confusing.

Your first and second sentences are OK. When you talk about non-linear gain,
you describe something in the comparator. I shall forget the Output function
and think of it as if there were no gain and no slowing factor. Here I must
be misunderstanding, because the error value is really converted into a
greater output quantity. Here I am uncertain.

Greater or smaller. You can't tell with an integrator. Its output might be zero when its input is maximum, as would be the case if the input were a sinusoid. The output function does indeed have gain, as I describe above. What the nonlinear functions in the comparator do is compounded with whatever the output function does. If the comparator emits a zero signal, then the output function is working on a zero input. If the output function is indeed a pure integrator, that means its output value won't change, not that its output value is zero.

In the "tolerance" discussion, if I wanted to make a simulation
I would describe the comparator function something like:
E = If(abs(R-P)<tol, 0, (some function of R-P, tol)), where
"tol" is the tolerance limit, and the portion after the second
comma is the "else" clause.

OK. Normally in Rick's Spreadsheet, the E (error) is just (r-p) and if r=p,
E=0.
This is OK.

One possibility is to make the "some function" be: if(R>P, R-P-tol, R-P+tol).

OK

The lower the slope of the line from the origin to the
point {(R-P), E}, the lower the effective gain rate of the
whole system, and if |(R-P)|< tol, the effective gain is zero.

OK

As a side note, in my tracking simulations, I've found a different
function to fit better: E = if(|R-P| < tol, 0, (R-P)).

OK

Probably there are lots of better functions, but the one graphed
first above is easy to talk about.

Still OK

The output function could also be nonlinear. But that's a different
matter. It has to be nonlinear in that any physical system will have
a maximum possible output value. But most control systems don't
get into that range very often.

OK

When they do, it's often because of conflict. As many demonstrations
have shown, conflict between two control systems escalates until
one has reached its maximum output.

Do I think correctly when I say that an Input function forms two perceptual
signals? One goes to the one comparator, the other goes to the other
comparator. The two comparators get different reference signals.

I'm talking here about two independent control systems acting through the same environmental variable. Each has its own reference value for its internal representation of that environmental variable. That's the canonical conflict situation (but not the only one).

From each
comparator an error goes to an output function (two output functions). From
each output function two different output signals follow each feedback loop
to the input function. NO.
Is it possible to give me a sketch?

I did that in the message to which you are responding. Didn't it come out? Just take two of those, and make both outputs go to the same node at the bottom left of the diagram (and of course, both inputs come from that node). The input value to both control systems is the sum of their two outputs plus the disturbance.

However, if either of the two control systems has a tolerance
zone such that the reference value of the other is within the
tolerance zone, the conflict will not be manifest, and both will
maintain their effective error signal "E" at zero.

OK

Do you have your "tol" in the comparator, and a gain and slowing factor
in the output function, at the same time as you also have a "tol" there?

I didn't make any assumptions about the form of the output function, but I normally consider it to be a simple leaky integrator. I suppose it also could have a tolerance range, but to assume that seems to me to be redundant. How would you tell the difference between its effect and the effect of a tolerance band in the comparator function?

Martin

[From Bjorn Simonsen (2005.11.02.11:45 EUST)]
[Martin Taylor 2005.11.01.15.50]

"We talk of "Gain" as if it were a simple multiplier, but really it isn't."

I understand a multiplier as an analog circuit that forms the product of two
voltage inputs. I have never thought of "Gain" as if it were a
multiplier (with inputs).

But you talk about it as if it is.

I am sorry. I have never used the concept "a simple multiplier" on this
list. My first sentence above is what I understand a multiplier to be in an
amplifier. The way I understood the concept "Gain" in Output functions
was simultaneous arrivals of impulses in two (or more) paths and one output
impulse.
When you presented the "simple multiplier" concept, I thought this also
expressed the way I explained the concept "Gain".
When you say that "Gain" is not a _simple_ multiplier, I understand it is OK
to say multiplier, but not a _simple_ multiplier. Rather a complex
multiplier.
Is this thinking in harmony with "But you talk about it as if it is"?

I have thought upon "Gain" as a necessary parameter to achieve good feedback
in a loop.

It is. That's loop gain, but it is improper to think of it as a
scalar number. It's a time function, and if the system is linear, you
can use the same equations you have been using, but recognize that
the different symbols actually represent the Laplace transforms of
the waveforms of these time functions.

Yes, that is OK. I have problems with your last subordinate clause. I
understand that the different symbols are not constant but change in
time. I am sorry, today I can't calculate Laplace transformations. But I
understand the "answer" is different values at different times. I don't
remember many rules from integral calculus. (Maybe I should go back to more
mathematics? - I don't think so.)

I have thought upon the product ki*ke*ko (ki has units of signal
per physical unit, ke has units of input per unit of output and ko has units
of (other) physical units per signal unit) (from BCP). This is effective for
my understanding. But now I will understand your wording.

If I understand you, ke is the multiplier that would be the gain of a
non-integrating output function. If the Reference signal were zero, P
would be S*ki, E=P, o=ke*P and S=ko*o. Each k is a simple multiplier.
Or, in a linear system, a Laplace transform of some waveform.

Here I think differently. ke is the multiplier that converts the error signal
to the Output quantity. When you say "the gain of a _non-integrating_ output
function", I think of a quantity that is not directed back to the Input
function.
I think of a negative feedback loop where the output quantity is directed
to the Input function.

I understand PCT as if:

p = (ki*ke*ko*r + ki*kd*d)/(1+ ki*ke*ko) (most often ki =1).

Working with simulations as Bill and Rick do, they say "Gain" = ki*ke*ko.
And they contract the different k values to one value called Gain.

When you say "It represents how fast the integrator's output changes for a
given value of error. It's a rate multiplier rather than an error
multiplier," I think I understand this OK. But this sentence switches my
thoughts to the 'slowing factor'. And in the output function I think of
the output signal, o(t+1) = o(t) + s[g(r-p) - o(t)].

Forget the slowing factor. It's the "leak" in the "leaky integrator",
and its function is to allow the output to forget long-past values of
the input. It smooths out past errors, and in discrete simulations it
ensures that you don't see oscillations that are due purely to the
method of computation (or not much).

OK, I think. In my words, the "leaky integrator" and the non-linear
function compensate for the slowing factor.

I think I have seen your comments about "Gain" and "non-linear" functions
earlier. I think I remember Bill saying that Martin uses a RIF. Today I
don't think you substitute the "Gain" with some non-linear functions. You
substitute the slowing factor and prevent unexpected shocks in the
Comparator (or RIF) with a "tol" and a non-linear function.
I have not seen that either Rick or Bill has changed their simulations in
your way. I would have appreciated it if both Bill (when he is back) and Rick
commented on why.
I see advantages with your thinking (conflicts).

The pure integrator output function without the slowing factor is
O(t) = g*integral(E(t)dt), or in discrete notation, O(t) = O(t-1) + g*E(t-1).
Here, "g" is the gain per unit time, whatever you choose that unit of
time to be.

OK.

Greater or smaller. You can't tell with an integrator. Its output
might be zero when its input is maximum, as would be the case if the
input were a sinusoid. The output function does indeed have gain, as
I describe above. What the nonlinear functions in the comparator do
is compounded with whatever the output function does. If the
comparator emits a zero signal, then the output function is working
on a zero input. If the output function is indeed a pure integrator,
that means its output value won't change, not that its output value
is zero.

OK

When they do, it's often because of conflict. As many demonstrations
have shown, conflict between two control systems escalates until
one has reached its maximum output.
Do I think correctly when I say that an Input function forms two perceptual
signals? One goes to the one comparator, the other goes to the other
comparator. The two comparators get different reference signals.

I'm talking here about two independent control systems acting through
the same environmental variable. Each has its own reference value for
its internal representation of that environmental variable. That's
the canonical conflict situation (but not the only one).

Forget my thinking. It was nonsense. I see it now.

I didn't make any assumptions about the form of the output function,
but I normally consider it to be a simple leaky integrator. I suppose
it also could have a tolerance range, but to assume that seems to me
to be redundant. How would you tell the difference between its
effect and the effect of a tolerance band in the comparator function?

OK.
To your last sentence.
In the comparator function we talk about signals valued as frequencies. They
are different depending on which level we are at (higher frequencies higher
up). (Now I am thinking as I write.)
Let us expect frequencies near 10 per second. Now we have to develop a "tol".
This "tol" should be different if the errors are near and "over" (+)(-)10 or
the errors are near zero. Shall we estimate "tol":
IF((r-p) > 7; tol = 0.1; IF((r-p) < -7; tol = -0.1; IF(-0.5 < (r-p) < 0.5; tol = 0.05; tol = 0)))

Shall we estimate the comparator function:
error= r-p+tol
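As a sketch only, the nested IF above translates to something like this (Python; the cutoffs 7 and 0.5 and the tol values are the illustrative numbers proposed above, and the function names are mine):

```python
# Piecewise "tol" as proposed, then error = (r - p) + tol.
def tol_for(e):
    if e > 7:
        return 0.1
    elif e < -7:
        return -0.1
    elif -0.5 < e < 0.5:
        return 0.05
    else:
        return 0.0

def error_signal(r, p):
    e = r - p
    return e + tol_for(e)

print(error_signal(10.0, 2.0))  # e = 8, large positive -> 8.1
print(error_signal(5.0, 5.0))   # e = 0, near zero -> 0.05
```

Note that as written this scheme enlarges large errors and creates a small nonzero error signal when r = p, which may or may not be what was intended.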

Comments?

Bjorn

[Martin Taylor 2005.11.02.09.18]

[From Bjorn Simonsen (2005.11.02.11:45 EUST)]
[Martin Taylor 2005.11.01.15.50]

"We talk of "Gain" as if it were a simple multiplier, but really it isn't."

I understand a multiplier as an analog circuit that forms the product of two
voltage inputs. I have never thought of "Gain" as if it were a
multiplier (with inputs).

But you talk about it as if it is.

I am sorry. I have never used the concept "a simple multiplier" on this
list. My first sentence above is what I understand a multiplier to be in an
amplifier. The way I understood the concept "Gain" in Output functions
was simultaneous arrivals of impulses in two (or more) paths and one output
impulse.

But the diagram of a control loop has only two places where a function has two inputs. One is the comparator function, where the two inputs are the reference (R) and the perceptual signal (P). The other is the node in the "outer world" where the two inputs are the output (O) and the disturbance (D). Neither of these is ordinarily considered to be a multiplier (though there's no reason in principle why they should not be).

When you presented the "simple multiplier" concept, I thought this also
expressed the way I explained the concept "Gain".
When you say that "Gain" is not a _simple_ multiplier, I understand it is OK
to say multiplier, but not a _simple_ multiplier. Rather a complex
multiplier.
Is this thinking in harmony with "But you talk about it as if it is"?

What is a complex multiplier? The only kind I know is one in which complex numbers are multiplied. But I don't think that's what you mean. What I meant by "not a simple multiplier" should perhaps have been written "not simply a multiplier".

The concept of "Gain" appears several times in any analysis of a control loop, once as "loop gain", once for each functional component and path in the loop. Usually, any analysis is done using a gain of 1.0 for most of the paths and functions, except for the output function. If that assumption is made, the loop gain is numerically equal to the gain of the output function. The loop gain itself refers to what would happen if you were to break the loop at any point and inject a signal at the break point. A signal would quickly appear on the other side of the break, and the loop gain would be the relationship between the signal you injected and the one you measured at the other end.

In a physical system, it takes time for the signal to go around the loop, so you also have to consider that. Also, if you inject an impulse as your signal, what you get at the other end is likely not to be an impulse, but to be extended in time. If the loop's output function is an integrator and you inject an impulse, what you get out is a step function. If the output function is a leaky integrator, what you get out is a step that declines exponentially to zero. If the integrator also has a slowing factor, you get a slower step rise before the exponential decay. That's the canonical case in Bill's simulations.
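Those impulse responses can be sketched in a few lines (Python; g and the leak value are assumed for illustration, and the slowing-factor case is omitted here for brevity):

```python
# Impulse response of an output function: feed 1.0 at t0, zero after.
#   leak = 0  -> pure integrator: the output is a step
#   leak > 0  -> leaky integrator: a step that decays toward zero
def impulse_response(steps, g=1.0, leak=0.0):
    o, out = 0.0, []
    e = 1.0                           # the impulse at t0
    for _ in range(steps):
        o = (1.0 - leak) * o + g * e  # leaky-integrator update
        out.append(o)
        e = 0.0                       # input is zero after the impulse
    return out

print(impulse_response(5))            # -> [1.0, 1.0, 1.0, 1.0, 1.0]
print(impulse_response(5, leak=0.5))  # -> [1.0, 0.5, 0.25, 0.125, 0.0625]
```

Adding a slowing factor would scale the whole update increment, giving the slower initial rise before the exponential decay that the text describes.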

I have thought upon "Gain" as a necessary parameter to achieve good feedback
in a loop.

It is. That's loop gain, but it is improper to think of it as a
scalar number. It's a time function, and if the system is linear, you
can use the same equations you have been using, but recognize that
the different symbols actually represent the Laplace transforms of
the waveforms of these time functions.

Yes, that is OK. I have problems with your last subordinate clause. I
understand that the different symbols are not constant but change in
time. I am sorry, today I can't calculate Laplace transformations.

That doesn't matter. Forget it. It's just a way of making the analysis of functions extended over time much easier than by considering the waveform sample by sample -- at least it is easier if the waveforms of the functions concerned are suitable, which is the case for the canonical control loop. If it doesn't make sense to you, don't worry about it.

I have thought upon the product ki*ke*ko (ki has units of signal
per physical unit, ke has units of input per unit of output and ko has units
of (other) physical units per signal unit) (from BCP). This is effective for
my understanding. But now I will understand your wording.

If I understand you, ke is the multiplier that would be the gain of a
non-integrating output function. If the Reference signal were zero, P
would be S*ki, E=P, o=ke*P and S=ko*o. Each k is a simple multiplier.
Or, in a linear system, a Laplace transform of some waveform.

Here I think differently. ke is the multiplier that converts the error signal
to the Output quantity.

That's what I said that you said, isn't it? But I'm trying to point out that this "multiplier" ke isn't a scalar multiplier. It's extended in time.

Let's consider a discrete (time-sampled) simulation. If the output function is an integrator, and the input signal (call it the "error signal" for this special case) is zero except for the time sample at time t0, when it is 1.0, what is the output signal? It is zero until time t0, and ke ever afterwards.

When you say "the gain of a _non-integrating_ output function", I think of
a quantity that is not directed back to the Input function.

Not at all what I mean. By a non-integrating output function I mean an output function for which, if the input is zero for all time samples except that it is 1.0 at time t0, the output is zero for all time samples except that it is ke at time t0. Feedback has nothing to do with it. This is the isolated output function alone that we are talking about.

I think upon a negative feedback loop where the output quantity is directed
to the Input function.

I understand PCT as if:

p = (ki*ke*ko*r + ki*kd*d)/(1+ ki*ke*ko) (most often ki =1).

Working with simulations as Bill and Rick do, they say "Gain" = ki*ke*ko.
And they contract the different k values to one value called Gain.

Yes, but all these k factors represent functions of time. They aren't simple multipliers. In this context "Gain" means "loop gain".

When you say "It represents how fast the integrator's output changes for a
given value of error. It's a rate multiplier rather than an error
multiplier," I think I understand this OK. But this sentence switches my
thoughts to the 'slowing factor'. And in the output function I think of
the output signal, o(t+1) = o(t) + s[g(r-p) - o(t)].

Forget the slowing factor. It's the "leak" in the "leaky integrator",
and its function is to allow the output to forget long-past values of
the input. It smooths out past errors, and in discrete simulations it
ensures that you don't see oscillations that are due purely to the
method of computation (or not much).

OK, I think. In my words, the "leaky integrator" and the non-linear
function compensate for the slowing factor.

Non-linear or linear has nothing to do with the slowing factor. The leaky integrator is a linear function. Nothing "compensates for" it. The slowing factor is just there to smooth out the system response. Actually, it isn't itself the leak. I was wrong to say that, but the leak is usually introduced in the same formula as the slowing factor, which tends to get them confused. In your formula, o(t+1) = o(t) + s[g(r-p) - o(t)], the multiplier s is the slowing factor, and the -o(t) term inside the bracket is the leak.

Let's try to clarify this all with a series of examples: a loop with, first, a non-integrating output function, second, an integrating output function, and third, an integrating output function with a slowing factor.

Think what would happen in a time-sampled simulation of a loop that had no integration and no slowing factor. Imagine that everything is nice and stable, with a reference value of zero. Now, for only one time sample, at t0, add a disturbance of +1.0. Then S(t0) = 1.0, P(t0) = 1.0, E(t0) = -1.0, O(t0) = ke*-1.0, and ... but now we have a problem, because at t0 we have asserted that O(t0) is zero. That's the starting condition we supposed to be the case at t0.

You can't, even in a spreadsheet, have a cell take on two different values at the same moment. We have to put in at least a one time-sample delay somewhere in the loop (anywhere will do). It's probably most convenient to put the one-sample delay into the output function, and change what I said above to O(t0) = 0 (the initial condition), and O(t1) = ke*-1.0.

Now what's the situation?

S(t1) = O(t1) + D(t1) = ke*-1.0 + 0.0
P(t1) = ke*-1.0
E(t1) = ke*1.0 (changing sign again, because E = R-P)
O(t2) = ke*ke*1.0 = ke^2 * 1.0
S(t2) = ke^2 * 1.0
P(t2) = ke^2 * 1.0
E(t2) = ke^2 * -1.0
O(t3) = ke*ke^2 * -1.0 = ke^3 * -1.0

Do you see what's happening? P doesn't return to the reference value at all, even though the loop gain is negative and could be quite large. P shows an ever-increasing series of impulses up and down. Assuming ke = 3, successive time samples of P would be (starting with the disturbance at t0), 1, -3, 9, -27, 81, -243 ...
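The escalating series can be checked with a short simulation (Python; the one-sample loop delay, unity input gain, and ke = 3 are the assumptions stated above, and the function name is mine):

```python
# Non-integrating output function, ke = 3, one-sample loop delay,
# R = 0, impulse disturbance of +1.0 at t0.
def run_proportional(ke=3.0, steps=6):
    o, ps = 0.0, []
    for t in range(steps):
        d = 1.0 if t == 0 else 0.0
        p = o + d        # S = O + D, and P = S with unity input gain
        ps.append(p)
        e = 0.0 - p      # E = R - P
        o = ke * e       # takes effect on the next sample
    return ps

print(run_proportional())  # -> [1.0, -3.0, 9.0, -27.0, 81.0, -243.0]
```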

Now let's try the same thing, but with an integrating output function O(tn) = O(tn-1) + ke*E(tn). Again we start with everything at zero, and apply a disturbance of 1 that lasts for one sample. I'll assume ke = 3 as before, and just use numeric values. I'll also ignore the S stage after the initial disturbance impulse, since S and P are both equal to the output after the disturbance impulse has returned to zero, assuming they both have unity gain.

S(t0) = 1
P(t0) = 1
E(t0) = -1
O(t1) = -3
P(t1) = -3
E(t1) = +3
O(t2) = +9 -3 = +6
P(t2) = +6
E(t2) = -6
O(t3) = -18 + 6 = -12.

We still have a series of impulses going up and down, but they don't escalate as fast. The series goes -3, 6, -12, 24, -48 ...
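The same loop with the integrating output function reproduces the slower escalation (Python, same assumptions as above):

```python
# Integrating output: O(t) = O(t-1) + ke*E(t), ke = 3, R = 0,
# impulse disturbance of +1.0 at t0, one-sample loop delay.
def run_integrating(ke=3.0, steps=6):
    o, ps = 0.0, []
    for t in range(steps):
        d = 1.0 if t == 0 else 0.0
        p = o + d
        ps.append(p)
        e = 0.0 - p
        o = o + ke * e   # the integrator accumulates the gained error
    return ps

print(run_integrating())  # -> [1.0, -3.0, 6.0, -12.0, 24.0, -48.0]
```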

Now incorporate a slowing factor of, say, .1 in the output integration, meaning that O(tn)=O(tn-1)+.1*ke*E(tn). There's still no leak.

S(t0) = 1
P(t0) = 1
E(t0) = -1
O(t1) = 0 + 0.1*3*-1 = -0.3
P(t1) = -0.3
E(t1) = 0.3
O(t2) = -0.3 + 0.1*3*0.3 = -.21
P(t2) = -.21
E(t2) = .21
O(t3) = -.21 + 0.1*3*.21 = -.147
P(t3) = -.147
E(t3) = .147
O(t4) = -.147 + 0.1*3*.147 = -.1029
and so on.

This time, the perceptual signal is moving smoothly toward the reference value, which is zero. The slowing factor has smoothed out the effects of the time sampling that is inherent in the discrete simulation.
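The slowed integration is the same sketch with the factor 0.1 inserted (illustrative Python, not the original spreadsheet):

```python
# Integrating output with slowing factor s = 0.1: the oscillation is gone
# and P creeps smoothly toward the reference (zero).
ke, s = 3.0, 0.1
R = 0.0
O = 0.0
P = 1.0           # unit disturbance impulse at t0
vals = []
for t in range(4):
    E = R - P
    O = O + s * ke * E    # slowed integration step
    P = O
    vals.append(round(O, 4))
print(vals)   # [-0.3, -0.21, -0.147, -0.1029]
```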

If it were a really continuous system, like the one being simulated, there would still have to be some time lag around the loop. Nothing physical happens in zero time, and the integrator would still take some time to come to its final value. In the continuous case, we have to deal with filter bandwidths and phase shifts, but you can still imagine what would happen after an impulse, with its effects running round and round the loop until they finally either die away or explode to infinity. The continuous equivalent of the slowing factor smears the impulse into something flatter each time around the loop. It's a "low-pass filter".

I think I have seen your comments about "Gain" and "non-linear" functions
earlier. I think I remember Bill saying that Martin uses a RIF.

That's true, but it's irrelevant. The RIF is the function that combines outputs from higher-level control systems and produces a single value that is the reference input to the comparator. It doesn't enter this discussion, because it is outside the loop.

Today I
don't think you substitute the "Gain" with some non-linear functions. You
substitute the Slowing factor and prevent unexpected shocks in the
Comparator/or RIF with a "tol" and a non-linear function.

No. I don't alter the slowing factor, and the RIF is irrelevant. The tolerance band doesn't "prevent unexpected shocks" at all. All it does is ensure that if the difference between the reference value and the perceptual signal value is less than "tol", the error signal sent to the output function is zero.

I have not seen that neither Rick nor Bill have changed their simulations in
your way. I would have appreciated that both Bill (when he is back) and Rick
commented why.

I think Bill says that he has tried it, or something similar, on occasion, and found that a small tolerance band does improve the model fit. I looked for the relevant post, but couldn't find it, so maybe I'm wrong. A tolerance band certainly improves the model fit to my tracking data from the "Sleepy Teams" study.

To your last sentence.
In the comparator function we talk about signals valued as frequencies. They
are different dependent on which level we are. (Higher frequencies higher
up) (Now I am thinking when I write).
Let us expect frequencies near 10 per sec. Now we have to develop a "tol".
This "tol" should be different if the errors are near and "over" (+)(-)10 or
the errors are near zero. Shall we estimate "tol":
IF ((r-p)>7; tol=0.1; IF ((r-p)<-7; tol=-0.1; IF (-0.5<(r-p)<0.5; tol=0.05; tol=0)))

"tol" can't depend on (r-p).

In principle, I suppose it could be a function of r or of p, or of any other signal in the whole system. But the one thing it CANNOT be a function of is (r-p), because "tol" is what (r-p) is compared with.

Shall we estimate the comparator function:
error= r-p+tol

That's one possibility, if r - p is guaranteed to be more negative than tol. It wouldn't work under any other circumstances. As I said and showed in graphs, I've tried

E = if(|r-p| < tol, 0) else (r-p), and
E = if(|r-p| < tol, 0) else sgn(r-p)*(|r-p| - tol)

where sgn(x) = +1 if x >0, and -1 if x < 0.

The first form seems to work better than the second for my data.
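The two error functions can be written out explicitly (a sketch; the function names are mine, not from the original posts):

```python
def err_deadzone(r, p, tol):
    """First form: zero inside the band, full (r - p) outside it."""
    return 0.0 if abs(r - p) < tol else (r - p)

def err_subtract(r, p, tol):
    """Second form: zero inside the band, with the band width
    subtracted from the error magnitude outside it."""
    d = r - p
    if abs(d) < tol:
        return 0.0
    return (1 if d > 0 else -1) * (abs(d) - tol)

# With tol = 0.5: inside the band both give 0; outside they differ.
print(err_deadzone(0.0, 0.3, 0.5))   # 0.0
print(err_deadzone(0.0, 2.0, 0.5))   # -2.0
print(err_subtract(0.0, 2.0, 0.5))   # -1.5
```

Note that the second form is continuous at the band edge, while the first jumps from 0 to the full error there.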

Comments?

Enough?

Martin

[From Bjorn Simonsen (2005.11.03,12:30 EUST)]
Martin Taylor 2005.11.02.09.18

Enough?
Yes, thank you. Your mails have been illuminating. I hope you have not
exhausted all your time and energy helping me understand.
Here is what I have learned.
The starting point was Rick's statement: From Rick Marken (2005.10.28.1440)
  To me, tolerance just means putting up with error in a control
  system, whether that error is the result of an active attempt by someone to
  change the state of something you care about or whether it is a passive
  result of a disturbance.
Today _this_ is OK to me.
2005.10.28 I thought of tolerance as a reference signal at the principle
level. I thought there were loops with "I wish to tolerate" and other loops
with "I wish not to tolerate". When these loops were disturbed by seeing,
hearing or feeling a certain person, copies of perceptual signals were
directed to higher levels, also to levels above the principle level. There
the output signal got a certain value. Some of these outputs reached the
comparator at the principle level comparing for "I wish to tolerate", and
others reached those comparing for "I wish not to tolerate". One loop (let
us say the "I wish to tolerate" loop) got a high-valued reference, the other
got a near-zero reference (because of the inhibiting effect). When the
perceptual signal arrived at these comparators, it produced an error in the
first and not in the second. (Because the perceptual signal was greater than
the reference signal and it inhibited the error, the error was negative. And
negative frequencies don't exist.) This error was directed to the output
function and converted to an output signal, and so on.
If these loops were disturbed by seeing, hearing or feeling another certain
person, the reference signal I referred to would get another value (maybe it
would be the "I wish not to tolerate" loops that got an error this time). [I
am sorry for so many words]

In the same mail Rick also said:
  Not the way I see tolerance. Tolerance, to me, is related to the gain of a
  control system. A low gain control system is tolerant, a high gain one is
  not. The setting of the reference is irrelevant to tolerance, except to the
  extent that a zero setting completely removes the person from the error
  creating situation by stopping control of the error creating perception.

To me, this was new. I said "It's the first time I have seen such a
technical definition for the concept 'tolerance'. I liked it."
At this moment, [Martin Taylor 2005.10.31.09.43], you appeared and I
recognized your comparison with engineering design. And in the next mail you
stimulated my curiosity writing:
  In fact, non-linear gain curves are the only way that direct conflicts
  can be prevented from escalating to infinite energies (I mean literally
  infinite; in a real physical system, the non linearity would at the very
  least show up as an explosion!).

You explained this very well in [Martin Taylor 2005.11.01.10.11] and you
introduced concepts I had to read twice, because I am not well enough
informed about leaky integrators and multipliers. You said:
  We talk of "Gain" as if it were a simple multiplier, but really it isn't.
  It represents how fast the integrator's output changes for a given
  value of the error. It's a rate multiplier rather than an error
  multiplier.

And you explained the effect of a tolerance using E = if(|R-P| < tol, 0,
(R-P)).

At this time I agree that it is more effective to use a non-linear gain to
make simulations run more smoothly. Of course I had been thinking about
non-linear gains earlier. BCP has a section called Nonlinearities. From
there I remembered that, with a linear system, any value of gain greater
than 10 would keep the difference between the perceptual signal and the
reference signal to less than 9.1% of the reference signal (d=0). You have
a higher precision in mind, and I would like to understand your thinking.

You said:
  "It is. That's loop gain, but it is improper to think of it as a
  scalar number. It's a time function, and if the system is linear, you
  can use the same equations you have been using, but recognize that
  the different symbols actually represent the Laplace transforms of
  the waveforms of these time functions".

And I found that OK.

You said:
  "Forget the slowing factor. It's the "leak" in the "leaky integrator",
  and its function is to allow the output to forget long-past values of
  the input. It smoothes out past errors, and in discrete simulations it
  ensures that you don't see oscillations that are due purely to the
  method of computation (or not much)".
And you continued [Martin Taylor 2005.11.02.09.18] saying:
  Non-linear or linear has nothing to do with the slowing factor. The
  leaky integrator is a linear function. Nothing "compensates for" it.
  The slowing factor is just there to smooth out the system response.
  Actually, it isn't itself the leak. I was wrong to say that, but the
  leak is usually introduced in the same formula as the slowing
  function, which tends to get them confused. It is, in your formula:
  o(t+1) = o(t)+ s[g(r-p) - o(t)]. The first component in the "s"
  bracket is the slowing factor, the second is the leak.
Now I understand that the "leak" was not energy that disappeared in an
unknown way. I learned that the "leak" is s*o(t), the value we must subtract
in going from o(t) to o(t+1). Is this what happens in a concrete leaky
integrator?
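The decomposition of the formula o(t+1) = o(t) + s[g(r-p) - o(t)] into a drive term and a leak term can be sketched numerically (an illustrative sketch added here, not from the original thread; the constant-error demonstration is my addition):

```python
def step(o, r, p, g=3.0, s=0.1):
    """One update of the leaky, slowed integrator."""
    drive = s * g * (r - p)   # slowed error drive: s*g*(r-p)
    leak = s * o              # the "leak": s*o(t), subtracted each step
    return o + drive - leak   # o(t+1) = o(t) + s*[g*(r-p) - o(t)]

# With a constant error of 1, the leak stops the integrator from
# ramping forever: o settles at g*(r-p) = 3 instead.
o = 0.0
for _ in range(200):
    o = step(o, r=1.0, p=0.0)
print(round(o, 3))   # 3.0
```

Without the leak term, the same loop would add s*g*(r-p) every step indefinitely; the leak is what lets the output "forget" long-past inputs.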

When you said:
  "The output function could also be nonlinear. But that's a
  different matter. It has to be nonlinear in that any physical
  system will have a maximum possible output value. But
  most control systems don't get into that range very often".

I got a better understanding when you talked about non-linearity in the loops.

From your [Martin Taylor 2005.11.02.09.18] I got a good understanding of
multipliers in a PCT loop. Thank you.
  "But the diagram of a control loop has only two places where a
  function has two inputs. One is the comparator function, where the
  two inputs are the reference (R) and the perceptual signal (P). The
  other is the node in the "outer world" where the two inputs are the
  output (O) and the disturbance (D). Neither of these is ordinarily
  considered to be a multiplier (though there's no reason in principle
  why they should not be)".

What is a complex multiplier? The only kind I know is one in which
complex numbers are multiplied. But I don't think that's what you
mean. What I meant by "not a simple multiplier" should perhaps have
been written "not simply a multiplier".

Yes, that's what I should have understood.

Your comments about the slowing factor corresponded with the way I
understand the formula
o(t+1) = o(t)+ s[g(r-p) - o(t)]. You explained it very well in your comments
after:
  "Let's try to clarify this all with a series of examples: a loop with,
  first, a non-integrating output function, second, an integrating
  output function, and third, an integrating output function with a
  slowing factor".

Nice numerical examples that explain much.

Today I
don't think you substitute the "Gain" with some non-linear functions. You
substitute the Slowing factor and prevent unexpected shocks in the
Comparator/or RIF with a "tol" and a non-linear function.

No. I don't alter the slowing factor, and the RIF is irrelevant. The
tolerance band doesn't "prevent unexpected shocks" at all. All it
does is ensure that if the difference between the reference value and
the perceptual signal value is less than "tol", the error signal sent
to the output function is zero.

Your last sentence tells me much about non-linearity in smoothly functioning
PCT loops and about "tol".

IF ((r-p)>7; tol=0.1; IF ((r-p)<-7; tol=-0.1; IF (-0.5<(r-p)<0.5; tol=0.05; tol=0)))

"tol" can't depend on (r-p).

OK. When I wrote the two lines I said to myself that "tol" has an effect
before the error is formed.

That's one possibility, if r - p is guaranteed to be more negative
than tol. It wouldn't work under any other circumstances. As I said
and showed in graphs, I've tried

E = if(|r-p| < tol, 0) else (r-p), and
E = if(|r-p| < tol, 0) else sgn(r-p)*(|r-p| - tol)

where sgn(x) = +1 if x >0, and -1 if x < 0.

The first form seems to work better than the second for my data.

Very good. Thank you. I will remember non-linearity. Thank you for spending
so much time.
I think we stop here.

If Rick reads this I would appreciate a comment on why the output quantity
changes so little when the gain changes so much in the example I sent
earlier. The values were:

r=3, d=5, gain(1) = 10, o = -1.818
r=3, d=5, gain(2) = 50, o = -1.961
r=3, d=5, gain(3) = 100, o = -1.980
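These outputs follow from the static solution of the proportional loop. With p = o + d and o = g(r - p), solving gives o = g(r - d)/(1 + g), which approaches (r - d) = -2 as g grows, so large gain changes barely move o. A quick check (a sketch added here; the gains 10, 50 and 100 are assumed, since they reproduce the three listed outputs exactly):

```python
# Static proportional loop: p = o + d, o = g*(r - p)
# => o = g*(r - d)/(1 + g), which saturates toward (r - d) for large g.
r, d = 3.0, 5.0
outputs = {g: round(g * (r - d) / (1 + g), 3) for g in (10.0, 50.0, 100.0)}
print(outputs)   # {10.0: -1.818, 50.0: -1.961, 100.0: -1.98}
```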

Bjorn