LWG: Parameter reorganisation

[From Rupert Young (2017.11.25 14.20)]

Here is a demo of a parameter reorganisation using random changes as used in Bill's demos.

(The upload to youtube is a bit blurry, so you can see the original here if needs be.)

It is adjusting the gain in an output leaky integrator function for a single control system, to reduce the error response of the system as measured over a certain period (100 iterations).

The video shows graphs for 1) the error response, 2) the control system input, output and reference, and 3) the gain value.

The gain starts out at 500000 (slowing factor of 1000000). Initially, while the error increases, the gain adjustment changes randomly (every 100 iterations), but when an adjustment value results in the error reducing then that value is continuously applied to the gain on each iteration. This continues until the gain value has changed to a value that results in increased error, which would then lead to another random adjustment.

In the video it can be seen that the gain increases until it reaches the optimal value of around 2000000 and stabilises (for a while) at around the 1:30 mark. Later, around 5:45, it gets grossly out of control but does actually recover to be very stable.

The correction to the gain is computed thus:

correct = learningrate * delta * errorresponse * parameterMA;

where delta is a random number between -1 and 1 and parameterMA is a moving average of the gain value. In the demo, learningrate = 1.

This adjustment value is changed every 100 iterations if the current error response is greater than the previous error response.
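A minimal sketch of this rule in Python (the plant, reference, disturbance, window bookkeeping and the form of the moving average are assumptions made only for illustration; only the names learningrate, delta, errorresponse and parameterMA come from the description above, and the demo's learningrate of 1 is scaled down to suit this toy loop):

```python
import random

# A minimal sketch of the reorganisation rule described above. The plant,
# reference, disturbance, error measure and moving-average form are assumed
# for illustration only.
learningrate = 0.001        # the demo used 1; this toy setup needs a smaller value
gain, slow = 500_000.0, 1_000_000.0   # starting gain g and slowing factor s
o = 0.0                               # output of the leaky integrator
r, d = 1.0, 0.5                       # reference and step disturbance (assumed)

WINDOW = 100                          # iterations per error-response period
delta = random.uniform(-1.0, 1.0)     # current random adjustment
parameterMA = gain                    # moving average of the gain value
errorresponse = 0.0                   # error response of the last full period
accum, prev = 0.0, 0.0                # error accumulating in the current period

for t in range(1, 50_001):
    p = o + d                         # perception (unit feedback assumed)
    e = r - p
    o += (gain * e - o) / slow        # output leaky-integrator function
    accum += abs(e)
    parameterMA += (gain - parameterMA) / WINDOW   # assumed form of the average

    if t % WINDOW == 0:
        errorresponse = accum
        if errorresponse > prev:      # error got worse over the last period,
            delta = random.uniform(-1.0, 1.0)   # so pick a new random adjustment
        prev, accum = errorresponse, 0.0

    # while the error keeps reducing, the same correction is applied every iteration
    correct = learningrate * delta * errorresponse * parameterMA
    gain += correct
```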

Comments welcome, as well as any suggestions of issues that need to be taken into account, particularly in terms of generalising the algorithm.


https://youtu.be/HQwPShnYwLs
https://www.dropbox.com/s/eotka6uef8dfuxo/gain%20reorg.mp4?dl=0

Regards,
Rupert

[Martin Taylor 2017.

Rupert, what do you mean by "gain" and "slowing factor" here? They are nowhere near what I am used to in simulations. The steady-state (asymptotic) gain of a good control system is usually between 10 and 20, not millions. That is equal to the ratio between gain-rate and leak-rate (slowing factor, I think). The gain-rate (the parameter in the usual output function equation) is typically well under 1.0, because to have an increment of more than the total error at every iteration would mean no control at all. Bill observed on one occasion that for his own human tracks, the best-fitting model always seemed to have a leak time-constant of about 600 msec (which with an iteration interval of 1/60 second would imply a slowing factor of about 1/36000). How do your numbers relate to these?

Martin

[From Rupert Young (2017.11.26 21.55)]

(Martin Taylor 2017.

Rupert, what do you mean by "gain" and "slowing factor" here? They are nowhere near what I am used to in simulations. The steady-state (asymptotic) gain of a good control system is usually between 10 and 20, not millions. That is equal to the ratio between gain-rate and leak-rate (slowing factor, I think). The gain-rate (the parameter in the usual output function equation) is typically well under 1.0, because to have an increment of more than the total error at every iteration would mean no control at all. Bill observed on one occasion that for his own human tracks, the best-fitting model always seemed to have a leak time-constant of about 600 msec (which with an iteration interval of 1/60 second would imply a slowing factor of about 1/36000). How do your numbers relate to these?

I am using the equation,

o = o + (ge-o)/s

where g is the gain and s the slowing factor. So I was starting with g = 500000 and s = 1000000.

I could have used 50 and 100 or 5 and 10, but I find that the larger values result in less steady state error when p is close to r.
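A quick numerical check of that point (a toy loop, not the demo itself: unit feedback, a constant reference r and a constant disturbance d are assumed): holding the ratio g/s at 0.5 while scaling both numbers up leaves the per-iteration behaviour much the same but shrinks the residual error.

```python
# Toy check (assumed unit-feedback loop, constant reference and disturbance):
# the output leaky integrator o += (g*e - o)/s, run to steady state.
def settle(g, s, r=1.0, d=0.5, iters=2000):
    o = 0.0
    for _ in range(iters):
        e = r - (o + d)
        o += (g * e - o) / s
    e = r - (o + d)
    return e, o / e                    # steady-state error and output/error ratio

for g, s in [(5, 10), (50, 100), (500_000, 1_000_000)]:
    err, ratio = settle(g, s)
    print(f"g={g}, s={s}: steady-state error ~ {err:.2e}, o/e ~ {ratio:.0f}")
```

The output-to-error ratio in the last column is the asymptotic gain discussed in the following messages; it comes out equal to g, independent of s.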

For the reorganisation runs it seemed that the lower values were more unstable, but I need to check that out.

Rupert

[Martin Taylor 2017.11.26.17.08]

[From Rupert Young (2017.11.26 21.55)]

(Martin Taylor 2017.

Rupert, what do you mean by "gain" and "slowing factor" here? They are nowhere near what I am used to in simulations. The steady-state (asymptotic) gain of a good control system is usually between 10 and 20, not millions. That is equal to the ratio between gain-rate and leak-rate (slowing factor, I think). The gain-rate (the parameter in the usual output function equation) is typically well under 1.0, because to have an increment of more than the total error at every iteration would mean no control at all. Bill observed on one occasion that for his own human tracks, the best-fitting model always seemed to have a leak time-constant of about 600 msec (which with an iteration interval of 1/60 second would imply a slowing factor of about 1/36000). How do your numbers relate to these?

I am using the equation,

o = o + (ge-o)/s

where g is the gain and s the slowing factor. So I was starting with g = 500000 and s = 1000000.

I could have used 50 and 100 or 5 and 10, but I find that the larger values result in less steady state error when p is close to r.

For the reorganisation runs it seemed that the lower values were more unstable, but I need to check that out.

Rupert
Yes, the steady-state error is i/(i+G), where G is the asymptotic gain of a leaky integrator. The operation of a leaky integrator is easier to understand (or so I find) as

o = (o+re) - od

where r is the gain rate = gain per time unit, and d is the leak or decay rate, 1/s, well below unity. The gain rate is your g/s = r and the leak rate d = 1/s. To figure out the asymptotic gain G, hold e constant, and note that a steady state exists when re = od. The gain G is defined as o/e, which equals r/d.

Using your variables, r = g/s, d = 1/s, so G = g.

You didn't mention, and I forgot to ask, what the loop transport delay is. The optimum gain and slowing factor should be functions of that. It would be useful to run your experiment with a range of transport delays.

Martin


[From Rupert Young (2017.11.28 11.00)]

(Martin Taylor 2017.11.26.17.08)

Yes, the steady-state error is i/(i+G), where G is the asymptotic gain of a leaky integrator. The operation of a leaky integrator is easier to understand (or so I find) as

o = (o+re) - od

Ok, I believe it is also the exponential smoothing function

o = (1-d)o + dge
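The two forms are indeed the same update, with d = 1/s:

o + (ge - o)/s = (1 - 1/s)o + (1/s)ge = (1 - d)o + dge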

What's i?

You didn't mention, and I forgot to ask, what the loop transport delay is. The optimum gain and slowing factor should be functions of that. It would be useful to run your experiment with a range of transport delays.

I didn't use transport delays as I just wanted to sort out the reorganisation algorithm.

Rupert

[Martin Taylor 2017.11.28.10.43]

[From Rupert Young (2017.11.28 11.00)]

(Martin Taylor 2017.11.26.17.08)

Yes, the steady-state error is i/(i+G), where G is the asymptotic gain of a leaky integrator. The operation of a leaky integrator is easier to understand (or so I find) as

o = (o+re) - od

Ok, I believe it is also the exponential smoothing function

o = (1-d)o + dge

What's i?

A typo for 1 (one).

You didn't mention, and I forgot to ask, what the loop transport delay is. The optimum gain and slowing factor should be functions of that. It would be useful to run your experiment with a range of transport delays.

I didn't use transport delays as I just wanted to sort out the reorganisation algorithm.

OK, but if there's no transport delay, the optimum parameters are infinite gain and zero leak (which is the only case that results in zero error). The only reason your reorganization algorithm wouldn't go explosively wild is that you actually do have a transport lag of one sample period. But that introduces its own problems of digital simulation of analogue processes. The sampling period MUST be short compared to the time-scale of anything interesting happening if the simulation is to imply anything about the progress of the analogue system being simulated. You may remember that Bill P. was very careful about this in his simulations, and more than once pointed out how one can go very wrong when simulating with an insufficiently fast sampling period.

Analyzing the loop in frequency space instead of in time-space, we are talking about the Nyquist limit. Rapid changes in any variable and its derivatives appear as high frequency components in the spectrum of the variable. For a variable with a spectrum that had a bounded bandwidth W, Nyquist showed that the exact waveform of the signal could be represented in 2W samples per second. In the real world signals do not have a bounded spectrum, but do have spectra for which the energy outside a bandwidth W is negligible for the problem under consideration.

Typical engineering practice is to sample at more than three times the Nyquist rate, which is the same as saying you aren't concerned with frequency components outside a bandwidth of 3W. Bill was conservative, and usually used a safety factor well above three.

The effect is to smooth out rapid fluctuations in the analogue waveform being represented digitally. That's why the flip-flop I sent you has smoothly changing passages between A high and B high. And that's why you can't rely on a digital simulation to tell you anything about the performance of the simulated analogue system in which the transport lag in the loop is less than a few sample periods.
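A toy illustration of that point (the unit-feedback loop, constant reference and disturbance, and the particular numbers are all assumed): the perception necessarily uses the output from the previous iteration, so there is a built-in lag of one sample, and once the gain rate g/s is no longer small the discrete loop oscillates on its own, a digital artifact that says nothing about the analogue loop being simulated.

```python
# Toy illustration (assumed unit-feedback loop, constant reference and
# disturbance): the perception uses o from the previous iteration, i.e. a
# built-in transport lag of one sample period.
def run(g, s, iters=8, r=1.0, d=0.5):
    o, errors = 0.0, []
    for _ in range(iters):
        e = r - (o + d)              # p computed from the previous o
        o += (g * e - o) / s         # leaky-integrator output
        errors.append(round(e, 3))
    return errors

print(run(g=20.0, s=40.0))   # gain rate 0.5: the error shrinks smoothly
print(run(g=20.0, s=8.0))    # gain rate 2.5: the error alternates sign and grows
```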

Martin


[From Rupert Young (2017.12.03 19.15)]


The video I sent previously concerned reorganisation of a gain while there was a step disturbance. It seemed to work pretty well with the gain value converging to around the same value from wherever it started.

The image below shows the same system except with a sine disturbance. This doesn't work so well though.

The gain value just wanders around its starting value without reducing error as it should. In this demo, as you can see in the image, I manually change the gain value twice, from 50000 to 100000 and to 200000. Each time the average error goes down but then oscillates around the same value.

I guess this means it is jumping around local minima, but why is this the case? Any suggestions on how to visualise the landscape?

Rupert

[Martin Taylor 2017.12.03.14.53]

What are the three graphs? The writing is much too small for me to read. Whatever they are, the top one shows a classical case of beats, where the frequency of one signal is close to a harmonic of the other. I don't know what those two signals might be, but I presume this time you have a decent transport lag, so the beat might be between the lag and the period of the sine wave disturbance. Just a stab in the dark. If you don't have a reasonable transport lag and sufficient smoothing this time around, all bets are off, because you probably have a digital artifact rather than a representation of what reorganization might do in the system being simulated.

Independent of that, it looks as though you have a very slow sample rate for the reorganization process, which means either that it has to be very slow or you are guaranteed to have bad digital artifacts that have nothing to do with the process being simulated. The probability of an e-coli tumble is proportional to the rate of change of absolute error, and that has to have a bandwidth W that is small compared to the number of samples per second. But you seem to change something every sample time (the abrupt shift moments in the top graph).

As for visualising the landscape: not without knowing what you are representing.

Martin


[From Rupert Young (2017.12.03 21.40)]

(Martin Taylor 2017.12.03.14.53)

What are the three graphs? The writing is much too small for me to
read. Whatever they are. the top one shows a classical case of beats,
where the frequency of one signal is close to a harmonic of the other.
I don't know what those two signals might be, but I presume this time
you have a decent transport lag, so the beat might be between the lag
and the period of the sine wave disturbance. Just a stab in the dark.
If you don't have a reasonable transport lag and sufficient smoothing
this time around, all bets are off, because you probably have a
digital artifact rather than a representation of what reorganization
might do in the system being simulated.

I've attached the image rather than inline; hopefully that'll be clearer. The change in the top graph, the error response, indicates the reorganisation period; I think that might need to be much longer than the phase of the sine wave.

Rupert

[Martin Taylor 2017.12.03.16.45]

[From Rupert Young (2017.12.03 21.40)]

(Martin Taylor 2017.12.03.14.53)

What are the three graphs? The writing is much too small for me to read. Whatever they are. the top one shows a classical case of beats, where the frequency of one signal is close to a harmonic of the other. I don't know what those two signals might be, but I presume this time you have a decent transport lag, so the beat might be between the lag and the period of the sine wave disturbance. Just a stab in the dark. If you don't have a reasonable transport lag and sufficient smoothing this time around, all bets are off, because you probably have a digital artifact rather than a representation of what reorganization might do in the system being simulated.

I've attached the image rather than inline, hopefully that'll be clearer. The change in the top graph, the error response, indicates the reorganisation period, I think that might need to be much longer than the phase of the sine wave.

Thanks. Now I can read the graph.

My diagnosis would be different from yours. I would allow a tumble at any moment (by "moment" I mean an iteration of the simulation). The criterion for the probability of a tumble at a moment should be the value of the average recent rate of change of error-squared. The faster the error has recently been increasing, the higher the tumble probability. If the error has on average recently been decreasing, don't tumble, keep changing the parameters the same amount each iteration as you had been doing, until the recent average change of error starts increasing. Mind you, the probability of a tumble never goes completely to zero.

If you are going to modify only one parameter, you don't have to think of tumbling, because there's no new direction to tumble into. All you have is a one-dimensional control problem, changing the value of that parameter until the perceived absolute (or squared) error is a minimum. A "tumble" becomes a reverse in direction if the error seems to be increasing if you keep going the way you were. But the e-coli approach should work when you are changing all the loop parameters at the same time. (Although as I mentioned in my CSG-2005 presentation <http://www.mmtaylor.net/PCT/CSG2005/CSG2005bFittingData.ppt>, you get better results if you rotate the parameter space first).

The objective is to minimize the RMS error (or mean absolute error, which comes to the same thing), but you need to ask about the "M" (mean). Over what period is it averaged? You don't want a boxcar average that takes the mean of the last N values. You want a leaky integrator, just like the leaky integrator in a control loop. Indeed, what you would be building is exactly a control loop in which the perception is the absolute or the squared error in the control loop whose parameters you are changing.

The question, as in any control loop with a leaky integrator output stage, is the balance between gain rate and leak rate. The leak has to be slow enough to allow you to take account of enough data, but not so slow that values from last week have much influence on what you do in the next second or two. In your case, I might suggest as a first cut that the leak rate of your error-averaging integrator might be a few cycles of your sine wave -- say four or five at a guess.

If you think of the problem as one of building a control loop that has as its perception the control quality of the loop whose parameters you are changing, and that influences that perception by altering a parameter, I think you will have better results.
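One way to sketch that arrangement (everything here is an assumed illustration rather than a prescription: the toy loop, the QoC leak rate, the step size, and the mapping from rate of change of error-squared to tumble probability, which is normalised by the current QoC just to keep the numbers scale-free): perceive quality of control as a leaky-integrated squared error, keep stepping the parameters the same way while it improves, and tumble with a probability that grows with its recent rate of increase and never quite reaches zero.

```python
import math, random

# Assumed illustration of the scheme described above: QoC is a leaky-integrated
# squared error; the parameters keep moving the same way each iteration; a
# tumble (new random direction) becomes more likely the faster QoC has been rising.
def reorganise(iters=50_000):
    gain, slow = 50_000.0, 1_000_000.0       # loop parameters being reorganised
    o = 0.0
    qoc = 1.0                                # pessimistic starting estimate (assumed)
    leak = 1.0 / 2_000                       # QoC leak: ~4 cycles of the disturbance
    direction = (random.uniform(-1, 1), random.uniform(-1, 1))
    step = 1.0                               # e-coli step magnitude (assumed)

    for t in range(iters):
        d = math.sin(2 * math.pi * t / 500)  # sine disturbance, period 500 (assumed)
        e = -(o + d)                         # reference is 0
        o += (gain * e - o) / slow           # leaky-integrator output function

        dqoc = leak * (e * e - qoc)          # change in the smoothed squared error
        qoc += dqoc

        # tumble probability rises with increasing error; it is never exactly zero
        p_tumble = min(1.0, 1e-4 + 50.0 * max(dqoc, 0.0) / (qoc + 1e-12))
        if random.random() < p_tumble:
            direction = (random.uniform(-1, 1), random.uniform(-1, 1))

        # otherwise keep moving through parameter space by the same amounts
        gain += step * direction[0]
        slow = max(1_000.0, slow + step * direction[1])

    return gain, slow, qoc

print(reorganise())
```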

Martin


[From Rupert Young (2017.12.05 16.45)]

(Martin Taylor 2017.12.03.16.45)

My diagnosis would be different from yours. I would allow a tumble at any moment (by "moment" I mean an iteration of the simulation). The criterion for the probability of a tumble at a moment should be the value of the average recent rate of change of error-squared. The faster the error has recently been increasing, the higher the tumble probability. If the error has on average recently been decreasing, don't tumble, keep changing the parameters the same amount each iteration as you had been doing, until the recent average change of error starts increasing. Mind you, the probability of a tumble never goes completely to zero.

Ok, that's a possible strategy, I'll add it to my plan. I think the reason that Bill didn't do this (in ThreeSys) was because he had a recurring cycle of reference signal patterns and the meaningful error comparison was between cycles, rather than a moving average. If the moving average were used you'd get different values depending where you were in the cycle.

If you are going to modify only one parameter, you don't have to think of tumbling, because there's no new direction to tumble into. All you have is a one-dimensional control problem, changing the value of that parameter until the perceived absolute (or squared) error is a minimum. A "tumble" becomes a reverse in direction if the error seems to be increasing if you keep going the way you were.

Sure, I am just trying to sort out the ecoli algorithm to use later for multiple variables. For one variable you can just change the sign of the direction.

But the e-coli approach should work when you are changing all the loop parameters at the same time. (Although as I mentioned in my CSG-2005 presentation <http://www.mmtaylor.net/PCT/CSG2005/CSG2005bFittingData.ppt>, you get better results if you rotate the parameter space first).

Could you expand on this rotation? I didn't find an explanation in the slides.

The objective is to minimize the RMS error (or mean Absolute Error, which comes to the same thing), but you need to ask about the "M" (Mean"). Over what period is it averaged? You don't want a boxcar average that takes the mean of the last N values. You want a leaky integrator, just like the leaky integrator in a control loop. Indeed, what you would be building is exactly a control loop in which the perception is the absolute or the squared error in the control loop whose parameters you are changing.

The question, as in any control loop with a leaky integrator output stage, is the balance between gain rate and leak rate. The leak has to be slow enough to allow you to take account of enough data, but not so slow that values from last week have much influence on what you do in the next second or two. In your case, I might suggest as a first cut that the leak rate of your error-averaging integrator might be a few cycles of your sine wave -- say four or five at a guess.

How can the leak rate (which is the exponential moving average smoothing factor) be set to the sine wave cycles (or number of iterations)?

If you think of the problem as one of building a control loop that has as its perception the control quality of the loop whose parameters you are changing, and that influences that perception by altering a parameter, I think you will have better results.

Ok, another strategy for the plan.

Rupert

[Martin Taylor 2017.12.05.12.42]

Various PCT subtleties here.

[From Rupert Young (2017.12.05 16.45)]

(Martin Taylor 2017.12.03.16.45)
My diagnosis would be different from yours. I would allow a tumble at any moment (by "moment" I mean an iteration of the simulation). The criterion for the probability of a tumble at a moment should be the value of the average recent rate of change of error-squared. The faster the error has recently been increasing, the higher the tumble probability. If the error has on average recently been decreasing, don't tumble, keep changing the parameters the same amount each iteration as you had been doing, until the recent average change of error starts increasing. Mind you, the probability of a tumble never goes completely to zero.

Ok, that's a possible strategy, I'll add it to my plan. I think the reason that Bill didn't do this (in ThreeSys) was because he had a recurring cycle of reference signal patterns and the meaningful error comparison was between cycles, rather than a moving average. If the moving average were used you'd get different values depending where you were in the cycle.

Yes. Any structure in the disturbance affects the best way to handle the situation. Bill's Artificial Cerebellum is a way of using the temporal structure of either the disturbance or the dynamic behaviour of the loop itself to improve control. A leaky integrator output function is the opposite. It explicitly does not take into account any temporal structure in either, but kicks it upstairs to be handled by higher levels in the hierarchy. So if you have a leaky integrator output function, and are trying to determine the best parameter values for a particular transport lag, your technique must assume nothing about the temporal structure of the disturbance.

If the system can control against a cyclic variation of some kind in the disturbance, as opposed to controlling a variable while the disturbance happens to be moving through a slow cycle, then the output function or something else in the loop has some component or relationship that is a complement to the cycle in the same way that one strand of a DNA helix is a complement to the other. So we assume that is not the case, and ignore the cyclicity. The averaging just has to be over a time that covers the cycle smoothly (i.e. not a box-car averager, which would get messed up with the beats between the averaging duration and the cycle period). Again, a leaky integrator seems to be the appropriate function for the averager.

If you are going to modify only one parameter, you don't have to think of tumbling, because there's no new direction to tumble into. All you have is a one-dimensional control problem, changing the value of that parameter until the perceived absolute (or squared) error is a minimum. A "tumble" becomes a reverse in direction if the error seems to be increasing if you keep going the way you were.

Sure, I am just trying to sort out the ecoli algorithm to use later for multiple variables. For one variable you can just change the sign of the direction.

But the e-coli approach should work when you are changing all the loop parameters at the same time. (Although as I mentioned in my CSG-2005 presentation <http://www.mmtaylor.net/PCT/CSG2005/CSG2005bFittingData.ppt>, you get better results if you rotate the parameter space first).

Could you expand on this rotation? I didn't find an explanation in the slides.

So I see. That was a bit of an oversight in the presentation. You are the first to ask about it, so maybe you are the first to think carefully about it.

Think of the parameters, say gain rate, leak rate, and tolerance, as axes in a 3-D space. The set of three values of the parameters defines a point in the space, and a measure of control performance.

This figure shows three stages in the rotation and scaling process, but in only two dimensions. The rings represent the performance measure, inside the smallest ring being best. I imagine two parameters, X and Y, and a performance measure that is a function of {X, Y}, the location in the X-Y space. In this case, the performance measure is a joint function of X and Y, and their effects are correlated. A small change in 2Y-X has the same effect as a big change in 2X+Y, while changes in either X or Y have intermediate effects, and when one of them is at its optimum for a given value of the other, both must be changed together to improve performance further, unless both are at their optimum at the same time.

Start with the {X,Y} value represented by the black dot in each panel. The e-coli move changes each parameter by a different defined amount to a new place in the space, with a new value for the measure of control performance (the grey dot). The move is in a particular direction and distance in the space. Unless there is a tumble, the next move changes them by the same amounts, moving in the same direction and by the same distance (to the open dot).

[Figure: e-coli_DataRotation.jpg — three stages of rotating and scaling the parameter space]

At this point, performance begins to get worse, and a tumble is likely. In a tumble, all directions are equally likely. The shaded triangle represents directions toward the "much better" area of the innermost ring. The probability that the move is in that range of directions is rather small (though the probability that it will result in a direction of some improvement is 0.5, as always).

The first move toward making it easier to find the optimum point in the space is a rotation to eliminate the correlation between the variables (or among them all in a higher-dimensional space). After rotation, it is possible to optimize one variable and then another, but the range of directions that lead to the "much better" area is unchanged. To improve this, the next stage is to scale the variables so that the contours of equal performance become circles. The landscape now has no ridge lines to be followed, and the range of directions toward the "much better" area is the same from any starting point having the same performance measure, and the e-coli tumble is much more likely to be in that range than it is from a point on or near a ridge line.

The same or related issues apply to any hill-climbing technique. Rotation and scaling to equalize the "slope" in all directions from the optimum will make the hill easier to climb. The problem is how to find the optimum rotation angles and scaling constants. That's where the genetic algorithm came in. In two dimensions there are two degrees of freedom for the variables, one for the rotation angle and one for the relative scaling factor for the rotated variables. A genetic algorithm with four genes seems appropriate. In my CSG2005 presentation I was comparing two 5-variable fits, which required 35 genes in seven "chromosomes".
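A small numerical illustration of the rotate-then-scale idea (this uses an eigen-decomposition of the curvature of an assumed quadratic performance surface, which is one standard way to get the rotation; it is not the genetic-algorithm search from the presentation): the eigenvectors give the rotation onto the principal axes and the square roots of the eigenvalues give the per-axis scaling that turns the elliptical contours into circles.

```python
import numpy as np

# Assumed quadratic performance surface, sloppy along 2X+Y and sensitive along
# 2Y-X, as in the figure above; the optimum is at the origin.
a, b = 1.0, 0.05
def cost(v):
    x, y = v
    return a * (2 * y - x) ** 2 + b * (2 * x + y) ** 2

# Curvature (Hessian) of that surface; cost(v) equals 0.5 * v' H v.
H = np.array([[2 * a + 8 * b, -4 * a + 4 * b],
              [-4 * a + 4 * b, 8 * a + 2 * b]])

evals, evecs = np.linalg.eigh(H)      # rotation = eigenvectors, scaling = eigenvalues

def to_round_coords(v):
    """Rotate onto the principal axes, then scale each axis so that the
    equal-performance contours become circles."""
    return np.sqrt(evals) * (evecs.T @ np.asarray(v, dtype=float))

for v in [(1.0, 0.5), (0.5, 1.0), (-1.0, 2.0)]:
    u = to_round_coords(v)
    # In the new coordinates the cost is just half the squared distance from
    # the optimum, the same in every direction.
    print(round(cost(v), 4), round(0.5 * float(u @ u), 4))
```

In the scaled coordinates the performance depends only on the distance from the optimum, so an e-coli tumble from any equally-bad starting point is equally likely to head toward the "much better" region, which is the property described above.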

The objective is to minimize the RMS error (or mean absolute error, which comes to the same thing), but you need to ask about the "M" (mean). Over what period is it averaged? You don't want a boxcar average that takes the mean of the last N values. You want a leaky integrator, just like the leaky integrator in a control loop. Indeed, what you would be building is exactly a control loop in which the perception is the absolute or the squared error in the control loop whose parameters you are changing.

The question, as in any control loop with a leaky integrator output stage, is the balance between gain rate and leak rate. The leak has to be slow enough to allow you to take account of enough data, but not so slow that values from last week have much influence on what you do in the next second or two. In your case, I might suggest as a first cut that the leak rate of your error-averaging integrator might be a few cycles of your sine wave -- say four or five at a guess.

How can the leak rate (which is the exponential moving average smoothing factor) be set to the sine wave cycles (or number of iterations)?

Re-reading my own comment, the leak rate in question is not that of the control loop, but of the integrator that is part of the calculation of the RMS error. My suspicion was that you might be averaging over fixed time intervals rather than using a leaky integrator. You just want to make the leak rate slow enough to avoid getting mixed up with the phase of the sine wave and fast enough not to get mixed up when the disturbance changes its character.

If you think of the problem as one of building a control loop that has as its perception the control quality of the loop whose parameters you are changing, and that influences that perception by altering a parameter, I think you will have better results.

Ok, another strategy for the plan.
Here's a possible functional diagram for a "reorganization control loop" in which the loop shown deals with the magnitude of an e-coli step, while a separate path (not shown) uses the derivative of the QoC to set the probability of a tumble happening at each computational iteration. That could also be inside the output function (starred because it is not the same as the simple leaky integrators of the perceptual control hierarchy even though it might well include one).

[Figure: 4.19_ReorganizationControl_v3.jpg — functional diagram of a reorganization control loop]

If the intrinsic variable in question isn't QoC, it would not normally be possible to assign its error value to any particular control loop in the hierarchy. The same kind of reorganizing control loop would apply, but interacting with a lot more control loops and the parameters of their mutual interactions. That's a whole 'nuther discussion. I think it is part of the high-dimensional removal of correlation by rotation discussion. But who knows?

Martin

[From Rupert Young (2017.12.06 11.50)]

(Martin Taylor 2017.12.05.12.42)

Or a box-car averager if done over exactly one cycle?

Ok, I think I understand this, conceptually. Perhaps we can revisit this when we get to the point of implementation.

My understanding of the leaky integrator is that you are averaging over all time, but that the influence of each data point decreases exponentially with respect to time into the past. So is there a way of calculating the rate such that the influence of a data point becomes negligible, 1% say?

Rupert


[Martin Taylor 2017.12.06.10.16]

[From Rupert Young (2017.12.06 11.50)]

(Martin Taylor 2017.12.05.12.42)

Or a box-car averager if done over exactly one cycle?

To do that would presume the control system was constructed to control against only a disturbance of this precise periodicity. The point of the leaky integrator output function is that it is a function (and I believe but can't prove it is the only function) that is totally indifferent to the temporal form of the disturbance. The same is true of the reorganizing system. If you want to create a control loop optimized only for that kind of disturbance, then the reorganizing system can use the structure of the disturbance. If not, the same considerations apply, and I think that a leaky integrator is the only valid form of averager. I could be proven wrong.

The same or related issues apply to any hill-climbing technique. Rotation and scaling to equalize the "slope" in all directions from the optimum will make the hill easier to climb. The problem is how to find the optimum rotation angles and scaling constants. That's where the genetic algorithm came in. In two dimensions there are two degrees of freedom for the variables, one for the rotation angle and one for the relative scaling factor for the rotated variables. A genetic algorithm with four genes seems appropriate. In my CSG2005 presentation I was comparing two 5-variable fits, which required 35 genes in seven "chromosomes".

Ok, I think I understand this, conceptually. Perhaps we can revisit this when we get to the point of implementation.

How can the leak rate (which is the exponential moving average smoothing factor) be set to the sine wave cycles (or number of iterations)?

Re-reading my own comment, the leak rate in question is not that of the control loop, but of the integrator that is part of the calculation of the RMS error. My suspicion was that you might be averaging over fixed time intervals rather than using a leaky integrator. You just want to make the leak rate slow enough to avoid getting mixed up with the phase of the sine wave and fast enough not to get mixed up when the disturbance changes its character.

My understanding of the leaky integrator is that you are averaging over all time, but that the influence of each data point decreases exponentially with respect to time into the past. So is there a way of calculating the rate such that the influence of a data point becomes negligible, 1% say?

The result of a step change applied to the input of a leaky integrator is a declining exponential approach to an asymptote. For example, a step down from 1.0 to 0.0 produces output of the form e^(-kt). After 1/k seconds (if t is measured in seconds), the decline has gone down to 1/e, or after (ln 2)/k seconds it has gone down to half. No matter where you are on the decline, (ln 2)/k seconds later the value will be half as much. The reason the function is agnostic about the temporal character of its input is that whatever ratio you choose, pairs of points for which the values have that ratio will be spaced the same distance apart wherever you look on the time axis.

However, you are interested not in that value, but in the total remaining weight beyond X seconds out, a definite integral of e^(-kt) from X to infinity, or e^(-kX)/k if my rusty elementary calculus is correct. You want to find X for that expression to be 0.01, so taking logs of both sides, -kX - ln(k) = ln(0.01), and X = -ln(0.01k)/k. Don't take my word for the algebra, since I am notorious for making silly typos and getting signs wrong.
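A discrete counterpart of that calculation, in the terms Rupert asked about (assuming the exponential-smoothing form q ← (1 − d)q + d·x): the weight on a sample N iterations in the past is proportional to (1 − d)^N, so asking for that weight to have fallen to 1% after N iterations gives d = 1 − 0.01^(1/N).

```python
# Discrete counterpart: with q <- (1 - d)*q + d*x, the influence of a sample
# N iterations back scales as (1 - d)**N.  Solve (1 - d)**N = residual for d.
def leak_for_residual(N, residual=0.01):
    return 1.0 - residual ** (1.0 / N)

# e.g. four to five cycles of a 500-iteration sine wave (an assumed period)
for N in (2000, 2500):
    d = leak_for_residual(N)
    print(N, round(d, 6), round((1.0 - d) ** N, 4))   # last value comes back as ~0.01
```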

Martin

from Kent McClelland (2017.12.06.1400)

Rupert, Martin, I have appreciated your discussion of the technical details of modeling control systems with reorganization, because I too have been working on a modeling project that involves reorganization, in my case a collective control model with two hierarchical levels. I'm not very far along on the project, but your examination of the issues involved in creating control-system models with reorganization has been very helpful. Thanks!

Kent
