Loop delay and limits on control

[Martin Taylor 2010.12.04.17.46]

This is the second in a series relating to information and control.

The first was [Martin Taylor 2010.11.18/17/35]. The present essay
does not use information theory concepts, but assumes that all
variabilities have a Gaussian distribution, meaning that variance
analysis is appropriate. It deals with the limits of control for a
noise-free control unit that has some precisely specified transport
lag around the loop (called “loop delay” in this essay). All
physically realizable control units necessarily have some delay and
some noise in every pathway, so the kind of noise-free control
system considered here is not physically realizable. It provides an
idealized upper bound on the control realizable in practice.

For an organism living in an uncertain environment, what matters is
not the state of a perceptual variable, but the state of the
environment. Yet the organism cannot control anything in the
environment; it can control only perceptual variables that are
functions of states of the environment. Ignoring possible inputs
from what in PCT is called “an imagination loop”, the perceptual
input function of a control unit determines a relationship among
properties of the environment that we denote the CEV (complex
environmental variable) of the control unit, the value of which is
denoted “s” in the equations.

The essay begins by considering possible variability of the CEV
when the perception is controlled but the relation between the CEV
and the perceptual input may have an offset, at first static, and
then variable. This leads to an enquiry into the limits on
perceptual control when there is finite loop delay. Some
experimental tracking runs provide data as a sanity check on the
theoretical limits.

Throughout this essay we consider only a single noise-free control
unit at a single level of control, and assume that the reference
value is static throughout. Consideration of varying reference level
is left for a possible future episode in the series.

Here is a figure with the symbols I use in the equations (plus other
symbols that may be used later).

![Loop delay and limits on contro.jpg|336x362](upload://2AeH7GPEypPkdgaRxB5O954lfTC.jpeg)


···


[From Rick Marken (2010.12.17.1000)]

[Martin Taylor 2010.12.04.17.46]

This is the second in a series relating to information and control.

The first was [Martin Taylor 2010.11.18/17/35]. The present essay
does not use information theory concepts, but assumes that all
variabilities have a Gaussian distribution, meaning that variance
analysis is appropriate. It deals with the limits of control for a
noise-free control unit that has some precisely specified transport
lag around the loop (called “loop delay” in this essay). All
physically realizable control units necessarily have some delay and
some noise in every pathway, so the kind of noise-free control
system considered here is not physically realizable. It provides an
idealized upper bound on the control realizable in practice.

I think you might get more attention to this if you could write up a short (250 word) abstract describing 1) the background to the problem (why is interesting) 2) a simple description of the analysis 3) a simple description of the results of the analysis and 4) your conclusion (explaining why this is important to know). This is a lot of stuff to go through and much of it is very difficult to follow. And the results are not that easy to understand, partly because the table that contains those results wraps in a way that makes it difficult to see which numbers are in which columns.

Best regards

Rick

···

-----------------

Stage 1: static offset  (v fixed).



To start, assume a simple static offset of the sensor-perception
alignment, such as happens when a chicken is fitted with prismatic
goggles that make it see everything a little to the right of its
true position. A chicken fitted with such goggles may try to peck at
a seed, but will peck at the earth to the left of the seed. Chickens
seem not to learn to compensate for this offset. If a person is
fitted with such spectacles, the first move to pick up a cup will be
to the left of the handle, but a higher-level control system
controlling perception of the relationship between hand and handle
will change the reference value for the hand movement so that the
hand does eventually meet the handle.

Throughout this essay we consider only a single control unit at a
single level of control, ignoring any possible influences from
higher-level control units, and assume that the reference value is
static throughout.

Here are the control equations, for notational simplicity taking P(
) and E( ) to be the identity transform. The symbols could be simple
variables and operators, or they could be Laplace transforms:

p = P(qi) = qi

   = s + v

   = (qo + d) + v

   = G(e) + d + v

   = G(r-p) + d + v



p(1+G) = Gr + d + v



p = Gr/(1+G) + (d+v)/(1+G)



or, in an approximation we use several times hereafter, for G large



p  ≈ r + (d+v)/G      



In other words, the offset does not affect control of the perception,
because the offset simply adds to the influence of the disturbance.
Control still keeps the perceptual value almost the same as its
reference value.

But what of the situation in the environment? What is the value of
the CEV, represented by “s” in the equation? Can the chicken peck
the seed? Does your hand grab the cup handle?

p = qi = s + v, or in other words,

s = p - v ≈ r - v



When the perception is controlled, the CEV has the wrong magnitude;
the chicken does not get to eat the seed, the hand does not meet the
cup handle.
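These Stage 1 relations are easy to check numerically. The sketch below is mine, not from the essay: a discrete-time control unit with an integrating output function, a constant disturbance d, and a static offset v. All numerical values (gain, step size, d, v) are assumed for illustration. It shows the perception settling at the reference while the CEV settles at r - v:

```python
# Minimal sketch (assumed parameters): static offset v, constant disturbance d.
# Symbols follow the essay: d disturbance, qo output, s the CEV, p perception.
r = 0.0              # static reference
v = 1.0              # static sensor-perception offset (the "prism")
d = 0.5              # constant disturbance, for simplicity
k, dt = 50.0, 0.01   # integration rate and time step (assumed values)

qo = 0.0
for _ in range(2000):
    s = qo + d               # CEV: output plus disturbance
    p = s + v                # perception carries the offset
    qo += k * (r - p) * dt   # integrating output function

print(round(p, 3))   # 0.0 : the perception sits at its reference
print(round(s, 3))   # -1.0: the CEV misses by v, i.e. s ≈ r - v
```

The loop is a contraction here (|1 - k·dt| < 1), so it converges regardless of the starting output value.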

----side note----



If v remains constant, reorganization will probably correct the
problem (though for a human the problem will be resolved by a
changing reference level from a higher-level control system that
controls, for example, the relationship between hand and
cup-handle). Indeed, experiments from the 1930s to at least the
1970s showed that under these conditions, when people (but not
chickens) wearing prismatic spectacles act to control some
perception of the outer environment, their perceptual functions
change to compensate for the effect of the prism, but correction
does not happen, or happens much more slowly, for aspects of the
environment observed passively (see work by J. G. Taylor, or by Hein
and Held, and for earlier work the Gestaltists such as Kohler). The
perceptual consequences are sometimes odd, as when an observer who
is wearing spectacles that invert up and down perceives the smoke
from a cigarette to drift downward toward the ceiling above.

----end side note ----



So, a fixed offset between the perceptual signal and the outer world
initially means that actions through the environmental feedback path
result in a feedback effect that mismatches the disturbance value by
the amount of the offset. This displacement probably will be
corrected by reorganization, but for now at least, I want to
consider only the uncorrected raw equations for a single control
system.

-------end Stage 1------------



Stage 2: varying offset (v variable)



The equations are the same as in Stage 1, but now we consider
statistics over some period in which v changes randomly with
Gaussian probability statistics and a mean of zero. Why this might
happen is irrelevant to the argument. Call the offset variance
var(v). We also assume (for this essay) that d varies with Gaussian
statistics, with variance var(d). Since the variations of v and d
are by definition independent, var(d+v) = var(v) + var(d).

The quality of control (Q) is sometimes reported as var(d)/var(p),
or as its square root RMS(d)/RMS(p), which is often called the
“Control Ratio” or CR (sometimes the inverse of those ratios is
used, such that the RMS variation of the perceptual signal is x%
of the RMS variation of the disturbance). To reduce the number of
symbols in the written form of the equations, I will use Q =
var(d)/var(p), and use CR for its square root when appropriate.

Good control means Q is a large number. If there is no control at
all, Q = 1. If the control system causes the perceptual signal value
to fluctuate more than the disturbance does alone, Q < 1.

As noted above, p ≈ r + (d+v)/G, so if r is constant, var(p) ≈
var(d+v)/G

1/Q ≈ var(p)/var(d)

       = (var(d) + var(v)) / (G*var(d)) 

       = (1 + var(v)/var(d)) / G



Q ≈ G/(1 + var(v)/var(d))



The effect of varying v on the quality of control depends on the
relative variances of v and d. If v varies as much as d, the control
is only half as good as it would be if only d varied. (If you use CR
as the measure of control quality, the ratio is sqrt(2).)

That's not very interesting, but what can we say about the effect in
the environment? How does the CEV vary? For example, does the hand
now usually catch the cup handle quite precisely? Does the chicken
usually peck accurately at the seed?

The value of the CEV is denoted by "s" in the equations. For the
hand to catch the cup handle accurately, the mean value of the CEV
should be r (which it is in these equations) and its variance should
be small compared to var(d). Let us have a look.

s = p-v, from the equations of Stage 1.



var(s) = var(p) + var(v) if p and v are uncorrelated, as is nearly
the case when control is good

         = ((var(d) + var(v))/G) + var(v)

         = var(d)/G + var(v)*(1 + 1/G)



Control reduces the variance component of s due to the disturbance,
but the component due to the varying offset is affected only a small
amount by the fact of control. The variability of s is the variance
of the perceptual offset, plus a little due to the fact that for a
finite gain, control is not perfect. If perceptual control were
perfect, the variance of s would be exactly the variance of the
offset.
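A numeric illustration of this point (my sketch, not part of the essay): evaluate the loop equilibrium p = (G·r + d + v)/(1 + G) sample by sample for independent Gaussian d and v, treating each sample as if the loop had settled. All parameter values are assumed. With large G, var(p) is tiny, yet var(s) stays close to var(v):

```python
# Per-sample equilibrium sketch: control suppresses var(p) but not var(s).
import numpy as np

rng = np.random.default_rng(0)
G, r, n = 100.0, 0.0, 200_000      # loop gain, reference, sample count (assumed)
d = rng.standard_normal(n)         # disturbance, var(d) = 1
v = rng.standard_normal(n)         # varying offset, var(v) = 1

p = (G * r + d + v) / (1.0 + G)    # settled perception for each sample
s = p - v                          # the corresponding CEV value

print(np.var(p) < 0.01 * np.var(d))            # True: var(p) is tiny
print(abs(np.var(s) / np.var(v) - 1) < 0.05)   # True: var(s) ≈ var(v)
```

The perception is controlled to within a fraction of a percent of the disturbance variance, while the CEV inherits essentially all of the offset's variability, as the equations above say.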

That's also not very interesting, since it is intuitively obvious
that if the perceptual value is stable despite having a variable
offset from the value of the CEV, the value of the CEV must be
variable.

This kind of problem cannot be corrected by reorganization, but a
higher-level control system controlling for the relationship between
hand and cup, or between beak and seed, could still function well by
continuously changing the reference value sent to the lower control
system, provided it can act quickly as compared to the rate at which
v varies.

----------end Stage 2-----------



Stage 3. Delay in the loop (v = 0)



The foregoing was just preparation for the core of this essay,
consideration of the effect of loop delay.

In this stage, we forget about the offset v (i.e. we set v = 0
permanently), but let there be a loop delay of precisely T seconds.
(For any practically realizable control loop, T would not be a
precise number, but would represent a delay distributed over some
time. But here we consider only the effect of a fixed and precise
delay.) Is the effect the same as introducing a variable offset but
no loop delay?

For simplicity in writing the equations, I assume that all the delay
is in the environmental feedback path. It could be anywhere around
the loop for the purposes of the following analysis. I will still
assume that E( ) is the identity transform, but now it is one with a
delay, such that its input at time t0 appears at its output at time
t1 = t0+T, where T is the loop delay. We start by considering the
situation at time t1.

Now the equations become time-based. Please forgive the notation. I
write p(t1) where I should more properly write p subscript_t1 (t),
but I hope this will nevertheless be intelligible.

p(t1) = s(t1) = d(t1) + qo(t0)

        = d(t1) + G(e(t0))

        = d(t1) + G(r - p(t0))  (assuming, as before, that r is static)

Oops! We can't do what we did before, and move Gp across to the left-hand
side of the equation to solve for p, because p(t0) differs from
p(t1). We have to continue around the loop, one loop delay at a
time…

p(t1) = d(t1) + G(r - s(t0))

        = d(t1) + G(r - (d(t0) + qo(t-1)))

        = d(t1) + G(r - (d(t0) + G(e(t-1))))

        = ....



Round and round the loop we go, creating an infinite equation that
may be possible to evaluate, but not by me.

We no longer have a direct way to compute p(t1) as a function of
d(t1), or even as a joint function of d(t1) and d(t0), because the
influence that opposes the disturbance has a value based on an
earlier value of the disturbance – in fact, on an infinite series
of earlier values. We have to know something about the statistics of
how d varies over time if we are to compute anything exact about
p(t1).

We can, however, determine some limiting conditions.



--------------------------

Stage 4. Limits to control if there is loop delay



Let us assume (against all practical possibility) that qo(t0) would
have been precisely the right value to counter d(t0) exactly if
there had been no loop delay. In other words, if there hadn’t been
any delay, control would have been perfect, giving qo(t0) = r-d(t0).
That’s probably better than the best that can be done by a control
system that does not use any prediction.

The first line in the derivation in Stage 3 above was



p(t1) = s(t1) = d(t1) + qo(t0)



which, using the impractical assumption that qo(t0) = r - d(t0), and
taking r = 0 for simplicity so that qo(t0) = -d(t0),

becomes

p(t1) = s(t1) = d(t1) - d(t0)



Since the index of the quality of control is

Q = var(d) / var(p)

the interesting question is the distribution of values of p.



If p(t) = d(t) - d(t-T), then

var(p) = var(d(t) - d(t-T)), giving

Q = var(d) / var(d(t) - d(t-T))



What does the denominator of this expression represent? It is the
variance of the amount by which the disturbance value changes over
the duration of the loop delay, T. If the loop delay is zero, its
value is zero, and Q is infinite (remember we made the assumption
that qo(t0) would have precisely cancelled the effect of the
disturbance, which would mean infinitely good control).

How can we find var(d(t)-d(t-T))?



Assume we know var(d(t)), the overall variance of the disturbance
influence. Assume also that the statistics of the disturbance are
stationary, so that var(d(t-T)) is the same as var(d(t)) for all
delays.

Here we can take advantage of the geometrical interpretation of
correlation http://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient#Geometric_interpretation .
If we have two vectors, X = (x1, x2, …, xn) and Y = (y1, y2, …,
yn), the correlation between the two vectors is the cosine of the
angle between them. Two vectors always define a plane in N-space, as
suggested in the figure.

*[Figure: two vectors X and Y defining a plane in N-space, with the angle θ between them]*

If the two vectors X and Y are taken as two sides of a triangle, the
vector X-Y = (x1-y1, x2-y2, …, xn-yn) is the third side of the
triangle. In the case of interest, X is successive samples of d(t),
Y successive samples of d(t-T), and X-Y is the sample-by-sample
difference d(t)-d(t-T), which is the variable p whose variance we
seek. The lengths of the vectors are the RMS values of the vector
elements, or sqrt(var(vector_elements)): |X| = |Y| = sqrt(var(d)),
and |X-Y| = sqrt(var(p)).

According to the law of cosines, if the sides of a triangle are a,
b, and c, then the length of side c is related to the lengths of the
other two sides and the angle θ between them by the relation

c^2 = a^2 + b^2 - 2ab*cosθ



In our triangle, a is |X|, b is |Y|, and c is |X-Y|.

|X-Y|^2 = |X|^2 + |Y|^2 - 2|X||Y| cosθ



But |X|^2 = |Y|^2 = |X||Y| = var(d),



and cosθ is the correlation between X and Y, which is the
autocorrelation of d(t) at lag = loop_delay. Since |X-Y|^2 = var(p)
by definition, this gives

var(p) = var(d(t) - d(t-T))

          = var(d) + var(d) - 2*var(d)*cosθ

          = 2*var(d)*(1 - cosθ)
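The identity var(d(t) - d(t-T)) = 2·var(d)·(1 - cosθ) can be verified numerically. The sketch below is mine, not from the essay; the smoothed Gaussian noise is an assumed stand-in for a tracking disturbance, and the lag T is an assumed value:

```python
# Numeric check of the law-of-cosines variance identity at one lag.
import numpy as np

rng = np.random.default_rng(0)
# smoothed Gaussian noise as a stand-in disturbance (assumed)
d = np.convolve(rng.standard_normal(200_000), np.ones(50) / 50, "same")
T = 10                                 # lag in samples (assumed)

x, y = d[T:], d[:-T]                   # d(t) and d(t-T)
rho = np.corrcoef(x, y)[0, 1]          # autocorrelation of d at lag T

lhs = np.var(x - y)                    # var of the lagged difference
rhs = 2 * np.var(d) * (1 - rho)        # law-of-cosines prediction
print(abs(lhs - rhs) / rhs < 0.02)     # True: the two agree closely
```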



Since X is the disturbance waveform and Y its delayed value, cosθ is
the autocorrelation of the disturbance waveform at lag T. Here’s
what one autocorrelation function, selected more or less at random,
looks like. The full function is symmetrical about zero lag, and
only half of it is shown. This one happens to be the autocorrelation
function of the error in a real tracking run, but the same sort of
shape is found for many real signals that don’t have sharp spectral
peaks. The abscissa numbers happen to be samples at 60 per second.

*[Figure: autocorrelation function of the error in a real tracking run; abscissa in samples at 60 per second]*



Finally, from the equations above, we can write the upper bound to
the quality of control for a control loop with loop delay = T.

Qmax = var(d)/(2*var(d)*(1 - autocorrel(d at lag T)))

          = 1/(2*(1 - autocorrel(d at lag T)))



No linear control system without prediction can control better than
this (if my assumptions and calculations are reasonable). However,
so long as the disturbance autocorrelation is non-zero at the loop
delay, some prediction is possible, and a control system that does
predict might control better than this formula would suggest.
Whether people predict is a matter for experiment.

The autocorrelation function is the inverse Fourier transform of the
spectral density function (the spectrum) of the disturbance. So if
you know the spectrum or the autocorrelation function of the
disturbance together with the loop delay of the control system, you
can place an upper bound on the achievable quality of control for a
control system that uses no prediction.
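As a sketch (mine, not code from the essay) of how that bound could be computed: given a record of the disturbance and a candidate loop delay in samples, evaluate the autocorrelation at that lag and apply the formula. The band-limited disturbance below is an assumed example, not data from any real run:

```python
# Upper bound Qmax = 1/(2*(1 - rho_T)) computed from a disturbance record.
import numpy as np

def qmax(d, T):
    """Bound on Q for a non-predictive loop with delay T samples."""
    rho = np.corrcoef(d[T:], d[:-T])[0, 1]   # autocorrel(d at lag T)
    return 1.0 / (2.0 * (1.0 - rho))

rng = np.random.default_rng(1)
# smoothed noise as an assumed stand-in disturbance
d = np.convolve(rng.standard_normal(100_000), np.ones(60) / 60, "same")

for T in (1, 5, 20, 60):
    print(T, round(qmax(d, T), 2))   # the bound falls as the delay grows
```

For this smooth disturbance the autocorrelation decays with lag, so the achievable quality of control drops steadily as the assumed loop delay lengthens, reaching Qmax < 1 once the autocorrelation falls below 0.5.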

--------Implications--------



If the above analysis is correct, one implication of the result is
that for any lag for which the autocorrelation of the disturbance
waveform is less than 0.5, any attempt to control will be
counterproductive and will result in a perceptual signal that has a
variance greater than that of the disturbance alone. If you think
about it, this makes sense. Consider the extreme case, a loop delay
long enough that the disturbance has zero autocorrelation for that
lag.

To have zero autocorrelation at delay T means that the disturbance
at time t0+T is unrelated to whatever might have happened before
time t0. But any effect of the control system’s output on the CEV
has to have been based entirely on what happened before time t0.
Since s(t) is the sum of the current disturbance and the delayed
output, this means that the variance of the output adds to the
variance of the disturbance rather than reducing it. If qo(t-T)
would have exactly cancelled d(t-T), and d(t) is uncorrelated with
d(t-T), the resulting variance of s is double the variance of d.

What of delays for which the autocorrelation of d is less than 0.5
but greater than zero? One might think that some control would be
possible. To examine this question, let us go back to the geometric
representation of correlation. Here is the same figure as above, but
drawn flat on the plane. As before, X and Y are proportional to the
square roots of the variances of the disturbance and its lagged
version, and cosθ is the correlation between them for some lag. X-Y
represents the sensory variable s if the assumptions above are used.
It is proportional to the square root of the variance of the
sample-by-sample difference between the disturbance and its lagged
version.

*[Figure: the triangle formed by X, Y, and X-Y, drawn flat on the plane]*

If cosθ = 0.5, the triangle is equilateral, and the variance of s is
the same as that of the disturbance. In other words, when the
autocorrelation of the disturbance at lag T is 0.5, control is
ineffective at reducing the variability of the perceptual signal,
but is not damaging. For cosθ less than 0.5, the variance of X-Y
exceeds that of the disturbance, and attempts to control would be
counterproductive.

---------Experiment-----------



Experimental sanity check of the theory



To test the sanity of the above analysis, I used Bill P's
“TrackAnalyze” tracking task from LCS III. I ran 15 tracks using
three different disturbances at each of the five difficulty levels,
and used Excel to determine the disturbance variance and the
perceptual variance, as well as the autocorrelation function of each
disturbance. The autocorrelation values found by plugging the delay
T found by the TrackAnalyze model fit into the data analysis were
mostly in the region of 0.99, far above the “no control” limit of
0.5.
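The per-run computation just described can be sketched in a few lines. This is my reconstruction of the analysis, not code from TrackAnalyze or the Excel workbook; the array names and the synthetic stand-in data are assumed, with `d` the disturbance and `cev` the target-cursor difference:

```python
# Sketch of the per-run analysis: Q, Qmax, efficiency and CR for a given
# model-fit loop delay T (in samples). Data here are synthetic stand-ins.
import numpy as np

def analyze(d, cev, T):
    """Return Q, Qmax, efficiency Q/Qmax, and CR = sqrt(Q)."""
    rho = np.corrcoef(d[T:], d[:-T])[0, 1]   # autocorrel(d at lag T)
    Q = np.var(d) / np.var(cev)
    Qmax = 1.0 / (2.0 * (1.0 - rho))
    return Q, Qmax, Q / Qmax, np.sqrt(Q)

rng = np.random.default_rng(2)
d = np.convolve(rng.standard_normal(60_000), np.ones(40) / 40, "same")
# toy residual: imperfect cancellation at a longer lag, plus a little noise
cev = d - np.roll(d, 12) + 0.05 * rng.standard_normal(60_000)

Q, Qmax, eff, cr = analyze(d, cev, T=10)
print(round(Q, 2), round(Qmax, 2), round(eff, 2), round(cr, 2))
```

For the toy data the computed Q comes out above 1 (control helps) but below the Qmax bound, which is the pattern the table below shows for the real runs.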

The table shows the results, the left part using variances, the
right part using RMS values. “dif” is the nominal difficulty level
in TrackAnalyze. Q is the ratio of the disturbance variance to the
CEV variance, where the CEV is the target-cursor difference. The
column headed “Eff” means “efficiency”, defined to be the actual Q
divided by the maximum possible Q,

Qmax = 1/(2*(1 - autocorrel(d at lag T)))



from the algorithm above, where T is the model-fit loop delay. CR
means the control ratio, the ratio of the RMS values of the
disturbance and the CEV. CR is the square root of the variance ratio
I use in computing Q, and is often used to indicate the quality of
control, since it indicates the ratio of the amplitudes of the
disturbance and the CEV.

The table includes a column headed "L", which will be explained
later.

```
dif  delay    Q     Qmax   Eff    CR    CRmax  CR/CRmax    L
 1    10    36.6    73.8   0.49   6.05   8.59    0.70    12.0
      11    32.8    55.2   0.59   5.73   7.43    0.77    12.2
      11    32.6    55.2   0.59   5.71   7.43    0.77    10
 2    11    14.6    28.4   0.52   3.82   5.33    0.72    10
      10    15.0    35.3   0.42   3.87   5.94    0.65    10.5
      10    15.6    36.5   0.42   3.95   6.04    0.65    10.7
 3     9     7.1    23.2   0.31   2.66   4.82    0.55    10.5
      10     5.8    18.1   0.31   2.41   4.25    0.57    10.5
      10     6.0    17.5   0.34   2.45   4.18    0.59    10.5
 4     9     2.7    13.0   0.21   1.64   3.61    0.46    10.6
       9     2.9    12.9   0.22   1.70   3.59    0.47    10.5
       9     2.8    11.4   0.24   1.67   3.38    0.50    10
 5     9     1.47    6.87  0.21   1.21   2.62    0.46     9.2
       9     1.54    6.4   0.24   1.24   2.53    0.49    10.1
       9     1.58    6.8   0.23   1.26   2.61    0.48     9.3
```



It is encouraging to note that the Q value is reasonably consistent
across runs at the same difficulty level but with different
disturbances, and is always well below the theoretical maximum
possible Q. This provides a sanity check on the theoretical
analysis. A difference of one unit in the loop delay estimate makes
an appreciable difference in Qmax, sufficient to account for all of
the discrepancy in line 1 of difficulty 1 and half the discrepancy
of line 1 of difficulty 2. If the model fit delay in line 1 of
difficulty 3 had been 10 rather than 9, the computed efficiency
would have been about 0.34.

Two trends are apparent in these data. Firstly, the model fitting
suggests that the loop delay is longer for the less difficult tasks.
Even though the same subject (me) did all the tracks (on two
different days), it does make some sense that this should be so.
When the target moves slowly, one is slower to see that a movement
or a change of movement has occurred than when the target moves
quickly.

The other strong trend is that the computed efficiency trends
sharply downward as the difficulty increases. I have not
investigated this, but several possible reasons suggest themselves.
It could signal that the theoretical maximum is too loose, and that
a tighter maximum might be possible to derive. It might be that the
noise inherent in my control actions is relatively more important at
the more difficult levels where control is relatively poor (remember
that the theoretical analysis was based on a completely noise-free
control loop). It could be because the analysis is predicated on a
Gaussian distribution of disturbance values, an assumption that
might be less valid for the more difficult tasks (casual sampling of
the distribution of a few of the disturbance waveforms suggests that
this is not likely to be the problem). It could be because the
analysis assumes a fixed loop transport lag, whereas the transport
lag in the real control system may vary over time. Other reasons may
occur to you.

Variable transport lag seems a viable possibility, since there are
times in the difficult tracks when the target is moving as slowly as
it does most of the time at the easy levels. The noise possibility
also seems plausible and may be worth following up, because the
spectrum of the target-cursor difference waveform (the CEV waveform)
changes across difficulty levels. The following are approximate,
because each CEV waveform is different, and the numbers are by-eye
estimates taken from noisy curves. But the trend is clear for the
CEV waveform to be of higher peak frequency and wider bandwidth as
the difficulty level increases. (Note: these values may be
mis-stated by a factor of 2. I’m not 100% sure of the scale factor
of the Fourier analysis I used.)

```
Difficulty   Spectral peak   6 dB bandwidth
    1            0.6 Hz          1.2 Hz
    2            1.2 Hz          1.8 Hz
    3            1.6 Hz          2.0 Hz
    4            1.6 Hz          2.2 Hz
    5            2.0 Hz          2.6 Hz
```



I make no interpretation of this trend.



Now I can explain the column "L" in the table. It is the lag in
samples that brings the CEV waveform autocorrelation value to 0.5. I
don’t know whether it is significant that this value is close to the
loop delay estimated by the best-fit model, but considering the
importance of autocorrelation = 0.5 in the main analysis, I thought
it worth mentioning in this essay. Further consideration may suggest
that it is a simple coincidence, or perhaps that it has some
rational relationship with the quality of control.
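One way to compute such an "L" statistic, offered as my own sketch rather than the method actually used for the table: scan lags until the waveform's autocorrelation first drops to 0.5. The smoothed-noise waveform below is an assumed stand-in for a CEV record:

```python
# First lag (in samples) at which a waveform's autocorrelation falls to 0.5.
import numpy as np

def half_correlation_lag(x, max_lag=200):
    """Return the smallest lag with autocorrelation <= 0.5, or None."""
    for lag in range(1, max_lag):
        if np.corrcoef(x[lag:], x[:-lag])[0, 1] <= 0.5:
            return lag
    return None

rng = np.random.default_rng(3)
w = np.convolve(rng.standard_normal(50_000), np.ones(30) / 30, "same")
print(half_correlation_lag(w))   # about 15 for this 30-sample moving average
```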

------------------

Further development



All of the above is predicated on a Gaussian distribution of
disturbance values. The reason for this restriction is to permit the
variance analysis. Other distributions require other methods, though
it is often true that using variance analysis gives results that are
not too far wrong. In some future episode I hope to continue this
further.

Martin


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2010.12.17.13.24]

[From Rick Marken (2010.12.17.1000)]


Thanks for the comments. I'm not sure how to address them, so I'll
start with the last first, the table.

In the copy you quoted back to me, the table is nicely organized
with the columns properly aligned. Are you using a very narrow
window for reading the mail? If not, I don’t know what to do about
it.

As for the others, I can try, but I thought my preamble did cover
those points, and in order to make sense of it you really should
follow the development. What part was difficult to follow? Maybe I
can expand on it, because you really do have to understand this if
you are to make any sense of the following episodes, which move on
from Gaussian signals and variance analysis to informational
analysis, first in a noise-free system, and later including the
possibility of noise.

Here's a capsule summary. As such, it has lots of technical elisions

and imprecise statements, but you can get a more correct version by
working through the essay itself.

1. The background to the problem:

Perceptual control is almost a redundant phrase, useful because not
everyone knows that the only variables any organism can control are
internal to the organism – in PCT those variables are called
"perceptions". But for the survival of the organism, the only

variables that need to be controlled are its intrinsic variables,
which, in PCT, are not called “perceptions”. Control of perceptions
is important only because perceptions derive from conditions in the
outer world, and it is conditions in the outer world that matter for
the control of intrinsic variables, even though the intrinsic
variables are not themselves functions of states of the outer world;
perceptions are.

The problem, then, is the degree to which control of perception

implies control of the corresponding variable in the outer world.
“The corresponding variable” is a function of states in the outer
world defined by the perceptual input function of the control system
in which we are interested. That function can be treated as a single
variable we label the “CEV” or “Complex Environmental Variable”. If
you hold the perception close to its reference value, how will the
CEV vary?

2. The analysis

I presented the analysis in stages, first asking what happens if the

perceptual value is offset from the CEV value by some intervening
transformation such as prism spectacles that shift everything
leftward from the normal position. When the perception is at its
reference, the CEV is not, and as a result, the chicken fitted with
such spectacles pecks at the ground away from the seed, failing to
get something to eat.

Next I asked what would happen if the offset varied around a value

of zero. This stage was included in order to get across the idea
that the CEV and the perception need not necessarily have the same
variance. Neither of these first two stages is logically required
in the analysis. They were included to make the reader comfortable
with the concept of failure to control the CEV.

The important part of the analysis, the part that matters, deals

with the way loop delay limits the ability to control the perception
itself. The simple idea is that by the time the perceptual signal
has been compared with the reference value and resulted in an output
that compensates for the disturbance, the disturbance no longer has
the value that it did when the sensory data used to generate the
output were available. The output always compensates for an old
value of the disturbance, not the present value.

Assuming that on average the perceptual value equals the reference

value, it will vary around the reference value by an amount related
to how rapidly the disturbance value changes. For a given loop delay
(transport lag) that amount is represented by the autocorrelation
function of the disturbance at the given lag. The assumptions and
analysis are given in detail in the essay. The end result is that
the quality of control Q, measured by the variance ratio of the
disturbance to the perception (the square of the “Control Ratio” you
often use) is limited to Qmax = 1/(2*(1-A(T))) where A(T) is the
autocorrelation of the disturbance at lag T.

3. Results of the analysis

The end result is that the quality of control Q, measured by the

variance ratio of the disturbance to the perception (the square of
the “Control Ratio” you often use) is limited to Qmax =
1/(2*(1-A(T))) where A(T) is the autocorrelation of the disturbance
at lag T. No real or mechanical control system that does not use
prediction can control better than this.
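The bound is easy to sanity-check numerically. Here is a minimal Python sketch (my own construction, not from the essay; the one-pole filter constant and the lag values are arbitrary choices) that generates a low-pass-filtered Gaussian disturbance, estimates its autocorrelation A(T), and evaluates Qmax = 1/(2*(1-A(T))):

```python
import random

def smoothed_disturbance(n, alpha=0.1, seed=1):
    """Low-pass-filtered Gaussian noise, a stand-in for a tracking disturbance."""
    rng = random.Random(seed)
    d, x = [], 0.0
    for _ in range(n):
        x += alpha * (rng.gauss(0.0, 1.0) - x)  # one-pole smoothing filter
        d.append(x)
    return d

def autocorrelation(d, lag):
    """Normalized autocorrelation A(lag) of the sequence d."""
    m = sum(d) / len(d)
    c0 = sum((x - m) ** 2 for x in d)
    ck = sum((d[i] - m) * (d[i + lag] - m) for i in range(len(d) - lag))
    return ck / c0

d = smoothed_disturbance(20000)
for lag in (1, 5, 20):
    A = autocorrelation(d, lag)
    qmax = 1.0 / (2.0 * (1.0 - A))  # the limit described above
    print("lag", lag, "A", round(A, 3), "Qmax", round(qmax, 2))
```

A smoother (more autocorrelated) disturbance permits better control; as the effective loop delay grows, A(T) falls and the ceiling Qmax falls with it.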

I tested the sanity of the analysis by running some trials using

TrackAnalyze, just to see whether I could control better than the
theoretical Qmax. If I could, then something must have been wrong
with the analysis. But I couldn’t, and the efficiencies (Q
actual/Qmax) of my runs were consistent across different
disturbances at each level of difficulty, but were smaller
the greater the difficulty. I didn’t graph the efficiencies, but I
attach a plot of the 15 runs, Efficiency versus difficulty level.

4. Why it's important to know

Easy answer: so far as I know, within PCT nobody has presented a

theoretical limitation on the degree of control possible in a
noise-free situation. So this is a new tool in the box of PCT tools.

More extensive answer: PCT matters in the wider world, not just as

an esoteric discipline. But so long as its engineering analysis is
limited to simple continuous tracking tasks, it will be treated as
metaphor, and mistreated more than it is well treated (as recent
CSGnet discussion illustrates). I hope to be able to provide an
engineering-style description in the more general case of control.
It seemed to me that to start by analyzing the standard tracking
task, using the conventional measures, would make it
easier for people to follow a more general non-metaphoric analysis
using related concepts that could be applied in social situations
and with highly non-linear controls, as well as in a well-defined
specific case. Limits on possible control are as important to know
for the analysis of general situations as are the actual parameters
of control in specific situations.

I hope this makes it more intelligible.

Martin

[From Bill Powers (2010.12.17.1430 MDT)]

Martin Taylor 2010.12.04.17.46

Finally reading your second essay on information and control.

Your equations don't match the model that is used in TrackAnalyze. I don't know if this makes a difference for your conclusions, but here is the actual model:

     mPerc := (mCurs - TargetVal[T]); // model perception
     mDelP := TransportLag(mPerc,TimeLag); // model delayed perception
     mErr := ModelRef - mDelP; // model error
     mHand := mHand +(Gain*mErr - Damping*mHand)*dt; // model hand(= cursor)

The critical difference from your equations is in the last line, which computes the cursor position mCurs ( = mHand). This is a leaky integrator. The size of the leak is called "damping." With Damping set to zero, the output function is a pure integrator (but with a delay of dt).
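For readers who want to run the fragment, here is a hedged Python transcription (my variable names and parameter values, not TrackAnalyze's; the target is held constant and the transport lag is a simple FIFO of past perceptions):

```python
from collections import deque

def run_model(gain=10.0, damping=5.0, dt=0.01, lag_steps=10, steps=2000):
    """One control loop per the fragment above: delayed perception,
    error, and a leaky-integrator output."""
    target = 10.0                       # stand-in for TargetVal[T], held constant
    ref = 0.0                           # ModelRef
    hand = 0.0                          # mHand (= cursor)
    lagline = deque([0.0] * lag_steps)  # transport lag on the perception
    err = 0.0
    for _ in range(steps):
        perc = hand - target            # mPerc
        lagline.append(perc)
        delp = lagline.popleft()        # mDelP: perception from lag_steps*dt ago
        err = ref - delp                # mErr
        hand += (gain * err - damping * hand) * dt  # leaky-integrator output
    return err

print(run_model())             # leak present: a residual error persists
print(run_model(damping=0.0))  # pure integrator: error settles at zero
```

With nonzero damping the error settles at target*damping/(gain+damping); setting damping to zero makes the output a pure integrator and drives the steady-state error to zero, at the cost of slower, more oscillatory settling.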

In your equations you have a G(e) which is equal to qo (mHand above). If this is functional notation as indicated in the diagram as G(), it's fine, but you treat it as G*e. So your equations don't include an integrator.

In order to have a stable control system, there must be one integration in the loop. If you try your equations in Vensim, it will reject them and refuse to run the model, telling you you have "simultaneous equations." In fact, what your equations give when solved is the steady state condition with r and d both constant (when there is no delay). That doesn't tell you how the system gets to the steady state, or if it does. But if the control system does have a steady state with r and d constant, the equations will give the correct steady-state values of the variables when solved simultaneously.

With a delay but no integrator, the system will run away for any loop gain greater than 1 (try G = 1.1). Try it in Vensim, which has a delay function. With your output function, qo = G*e, and using the DELAYFIXED function with a delay of 0.1 second (plot 100 seconds in steps of 0.1), the system will run away with exponentially increasing oscillations. Without a delay the system will be stable for any value of gain (but the iteration time counts as a delay, so at high gains, you have to iterate very fast in physical time -- dt very very small -- to simulate zero lag). With a delay and an integrator, the model will be stable up to some limit of the gain and damping -- see Richard Kennaway's Appendix in the back of LCS III for the exact numbers.
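The runaway claim can be reproduced without Vensim. Here is a rough Python analogue of that experiment (my own loop arrangement, step-based rather than continuous, so the particular numbers are only illustrative): a proportional output qo = G*e acting through a pure transport lag, with no integrator anywhere in the loop.

```python
def peak_excursion(G, lag=10, steps=200):
    """Loop with qo = G*e through a pure transport lag and no integrator."""
    ref, dist = 0.0, 1.0     # constant reference and disturbance
    e_hist = [0.0] * lag     # error history implementing the delay
    qo, peak = 0.0, 0.0
    for t in range(steps):
        qi = qo + dist       # controlled quantity
        e = ref - qi         # error
        e_hist.append(e)
        qo = G * e_hist[t]   # output based on the error from lag steps ago
        peak = max(peak, abs(qi))
    return peak

print(peak_excursion(0.9))   # gain < 1: excursion stays bounded
print(peak_excursion(1.1))   # gain > 1: oscillation grows steadily
```

Doubling the number of steps makes the G = 1.1 peak larger still, the signature of exponentially growing oscillation, while with G < 1 the excursion never exceeds the initial transient.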

You will notice that in TrackAnalyze, the best-fit damping factor rises monotonically and rather radically as difficulty increases. I don't know what to make of that yet, but it probably indicates a missing nonlinearity in the model. Some Day.

Best,

Bill

[Martin Taylor 2010.12.17.17.33]

[From Bill Powers (2010.12.17.1430 MDT)]

Martin Taylor 2010.12.04.17.46

Finally reading your second essay on information and control.

Your equations don't match the model that is used in TrackAnalyze.

I don't have a model. I have a generic control system that includes any linear model you want to use. The output function can have an integrator, a band-pass filter, a tuned-spectrum filter, or whatever you want, so long as it is linear.

In your equations you have a G(e) which is equal to qo (mHand above). If this is functional notation as indicated in the diagram as G(), it's fine, but you treat it as G*e. So your equations don't include an integrator.

Yes they do. I mentioned that the variables and operators could be Laplace transforms or (I think I mentioned) Heaviside operators. G can be an integrator if you want it to be. In fact, a simple integrator is what I had in mind most of the time when writing the essay, as a sanity check on what I was saying, but I didn't want to make the analysis in any way specific to a particular kind of output function.

The output function can be any other linear function you want. What matters for the equations is the assumption that control is good (and at one point, that the unattainable ideal is that control is perfect). If |G| is small, the approximations get very loose, but not in a way that is likely to make real control better than the computed optimum. I don't think even the linear assumption is necessary, but I did use it in the calculations, so I have to limit my claims to linear systems.

The only place where I use the TrackAnalyze model is to get an estimate of loop delay for each experimental run. I'm not sure that it is accurate, but it's probably close. Maybe the varying damping factor is involved in the different trends I see in the data, such as the change in fitted loop delay, which is rather larger than the change in the delay for which the error autocorrelation function is 0.5. But I don't know yet whether the similarity of the fitted loop delay and the 0.5 autocorrelation delay is meaningful or a red herring.

I may try reanalyzing these runs with a fixed delay assumption, using the 0.5 error autocorrelation lag instead of the TrackAnalyze fitted delay, and see whether some of the other trends across difficulty level go away or increase. But I'd rather figure out a theoretical reason why it should be relevant before trying such a "suck it and see" approach.

Martin


[From Bill Powers (2010.12.17.1856 MDT)]

Martin Taylor 2010.12.17.17.33 --

BP:Your equations don't match the model that is used in TrackAnalyze.

MMT: I don't have a model. I have a generic control system that includes any linear model you want to use. The output function can have an integrator, a band-pass filter, a tuned-spectrum filter, or whatever you want, so long as it is linear.

BP: But how can I use any linear model I want? It's only for particular linear models that the block diagram even represents a control system. If all the functions are constant multipliers, we surely have linear functions, but we don't even have a behaving system: the equations are then valid only in the steady state. I don't think that the condition of linearity is sufficient to make the diagram into a control system.

Show me how your equations represent the program steps in the TrackAnalyze model. Then I might understand.

MMT: In your equations you have a G(e) which is equal to qo (mHand above). If this is functional notation as indicated in the diagram as G(), it's fine, but you treat it as G*e. So your equations don't include an integrator.

MMT: Yes they do. I mentioned that the variables and operators could be Laplace transforms or (I think I mentioned) Heaviside operators. G can be an integrator if you want it to be. In fact, a simple integrator is what I had in mind most of the time when writing the essay, as a sanity check on what I was saying, but I didn't want to make the analysis in any way specific to a particular kind of output function.

BP: In that case I have to conclude that it doesn't apply to a control system, because a control system must specifically include an integrator somewhere in the loop. Your equations can't distinguish a spontaneously oscillating system from a control system, according to what you appear to be saying.

The basic problem here is that you're pulling rank on me, using esoteric mathematical concepts of which I know nothing, or not enough to check up on you. So either I just have to say "Yes, Martin" and go on to something else, still not understanding (or seeing any reason to care), or you have to find a way to communicate. I'm not going to agree with you until I understand what you're trying to say. and I'm not going to understand until you make yourself clear, which you haven't done. I'm not going to believe that "the variables and operators could be Laplace transforms or (I think I mentioned) Heaviside operators" until you can show me that that is true and the equations you use still make sense in all of those cases. I don't even know what a Heaviside operator is. All I know is that the output function or the feedback function (but not both) must be an integrator, and whether you express that as 1/s or in any other way, that's what it has to be. And if that is not the case, the solution of your equations will not be what you say it is.

MMT: The output function can be any other linear function you want.

BP: See? My immediate reaction is to say that this is nonsense. If the output function is qo = 10*e and all the other functions are also of that form, the result will not be a control system. If I reverse one sign, the system will go into positive feedback and blow up. So how can you say I can use any other linear function that I want?

MMT: What matters for the equations is the assumption that control is good (and at one point, that the unattainable ideal is that control is perfect).

BP: But that requires that you make the system up to have just the right functions, not any old functions you want to put in it as long as they're linear. That contradicts your requirement that there is a control system and that its control is good. Your statements of what is required contradict each other.

MMT: If |G| is small, the approximations get very loose, but not in a way that is likely to make real control better than the computed optimum.

BP: If the output function is an integrator, the size of the gain factor determines not how good the control is but how long it takes for the controlled variable to be brought to a match with the reference signal after a perturbation. The final error is always zero. You can't treat proportional control as if it is integral control, or either one as if it is derivative control.
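The proportional-versus-integral distinction drawn here can be illustrated in a few lines of Python (a sketch of my own, with arbitrary gains; a relaxation step stands in for the dynamics of a proportional output so the loop has no algebraic short-circuit):

```python
def final_error(step_output, steps=5000):
    """Close the loop qi = qo + d, e = r - qi, updating qo by the given rule."""
    r, d = 0.0, 5.0          # constant reference and disturbance
    qo, e = 0.0, 0.0
    for _ in range(steps):
        e = r - (qo + d)
        qo = step_output(qo, e)
    return e

G, dt = 10.0, 0.01
# Proportional: qo relaxes toward G*e; a nonzero error must persist to hold qo.
e_prop = final_error(lambda qo, e: qo + 0.1 * (G * e - qo))
# Integral: qo accumulates G*e*dt; it stops changing only when e = 0.
e_int = final_error(lambda qo, e: qo + G * e * dt)
print(e_prop)   # settles near -d/(1+G): a permanent residual error
print(e_int)    # settles at (essentially) zero
```

Raising G shrinks the proportional system's residual error but never removes it; for the integral system the gain changes only how quickly the error reaches zero, which is the point made above.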

My prognosis for this discussion is pessimistic.

Best,

Bill P.

[Martin Taylor 2010.12.17.23.06]

[From Bill Powers (2010.12.17.1856 MDT)]

Martin Taylor 2010.12.17.17.33 --

BP:Your equations don't match the model that is used in TrackAnalyze.

MMT: I don't have a model. I have a generic control system that includes any linear model you want to use. The output function can have an integrator, a band-pass filter, a tuned-spectrum filter, or whatever you want, so long as it is linear.

BP: But how can I use any linear model I want? It's only for particular linear models that the block diagram even represents a control system.

Then you wouldn't want to use other kinds of linear models, would you? But you can if you want. The analysis result shows a limit on how good the best possible controller might be, not how bad a controller you might construct using badly chosen output functions.

As you know from having gone through the equations, computation of the limit doesn't even use the form of the output function G. It substitutes the performance of an idealized controller that would be perfect if the loop delay had been zero, knowing that no such controller can exist. It uses G( ) only to reach the point where that substitution can be made. I do that because I don't know how to compute the result of the infinite equation with an infinite sequence of G( ) operating on the results of earlier applications of G( ). If I knew how to do that, I could compute the best performance for a particular given output function (such as a pure integrator, for example). But I can't, so I don't.

I repeat. What we are talking about is a control system. One loop. We do not specify that the control system even works. What the analysis says is that the _best possible_ linear control system that uses no prediction has limits on its ability to control when there is loop delay, as there is in any practical control system. If you want to choose some function for which the system immediately loses control, that's fine. Your contraption won't control as well as the best possible controller, and so the analysis works for it.

If all the functions are constant multipliers, we surely have linear functions, but we don't even have a behaving system: the equations are then valid only in the steady state. I don't think that the condition of linearity is sufficient to make the diagram into a control system.

Show me how your equations represent the program steps in the TrackAnalyze model. Then I might understand.

The equation steps are the same ones we have always used in descriptions of control systems. You have always approved them before, including the substitution of Laplace transforms for the time waveforms. So why object now that a new result has been demonstrated by using them?

  I'm not going to believe that "the variables and operators could be Laplace transforms or (I think I mentioned) Heaviside operators" until you can show me that that is true and the equations you use still make sense in all of those cases. I don't even know what a Heaviside operator is.

Heaviside operator first (You can look it up on Wikipedia if you want). Heaviside showed that if you write d/dx as D in an equation (and an integral as 1/D) and do normal algebra with them, the equations work. Often this makes differential equations easier to solve than working through the calculus. In the present case, that means you replace G(x) in the control equations by x/D.
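A numerical way to see Heaviside's point (a toy check of my own, not part of the discussion): treat D as a forward difference and 1/D as a running sum; applied in succession they cancel, just as the operator algebra asserts.

```python
import math

def D(x, dt=0.01):
    """Forward difference: a discrete stand-in for the operator d/dt."""
    return [(x[i + 1] - x[i]) / dt for i in range(len(x) - 1)]

def Dinv(x, dt=0.01):
    """Running sum: a discrete stand-in for 1/D, i.e. integration."""
    out, acc = [], 0.0
    for v in x:
        acc += v * dt
        out.append(acc)
    return out

sig = [math.sin(0.01 * i) for i in range(1000)]
recovered = D(Dinv(sig))
# D applied to (1/D)x gives back x shifted by one sample (forward difference),
# so compare against sig[1:]; the operators cancel to floating-point precision.
worst = max(abs(a - b) for a, b in zip(recovered, sig[1:]))
print(worst)
```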

As for Laplace transforms, I think you know that you can represent time functions in linear equations by their Laplace transforms. You were quite happy with that when I did it years ago. Using that, the Laplace representation of an integral is 1/s, so if the Laplace transform of the error signal is e, its integral is e/s. But if you don't want the output function to be an integrator, so be it. The Laplace transform will be something else. You probably won't have much of a controller, and it certainly won't control as well as the best possible controller for the given loop delay.

MMT: The output function can be any other linear function you want.

BP: See? My immediate reaction is to say that this is nonsense. If the output function is qo = 10*e and all the other functions are also of that form, the result will not be a control system. If I reverse one sign, the system will go into positive feedback and blow up. So how can you say I can use any other linear function that I want?

If you use one for which the system blows up, it won't be as good a controller as the best possible, will it? But you are still at liberty to try it if you want. My analysis actually deals with a limit that I believe to be better than the best possible controller could achieve, but it's the tightest limit I can at the moment derive.

My prognosis for this discussion is pessimistic.

Is that still the case? If so, why?

Martin

[From Rick Marken (2010.12.17.2120)]

Martin Taylor (2010.12.17.13.24)--

1. The background to the problem:

... Control of perceptions is important only because
perceptions derive from conditions in the outer world, and it is conditions
in the outer world that matter for the control of intrinsic variables, even
though the intrinsic variables are not themselves functions of states of
the outer world; perceptions are.

The problem, then, is the degree to which control of perception implies
control of the corresponding variable in the outer world.

I think I see. So you are saying that control of a perceptual signal
could be good while control of the environmental correlate of that
signal is poor. Is that it?

If you hold the perception close to its reference value, how will the CEV vary?

So I have it right above, yes?

2. The analysis

I presented the analysis in stages, first asking what happens if the
perceptual value is offset from the CEV value by some intervening
transformation such as prism spectacles that shift everything leftward from
the normal position. When the perception is at its reference, the CEV is
not, and as a result, the chicken fitted with such spectacles pecks at the
ground away from the seed, failing to get something to eat.

There's something wrong here. This doesn't seem to be a case where
there is a disconnect between CEV and perceptual signal (p). The CEV
when a chicken pecks at a seed must be something like the distance
between beak and seed. The prism spectacles don't distort this
distance. If the beak is next to the seed the perception will be of
zero distance from beak to seed whether the prism is on or not. What
the prism does is it changes the feedback function relating movements
of the beak to the CEV (distance from beak to seed). With the prisms
you mention, the chicken would have to move it more to the right in
order to get its beak next to the seed.

So it's not a disconnect between perception and CEV that keeps the
chicken (apparently permanently) from being able to control the
perception of distance between beak and seed; it is the change in what
the chicken would have to _do_ in order to control this perception
that creates the problem. And apparently a chicken cannot learn to do
something different when the mapping from beak movement to resulting
distance between beak and seed is changed.

The important part of the analysis, the part that matters, deals with the
way loop delay limits the ability to control the perception itself.

By loop delay I presume you mean transport lag. So is the idea that by
varying loop delay you can change the relative amount of control
exerted on the CEV and p?

The
simple idea is that by the time the perceptual signal has been compared with
the reference value and resulted in an output that compensates for the
disturbance, the disturbance no longer has the value that it did when the
sensory data used to generate the output were available. The output always
compensates for an old value of the disturbance, not the present value.

Assuming that on average the perceptual value equals the reference value, it
will vary around the reference value by an amount related to how rapidly the
disturbance value changes. For a given loop delay (transport lag) that
amount is represented by the autocorrelation function of the disturbance at
the given lag.

Why not just run a simulation and measure the variance of CEV and p directly?

3. Results of the analysis

The end result is that the quality of control Q, measured by the variance
ratio of the disturbance to the perception (the square of the "Control
Ratio" you often use) is limited to Qmax = 1/(2*(1-A(T))) where A(T) is the
autocorrelation of the disturbance at lag T. No real or mechanical control
system that does not use prediction can control better than this.

But this is just quality of control of p, right. I thought this was
about figuring out what factors make for a difference in the control
of p vs CEV.

4. Why it's important to know

Easy answer: so far as I know, within PCT nobody has presented a theoretical
limitation on the degree of control possible in a noise-free situation. So
this is a new tool in the box of PCT tools.

Again, this doesn't seem to pertain to the difference between control
of CEV vs. p.

Limits on possible control are as important to know for the analysis
of general situations as are the actual parameters of control in specific
situations.

I can't believe that an important "limit on control" wouldn't be the
nature of the perception controlled. My intuition suggests that
control of p = k*qi would be better than control of
p = k.1*qi.1 + k.2*qi.2^2 - k.3*qi.3^3, for example.

I hope this makes it more intelligible.

Yes, somewhat, though I seem to have missed how the final Quality of
Control analysis related to your original interest in the relationship
between control of p and control of CEV.

Best

Rick


--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2010.12.18.0420 MDT)]

Martin Taylor 2010.12.04.17.46 –

BP: Sometimes my mind works so slowly! I look right at things and don’t
see the implications. Consider this:

MMT: As noted above, p = r + (d+v)/G, so if r is constant, var(p) = var(d+v)/G

1/Q = var(p)/var(d)
    = (var(d) + var(v)) / (G*var(d))
    = (1 + var(v)/var(d)) / G

Q = G/(1 + var(v)/var(d))

BP: I was so wrapped up in following the derivation that I never stopped
to ask what you mean by var(d). How do we measure the variance of some
experimental variable? We repeat the experiment again and again and look
at the way the record of the variable changes from one repetition to the
next. What we mean by a random variable is something that changes
unpredictably even when everything else is repeated exactly. You roll the
dice over and over exactly the same way as far as you can control it, and
get different numbers. You then look at the frequency with which the
different numbers show up, and from that decide whether the frequencies
of appearance of the numbers are equally distributed, showing that the
way you roll the dice is not having any systematic effect on the outcome.
That’s what you mean by “fair dice”. I’m sure you see where I’m
going with this but I’ll go through it anyway.
All right, try that with TrackAnalyze. Do the same tracking experiment
over and over, keeping everything constant that you can. Use disturbance
1, difficulty 4 each time, recording the target (d) and mouse ( = cursor)
positions.
You will find that the record of mouse positions is not the same every
time. You can compute the mean waveform by adding all the waveforms
together point by point and dividing by the number of repetitions. Then
you can compute the variance of the waveform by subtracting the values of
the mean waveform from the corresponding values of each individual
waveform and accumulating the square of the deviations.
You will find that the variance of the target position is zero to the
limit of resolution of the data. The target shows the same variations on
every repetition of the experiment (this is like seeing that the way the
dice are rolled is the same every time). So var(d) = 0. You can’t divide
by var(d).
Also, var(d + v) = var(v), so your equations should read
p = r + (v+d)/G, so if r is constant and var(d) = 0,
var(p) = var(v)/G
In other words, the only random component in the variations of p is due
to the addition of the noise v to the CEV. There is no random component
due to the disturbance, which is repeated exactly the same way on every
run. And of course we can’t tell from the experimental data exactly where
inside the control system the random variable v is actually added; it
could be added to the perceptual signal, reference signal, error signal,
output of the output function or even mouse position (random slippage on
the table). Or all of those.
This confusion is my fault. My early tracking experiment results with
sine-wave disturbances were discounted by psychologists I showed them to;
they said the subjects were recognizing the regularity and using it to
predict the next value of the waveform, so no control system was
necessary. Then when I combined several sine waves (as the old
engineering psychologists used to do) to make the disturbances less
predictable, the psychologists said that the subjects were memorizing the
waveforms, or were still predicting them by looking at derivatives. That
was when I went to generating random waveforms and smoothing them just to
the limits of the control bandwidth, making sure the variations didn’t
repeat within an experiment, and making them different on every
repetition of an experiment. To no avail, of course, but I
tried.
Then I tried to show that the correlations were strange: that the CEV
correlated less with d and o than d correlated with o. Professor Duncan
became furious and walked out on me, saying I was making up the data.
Professor Hill more kindly said I was probably making some simple
mistake. These were my teachers while I was in graduate school in 1960.
Janet Spence, another one, wouldn’t even look (I forget her maiden name;
this was before she married Spence).
When I showed the correlation between d and Qo, I made my big mistake.
That made my professors think of random variables, but of course d was
not a random variable. It was a systematic repeatable waveform, at least
until I started using a different one on every trial to prove that the
subjects couldn’t be memorizing it. And now you’re doing the same thing,
treating d as a random variable when it’s a perfectly repeatable waveform
that doesn’t change at all from one run to the next – unless you decide
to change it, which obscures the whole point of the
demonstration.
I wish I had never used correlations at all, or that I had thought of
showing that only Qo had a random component while d repeated exactly. But
I didn’t anticipate the criticisms that people would come up with and
painted myself into a corner. I varied d on every run, so of course then
it didn’t repeat exactly and its variance was determined by its
variations within each single run. I didn’t realize that I had to
emphasize the repeatability of d, not its variability. But I was in a
logical fork between “memorization” and
“prediction.”

When you realize that d repeats exactly but Qo doesn’t, you can then see
how much actual noise signal there is. I’m going to have to change the
way data are recorded in TrackAnalyze so you can actually compute the
variance of d and Qo over many runs of the experiment. Then it will be
clear that the actual amount of noise is very small, probably no more
than one or two per cent of the range of variation of Qo – and possibly
very much less than that if the noise is in the perceptual function,
because that noise would be multiplied by the gain of the output function
before showing up in Qo. On the other hand, the integration in the output
function would smooth the noise at the higher frequencies – we’ll just
have to put some noise in the model in various places and see.

I hope this finally settles the problem. The disturbances, or the target
movements, are not random variables. They are (or can be made) exactly
the same on every run. But the mouse movements do NOT repeat exactly from
one run to the next even if the target movements do, so that shows some
random variation. That is the random variable v, or some function of
it.

Best,

Bill P.

[From Bill Powers (2010.12.18.0528 MST)]

Rick Marken (2010.12.17.2120) --

RM: So it's not a disconnect between perception and CEV that keeps the
chicken (apparently permanently) from being able to control the
perception of distance between beak and seed; it is the change in what
the chicken would have to _do_ in order to control this perception
that creates the problem. And apparently a chicken cannot learn to do
something different when the mapping from beak movement to resulting
distance between beak and seed is changed.

BP: I think that's it. The chicken can't reorganize in that way, and has to rely on an inherited control system, the way a colt learns to walk or a spider learns to create a web. They don't learn the control system, they're born with it and can't change it if they have it at all. Look how hard it is to teach some horses how to trot, or rack, or change leads; only "pacer" breeds can learn that simple coordination.

A human being, as you and I know, can actually learn to reverse the sign of feedback and do it in four tenths of a second, so they don't even have to be taught that feedback should be negative.

Bill

[Martin Taylor 2010.12.18.11.05]

[From Rick Marken (2010.12.17.2120)]

Martin Taylor (2010.12.17.13.24)--
1. The background to the problem:

... Control of perceptions is important only because
perceptions derive from conditions in the outer world, and it is conditions
in the outer world that matter for the control of intrinsic variables, even
though the intrinsic variables are not themselves functions of states of
the outer world; perceptions are.

The problem, then, is the degree to which control of perception implies
control of the corresponding variable in the outer world.

I think I see. So you are saying that control of a perceptual signal
could be good while control of the environmental correlate of that
signal is poor. Is that it?

Yes, in the first stages of the argument. On rereading what you quoted, I can see that my last sentence could be misleading. It could be read as saying that although perceptual control is perfect, the corresponding CEVs might vary. Although that is true, it's not the implication I intended. I did not intend the reader to think of "control of perception" as meaning "excellent control of perception", but rather intended the reader to think of the fact of perceptual control, however imperfect that control might be. If you are controlling a perception, how well will the corresponding CEV be kept close to its desired value, that value being the value of the CEV that brings the perceptual value to its reference value? I probably should have said "The problem is the degree to which the fact that we are controlling our perceptions implies that the corresponding real-world variables are kept close to their desired values" or something like that.

If you hold the perception close to its reference value, how will the CEV vary?

So I have it right above, yes?

Yes, with the same caveat.

2. The analysis

I presented the analysis in stages, first asking what happens if the
perceptual value is offset from the CEV value by some intervening
transformation such as prism spectacles that shift everything leftward from
the normal position. When the perception is at its reference, the CEV is
not, and as a result, the chicken fitted with such spectacles pecks at the
ground away from the seed, failing to get something to eat.

There's something wrong here. This doesn't seem to be a case where
there is a disconnect between CEV and perceptual signal (p). The CEV
when a chicken pecks at a seed must be something like the distance
between beak and seed.

That's one CEV. Control of the perception of that distance should send a reference value to the unit that controls the location of the beak. The location of the beak when it hits the ground is the CEV that I'm dealing with. I'm guessing that when the beak is near the ground, the chicken can't see where it is, rather in the way that a gunner who is shooting at a target but can't see where the bullet hits will not correct for an offset in his sighting marks. Otherwise, why would the chicken not correct the reference level for the beak placement control unit, the way a person wearing prism spectacles does for the hand at first missing the cup handle?

  The prism spectacles don't distort this
distance. If the beak is next to the seed the perception will be of
zero distance from beak to seed whether the prism is on or not. What
the prism does is change the feedback function relating movements
of the beak to the CEV (distance from beak to seed). With the prisms
you mention, the chicken would have to move it more to the right in
order to get its beak next to the seed.

Apparently chickens don't notice this.

So it's not a disconnect between perception and CEV that keeps the
chicken (apparently permanently) from being able to control the
perception of distance between beak and seed; it is the change in what
the chicken would have to _do_

All action is the control of perception, and one controlled perception is where the beak is placed when pecking. One presumes another controlled perception is the relation between this placement and the location of seeds on the ground. What seems not to happen is a change of the output of the higher-level system that provides the reference value for the lower level system.

in order to control this perception
that creates the problem. And apparently a chicken cannot learn to do
something different when the mapping from beak movement to resulting
distance between beak and seed is changed.

Right. But remember, we are working within a PCT paradigm, in which outputs are not controlled. The word "do" implies that you might be thinking of controlling outputs, rather than providing reference values to lower-level systems, which, when you get low enough, have muscle tensions as their outputs. I don't think we are low enough for that when the perception is at the relative position level.

The important part of the analysis, the part that matters, deals with the
way loop delay limits the ability to control the perception itself.

By loop delay I presume you mean transport lag. So is the idea that by
varying loop delay you can change the relative amount of control
exerted on the CEV and p?

If I read you correctly, I think you mean what I mean. I don't think of control as applying to a CEV. An external observer can see what happens to a presumed CEV, but control is only of the perception, p. So I wouldn't word it as you did, but I think we are probably talking about the same thing.

The
simple idea is that by the time the perceptual signal has been compared with
the reference value and resulted in an output that compensates for the
disturbance, the disturbance no longer has the value that it did when the
sensory data used to generate the output were available. The output always
compensates for an old value of the disturbance, not the present value.
Assuming that on average the perceptual value equals the reference value, it
will vary around the reference value by an amount related to how rapidly the
disturbance value changes. For a given loop delay (transport lag) that
amount is represented by the autocorrelation function of the disturbance at
the given lag.
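For Gaussian variables this relationship can be stated exactly: a compensator that can only cancel the T-samples-old value of the disturbance leaves a residual d(t) - d(t-T), whose variance is 2*var(d)*(1 - A(T)). A numerical check, using an illustrative moving-average disturbance (the filter length is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 200_000, 8                       # samples; loop delay in samples
# Smooth Gaussian disturbance: white noise through a moving-average filter
d = np.convolve(rng.normal(size=n), np.ones(30) / 30, mode="valid")
d -= d.mean()

# Autocorrelation of d at lag T
A_T = np.mean(d[:-T] * d[T:]) / np.var(d)

# An ideal delay-T compensator can at best subtract d(t-T) from d(t):
residual_var = np.var(d[T:] - d[:-T])
predicted = 2.0 * np.var(d) * (1.0 - A_T)
print(residual_var, predicted)          # the two estimates agree
```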

Why not just run a simulation and measure the variance of CEV and p directly?

Because the simulation would use a specific control system that might not be the best possible. I'm asking about the best possible control. Your question is rather like asking Sadi Carnot why he didn't just run a steam engine and measure its performance to develop the Carnot Cycle.

3. Results of the analysis

The end result is that the quality of control Q, measured by the variance
ratio of the disturbance to the perception (the square of the "Control
Ratio" you often use) is limited to Qmax = 1/(2*(1-A(T))) where A(T) is the
autocorrelation of the disturbance at lag T. No real or mechanical control
system that does not use prediction can control better than this.
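The formula Qmax = 1/(2*(1-A(T))) can be tabulated for an illustrative smooth disturbance to see how quickly the ceiling falls as loop delay grows (the 1-second moving-average disturbance below is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
# Illustrative smooth Gaussian disturbance: white noise through a
# 60-sample (1 s at 60 Hz) moving-average filter
d = np.convolve(rng.normal(size=n), np.ones(60) / 60, mode="valid")
d -= d.mean()
var_d = np.var(d)

def qmax(lag):
    A = np.mean(d[:-lag] * d[lag:]) / var_d   # autocorrelation at this lag
    return 1.0 / (2.0 * (1.0 - A))

for lag in (1, 2, 4, 8, 16):
    print(lag, round(qmax(lag), 1))           # ceiling falls with delay
```

For this disturbance the ceiling falls steeply with each added sample of delay, which is the quantitative form of the claim that follows.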

But this is just quality of control of p, right? I thought this was
about figuring out what factors make for a difference in the control
of p vs CEV.

No. Quality of control of p is the only objective. Those first two stages are in the essay only to get you used to the idea that this kind of variability between p and CEV can occur. Since in the case of transport lag the old CEV used in creating the compensating output differs from the current perceptual signal in the same way that the current CEV differs from the current perception when there is a varying prism offset, the first two stages were intended to open up your thinking to the idea of differences between p and CEV, so that the real effects of transport lag in limiting the possibility for perceptual control in a noiseless system would not come as a shock. I guess they didn't have the desired effect.

4. Why it's important to know

Easy answer: so far as I know, within PCT nobody has presented a theoretical
limitation on the degree of control possible in a noise-free situation. So
this is a new tool in the box of PCT tools.

Again, this doesn't seem to pertain to the difference between control
of CEV vs. p.

True. But it does pertain to the science of perceptual control.

Limits on possible control are as important to know for the analysis
of general situations as are the actual parameters of control in specific
situations.

I can't believe that an important "limit on control" wouldn't be the
nature of the perception controlled. My intuition suggests that
control of p = k*qi would be better than control of
p = k.1*qi.1 + k.2*qi.2^2 - k.3*qi.3^3, for example.

I'm not clear what you are saying here. Are you saying that the value of a complicated function is intrinsically less well defined than that of a simple function? Mathematically that's not true. But even if it is true in practice, the difference would offer a lower limit than the theoretical limit I provided. My limit applies to every perception controlled by a linear control system (and I think, but haven't proved, to a perception controlled by a non-linear control system as well). If there are reasons why a particular perception should be less well controlled, that's not an issue.

On the other hand, if you found a non-predictive linear control system, simulated or real, controlling any perception better than my limit, you would prove my analysis wrong.

I hope this makes it more intelligible.

Yes, somewhat, though I seem to have missed how the final Quality of
Control analysis related to your original interest in the relationship
between control of p and control of CEV.

The argument is that when the perception that leads to the output that counters the disturbance is not reliably the same as the CEV, the relation between the reference and the perception is not the same as between the desired state of the CEV and its real state, so the corrective action may well be misapplied. That's how "the final Quality of Control analysis related to your original interest in the relationship between control of p and control of CEV."

I guess I should repeat that I don't see "control of CEV" as anything other than a red herring. It's important to the organism that the CEV take on an appropriate value, but it's always only the perception that is controlled. Failure of the CEV to have the value that the perception would imply could be fatal. Consider: I perceive that the road I am about to cross is empty of traffic. I cross and get hit by a car that I did not perceive. I had no problem with perceptual control, but I am dead. That's fairly important.

The CEV cannot reliably be kept closer to the desired value than the quality of control of the relevant perception permits. So it is important that we know how good perceptual control can be, and it's important that we are clear on the effects of delay in the ability to control. Delay is only one factor, but so far as I know, it has been considered only in connection with oscillating or runaway "control" systems when the phase lag at a particular frequency leads to a loop gain exceeding +1. Hitherto, I have not come across a general statement about how loop delay relates to the best possible control.

Martin

···

On 2010/12/18 12:21 AM, Richard Marken wrote:

[From Rick Marken (2010.12.19.0950)]

Martin Taylor (2010.12.18.11.05)--

Rick Marken (2010.12.17.2120)--

There's something wrong here. This doesn't seem to be a case where
there is a disconnect between CEV and perceptual signal (p). The CEV
when a chicken pecks at a seed must be something like the distance
between beak and seed.

That's one CEV. Control of the perception of that distance should send a
reference value to the unit that controls the location of the beak.

The prism's effect on the lower level beak perception is the same as
on the seed. Your analysis seems wrong to me. Sorry.

If I read you correctly, I think you mean what I mean. I don't think of
control as applying to a CEV.

That's weird. We always measure control in terms of the CEV. How else
could you measure it?

Why not just run a simulation and measure the variance of CEV and p
directly?

Because the simulation would use a specific control system that might not be
the best possible.

But shouldn't your analysis apply to all control systems?

The argument is that when the perception that leads to the output that
counters the disturbance is not reliably the same as the CEV, the relation
between the reference and the perception is not the same as between the
desired state of the CEV and its real state, so the corrective action may
well be misapplied. That's how "the final Quality of Control analysis
related to your original interest in the relationship between control of p
and control of CEV."

But that's just saying that, to the extent that there is noise in the
system, control will not be as good as it could be. And we know, from
experimental test, that there is noise in the system (I accidentally
came up with a way of measuring the noise level -- though not its
source -- in my repeated disturbance experiment). So what's the big
deal here?

I guess I should repeat that I don't see "control of CEV" as anything other
than a red herring.

I think I'll continue following this red herring unless you can show
me how to measure control without measuring control of the CEV.

It's important to the organism that the CEV take on an
appropriate value, but it's always only the perception that is controlled.

But once we get the CEV right we are presumably looking at the
perception. That's what the test for the controlled variable is all
about. Once we have correctly identified the controlled variable --
the CEV -- we are presumably seeing, in our own perception (as the
CEV) the perception that the controller is controlling. In the
baseball research, for example, when we find that the best definition
of one controlled variable (CEV) is vertical optical velocity (dy/dt)
then dy/dt is presumably equal to one controlled perception, p.

Failure of the CEV to have the value that the perception would imply could
be fatal. Consider: I perceive that the road I am about to cross is empty of
traffic. I cross and get hit by a car that I did not perceive. I had no
problem with perceptual control, but I am dead. That's fairly important.

I think that's called trying to cross the road while blind, a very
well understood phenomenon.

The CEV cannot reliably be kept closer to the desired value than the quality
of control of the relevant perception permits.

Something seems very wrong here but I just can't put my finger on it.
Maybe Bill can set me straight. I think there is something wrong with
the idea that there can be a mismatch between the CEV and p. Perhaps the
problem is that you are assuming that there is an objective entity out
there called a CEV that the system is designed to detect. This is not
the way I understand the control model to work. The CEV of a control
system is _defined_ by the perceptual function. So p always
corresponds to a CEV that is defined by the function that transforms
sensory input into perceptual signal variations. I have a feeling that
you have been hoist on your own epistemology;-)

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2010.12.19. 1220 MDT)]

Rick Marken (2010.12.19.0950) --

MMT: > Failure of the CEV to have the value that the perception would imply could be fatal. Consider: I perceive that the road I am about to cross is empty of traffic. I cross and get hit by a car that I did not perceive. I had no problem with perceptual control, but I am dead. That's fairly important.

RM: I think that's called trying to cross the road while blind, a very
well understood phenomenon.

MMT: The CEV cannot reliably be kept closer to the desired value than the quality of control of the relevant perception permits.

RM: Something seems very wrong here but I just can't put my finger on it.
Maybe Bill can set me straight. I think there is something wrong with
the idea that there can be a mismatch between the CEV and p. Perhaps the
problem is that you are assuming that there is an objective entity out
there called a CEV that the system is designed to detect. This is not
the way I understand the control model to work. The CEV of a control
system is _defined_ by the perceptual function. So p always
corresponds to a CEV that is defined by the function that transforms
sensory input into perceptual signal variations. I have a feeling that
you have been hoist on your own epistemology;-)

BP: Everyone is a little elevated on epistemology here, but the main problem is Martin's extremely timid view of the signal-to-noise ratio in perception. I think the truth will turn out to be that perception is quite consistent and free of uncertainty except under unusual conditions (the kind that one sets up to measure the effects of uncertainty). If it weren't we would have great difficulty controlling anything. This doesn't mean that Real Reality looks just like the one we experience (except to another human being) but only that our perceptual transformations are quantitative and consistent so the noise level is quite low.

I would estimate that the noise level in a perceptual signal, relative to the CEV measurable by an onlooker, is no more than 1% under normal conditions in which control is easy. The frequency with which we see a clear road instead of the actual road with traffic on it is not zero, but neither is it large enough to worry about. With all the millions of people crossing roads in cars every hour, the accident rate due to not seeing traffic that is really there is so small that it's negligible, except by insurance companies and police departments.

In the TrackAnalyze results, the unpredictable component of the CEV (relative target and cursor positions) is no more than 10 to 20 pixels out of 500 to 1000 on the screen, and may be less if (as I'm sure is true) there are still differences between real and model CEV that repeat from one trial to the next. Only the variations that differ randomly from trial to trial can be considered to be noise.

Rick, how about averaging the waveforms of the variables (point by point) over 10 or 20 trials of a tracking run using exactly the same disturbance every time? Synchronize the measurements to the disturbance. Then subtract that average waveform from the individual ones to see how much actual noise there is. We should have tried this long ago.
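The proposed procedure, sketched with synthetic stand-in data (the sinusoidal "repeatable" waveform and the 5-unit trial-to-trial noise below are invented for illustration, not real tracking records):

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_samples = 20, 3600
# Stand-in for the repeatable response to an identical disturbance,
# plus per-trial random variation (the "noise" to be recovered)
repeatable = 100.0 * np.sin(np.linspace(0.0, 20.0, n_samples))
trials = repeatable + rng.normal(0.0, 5.0, (n_trials, n_samples))

mean_waveform = trials.mean(axis=0)        # point-by-point average
residuals = trials - mean_waveform         # what does NOT repeat
noise_rms = residuals.std()
signal_range = np.ptp(mean_waveform)
print("noise as % of range:", round(100.0 * noise_rms / signal_range, 1))
```

With real data, `trials` would hold the recorded handle positions from the repeated-disturbance runs, aligned sample-by-sample to the disturbance.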

Best.

Bill P.

[Martin Taylor 2010.12.19.14.30]

[From Rick Marken (2010.12.19.0950)]

Martin Taylor (2010.12.18.11.05)--

Rick Marken (2010.12.17.2120)--
There's something wrong here. This doesn't seem to be a case where
there is a disconnect between CEV and perceptual signal (p). The CEV
when a chicken pecks at a seed must be something like the distance
between beak and seed.

That's one CEV. Control of the perception of that distance should send a
reference value to the unit that controls the location of the beak.

The prism's effect on the lower level beak perception is the same as
on the seed. Your analysis seems wrong to me. Sorry.

Yes the prism's effect is the same for the beak and the seed, and that's why there is a problem for the chicken. The chicken has reorganized so that when a seed is perceived at location X, a certain reference value is sent to some pecking control system. That reference value normally makes the peck land at location X and the chicken eats the seed. With prism spectacles, the chicken sees the seed at location X+delta, and the relationship control system sends to the peck control system the appropriate value for the peck to land at X+delta, with the result that the peck lands at X+delta, giving the chicken a taste of earth instead of seed.

If it's a human trying to pick up a cup instead of a chicken pecking for seed, what we see initially is that the hand goes toward the wrong position, and then the relationship control unit varies the hand position reference until the hand connects with the handle. It doesn't take too long for the human to reorganize so that the first move is correct, but that doesn't seem to happen with the chicken. Maybe the chicken can't actually see where the beak lands relative to the seed; possibly the pecking motion is too fast and the visual system shuts off in the way bats shut off their auditory system while they emit their loud beeps, or humans more or less shut off their visual system during saccades. I don't know why chickens don't learn to correct for the prism, but it's clear that the pecking muscles aren't controlled directly by the relationship control system. There are lots of levels below that before you get to systems that output muscle tensions.

If I read you correctly, I think you mean what I mean. I don't think of
control as applying to a CEV.

That's weird. We always measure control in terms of the CEV. How else
could you measure it?

I'm not sure what you are saying here. Are you denying that control is control of perception? Or are you just bemoaning the fact that we have no way of measuring the perceptual signal value and must therefore use the CEV (the presumed CEV) as a surrogate? Of course we measure control in terms of the CEV. It's the nearest thing to the perceptual signal to which we have access. That doesn't mean that the controller controls the CEV, does it?

Why not just run a simulation and measure the variance of CEV and p
directly?

Because the simulation would use a specific control system that might not be
the best possible.

But shouldn't your analysis apply to all control systems?

Yes, it does. It says that for all control systems using linear components and not using prediction, combating disturbances that are reasonably Gaussian, control performance Q is going to be worse than the theoretical maximum. To measure thousands of control systems, all of which give some measure of Q at the level of the CEV, says nothing except that these particular control systems achieve their individual levels of control. If, however, you find even one control system, real or simulated, that uses linear components and does not use prediction, but that nevertheless has a Q higher than the theoretical maximum, you have shot down the theory. I'd love to have that happen, but I'd like it better if there were lots of simulations under different conditions, all of which had Q values just below the theoretical optimum.

Real systems could have higher Q if, for example, they used prediction, so tests with real systems are interesting but not decisive. I did my trials with the real TrackAnalyze so as to provide a sanity check on the theory. I would have been a little disturbed if the efficiencies had been very much less than, say, 0.1, or if they had been very different within a difficulty level for which my control ability was similar across disturbances. The fact that the efficiencies were pretty consistent across disturbances within each difficulty level, and that the efficiency trend was smooth across difficulties suggested to me that any problems with the analysis are likely to be minor, and most probably would be related to the finite approximation for the infinite series in the equation.

If you don't believe that loop delay affects the quality of control, you can test it yourself if you have Turbo Delphi. You could modify the TrackAnalyze program to put a delay setting of N 1/60 second samples between the mouse movement and the cursor movement, and record the results. You would have to use Excel to do the analysis, but I could send you a template, or you could send me the tracking data (all three files). Just see how much worse control becomes at the easy levels if you add 1, 2, 3... samples delay between mouse and cursor (you probably won't be able to control very much at all at levels 4 and 5 with more than a couple of samples added delay). Actually, that would be a good exercise to test the model fit, because the fitted loop delay ought to increase by exactly the added number of samples.
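For readers without Turbo Delphi, the qualitative effect can be previewed in a toy analogue of such a tracking loop: an integrating controller whose output is held back by a settable number of 1/60 s samples in a ring buffer. The gain, disturbance, and delay values are illustrative, not TrackAnalyze's fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(5)
dt, n, gain = 1.0 / 60.0, 6000, 8.0
# Smooth disturbance: white noise through a 1 s moving-average filter
d = 50.0 * np.convolve(rng.normal(size=n + 59), np.ones(60) / 60, "valid")

def rms_error(extra_samples):
    """Integrating controller whose output reaches the cursor only after
    1 + extra_samples sample periods; returns RMS cursor error."""
    delay = 1 + extra_samples
    pending = np.zeros(delay)                 # ring buffer of delayed outputs
    o, err = 0.0, np.empty(n)
    for t in range(n):
        cursor = pending[t % delay] + d[t]    # delayed output + disturbance
        o += gain * (0.0 - cursor) * dt       # reference fixed at zero
        pending[t % delay] = o
        err[t] = cursor
    return float(np.sqrt(np.mean(err ** 2)))

for k in (0, 2, 4, 8):
    print(k, round(rms_error(k), 2))          # error grows with added delay
```

The printed RMS error grows with each added sample of delay; with enough delay the loop rings its way toward the familiar phase-lag instability, but control degrades well before that point.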

The argument is that when the perception that leads to the output that
counters the disturbance is not reliably the same as the CEV, the relation
between the reference and the perception is not the same as between the
desired state of the CEV and its real state, so the corrective action may
well be misapplied. That's how "the final Quality of Control analysis
related to your original interest in the relationship between control of p
and control of CEV."

But that's just saying that, to the extent that there is noise in the
system, control will not be as good as it could be.

No. It is saying that with NO NOISE in the system, control has a limit that depends on the relationship between the loop delay and the speed of variation of the disturbance.

How many times must I repeat: THIS WHOLE ANALYSIS IS FOR A NOISELESS SYSTEM.
THIS WHOLE ANALYSIS IS FOR A NOISE-FREE SYSTEM
THERE IS NO NOISE ANYWHERE IN THE SYSTEM ANALYZED.
NOISE IS NOT THE REASON FOR THE VARIATION.

I guess I should repeat that I don't see "control of CEV" as anything other
than a red herring.

I think I'll continue following this red herring unless you can show
me how to measure control without measuring control of the CEV.

I don't think you can. But you do have to realize, and keep well in mind, that the control unit does not, and cannot, control the CEV. You put up with measuring the variation of the presumed CEV because you can't measure the perceptual signal. But you should never imagine that your measurement of what you presume to be the CEV is actually a measure of the perceptual signal.

I seem to remember that you tried several different combinations of x and y as possible CEVs when you asked people to control (I think) the size of a rectangular area (my memory is hazy as to what you actually asked them to control). Could it be true that ALL these combinations were controlled? I think not. You presumed you had found the CEV when you found that combination of x and y that seemed most stable, but you could never be sure that what you had found was the true CEV for what the subject thought of as "size" (if that's what you had asked them to control).

Measuring the CEV is the best you can do in measuring control, but it is not the true measurement of how well the perception is controlled, and to say it is, is to pervert the very idea of perceptual control.

It's important to the organism that the CEV take on an
appropriate value, but it's always only the perception that is controlled.

But once we get the CEV right we are presumably looking at the
perception.

Who makes this presumption? What this analysis is dealing with is change over time. We know that it takes a certain time for a perception to form. If nothing else, the Pulfrich phenomenon demonstrates the fact. But even if we didn't have the Pulfrich effect, we know that neural signals take time to get from A to B. So the most you can legitimately say is "Once we get the CEV right, we can presumably say that the perception is a function of some past value of the CEV".

Even then you would probably be wrong, because (as you yourself have argued) it is probable that the perceptual signal is something like a logarithmic function of the value of the CEV.

Failure of the CEV to have the value that the perception would imply could
be fatal. Consider: I perceive that the road I am about to cross is empty of
traffic. I cross and get hit by a car that I did not perceive. I had no
problem with perceptual control, but I am dead. That's fairly important.

I think that's called trying to cross the road while blind, a very
well understood phenomenon.

It's an extreme example, to be sure. But to make it more realistic, modify it to a failure to perceive accurately the speed of the approaching car that you do see. You are still dead if the car is coming faster than you perceive it to be.

The CEV cannot reliably be kept closer to the desired value than the quality
of control of the relevant perception permits.

Something seems very wrong here but I just can't put my finger on it.

I wish you could. A serious discussion of the flaws of the argument would either strengthen my perception that it is right, or lead me to understand in what way it is wrong.

Maybe Bill can set me straight.

Maybe. I look forward to a valid argument from anyone, you, Bill, Richard K., any CSGnet reader or consultant. I want to know about how control works, not win an argument. But I do want the arguments to deal with the analysis, not with the failure of the analysis to agree with preconceptions.

  I think there is something wrong with
the idea that there can be a mismatch between the CEV and p.

The idea is that there is a mismatch between the CEV at time t0 and p at time t0+T. That shouldn't cause you to lose sleep, should it?

Perhaps the
problem is that you are assuming that there is an objective entity out
there called a CEV that the system is designed to detect. This is not
the way I understand the control model to work. The CEV of a control
system is _defined_ by the perceptual function. So p always
corresponds to a CEV that is defined by the function that transforms
sensory input into perceptual signal variations. I have a feeling that
you have been hoist on your own epistemology;-)

Well, it's your epistemology, too. Right at the beginning of the essay the CEV is defined by the perceptual function, and in the analysis itself there is no necessary mismatch between the CEV and the corresponding perceptual signal at any given moment. I didn't assume a delay between the CEV and the perceptual signal, even though we know there must be some delay. I put all the delay into the environmental feedback path, for convenience. It doesn't matter where in the loop this delay is (until we start talking about varying reference signal values). The statement you quoted above still holds, no matter for what reason control is less than perfect.

"The CEV cannot reliably be kept closer to the desired value than the quality of control of the relevant perception permits." Doesn't this assumption underlie your use of the CEV variation as a measure of perceptual control? If the CEV could reliably be kept more stable than the perception, measuring the CEV variation would provide no information about the quality of control. But since it is actually perception that is controlled, you can say that for a given variation of some experimentally measurable quantity you think might be a CEV, the perceptual control is at least that good.

Martin

···

On 2010/12/19 12:48 PM, Richard Marken wrote:

[Martin Taylor 2010.12.19.15.46]

[From Bill Powers (2010.12.19. 1220 MDT)]

Rick Marken (2010.12.19.0950) --

MMT: > Failure of the CEV to have the value that the perception would imply could be fatal. Consider: I perceive that the road I am about to cross is empty of traffic. I cross and get hit by a car that I did not perceive. I had no problem with perceptual control, but I am dead. That's fairly important.

RM: I think that's called trying to cross the road while blind, a very
well understood phenomenon.

MMT: The CEV cannot reliably be kept closer to the desired value than the quality of control of the relevant perception permits.

RM: Something seems very wrong here but I just can't put my finger on it.
Maybe Bill can set me straight. I think there is something wrong with
the idea that there can be a mismatch between the CEV and p. Perhaps the
problem is that you are assuming that there is an objective entity out
there called a CEV that the system is designed to detect. This is not
the way I understand the control model to work. The CEV of a control
system is _defined_ by the perceptual function. So p always
corresponds to a CEV that is defined by the function that transforms
sensory input into perceptual signal variations. I have a feeling that
you have been hoist on your own epistemology;-)

BP: Everyone is a little elevated on epistemology here, but the main problem is Martin's extremely timid view of the signal-to-noise ratio in perception. I think the truth will turn out to be that perception is quite consistent and free of uncertainty except under unusual conditions (the kind that one sets up to measure the effects of uncertainty).

I suppose I really shouldn't say this again, because it is redundant with the reply I just posted to Rick.

THE ANALYSIS IN THE ESSAY DEALS WITH A NOISE-FREE CONTROL LOOP.

THE ASSUMPTIONS IN THE ANALYSIS INCLUDE THAT THERE IS NO NOISE IN THE PERCEPTUAL SIGNAL, NOR ANYWHERE ELSE IN THE LOOP!

NOISE NOWHERE ENTERS THE ANALYSIS.

THE LIMITS ON CONTROL WITH LOOP DELAY ARE NOT RELATED TO NOISE.

Please, please, I really would like some serious analysis of my analysis. Does it have some fatal flaw? Have you created a model in simulation that controls better than the theory says it should be able to control? Is there some way the analysis could be improved?

But please remember that the analysis is for a loop that uses no prediction and is WITHOUT NOISE.

Martin

···

On 2010/12/19 2:55 PM, Bill Powers wrote:

[From Bill Powers (2010.12.19.1420 MDT)]

Martin Taylor 2010.12.19.15.46 --

I suppose I really shouldn't say this again, because it is redundant with the reply I just posted to Rick.

THE ANALYSIS IN THE ESSAY DEALS WITH A NOISE-FREE CONTROL LOOP.

THE ASSUMPTIONS IN THE ANALYSIS INCLUDE THAT THERE IS NO NOISE IN THE PERCEPTUAL SIGNAL, NOR ANYWHERE ELSE IN THE LOOP!

NOISE NOWHERE ENTERS THE ANALYSIS.

THE LIMITS ON CONTROL WITH LOOP DELAY ARE NOT RELATED TO NOISE.

Hmm. Is someone saying that there is no noise in this analysis? Gosh, maybe...

Please, please, I really would like some serious analysis of my analysis. Does it have some fatal flaw? Have you created a model in simulation that controls better than the theory says it should be able to control? Is there some way the analysis could be improved?

But please remember that the analysis is for a loop that uses no prediction and is WITHOUT NOISE.

So what are the limits on control imposed by the delay? In the appendix of LCS3 starting on page 175, Richard Kennaway does an extensive analysis of this question, unfortunately omitting a definition of "t0" which may be the time constant of the output function. The "live block diagram" demo 3-1 shows how adjusting the time constant of the output function will stabilize the system for any input delay, so various criteria of optimum control can be met, such as minimum-time or no-overshoot (for a step function). This model uses no prediction and is without noise. I don't know how it relates to your results, but I'm sure you can work that out.

Best,

Bill P.

···

Martin

[Martin Taylor 2010.12.19.17.04]

[From Bill Powers (2010.12.19.1420 MDT)]

Martin Taylor 2010.12.19.15.46 -- Please, please, I really would like some serious analysis of my analysis. Does it have some fatal flaw? Have you created a model in simulation that controls better than the theory says it should be able to control? Is there some way the analysis could be improved?

But please remember that the analysis is for a loop that uses no prediction and is WITHOUT NOISE.

So what are the limits on control imposed by the delay? In the appendix of LCS3 starting on page 175, Richard Kennaway does an extensive analysis of this question, unfortunately omitting a definition of "t0" which may be the time constant of the output function.

Thanks. That's the kind of answer I have been looking for. Now I will look at Richard's analysis and see whether it supersedes mine. I'll get back on this.

The "live block diagram" demo 3-1 shows how adjusting the time constant of the output function will stabilize the system for any input delay, so various criteria of optimum control can be met, such as minimum-time or no-overshoot (for a step function).This model uses no prediction and is without noise. I don't know how it relates to your results, but I'm sure you can work that out.

That's addressing a slightly different issue, but interesting nevertheless.

Martin

[Martin Taylor 2010.12.19.17.18]

[Martin Taylor 2010.12.19.17.04]

[From Bill Powers (2010.12.19.1420 MDT)]

So what are the limits on control imposed by the delay? In the appendix of LCS3 starting on page 175, Richard Kennaway does an extensive analysis of this question, unfortunately omitting a definition of "t0" which may be the time constant of the output function.

Thanks. That's the kind of answer I have been looking for. Now I will look at Richard's analysis and see whether it supersedes mine. I'll get back on this.

I have looked at Richard's analysis, in the version contained in the review copy of LCSIII. If it was changed between there and the published copy, please forgive any inappropriate comments.

I agree with your suggestion that t0 seems to be the time constant of the output function. At any rate, it is proportional to that time constant.

Richard analyzes the stability criteria for a control loop with a loop delay. His delay is inserted between the CEV and the perceptual signal, whereas I consider it as being in the environmental feedback section of the loop. I doubt that the difference matters. We both treat linear systems, though I think my analysis probably holds also for non-linear systems.

The problem I address is not concerned with the stability of the loop, because if the loop is unstable, control is obviously worse than the best possible. Richard does not consider the disturbance at all, whereas my analysis is based on the relationship between the loop delay and the autocorrelation function of the disturbance.

So we are treating independent questions. Richard deals with the limits on stability of the loop in the presence of loop delay, and the approach to asymptote after a step change in the reference value. I deal with the limits on control when there is loop delay.

I may possibly be able to use some of Richard's results to tighten the theoretical upper bound on control quality, but that remains to be seen. Meanwhile, I would love to have some direct critique of my analysis. I'm not concerned about Stages 1 and 2, or about the prism spectacles. I included them only to prepare the reader's mind for Stage 3, and any disagreement you may have with them should not carry over into the main argument.

Martin

···

On 2010/12/19 5:07 PM, Martin Taylor wrote:

[From Bill Powers (2010.12.19.2014 MDT)]

Martin Taylor 2010.12.19.17.18 --

I agree with your suggestion that t0 seems to be the time constant of the output function. At any rate, it is proportional to that time constant.

Richard analyzes the stability criteria for a control loop with a loop delay. His delay is inserted between the CEV and the perceptual signal, whereas I consider it as being in the environmental feedback section of the loop. I doubt that the difference matters. We both treat linear systems, though I think my analysis probably holds also for non-linear systems.

In the TrackAnalyze data you use, the delay is definitely NOT in the environment feedback section: that delay is less than 1/60 second (half of that on the average). The measured loop delay is around 8/60 second.

The problem I address is not concerned with the stability of the loop, because if the loop is unstable, control is obviously worse than the best possible. Richard does not consider the disturbance at all, whereas my analysis is based on the relationship between the loop delay and the autocorrelation function of the disturbance.

You have to consider the stability of the loop because there must be an integrator in the output function to stabilize the control system. The integrator time constant and the delay determine the maximum allowable loop gain, and that loop gain determines the best achievable steady-state accuracy of control. Dynamical accuracy then does depend on the waveform of the disturbance, generally falling off with increasing frequency. Perhaps that is what your analysis is showing. Usually the best possible control is achieved with the disturbances that vary the slowest, but for a system that must work with a range of real disturbances it is the loop gain and the integration time constant that determine the best possible control.
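Bill's point about the interplay of loop gain, integrator time constant, and delay can be sketched in simulation. The toy model below is my own construction (not demo 3-1 or Kennaway's analysis, and all parameter values are illustrative): a discrete loop with reference zero, a leaky-integrator output function, and a pure transport delay in the feedback path. With the delay present, a modest gain reduces the RMS error below the uncontrolled level, while a gain well past the stability limit makes the error diverge.

```python
import math

def rms_error(gain, delay, leak=0.1, n=4000):
    """Discrete control loop with reference r = 0, perception p = s,
    a leaky-integrator output function, and a pure transport delay of
    'delay' samples in the feedback path. The disturbance is a slowly
    varying sinusoid. Returns the RMS of the error signal, or inf if
    the loop goes unstable."""
    hist = [0.0] * (delay + 1)   # delayed copies of the output
    o, sq = 0.0, 0.0
    for t in range(n):
        d = math.sin(2 * math.pi * t / 400.0)  # slow disturbance
        s = hist[0] + d           # CEV = delayed output + disturbance
        e = 0.0 - s               # error relative to reference 0
        if abs(e) > 1e6:          # loop has gone unstable
            return float('inf')
        o += gain * e - leak * o  # leaky-integrator output function
        hist = hist[1:] + [o]
        sq += e * e
    return math.sqrt(sq / n)

for g in (0.05, 0.15, 0.8):
    print("gain", g, "-> RMS error", rms_error(g, delay=8))
```

With a delay of 8 samples, the low gains control (error below the uncontrolled RMS of about 0.71), while gain 0.8 is beyond the critical gain for this delay and the error blows up, illustrating why the delay caps the usable loop gain.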

Best,

Bill P.

[Martin Taylor 2010.12.20.11.08]

[From Bill Powers (2010.12.19.2014 MDT)]

Martin Taylor 2010.12.19.17.18 --

I agree with your suggestion that t0 seems to be the time constant of the output function. At any rate, it is proportional to that time constant.

Richard analyzes the stability criteria for a control loop with a loop delay. His delay is inserted between the CEV and the perceptual signal, whereas I consider it as being in the environmental feedback section of the loop. I doubt that the difference matters. We both treat linear systems, though I think my analysis probably holds also for non-linear systems.

In the TrackAnalyze data you use, the delay is definitely NOT in the environment feedback section: that delay is less than 1/60 second (half of that on the average). The measured loop delay is around 8/60 second.

Quite so. To quote from the introduction to the analysis in the essay: "For simplicity in writing the equations, I assume that all the delay is in the environmental feedback path. It could be anywhere around the loop for the purposes of the following analysis."

The theoretical analysis is agnostic as to where in the loop the delay occurs. The analysis is exactly the same if it is in the error pathway between the comparator and the output function, in the computational process of the perceptual input function, or anywhere else in the loop. I put it in the environmental feedback path because that was the most convenient place for the notation. If your scrutiny of the analysis tells you that I am wrong, and that it does matter where in the loop the delay occurs, such a criticism would be most welcome, as would any criticism of the substance of the analysis. Just let me know what difference it makes to the analysis if the delay is any particular place in the loop, or if it is distributed around the loop, as must be the case in any physical system.

You track faster than I do. My "measured" (i.e. fit by model) loop delay ranges between 9 and 11 samples, longer for the easier difficulty levels. Is yours the same at all difficulty levels?

The problem I address is not concerned with the stability of the loop, because if the loop is unstable, control is obviously worse than the best possible. Richard does not consider the disturbance at all, whereas my analysis is based on the relationship between the loop delay and the autocorrelation function of the disturbance.

You have to consider the stability of the loop because there must be an integrator in the output function to stabilize the control system.

I don't have to consider the stability of the loop, because any unstable "control" system will control worse than the ideal control system. Nor do I have to be concerned as to whether the output function incorporates an integrator or any other combination of linear operators.

The ideal control system is obviously stable, and the analysis makes no statement about how that best possible control system might be constructed. In fact, the ideal used in the analysis controls better than the best possible practical system, which is why I consider the theoretical limit to be probably a rather loose upper bound.

The theoretical ideal control system would have Q = 1/(2*(1-A)), where A is the autocorrelation of the disturbance at a lag equal to the loop delay. This formula implies that the ideal control system controls perfectly if the loop delay is zero. That's what makes it "ideal", and it's better than any control system using a leaky integrator as an output function could do, since the gain would have to be infinite (the integrator time constant would have to be zero).
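For a concrete reading of that formula (my own illustration; I take Q as a variance ratio): if the ideal no-prediction system cancels d(t − τ), its error is d(t) − d(t − τ), whose variance is 2·var(d)·(1 − A), giving Q = 1/(2(1 − A)). The A values below are hypothetical, just to show the shape of the bound.

```python
def q_max(A):
    """Theoretical ceiling Q = 1/(2*(1-A)), where A is the disturbance
    autocorrelation at a lag equal to the loop delay (0 <= A < 1)."""
    return 1.0 / (2.0 * (1.0 - A))

# Illustrative (hypothetical) autocorrelation values:
for A in (0.95, 0.8, 0.5):
    print("A =", A, "-> Qmax =", q_max(A))
```

As A approaches 1 (a disturbance that barely changes over one loop delay) the ceiling grows without bound, consistent with perfect control in the zero-delay limit; as A falls, the achievable control degrades rapidly.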

Usually the best possible control is achieved with the disturbances that vary the slowest, but for a system that must work with a range of real disturbances it is the loop gain and the integration time constant that determine the best possible control.

I'm sure you are right, but the analysis is concerned with the idealized best possible control as a function of the rate of change of the disturbance, which is determined from the autocorrelation function of the disturbance. If someone were to invent some exotic structure that controlled better than one with a simple leaky integrator output function, the analysis would still say that it could not control better than Qmax = 1/(2*(1-A)).

All you have to do to refute the result of the theoretical analysis is to create a linear model that uses no prediction and that for some disturbance with a more or less Gaussian distribution, such as those in the TrackAnalyze setup, controls better than Qmax = 1/(2*(1-A)).

It would be really neat to see a listing of Qactual versus Qmax for a model that is best fit to the target track rather than to the human track, but that uses the actual human loop delay, and to compare the model Q against both the human Q and Qmax for the same tracking conditions. Such a fit would get rid of any noise introduced by the human, and would show how well the human might have controlled if there were no noise in the human's processes. You could use my results table as the human base, or you could use your own tracking results.

It would be neat also to run models best fit for each tracking condition, using a range of loop delays from 1 to N samples, and check out their efficiencies. If any of them gives an efficiency greater than 1.0, the analysis would be proven wrong.
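The kind of delay-sweep check proposed here can be sketched with a toy stand-in (my own construction, not the TrackAnalyze setup): an AR(1) Gaussian disturbance, a leaky-integrator controller with a transport delay, Q taken as the variance ratio var(disturbance)/var(error) (my reading of the essay's measure), and efficiency Q/Qmax with Qmax = 1/(2*(1-A)). If any printed efficiency exceeded 1.0, the analysis would be challenged, exactly as Martin says.

```python
import math
import random

def make_disturbance(n, smooth=0.05, seed=3):
    """AR(1) ('leaky-smoothed') Gaussian disturbance; its
    autocorrelation at lag k is roughly (1 - smooth)**k."""
    rng = random.Random(seed)
    d, x = [], 0.0
    for _ in range(n):
        x += smooth * (rng.gauss(0.0, 1.0) - x)
        d.append(x)
    return d

def autocorr(d, lag):
    """Normalized autocorrelation of d at the given lag."""
    m = sum(d) / len(d)
    var = sum((v - m) ** 2 for v in d) / len(d)
    cov = sum((d[i] - m) * (d[i + lag] - m)
              for i in range(len(d) - lag)) / (len(d) - lag)
    return cov / var

def control_quality(d, delay, gain, leak=0.1):
    """Q = var(disturbance) / var(error) for a leaky-integrator
    controller with a transport delay in the feedback path."""
    hist = [0.0] * (delay + 1)
    o, sq = 0.0, 0.0
    for dv in d:
        e = -(hist[0] + dv)
        if abs(e) > 1e6:          # unstable: quality effectively zero
            return 0.0
        o += gain * e - leak * o
        hist = hist[1:] + [o]
        sq += e * e
    var_e = sq / len(d)
    var_d = sum(v * v for v in d) / len(d)
    return var_d / var_e

d = make_disturbance(20000)
for delay in (2, 4, 8):
    q = max(control_quality(d, delay, g) for g in (0.05, 0.1, 0.15, 0.2))
    qmax = 1.0 / (2.0 * (1.0 - autocorr(d, delay)))
    print("delay", delay, "Q", round(q, 2),
          "Qmax", round(qmax, 2), "efficiency", round(q / qmax, 2))
```

The gain sweep is a crude stand-in for fitting the model to each condition; a real test would optimize gain and leak jointly per delay, as Martin suggests doing with the fitted human delays.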

Since your delays are shorter than mine, would it be possible for you to send privately the three files corresponding to each of the difficulty conditions I reported on (disturbances 2, 3, and 4 for each of the five difficulty levels)?

It would be interesting to compare our efficiencies, as well as to see whether yours do go above 1.0. If yours do go above 1.0, it would be a blow against the analysis, but not a fatal one, because it might indicate you are unconsciously using prediction. We have occasionally seen in the past that models fitted using prediction have matched human tracks better than have models that didn't use prediction, but we could never be sure whether that meant anything because the fit took advantage of an extra free parameter. A real test requires simulation models that we can be sure don't use prediction. But it would nevertheless be interesting to compare our relative human efficiencies on the same tracking tasks, since our tracking skills are obviously different.

In my reading of Richard's work, I did not see an estimate of the RMS error as a function of the RMS of the disturbance when control is optimum. It may be possible to use his work to generate such an estimate, using a superposition of disturbance step functions, but I have not seen this done as yet.

Martin