[Allan Randall (930317.1200 EST)]

Bill Powers (930315.0700)

This posting deals with the challenge, not the information
in disturbance or definitional stuff - I will probably
not have time to get back to that until I return from my
trip early next week.

Okay, this challenge thing. I decided from your last response
not to formally accept your challenge, since it seemed
directed at Martin specifically, and not at myself or other
Ashby-type information theorists. The reason is that you
pretty clearly stated that information theory would have to
supply a prediction that control would occur, from information
theory and Ashby's diagrams alone. If this is your position,
then I have no disagreement with you.

However, Martin has encouraged me to try one more time to
arrive at a mutually agreeable form for the challenge, as
he suggested I may be misinterpreting you. So here goes.

>You seem to be admitting here that information theory might
>have something useful to say about control systems.

Insofar as information theory could predict the limits of
performance given signals and signal-handling devices with
certain characteristics and in a known organization, sure.

Hmmm, again maybe we have no argument. Anything that does what
you just described sounds pretty darn relevant and applicable
to me. Perhaps we just value different things. It would seem
hard to believe that something that could tell you about limits
of performance would not also be useful in designing a control
system. Ashby's Law could be viewed as a statement about limits
of performance. Statements about such limits can be quite
fundamental. So if this is your position, then we differ only
in the degree to which we think information theory is relevant.
This is hardly a fundamental disagreement, and so the challenge
is indeed not directed at me, but solely at Martin.

>So I guess I still don't understand exactly where you stand. Is
>information theory completely wrong-headed or is it correct,
>but of little use to PCT?

>Information theory rests on definitions and mathematical
>manipulations... It's unlikely to be "incorrect" in those
>terms... I don't yet see how IT is actually linked in any
>rigorous way to specific physical situations.

Oops. Now you seem to question its validity again (at least as
something that can be applied to physical situations). Is it
valid to talk about information transmission in a control
system as the mathematical measure called entropy? That is
the question. If using information theory in this sense is
not valid in the first place, then any "limits of performance"
measures you get will be utterly useless. So I am *still*
confused as to where you stand.

>The prediction I'm asking for is not how much control is
>required, but how much control there will be in the two
>situations. To use a theory to derive the fact that control
>will result from either arrangement means to make predictions
>by manipulations that follow the rules of the theory.

What information theory will actually tell you is that there is
*more* control in the compensatory system. In fact, there is so
much control going on in the compensatory system that it would
be ludicrous to even suggest a real device or organism achieving
it. Information theory could tell you that the compensatory system
will do very poorly because it cannot be given the output
capacity or the processing power it requires. This kind of
prediction is what I think you have ruled out by saying:

> ... from Ashby's diagrams + information theory, one cannot
>predict what exactly R, the regulator, is doing. You cannot
>predict that R is going to oppose the disturbance. Whether this
>will meet your requirements for the challenge is the main point
>I'd like clarified before accepting.

>If you stick with these conclusions, the challenge is unnecessary
>because you have agreed to my original claim. You are agreeing
>that information theory can't provide the predictions of behavior
>that control theory provides, but can only be applied once those
>predictions are known and verified.

Not quite. It can be applied before anything is verified. But it
cannot be applied to predict that control *will* happen - only
that it could (or could not).

So my version of the challenge would take Ashby's compensatory
and error-driven control systems and, assuming they were both
designed to control, make a prediction concerning which would
control better. I would not be able to say that either system,
from Ashby's diagrams alone, *would* control. Maybe they will
both play "Mary Had A Little Lamb," and completely ignore their
inputs, for all I know. But I *can* tell you which is more *capable*
of control.

If this does not satisfy your requirements for the challenge,
then I think we can all agree that you are specifically
challenging Martin's derivation claim and *not* Ashby's
quantification claim. However, I will do my version of the
challenge all the same, as I think it could be useful. I'm just
trying to determine here whether I can give you a formal
acceptance or not.

Allan Randall,
NTT Systems, Inc.
Toronto, ON

i.kurtzer (072397)

On the thread of learning, Bruce A. cast "function" as a contingent relation
to be experimentally determined and clearly suggested the noted
contingencies would better our understanding of that subject matter. I
feel that the logic of contingency without qualifiers is severely
open-ended, giving all facts equal priority, and even leading to
questionable deductions. For that I gave and still give the example of a
bird's wing having the 1/16 function of entering a snake's mouth--an
example clearly ludicrous but acceptable by a logic without principles.
Instead I ask anyone to suggest a principle that might delineate our
studies from the infinite cataloguing of contingencies, so that some
contingencies are more worthy/significant/truth-revealing than others.
Any takers?

p.s. First one right gets a beer on me at the conference.

[From Bill Powers (950913.0825 MDT)]

Hans Blom (950913) --

     If I remember right, the question arose originally because my
     model-based controller could not handle the "unmodelled dynamics"
     of an extra sinewave. There remained a -- known/computed --
     systematic (sinusoidal) discrepancy between setpoint (reference
     level) and controlled value.

Yes, that is how I remember it, too.

     In my previous discussion, I gave a method of how to model this
     discrepancy and thus improve control. And I showed -- to your
     satisfaction, I hope -- that high-quality control would ensue if
     the disturbance ("unmodelled -- but now modelled -- dynamics") had
     the character of those in the MAINDIST file.

Your model simply included d(t) in the world-model and also in the
output function that created u, doing so in a way that made d(t) cancel
out in the world-model loop (that is, the same disturbance pattern was
added in the world model and subtracted in the function that generated
u). Nothing was said about how this disturbance pattern became known to
the control systems -- presumably it was put there by the inventor of
the model since it wasn't created by an adaptive process.

     Remains the problem: who does the designing? An external creator?
     Or can the system "create" a high-quality controller itself from
     its perceptions -- or from correlations between its perceptions and
     actions -- only?

Yes, that is the problem. Preferably the system itself would do the
designing on the basis of information available to it. However, I'm not
requiring that you actually produce an adaptive system capable of
deducing the form of the disturbance; I will be happy to allow you to do
that for the system, by any means you please that does not involve
direct foreknowledge of the form of d(t).

It is easy to know that k is constant without being able to
predict d [in your example].

     Reread your post and check the complex reasoning process that
     underlies this statement...

I was a little too brief; I should have said "it is easy for a designer
to know that k is constant without being able to predict d." I gave an
example where this is true. The value of k is a property of the part of
the environment that lies between the output u and the controlled
variable x. It simply expresses the partial derivative of x with respect
to u. The disturbance d, on the other hand, arises from independent
parts of the environment and acts on x in parallel to the variations in
u. It is possible for a designer to know the kind of influence that u
has on x (which is k) without knowing what perturbations are going to
arise from the rest of the environment.

I agree with your implication that the problem is different from the
standpoint of the behaving system, where there is no a priori way to
distinguish the effect of a change in k from the effect of an additive
independent disturbance, at least over the short term. The controlling
system must assume a model of the world if it is required to make such a
distinction. The control model I proposed is not required to do that.

     You, too, have an equation somewhere that says

        u (t+1) = u (t) + ...

     so the u (t) must be known to the controller.

Well, I suppose you could say that. The output function in my proposed
model is a time integrator, the output of which is the cumulative sum of
all previous inputs.

u := u + b*(r - x)
r = reference signal, fixed or varying.
The value of b is adjusted [in steps of factors of two].

     Clever method, but kind of coarse :-).

The value of b can be increased in steps as small as you please. But
when computational oscillations begin, you know that b is somewhere
between the optimal value and the value at which runaway begins, which
is twice the optimal value. So if you then divide the value of b by 2,
you know you will be somewhere below, but not far below, the optimal
value. This is a quick-and-dirty way of getting good control; better
methods are obviously possible. However, I am not proposing to use that
method, except when I determine the best value of b by hand. I mentioned
it only to show that some degree of self-adaptation would be possible in
connection with the model I proposed.

     But what you describe is not MY problem. I want to do away with the
     human designer who is required to invent new control laws again and
     again. I want to design a single mechanism that lets the controller
     automatically tune itself to a range of environments that it may
     find itself in. That is the problem of finding the laws underlying
     learning or adaptation, not control laws.

I want that too, and I think your approach is a valuable start toward
that goal. However, none of this is germane to the single problem on
which my challenge is focussed, which is your claim that all control
systems must contain a model of disturbances if they are to control.

     ... since d (t) - d (t-1) will also be zero on average, we also
     have

       ^           x (t) - x (t-1)             dx
       k = average(---------------) = average(--)
                   u (t) - u (t-1)             du

     or something similar, depending upon the specific method you use.

OK, that gives an estimate of the partial derivative of x with respect
to u and will be a useful approximation of k.
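As a sanity check on why this kind of estimate works, here is a small illustrative simulation (Python; the name estimate_k and the signal choices are mine, not Hans's): the average of delta-x over delta-u recovers k because the disturbance differences average out to zero.

```python
import random
random.seed(0)  # deterministic illustration

def estimate_k(true_k=0.7, n=5000):
    """Estimate k = dx/du by averaging (x(t)-x(t-1))/(u(t)-u(t-1)),
    as in the formula quoted above.  The disturbance differences
    average to zero, so the estimate converges on the true k."""
    total, count = 0.0, 0
    x_prev = u_prev = 0.0
    for t in range(n):
        u = random.uniform(-1.0, 1.0)        # exploratory output
        d = random.uniform(-0.05, 0.05)      # small additive disturbance
        x = true_k * u + d                   # environment: x = k*u + d
        if t > 0 and abs(u - u_prev) > 0.1:  # skip near-zero denominators
            total += (x - x_prev) / (u - u_prev)
            count += 1
        x_prev, u_prev = x, u
    return total / count

print(round(estimate_k(), 3))
```

Each ratio equals k plus a noise term (d(t)-d(t-1))/(u(t)-u(t-1)); since that term is symmetric about zero, averaging many samples leaves a good approximation of k without any knowledge of d itself.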

     The next step is to compute an estimate of d (t), i.e. to
     "recreate" the disturbance.

       x (t) = k * u (t) + d (t)

     Modelled is:

               ^           ^
       x (t) = k * u (t) + d (t)

     where x and u are known, but nothing else. So the disturbance can
     be "recreated" from

       ^               ^
       d (t) = x (t) - k * u (t)

Fine. This gives you a way of computing past values of d from past
values of x and u.

     where we use the computed estimate of k and where the estimate of
     d (t+1) is based upon the previous estimates of d (t) and d(t-1),
     as in the method that I described in my recent "prediction" post.
     At time t, the control u (t+1) can of course still be chosen

     Finally, setting the estimate of x (t+1) equal to r (t+1) and
     solving for u (t+1) completes the computations.

Sounds like a great plan. Don't forget, however, that you must also
estimate r(t+1), because that is an input from outside the control
system; the next value of r(t) is just as unknown as the next value of
d(t).

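The plan in the quoted steps can be sketched end to end. This is my own illustrative Python rendering (smooth made-up signals, not the MAINDIST tables), and it simply takes r(t+1) as known rather than estimating it:

```python
import math

def model_based_rms(k=1.0, n=2000):
    """Sketch of the quoted scheme: recreate d(t) = x(t) - k*u(t),
    extrapolate it linearly to t+1, then solve k*u(t+1) + d(t+1) = r(t+1)
    for u(t+1).  Signal shapes are illustrative stand-ins."""
    d = [500.0 * math.sin(0.010 * t) for t in range(n + 1)]
    r = [300.0 * math.sin(0.007 * t) for t in range(n + 1)]
    u, d_old, sse = 0.0, 0.0, 0.0
    for t in range(n):
        x = k * u + d[t]               # environment: x(t) = k*u(t) + d(t)
        d_new = x - k * u              # "recreated" disturbance
        d_pre = 2.0 * d_new - d_old    # linear extrapolation to t+1
        u = (r[t + 1] - d_pre) / k     # r(t+1) taken as known here
        d_old = d_new
        sse += (r[t] - x) ** 2
    return (sse / n) ** 0.5
```

Against smooth signals the RMS error is a tiny fraction of the signal amplitudes; the whole scheme stands or falls on the quality of the d(t+1) and r(t+1) predictions.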

So now all that remains is for you to implement this plan as a Pascal
program using MAINDIST as the disturbance.

So we could both get started right away, I suggest we generate
disturbances on the fly in the same way that MAINDIST was created; this
gets around working out the same method of reading the MAINDIST table.

function dist: real;
begin
  x1 := 10000*(random - 0.5);
  x2 := x2 + 0.2*(x1 - x2);
  x3 := x3 + 0.2*(x2 - x3);
  dist := x3;
end;

x1, x2, and x3 should be global variables of type real, initialized to 0.0.

I suggest that we use the "dist" function to generate two integer
tables: one for the reference signal and the other for the disturbance.
I think two tables each 10000 integers long will fit into memory without
needing to allocate space on the heap. In the long run this kind of
disturbance would average to zero, although over any given period it
might not. This will make no difference to my model, but in deference to
your method let's adjust the tables to have an average value of zero.

The simplest test would be to run the models for 10,000 iterations and
compute the RMS value of reference - perception: the square root of the
mean of sqr(r - x).

The "environment" model will be

x = u + dist (in other words, k = 1).

You can run your model as many times as you need to achieve convergence
before the final run that counts. My model, of course, needs only one
run.

Note to Bruce Abbott (private post) --

Yes, I did get the data tables and will get to the modeling Real Soon Now.

Bill P.

[Hans Blom, 950913b]

(Bill Powers (950913.0825 MDT))

     ... you must also estimate r(t+1), because that is an input from
     outside the control system; the next value of r(t) is just as
     unknown as the next value of d(t).

Pseudo-problem: change all t-1's into t's. Are you serious? A control
system that does not know its goal AT THE TIME it applies its output?
I've never seen a system of yours that doesn't.

The "environment" model will be
x = u + dist (in other words, k = 1).

Don't change the rules in mid-game. We had agreed on an unknown k!



[From Bill Powers (950914.0540 MDT)]

Hans Blom (950914b) --

Below is the code for my test program.

     Bill, I have solved THIS problem already in my "prediction" post,
     where I showed you that the RMS error would be something like 0.7.

Humor me, Hans. I really need to see the whole program you would write,
so I can understand how your approach and mine fit together. I am
probably not as smart as you are; things that are immediately obvious to
you aren't immediately obvious to me. If you could just plug your
control-system model (with a function call) into the line in my code
labelled "model" I could then compare how the two models work and
perhaps come to understand that they are the same, or that they do the
same thing.

I do understand that if you are given a record of past values of the
disturbance, you can predict the next value quite accurately. However,
the model of my control system is

u = b*integral(r - x),

and it does not involve the calculation d[i] = x - u as yours does. I
really don't see how your approach and mine could be equivalent. If you
say they are equivalent I must believe you, but please, just
_demonstrate_ that your control model and mine control equally well. I'm
a somewhat concrete thinker, so actually seeing your program run would
help me to understand.



Bill P.

[From Bill Powers (950914.0845 MDT)]

Hans Blom (950914b)--

     Can we think of something new? An unknown "world gain", for instance?

I puzzled over this a bit. We have to give the real plant some specific gain,
don't we? I gave it a gain of 1, but if you like some other gain, just say
what value you prefer.

Perhaps I didn't explain my program properly. Expanding the labels a bit,

for i := 0 to maxtable do
  begin
    x := u + d[i];              { ENVIRONMENT -- the real "plant" }
    u := u + b * (r[i] - x);    { MY CONTROL SYSTEM MODEL }
    sum := sum + sqr(r[i] - x); { ACCUMULATE FOR RMS CALCULATION }
  end;

That one line in the middle is my entire control system model. The line above
it is the actual environment that the model has to control. If you like, I
can write a little routine that will start with b = 0 and gradually change it
until we get a minimum RMS error between x and r. But we both know that will
work, so what's the point? It's easiest for me just to fiddle with b for the
smallest error. The optimum value will be about 1/k (including sign) if you
make the environment equation x := k*u + d[i]. I hope we can leave the
question of adaptation for later, after we've settled the matter before us.

Appended is a new version of my program that shows the first 640 points of a
graph of r, x, u, and d. The reference signal r is in white, x is in green, u
is in yellow, and d is in red. Hitting a key lets the disturbance and
reference signal be recomputed with a new random pattern and the next slowing
factor, with the slowing factor stepping up from 0.01 to 0.1 in steps of
0.01. You can see that the controlled variable x tracks the reference signal
quite well and that the disturbance has almost no effect.



Bill P.

Program chalng1;

Uses dos, crt, graph, grUtils;

const maxtable = 9999;

var d,r: array[0..maxtable] of integer;
    x1,x2,x3,sum,u,x,b,slow: real;
    i,j,vcenter: integer;
    gd,gm: integer;
    ch: char;
    outfile: text;
    numstr: string[50];

function dist: real;
begin
  x1 := 10000.0*(random - 0.5);
  x2 := x2 + slow * (x1 - x2);
  x3 := x3 + slow * (x2 - x3);
  dist := x3;
end;

procedure maketables;
var maxr,maxd: real;
begin
  { generate the disturbance table }
  x1 := 0.0; x2 := 0.0; x3 := 0.0;
  for i := 0 to maxtable do
    d[i] := round(dist);

  { adjust the disturbance table to an average value of zero }
  sum := 0.0;
  for i := 0 to maxtable do
    sum := sum + d[i];
  sum := sum/(maxtable + 1);
  for i := 0 to maxtable do
    d[i] := d[i] - round(sum);

  { generate the reference table the same way }
  x1 := 0.0; x2 := 0.0; x3 := 0.0;
  for i := 0 to maxtable do
    r[i] := round(dist);

  sum := 0.0;
  for i := 0 to maxtable do
    sum := sum + r[i];
  sum := sum/(maxtable + 1);
  for i := 0 to maxtable do
    r[i] := r[i] - round(sum);

  { scale both tables to a peak value of 1000 }
  maxd := 0.0;
  maxr := 0.0;
  for i := 0 to maxtable do
    begin
      if abs(d[i]) > maxd then maxd := abs(d[i]);
      if abs(r[i]) > maxr then maxr := abs(r[i]);
    end;
  for i := 0 to maxtable do
    begin
      d[i] := round(1000.0*d[i]/maxd);
      r[i] := round(1000.0*r[i]/maxr);
    end;
end;

begin { main program; graphics and file setup were not shown in the
       original post -- standard Borland calls assumed here }
  gd := detect; initgraph(gd,gm,'');
  assign(outfile,'chalng1.txt'); { output file name assumed }
  rewrite(outfile);

  for j := 1 to 10 do
    begin
      slow := 0.01*j;
      maketables;

      sum := 0.0;
      u := 0.0;
      b := 1.00;

      cleardevice;
      vcenter := (getmaxy + 1) div 2;

                     {RUN THE MODEL MAXTABLE + 1 TIMES}
      for i := 0 to maxtable do
        begin
          x := u + d[i];              { ENVIRONMENT}
          u := u + b * (r[i] - x);    { CONTROL SYSTEM }
          sum := sum + sqr(r[i] - x); { ACCUMULATE FOR RMS CALCULATION}
          if (i < 640) then
            begin
              putpixel(i, vcenter - round(r[i]/4.0),white);
              putpixel(i, vcenter - round(x/4.0),lightgreen);
              putpixel(i, vcenter - round(u/4.0),yellow);
              putpixel(i, vcenter - d[i] div 4,lightred);
              putpixel(i, vcenter,white);
            end;
        end;

      str(slow:5:2, numstr);
      numstr := 'slowing factor = ' + numstr;
      setcolor(white); outtextxy(10,20,numstr); { display position assumed }
      setcolor(lightgreen); outtextxy(600,420,'X');
      setcolor(yellow); outtextxy(600,440,'U');
      setcolor(lightred); outtextxy(600,460,'DIST');
      setcolor(white); outtextxy(600,400,'REF');
      outtextxy(300,450,'PRESS KEY TO CONTINUE');

      sum := sqrt(sum / (maxtable + 1));
      str(sum:4:1, numstr);
      numstr := 'RMS ERR / peak = ' + numstr;
      outtextxy(10,40,numstr);
      ch := readkey;

      writeln(outfile,'Slowing = ',slow:5:2,' RMS ERROR = ',sum:4:1);
    end; { of loop for changing slowing factor}

  close(outfile);
  closegraph;
end.

[From Bill Powers (950915.2100 MDT)]

Hans Blom (950915) --

I've spent a good part of the day trying to understand your program. I
still can't claim to understand why it works, but part of the
explanation seems to have something to do with my choice of k = 1 in the
environmental feedback function. Here is your code segment modified to
allow changing k:

k := 1.0;

               {RUN HANS' MODEL MAXTABLE + 1 TIMES}
for i := 0 to maxtable do
  begin
    x := k*u + d[i];           { ENVIRONMENT}
    {at this point, the observation x[i] and the control u[i] are known}
    dnew := x - k*u;           { recreate disturbance d[i] }
    dpre := 2.0 * dnew - dold; { predict next disturbance d[i+1] }
    u := r[i] - dpre;          { compute control u[i+1]; note that
                                 u[i+1] is applied in the next iter-
                                 ation, at time i+1, when r[i+1] is
                                 known }
    dold := dnew;              { save previous disturbance }

    if i > 1 then { allow for learning start-up }
      sum := sum + sqr(r[i] - x); { ACCUMULATE FOR RMS CALCULATION}

    if (i < 640) then
      begin
        putpixel(i, vcenter - round(r[i]/4.0)-2,white);
        putpixel(i, vcenter - round(x/4.0),lightgreen);
        putpixel(i, vcenter - round(u/4.0),yellow);
        putpixel(i, vcenter - d[i] div 4,lightred);
        putpixel(i, vcenter,white);
      end;
  end;

I hope I have followed your procedure in making the calculation of the
disturbance into

dnew = x - k*u,

which would seem to follow from x = k*u + d[i].

If you run the program with k = 1.0, you will get an RMS error of 6 or
8, which is very small. Setting k = 0.3, the RMS error becomes about
600. At k = 3.0, the RMS error is about 1200 (larger than peak excursion
of the reference signal). From the plots, it is obvious that x is not
following r EXCEPT when k = 1.0.

Can you modify your program so it will work with all (reasonable) values
of k?
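The k-dependence reported above is easy to reproduce. An illustrative Python rerun of the same scheme (my own smooth test signals, not the MAINDIST tables), with the output rule u := r - dpre hard-wired to assume k = 1 exactly as in the segment above:

```python
import math

def rms_with_gain(k, n=2000):
    """Run the predictive controller whose output rule u := r - d_pre
    implicitly assumes k = 1, against an environment x = k*u + d.
    Smooth illustrative signals stand in for the r and d tables."""
    u, d_old, sse = 0.0, 0.0, 0.0
    for t in range(n):
        d = 500.0 * math.sin(0.010 * t)
        r = 300.0 * math.sin(0.007 * t)
        x = k * u + d                 # environment with true gain k
        d_new = x - k * u             # recreated disturbance
        d_pre = 2.0 * d_new - d_old   # predicted next disturbance
        u = r - d_pre                 # output rule assumes k = 1
        d_old = d_new
        sse += (r - x) ** 2
    return (sse / n) ** 0.5
```

With k = 1 the RMS error is a few units; with k = 0.3 or k = 3.0 it is comparable to the signal amplitudes, and the k = 3.0 case is the worst, matching the ordering of the errors reported above.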



Bill P.