Adaptive control

[From Bill Powers (950622.0810 MDT)]

Hans Blom (950622)--

     I suggest that we end this discussion about my demo of model-based
     control. Partly because in the meantime we lost the interest of
     most, and partly because too many misunderstandings still crop up
     -- where I simply don't have the time or the inclination to go back
     to the basics of this approach to explain all.

I agree that it's time to end the discussion for the reasons you name.
Another reason is that I'm getting impatient and starting to throw
criticisms at you that are probably not deserved. I will try to get hold
of the paper you mention.



To me, this example indicates that we ought to find out how
this type of prediction [discovery of regularities and their
use in control] might come about.

But that's an old problem, solved long ago.

     Solved? Where? When?

The example was the tracking of a regular square wave by human beings
who synchronize their control actions to the square waves and thus
achieve approximately a zero average delay time. The problem of
synchronizing one regular square or sine wave to another was solved long
ago. One method involves phase-locked loops, commonly used now in
digital tuning of radios. Another method, gated comparison of reference
and actual frequencies, has been used for about 40 years to achieve
separate vertical and horizontal synchronization of sweep frequencies in
television sets. Every oscilloscope contains at least one control system
for synchronizing the sweep frequency to arbitrary repetitive input
waveforms at the fundamental or subharmonics, although the control
system does not match the sweep _waveform_ to the arbitrary one. Many
years ago I built a box for timing lunar occultations that synchronized
an oscillator to the one-minute time-ticks of WWV, automatically
adjusting the oscillator drift rate (via temperature compensation) as
well until the average corrections were zero to about one part in
100,000. Probably the pinnacle of this art is to be found in the methods
of synchronization used in the Very Long Baseline Array, in which
recorded signals from radio telescopes thousands of miles apart are
brought into synchronism with an accuracy of, I believe, about 1 part in
10^12. In all these cases, negative feedback control was used.
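None of the circuits Bill lists are specified in detail here, but the principle they share -- negative feedback acting on a phase error -- can be sketched in a few lines of Python. Everything below (the gains, the update rule, the starting offsets) is invented for illustration and is not a model of any of the devices mentioned:

```python
import math

def pll_lock(ref_freq=1.0, dt=0.01, steps=5000, gain=0.5):
    """Lock a local oscillator onto a reference by negative feedback
    on the phase error -- the bare bones of a phase-locked loop."""
    ref_phase, osc_phase, osc_freq = 0.0, 1.5, 0.8   # start out of lock
    for _ in range(steps):
        ref_phase += 2 * math.pi * ref_freq * dt
        # phase detector: wrapped phase difference between the signals
        err = math.atan2(math.sin(ref_phase - osc_phase),
                         math.cos(ref_phase - osc_phase))
        osc_freq += gain * err * dt                   # integral correction
        osc_phase += 2 * math.pi * osc_freq * dt + gain * err * dt
    return err, osc_freq

err, freq = pll_lock()
```

Despite starting off-frequency and out of phase, the oscillator is pulled into lock: the residual phase error goes to zero and the local frequency converges on the reference frequency, purely by acting to cancel the perceived error.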

The general problem of recognizing _and matching_ arbitrary repetitive
waveform patterns in detail has not been solved to my knowledge,
although a number of methods suggest themselves (recording the waveform
and using it as a reference signal, etc.). There are also impractical
methods (Fourier transforms followed by inverse transforms) which might
be done on a large fast computer with very high computing precision, but
which are unlikely to be models of real neural processes. In
electronics, there has not been any call for such a system, but if a
practical application were suggested, no doubt any competent electronics
engineer would eventually come up with a workable system. And it would
probably involve negative feedback control.

RE: dynamics

     What I did was to cut out the "world" from the feedback loop to
     demonstrate how it behaves by itself when excited with a step
     function. That was to show that the "world" of the demo has
     dynamics, remember? And to show that the "world" will not stabilize
     in one iteration if there is no control.

With the system operating normally, a square wave causes a square-wave
change in xt because u becomes a large number for a single iteration and
then becomes constant again after each transition. This spike instantly
changes xt to a new constant value. So your "real" system does not
contain any actual physical dynamics; no real system could do this
without the application of infinite forces.
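The contrast Bill is drawing -- a plant with genuine dynamics versus one that jumps to its new value in a single iteration -- can be sketched with a toy first-order lag. The time constant tau is arbitrary, chosen only for illustration:

```python
def first_order_world(u, x, tau=10.0):
    """One iteration of a 'world' with genuine dynamics: x moves only
    a fraction of the way toward the input u on each step."""
    return x + (u - x) / tau

# Step input: unlike a plant that jumps to its new value instantly,
# this world takes many iterations to settle near its final value.
x = 0.0
history = []
for _ in range(50):
    x = first_order_world(1.0, x)
    history.append(x)
```

After one iteration x has moved only a tenth of the way to its final value; it takes dozens of iterations to settle, which is the behavior a real physical plant driven by finite forces would show.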
As to the rest, our conceptions may share one or two points in common,
but our basic approaches to modeling real behaving systems are mostly in
different universes. It is evident from recent posts that you have never
spent much time trying to learn the logic of PCT or HPCT, so the
interchange is too one-sided to amount to a real comparison of
theoretical approaches. You have congratulated me for being able to
assimilate, at my age, a model that is new to me. I suppose that people
just age faster in the Netherlands.

Bill P.

[From Rick Marken (930913.1300)]

Hans Blom (930913)--

Let me show you your diagram "translated" into my dialect.

Here is my equivalent diagram:

[Rick's ASCII diagram is garbled in this copy. As best it can be
reconstructed from the fragments: the noise and the output of handle B
are summed; that sum enters a comparator, C, with a minus sign as c;
the target enters the same comparator with a plus sign as t; the
comparator's output, O, drives the handle; and r [ = 0 ] is drawn
entering near the target summing junction rather than at the
comparator.]

     This control system obviously controls for c = t

This is not obvious to me. It looks like the reference
(from the environment???) misses the comparator by one
box -- so I have no idea what the reference setting is
for the comparator. The r you've drawn is just doing
something (adding a zero?) to the target.

In all of your drawings you've left out the perceptual
function that transforms the objective situation involving
c and t into a controllable perceptual signal. Just a
minor point for an engineer, I suppose; but kinda
important to a PCTer.

re: adaptive control

None of your diagrams shows an adaptive controller; just
controllers (with the adapting presumably done by the
person looking at the diagrams -- the engineer). An
adaptive controller is a control system that controls
by adjusting the parameters of another control system
-- Tom called them the "adaptor loop" and the
"primary loop", respectively.

Tom said:

As modified for my tracking tasks, the "adaptor loop" has a perceptual
function that computes the present change in the error signal in the
primary control loop in model B. In the adaptor loop, the computational
steps, which approximate the time integral of error, (1) calculate the
present CHANGE in the error signal, (2) add that change to the previous
total of changes, (3) determine if the new total > a criterion value, (4)
if the new total > the criterion, re-set the total to zero and produce a
random change in "something."

You reply:

In (1) you talk about a derivative (CHANGE in error signal), but the
remainder is unclear to me.

I think that's because you don't understand that the
"adaptor loop" is a control system that is controlling
the change in the error signal in the "primary" control
loop. The adaptor loop does this by making random changes
in the parameters of the primary loop; "random" because
the adaptor loop cannot possibly know how to change these
parameters in order to bring the perceived error in the
primary loop to the adaptor control loop's reference for
it. If the adaptor loop did know how to change parameters
(non-randomly) -- that is, it knew at least the polarity
of the effect of its output on its input (perception of
primary loop error) then we would not call it
reorganization -- it's just another control system in
the control hierarchy carrying out business as usual.
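One way to sketch the spirit of this blind adaptation in Python -- under the added assumption, not stated in the post, that a random change is kept only when it actually reduces error, in the manner of the "E. coli" procedure -- is the following. All loop constants, step sizes, and the disturbance are invented for illustration:

```python
import random

def run_primary(k, steps=200):
    """Run a simple integrating control loop with gain k against a
    constant disturbance; return the mean squared error."""
    out, d, r = 0.0, 2.0, 0.0
    total_sq = 0.0
    for _ in range(steps):
        p = out + d              # perception = own output + disturbance
        e = p - r                # error, PCT convention e = p - r
        out -= 0.1 * k * e       # integrating output function
        total_sq += e * e
    return total_sq / steps

# Blind adaptation: propose random changes in k, keep a change only if
# the primary loop's error got smaller.  The adaptor never knows the
# polarity of k's effect in advance -- every proposal is a blind step.
random.seed(1)
k = 0.1                          # start with weak control
best = run_primary(k)
for _ in range(200):
    trial = k + random.uniform(-0.5, 0.5)
    if trial <= 0:
        continue
    mse = run_primary(trial)
    if mse < best:
        k, best = trial, mse
```

Although no proposal uses any knowledge of how k affects the error, the gain drifts upward and the primary loop's mean squared error drops sharply, which is the point of the reorganization argument: random change plus retention of improvement is enough.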



From Tom Bourbon [930915.1440]

[Hans Blom, 930913]

(Tom Bourbon [930902.0900])


... if the words and names can be repeated on the net, I look
forward to learning what I was "really" doing!

All right, I will give you my impressions. Non-engineers, beware!

Tom now:
Here, Hans reproduced the first figure (with the surprising !!!s)
and its description from my post. The figure showed a single
elemental PCT control loop. He included my comments about that
figure.


(Hans, notice that I have rearranged the diagram of the control
system in canonical PCT form, one of the differences between PCT
and standard control engineering. At first glance, this
difference in diagrams might seem trivial, but I believe that I
can show how differences in the diagrams are suggestive of
differences, some slight and others more significant, in our
interpretations of control.)

I don't care much for "interpretations". Quantum mechanics has a
number of "interpretations". The followers of each single
interpretation seem to have to denounce the others as if quantum
mechanics were a religion. All quantum mechanicians use the same
formulas, however. Call me pragmatic rather than ideological. If it
works, you are right. If reasonable, knowledgeable others cannot
be convinced, even in the long run, you are not. That's science,
alas. But that aside.

Tom now:
I did not intend to use a word that offends you. For
"interpretation," substitute anything you wish, so long as it
preserves my intended meaning, which was to say: your diagrams
and ours differ in identifiable ways and I believe the
differences exist because each of us thinks at least a little
differently about control and control systems, and about the best
ways to represent them in drawings. That is all I had in mind.
I think the remainder of your post confirmed my belief.

Not for the first time I find myself unfamiliar with what you
call a "canonical" CSG model. Each time I find that I have to
redraw it into my own type of schematic. It is not that I object
to the CSG notation, I am just not used to it; compare it to
speaking a different dialect.

Tom now:
Hans produced his translation of my first drawing. (I have
complete sympathy with your comments about the perceptual
strangeness of unfamiliar drawings.) Then Hans discussed his
drawings of adaptive control systems and compared them to my
first figure.

Bill Powers (930913.1045 MDT) and Rick Marken (930913.1300) have
already commented on the important ways Hans's figures differed
from mine (and from most figures showing PCT models). They made
detailed comments about differences in the ways you and I
depicted things such as (a) the distinction between the inside
and outside of a control system, (b) the source of the reference
signal, (c) the functions that convert environmental variables
into perceptual signals, (d) the location of the comparator and
the identification of which signals it compares, and several
other topics. I will not repeat all of the discussion by Bill
and Rick, who seemed to agree with my original remark that there
are differences in the figures and that they reflect differences
in the ways we think about control systems (differences in the
ways we implement them, teach others about them, model them --
pick any non-offensive word or phrase you like, Hans). I am not
claiming that either "side" is right or wrong, just that there
are genuine differences, along with genuine and important
similarities.

Hans, I notice that in the remainder of your reply you continued
to refer to my first figure and its description. That was the
simplest of my examples of PCT models and it was not intended to
depict an adaptive model. I wonder if some of the questions and
problems you identified later might have arisen because you
continued to refer to the wrong figure. Was the !! version of my
figure for the adaptive system too garbled? If so, I will send
you a version with --- and ||| in all of the right places. I
appreciate the opportunity to explore the subject of control, in
general, and of adaptive control, in particular, with someone far
more qualified and experienced on those subjects than I am. If a
more easily seen version of my drawing of the adaptive controller
might help, I will post one.

After presenting several additional modifications of his original
figure, Hans concluded that mine might represent a partial
implementation of a PID controller, with k (the integration
factor in our PCT models) representing the amplification factor I
in a PID controller. He continues:

Bill Powers' "slowing factor" is a partial implementation of this
idea. Your scheme is also a partial implementation of a
PID-controller (called a PI-controller), that is if your I-term
was chosen to be fixed. PID-controllers are not, however,
considered to be "adaptive controllers". The reason is that the
values of the terms P, I and D are chosen a priori (on the basis
of simulations or tests) and do not vary in time. If, on the
other hand, your I-term was made to vary IN REAL TIME in some way
using some extra mechanism, then you would have some type of
adaptive control.

Tom now:
That was the point of my post, so I must have presented the idea
poorly. In the simplest case, shown in my first figure, we
calculate an estimate of the k that produces a best fit (least
squared error) between the person's data and those of the PCT
model. In most PCT modeling, that value of k is used to
calculate modeled predictions of the person's data under later,
altered, experimental conditions. Our experience shows that the
fit of the PCT model to the person's original data is usually
very good, often accounting for 99% or more of the variance. The
same degree of fit is obtained when we compare the modeled
predictions with the person's later performance. (I routinely
observe that level of agreement -- over 99% of the variance
"explained" -- whether the predictions precede the person's runs
by one minute, one year, or five years.)
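A toy version of this fitting procedure can be sketched in Python, with a simulated "person" standing in for real tracking data (the loop constants, the disturbance, and the search grid are all invented; real fits use a human subject's handle record):

```python
import math

def run_model(k, disturbance):
    """Elemental PCT loop: r = 0, p = handle + d, e = p - r,
    handle := handle - slow*k*e.  Returns the handle trace."""
    handle, slow = 0.0, 0.1
    trace = []
    for d in disturbance:
        e = (handle + d) - 0.0
        handle -= slow * k * e
        trace.append(handle)
    return trace

disturbance = [math.sin(i / 20.0) for i in range(400)]
person = run_model(4.0, disturbance)   # stand-in for a real person's data

# Least-squares fit: pick the k on a grid that minimizes the summed
# squared difference between the model's handle and the "person's".
best_k = min((g / 10.0 for g in range(1, 101)),
             key=lambda k: sum((m - h) ** 2
                               for m, h in zip(run_model(k, disturbance),
                                               person)))
```

The grid search recovers the generating value k = 4.0 exactly, because that value zeroes the squared error; with real human data the fit is of course not exact, but the same single-parameter search routinely accounts for the very high proportions of variance Tom describes.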

In the example of my first try at adaptive modeling, the "adaptor
loop" was shown in my *third* figure as part of Person B, the
person on the left of the figure. The adaptor loop sensed the
error signals in the primary control loop in Person-system B and,
when certain criteria were met, the adaptor loop introduced a
randomly selected small change in the integration factor, k, of
the output function of Person B.

This is too hard for me to describe without the figure. Here is
(what I hope is) a clearer version of the figure from my earlier
post.

The revised diagram, including the adaptor loop in model B:


[The revised ASCII diagram is too garbled in this copy to reproduce
reliably. As best it can be reconstructed from the fragments: two
elemental control loops stand side by side -- Person B on the left,
controlling a perception of the present value of A-/\/\ with
reference r := A-[/\/\]=0, and Person A on the right, controlling a
perception of the present value of c-t with reference r := [c-t=0].
Above Person B's loop sits the adaptor loop, three adjacent boxes
labeled I, C and O, with reference r = [if q > Q, dk := rand dk];
its output, dk, enters the output function, O, of Person B's primary
loop. At the bottom, B's handle and A's handle both act on the
cursor, c; the target, T, supplies t; and a disturbance, D, acts on
B's side of the environment.]
The adaptor loop is shown as three adjacent boxes, labeled I, C
and O, with r = [if q > Q, dk := rand dk].

In the adaptor loop, q = present sum (integral) of the error
signal, e; dk = random change added to k (which is located in the
output function, O, of Person B). B has become an adaptive
controller. In the runs I did, now a couple of years ago, B
converged within a few seconds on a value for k that kept
A-/\/\/\ near zero, and A kept c-t=0. I have not run the model
since that time.
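Tom's adaptor loop, taken literally from his four computational steps, might look like the following sketch. The criterion Q, the jolt size, the primary-loop constants, and the use of the absolute change in error are all invented for illustration; the original was written in Turbo Pascal and its exact constants are not given here:

```python
import random

random.seed(7)

def adaptor_step(e, prev_e, total, k, Q=0.5):
    """One pass of the adaptor loop as described: (1) compute the
    present change in the error signal, (2) add it to the running
    total, (3) compare the total with the criterion Q, (4) on
    exceeding Q, reset the total and make a random change in k."""
    total += abs(e - prev_e)
    if total > Q:
        total = 0.0
        k = max(k + random.uniform(-0.2, 0.2), 0.01)
    return total, k

# Primary loop: an integrating output with gain k opposing a constant
# disturbance d, with reference r = 0.
k, total, prev_e = 0.05, 0.0, 0.0
out, d = 0.0, 1.5
errors = []
for _ in range(2000):
    e = (out + d) - 0.0          # e = p - r
    out -= 0.05 * k * e          # output function with adjustable k
    total, k = adaptor_step(e, prev_e, total, k)
    prev_e = e
    errors.append(abs(e))
```

As long as error keeps changing, the adaptor keeps jolting k; once the primary loop controls well, the changes in error dry up, the criterion is never reached, and k settles -- the convergence Tom reports observing within a few seconds.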

Hans, if the model of Person B, which includes what I call the
"adaptor loop," functions as I say it does, would it meet your
criteria for designation as a model of an "adaptive controller?"


From your description ["k = integration factor estimated from

person's data"] I cannot reconstruct which of the two, adaptive
or not, applies in your case.

Tom now:
I believe your uncertainty must have arisen because I did not
adequately describe how the adaptor loop monitors error signals
in the primary control loop of person-system B and alters k in
the output function of person-system B. Bill and Rick both
addressed this topic in their replies to you. Is the process any
clearer to you than it was, or do problems remain?

A question. You chose A = A - k (e). Does the minus sign mean
that, when the integral of the error becomes large, loop gain is
REDUCED? That would be opposite to common practice. A significant
integral error is usually obtained if the controller's gain is
insufficient to bring the distance between reference and
perception (here: c and t) to a small enough value. Indeed, in a
noise-free system, the I-term will ensure a zero offset (c - t =
0) in a steady state situation.

Tom now:
Bill Powers answered your question. He said:
"This is a program step, not an equation. It is simply an
integrator with a negative coefficient. For a positive e and k, A
will go more and more negative on each iteration by the amount
k*e. And e is an absolute value or square when used for
reorganization. Tom, you'd better check over the details here."

Tom now:
I was lazy and it created confusion. I should have used the form
of the program steps, written in Turbo Pascal. In that case, the
ambiguous "equation" is:


A := A - k (e), where := is the replacement operation. The value
of A at time t+1 is the value of A at time t minus the product
of k and the error signal.

Bill, I did check the details, and I come up with the same step.
I will risk looking really bad by testing my reasoning in
public. (I know this will be tedious and unnecessary for many
people on the net, but I imagine there are others who have never
worked their way through one cycle of the program steps in a PCT
model. This exercise is for them, and it provides an opportunity
for someone to catch me in a blunder.)

As an example, in Person A the perceptual signal, p, is the
present value of c-t, which is the difference between positions
of the cursor and target. A is the present position of handle A.
If the present value of c = 4, and t = 6, then p = 4 - 6 = -2,
which means the cursor is two units below the target on the
screen. The reference signal, r, is the value of p that Person A
intends "should" exist. The error signal, e, is the difference
between p and r. If r=0, then Person A intends that distance
c-t=0. When p = -2, we have e = p - r, or e = -2 - 0 = -2. This
says the cursor - target distance is two units below the
reference distance. (There is no necessity that r = 0. For
example, if r = -2, then when p = -2, e = p - r = -2 - (-2) = 0
and there is no error.) If e = 0, handle A should not move. If e
not= 0, then handle A should move, changing position by k(e)
units, but in which direction?

In the present example, c is below t, but it should be even with
t. The handle A must move upward to move c up toward t. To move
A upward, we must make a *positive* change in its present
position, by the amount k(e). The present value of k(e) is
negative, so we must *subtract* the value of k(e) from A to
obtain the new value of A.

If c is *above* t, say for example that c-t = 8 - 6 = +2, and if
r = 0, as before, then e = +2 - 0 = +2 and k(e) is positive. But
c is above t and A must move down, which means that if we subtract
the positive value of k(e) from the present value of A we obtain
the required new value of A -- A moves down, which moves c down
toward t.

If, instead of setting up the computation of the error signal as
e = p-r, I had set it up as e = r-p, then the step for
determining the new value of A would be: A := A + k(e), and not
A := A - k(e).
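The two sign conventions can be checked directly. With the worked numbers from the example (c = 4, t = 6, r = 0, so p = -2), both forms move the handle up by the same amount; the value k = 0.5 is arbitrary:

```python
def step_pmr(A, p, r, k=0.5):
    """Program step with e = p - r and A := A - k*e."""
    return A - k * (p - r)

def step_rmp(A, p, r, k=0.5):
    """Same step with the opposite convention: e = r - p, A := A + k*e."""
    return A + k * (r - p)

# Worked numbers from the text: c = 4, t = 6, so p = c - t = -2, r = 0.
# Both conventions move the handle up by the same (positive) amount.
new_A = step_pmr(0.0, -2.0, 0.0)
assert new_A == step_rmp(0.0, -2.0, 0.0) == 1.0
```

The pairing of error convention and output sign is what matters, not either choice alone: flip both together and the loop's behavior is unchanged.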


Tom then:

As modified for my tracking tasks, the "adaptor loop" has a
perceptual function that computes the present change in the
error signal in the primary control loop in model B. In the
adaptor loop, the computational steps, which approximate the
time integral of error, (1) calculate the present CHANGE in the
error signal, (2) add that change to the previous total of
changes, (3) determine if the new total > a criterion value, (4)
if the new total > the criterion, re-set the total to zero and
produce a random change in "something."

In (1) you talk about a derivative (CHANGE in error signal), but
the remainder is unclear to me. In (2) you integrate again, so
the result of (2) should simply be the error again. In (3) and
(4) you state, that whenever the error becomes too large,
"something" should be done. Randomly? Why? Is there no better
way? In my very optimistic way of thinking, evolution ought to
have found a better way if it exists (and it does), given its
billions of years of experimentation.

Tom now:
Bill and Rick have addressed several of the multiple questions
and comments you presented here. When I said "something" should
be done, I wanted to imply that in the random model of
reorganization or adaptation that I wanted to test (with Bill
Powers's encouragement), the adaptor loop would not "know what it
was doing." Why randomly? Because that was the procedure I
wanted to test. Rick and Bill had already shown that the random
"E. coli" procedure is a highly efficient one for rapidly
achieving precise control using an otherwise *very* "dumb" system.

As for whether there might be better ways to design an adaptive
controller, I know there are. But I am more interested in
testing a process for adaptation that assumes the adapting system
possesses no prior information about what it should change, or by
how much and in which direction it should change. Earlier in my
post to which you were replying, I said I first tested an
adaptive model in which the adaptor followed a rule that produced
*systematic* (sequential, serially ordered) changes in the value
of k, either increasing or decreasing k as a function of the sign
of the error signal in the primary loop of Person-system A. I
guess my initial test of a rule-following adaptive controller
confirms your belief that there are other ways to model
adaptation. :slight_smile:

I do still question your assumptions that, given billions of
years for "evolution" to conduct "experiments" on adaptive
control, it (evolution) *must* have found a "better" way. On
what grounds do you judge the random process less good? By which
criteria do you deem a process "better" or "worse?" Which tests
and which data do you use that lead you to rule out a "random"
procedure as part of adaptive control *by living systems*? All I
have seen so far are your assertions on those issues.

I hope this clarifies your question about whether your schematic
shows an adaptive control scheme. First, I think that it is a
(partial) PID-controller. Second, I cannot establish what your
"random" adaptivity does nor what it achieves. It would be
interesting to test whether it improved performance more than is
possible by a non-adaptive PID-controller.

Tom now:
Thanks, for trying to place my model somewhere in your taxonomy
of control systems. I hope that, after the replies by Bill and
Rick, and this one from me, you have a better idea of what the
random PCT adaptive model does. As for comparing its performance
with that of a non-adaptive PID-controller, please, be my guest.
As I have said several times, I am more interested in working on
less-than-optimal models of less-than-optimal living systems.


     When they act as designers, modelers look for high correlations
     between the actions of real systems and model systems; but viewed
     as control systems, the actions of designer-modelers will
     correlate very poorly with the results of their actions and the
     actions of the designer-modeler will correlate highly and
     negatively with anything that disturbs the perception of "model's
     actions matching real system's actions."

As to "the actions of designer-modelers will correlate very
poorly with the results of their actions": I hope not!

Tom now:
Hans, your comment reveals that, along with the *many* things you
and the PCT modelers have in common, there are still some
interesting differences between us. My comments, which you quote
and dismiss with, "I hope not!" were made after I read a similar
reaction from you to a post in which Rick also said something
about actions not correlating with their results.

Rick and I were talking about the distinction between the
movements of the body parts of a person (organism, system) and
the consequences of those actions. I wrote that "the actions of
designer-modelers will correlate very poorly with the results of
their actions and the actions of the designer-modeler will
correlate highly and negatively with anything that disturbs the
perception of "model's actions matching real system's actions.""
By that, I meant that the person's movements, muscle contractions
and the like would correlate very poorly, if at all, with the
consequences those actions produce. The actions feed back from
the person, modeled as a control system, into the environment.

It is the interaction or combination of actions and other
influences in the environment that determines the consequences of
the actions. When I drive a car, the actions of my body parts do
not look anything like what I am "really doing," which is keeping
my perceptions of the states of many different variables at my
references for those perceptions. My actions, together with all
of the other environmental influences acting on the variables I
care about, determine the consequences. That is also what a
modeler-designer must do: produce whichever actions are necessary
at the moment. The consequences are not solely determined by the
actions, or by the intentions of the actor.
Often, the correlation between actions and the controlled state
of the consequences is at or near zero. In a variable
environment, that is the only way a control system can control.

Once again, this is far too long.

Until later,