unaccountable disturbances; A.C. model

[From Bill Powers (960730.0530 MDT)]

Hans Blom, 960729b --

When disturbances do occur, I know of them through obvious changes in
my perceptions.

     I missed this the first time. The question is: how do you
     "separate" your perception into two parts, the "disturbance" and
     the non-disturbance? One example is:

     The television set may unaccountably become dimmer.

     Explain the "unaccountable". Is it not a discrepancy from an
     (internally generated) prediction/expectation? If not, where does
     it come from?

To answer in reverse order:

An "accountable" change would involve perceiving a relationship between
cause and effect. For example, I could perceive my own hand reaching out
to the control knob, reducing the brightness of the TV picture, or I
could perceive someone else operating the brightness control. If someone
else did it, the brightness might not end up matching my preference; if
I did it myself, it most probably would match.

To say that the change is "unaccountable" means that I experience a
difference between the actual brightness and the brightness I intend to
perceive, but not the cause of the difference. That is, the perceived
brightness no longer matches my preference, but I see nothing that could
be interpreted (at a higher level) as the cause of the mismatch. I can
still, of course, operate the brightness control to restore the picture
to the brightness I prefer. Correcting an error doesn't depend on
knowing what is causing it.

This should answer your first question. Controlling brightness, under
PCT, doesn't involve separating it into "disturbance" and
"non-disturbance." It depends only on an error signal, a mismatch between
perception and reference. So the process of adjusting brightness to
match a reference signal does not depend on knowing the cause of a
change in brightness (such as a change in the output of the transmitting
camera or a drop in the high voltage applied to the picture tube). At a
higher level, one can sometimes explain such a change in terms of a
perceivable agency that independently affected the brightness, but
control of brightness does not require knowing the cause of a change.
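
In skeleton form, such a loop looks something like this (Pascal, with
illustrative names and constants; a sketch for the discussion, not the
actual demo code). Notice that the disturbance enters the loop only
through the perception; nothing in the controller represents its cause:

program BrightnessLoop;
{ Error-driven control: the output is a slowed integration of the
  error. The step disturbance is opposed without ever being
  identified. }
const
  Gain  = 50.0;     { error-to-output gain }
  Slow  = 0.02;     { slowing factor, for loop stability }
  Steps = 1000;
var
  i: integer;
  ref, percept, error, output, disturb: real;
begin
  ref := 1.0;                        { intended brightness }
  output := 0.0;
  disturb := 0.0;
  for i := 1 to Steps do
  begin
    if i > Steps div 2 then
      disturb := -0.5                { unaccountable dimming }
    else
      disturb := 0.0;
    percept := output + disturb;     { perceived brightness }
    error := ref - percept;          { comparison with reference }
    output := output + Slow * (Gain * error - output)
  end;
  writeln('final error = ', (ref - (output + disturb)):8:5)
end.

The error returns to a small value after the disturbance appears, with
no term anywhere for what caused the dimming.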

In MCT, I believe, this would not be true. One would need a model of the
television set, including all the factors that could produce a change in
brightness of the picture. The "preferred" level of brightness is not
treated as a goal, but as a command signal. Only the adaptive part of
the control process is goal-seeking in the negative-feedback sense, and
its goal is the actual perception of brightness, to which the model's
output is matched. The model's output
is thus dependent on the perception, and is altered by the adaptive
process only in relation to the actual output of the plant (here, the TV
set).

This is one of the basic differences between the MCT model and the PCT
model.
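
For contrast, here is a rough sketch of the MCT arrangement as I read
it (my construction, not Hans's code; the plant law, names, and
constants are all assumed for illustration):

program ModelSketch;
{ The command drives the plant through the inverse of an internal
  model; adaptation adjusts the model parameter so that the model's
  output matches the actual plant output (the perception). }
const
  b  = 2.0;         { true plant gain, unknown to the controller }
  mu = 0.05;        { adaptation rate }
var
  i: integer;
  command, u, y, yModel, bhat: real;
begin
  command := 1.0;                  { "preferred" brightness as command }
  bhat := 0.5;                     { initial model gain }
  y := 0.0;
  for i := 1 to 200 do
  begin
    u := command / bhat;           { action computed from the model }
    y := b * u;                    { actual plant output }
    yModel := bhat * u;            { model's prediction }
    bhat := bhat + mu * (y - yModel) * u   { match model to perception }
  end;
  writeln('bhat = ', bhat:6:3, '  y = ', y:6:3)
end.

Here good control depends on the model converging to the plant, which
is just the difference I am pointing to.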

----------------------------------
     Another thing: I cannot get your Artificial Cerebellum code to run.
     In one version there is an .obj file missing, in the other a number
     of units are missing. Also, I get a fair impression of _how_ you do
     it. What I cannot reconstruct is why you do it _this way_. What is
     the "highest level goal" of the program?

I will send the missing elements, now that I know you succeeded in
decoding the MIME files. I don't know what platform you're running these
programs on, and assumed that PC-specific .obj files wouldn't help you
(that's why I sent the assembler source code). But if you're doing this
on a PC, of course you can use the .obj files. Life has been rather busy
here, so I haven't yet translated the assembler portions into Pascal
functions, which could be compiled on any machine.

If you're using a PC, presumably you can run the executable programs I
sent. Any comment on them?

The "highest level goal" of this program depends on whether you're
talking about performance or learning. The highest goal of the learning
process at each level is to bring the local error signal to zero. The
highest goal of the performance process is to make the perceptual signal
representing position match a varying reference signal defining the
desired position. The position control system works by varying the
reference signal for the subordinate velocity control system.
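
In skeleton form (Pascal again, with an idealized unit-mass plant and
illustrative gains; not the demo code):

program PositionVelocity;
{ Two-level hierarchy: the position system's output is the reference
  signal for the velocity system, whose output drives the plant. }
const
  dt = 0.01;
  Kp = 5.0;        { position loop gain }
  Kv = 20.0;       { velocity loop gain }
var
  i: integer;
  posRef, pos, vel, velRef, force: real;
begin
  posRef := 1.0;  pos := 0.0;  vel := 0.0;
  for i := 1 to 2000 do
  begin
    velRef := Kp * (posRef - pos);   { position error sets velocity reference }
    force := Kv * (velRef - vel);    { velocity error drives output }
    vel := vel + force * dt;         { unit-mass plant }
    pos := pos + vel * dt
  end;
  writeln('pos = ', pos:8:5)
end.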

As to why I do it "this way," that's a motivational question that's hard
to answer. I have been, of course, interested in how control systems
adapt themselves to an environment with changing properties. I wanted a
method that didn't depend on symbolic calculations and that could be
accomplished with neurons having only very simple properties. The most
complicated part of this method is the delay function, which can be done
by using a neural signal that takes time to pass from one computing
element to another (as in the parallel fibers of the cerebellar cortex).
The f(tau) function is distributed along these elements; its entries are
simply the sensitivities of synapses on cells that receive both the
delayed and undelayed versions of the error signal (like an
autocorrelation function).
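
A minimal sketch of that scheme, with assumed details and illustrative
constants (this is not the demo code, and the lagging "plant" is the
simplest one that will serve):

program ACSketch;
{ The output is the convolution of the error with an adaptive
  function f(tau); each entry of f changes in proportion to the
  product of the current and the delayed error (an
  autocorrelation-like rule), with a slow decay. }
const
  NMax  = 49;        { delay line holds NMax+1 samples }
  dt    = 0.01;
  Rate  = 0.05;      { adaptation rate }
  Decay = 0.00002;   { slow decay of the weights }
  Steps = 20000;
var
  f, ebuf: array[0..NMax] of real;
  t, tau: integer;
  ref, percept, err, output: real;
begin
  for tau := 0 to NMax do
  begin
    f[tau] := 0.0;  ebuf[tau] := 0.0
  end;
  ref := 1.0;  percept := 0.0;
  for t := 1 to Steps do
  begin
    err := ref - percept;                    { comparison }
    for tau := NMax downto 1 do              { shift the delay line }
      ebuf[tau] := ebuf[tau - 1];
    ebuf[0] := err;
    output := 0.0;
    for tau := 0 to NMax do
    begin
      f[tau] := f[tau] + Rate * err * ebuf[tau] * dt - Decay * f[tau];
      output := output + f[tau] * ebuf[tau]  { convolution }
    end;
    percept := percept + (output - percept) * dt   { lagging plant }
  end;
  writeln('residual error = ', (ref - percept):8:5)
end.

As in the demos, there is no output at all until the entries of f
become non-zero.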

In the demonstrations you have, there is no output until the adaptive
functions begin to have non-zero entries. However, this method also
works if you start with a crude fixed output function converting error
to output in a preset way, and run the adaptive process in parallel with
it. In this way you get an initial crude control capability, which is
not very stable or precise; as adaptation proceeds, the properties of
the overall output function change to eliminate instability and increase
loop gain, bringing the mean error closer to zero. This seems to be a
suggestive model of cerebellar function in relation to the main
brainstem and midbrain functions.
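
In the sketch above, that variation would amount to changing only the
output computation (FixedGain being an assumed preset constant, say
2.0):

    output := FixedGain * err;               { crude preset path }
    for tau := 0 to NMax do
    begin
      f[tau] := f[tau] + Rate * err * ebuf[tau] * dt - Decay * f[tau];
      output := output + f[tau] * ebuf[tau]  { adaptive path in parallel }
    end;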

This system, of course, can't run without a feedback signal -- but this
is also realistic, as the corresponding real human motor functions
can't, either. If the output chain is intact it is possible for the real
motor system to be used with other feedback paths -- visual, for example
-- but all fine control is lost when the kinesthetic position and
velocity sensing are lost. If cerebellar function is lost, motor
behavior is still possible, but it becomes unstable.

Incidentally, this model shows that characterizing a negative feedback
control system as a "PID" (proportional-integral-derivative) system is
too narrow a way of thinking about it. The PID aspect only concerns
stability. That is quite aside from the basic architecture involving
perception, comparison, and action. There are many ways of achieving
stability, including changes in the dynamics of the output function and
the input function. The Artificial Cerebellum model shows a way that
doesn't depend on thinking of the output as combining proportional,
integral, and derivative responses to error.
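
In equation form (my notation, not anything in the demo code), the
AC's output is the discrete convolution

    o(t) = \sum_{\tau=0}^{T} f(\tau)\, e(t - \tau),

and a textbook discrete PID output, K_p e(t) + K_i \sum_{s \le t}
e(s)\,\Delta t + K_d [e(t) - e(t - \Delta t)]/\Delta t, is just one
particular fixed shape of f (with the integral truncated to the
window):

    f(0) = K_p + K_i \Delta t + K_d / \Delta t
    f(\Delta t) = K_i \Delta t - K_d / \Delta t
    f(\tau) = K_i \Delta t \quad \text{for } \tau \ge 2 \Delta t.

The adaptive process is free to arrive at any shape of f that
stabilizes the loop, of which the PID shape is only one.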
----------------------------------------------------------------------
Best,

Bill P.