Anderson's comments on neural models

[From Bill Powers (950507.1640 MDT)]

From your reply to Rick Marken:

     But activity in the control loops of a real brain is carried by
     neurons, is it not? And neurons are essentially input-output
     devices, albeit with tremendous convergence of inputs and
     divergence of outputs. So I think it is ok to talk about their
     "reactions" to input. It seems to me that if PCT is based in
     biology, it has to live with the properties of biological neurons,
     including their inherent input-output nature.

The components of the nervous system are input-output devices (although
many of them even at the level of individual neurons are strongly
modified by local feedback connections). However, the whole system, in
the PCT view, is _not_ an input-output device, because feedback from
output to input (through the environment) is universal, fast, and
strong. From the whole-system standpoint, the basic operation is not a
conversion of (sensory) inputs into (motor) outputs, but variations in
motor output that keep the sensory inputs under control. This
interpretation is what we keep harping on in PCT, and what the various
demonstrations, models, and experimental tests are meant to show.
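
That whole-system point can be put in a few lines of simulation. The following is a minimal sketch (the parameters and names are illustrative choices, not taken from any PCT demo): output varies so as to keep the sensed input near a reference value, even when a disturbance intervenes.

```python
# Minimal closed-loop sketch: the system converts error into output, and the
# environment feeds output plus disturbance back to the senses. All values
# here are illustrative assumptions.

def run_loop(reference=10.0, gain=50.0, dt=0.01, steps=1000):
    output = 0.0
    trace = []
    for step in range(steps):
        disturbance = 5.0 if step > steps // 2 else 0.0  # an arbitrary push
        perception = output + disturbance    # environment: feedback path
        error = reference - perception       # comparator
        output += gain * error * dt          # integrating output function
        trace.append(perception)
    return trace

trace = run_loop()
print(round(trace[-1], 3))   # perception held near the reference: 10.0
```

Notice that what stays constant is the input (the perception), not the output; the output is whatever it has to be.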

The main "control loops" of which we speak pass through the environment;
they are not internal to the brain. I know there are internal loops, but
they have no behavioral functions that we can observe directly;
they would explain such things as imagination, and would serve as parts
of individual functions such as perceptual functions. Anyway, we have no
experimental way to test these internal loops, so we have to use models
that are functionally equivalent to whatever those internal loops may
accomplish. For example, there is an internal loop that uses a small
muscle to alter the sensitivity of hearing, by tightening or relaxing
the eardrum (much as the iris modifies the neural response to light
intensity). The effect of this loop would be to modify the relationship
between perception of sound intensity and actual physical sound waves
entering the ear. In modeling auditory control phenomena, we simply have
to assume an overall form for perceptual sensitivity consistent with
whatever the detailed effects of this internal control loop are; we
can't measure the characteristics of that loop separately.


     Without sensing changes in the controlled quantity caused by the
     disturbance, how are the control systems to vary responses to
     control the controlled quantity? And how can the animal compensate
     for disturbances without detecting them?

A control system does sense the state of the controlled quantity; it
compares what it senses with a reference signal, and the difference is
converted into actions that directly affect the controlled quantity. The
result is that external disturbances are automatically resisted, but
there is no (necessary) direct sensing of the causes of disturbances; it
is unnecessary for good control. The only way to understand why it is
not necessary to sense the causes of disturbances themselves is to grasp
the way a control loop works, or see demonstrations and models operating
without any way of sensing the causes of disturbances. I suggest a study
of the Demo 1 and Demo 2 programs.
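
The same point can be seen in a small sketch (not Demo 1 or Demo 2; the numbers are my assumptions): the controller below senses only the controlled quantity, never the disturbance itself, yet its output comes to mirror the disturbance with opposite sign.

```python
import math

# The controller senses only the controlled quantity (perception); the
# disturbance variable is never available to it. Parameters are illustrative.

def control_against(disturbances, reference=0.0, gain=50.0, dt=0.01):
    output, outputs = 0.0, []
    for d in disturbances:
        perception = output + d          # only the combined result is sensed
        error = reference - perception
        output += gain * error * dt      # integrating output function
        outputs.append(output)
    return outputs

d = [math.sin(0.01 * t) for t in range(2000)]   # slow, unsensed disturbance
out = control_against(d)
mismatch = max(abs(o + dd) for o, dd in zip(out, d))
print(mismatch < 0.1)   # True: output ~= -disturbance, which was never sensed
```
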
     "Bandwidth" is often mentioned on CSG-L; what does it mean?

Bandwidth means the way a system (or component of a system) responds to
inputs of different frequencies. For very slow variations of the input,
the output will vary in a corresponding way, scaled up or down by some
amount. If the frequency of input changes is gradually raised, the
output variations will at first continue to be proportional to the input
variations. But at some frequency, with the amplitude of input
variations held exactly constant, the output variations will begin to be
less than proportional, and will get smaller and smaller until at some
frequency we declare that the frequency has reached the limit of the
system, defining its bandwidth. For all frequencies in a band extending
from the lowest frequencies to this limit, we say that the input signal
is "within the bandwidth" of the system or component. The limit is
somewhat arbitrary, because as frequency increases, there is simply a
steady decrease in the output variations; at high input frequencies they
may have only 1/1000 of the amplitude they have at lower frequencies,
but they are still there in most kinds of systems. One common definition
of bandwidth, selected for mathematical reasons more than physical, is
the input frequency at which the output amplitude has decreased to 0.707
(the reciprocal of the square root of two) of its low-frequency value.
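
For a simple first-order system this rolloff can be tabulated in a few lines (the corner frequency below is an arbitrary choice; the formula is the standard amplitude ratio for a first-order low-pass):

```python
import math

# Amplitude response of a first-order low-pass system:
# ratio = 1 / sqrt(1 + (f / f_c)^2), where f_c is the corner frequency.
# All numbers here are illustrative.

def amplitude_ratio(f, f_c):
    return 1.0 / math.sqrt(1.0 + (f / f_c) ** 2)

f_c = 10.0  # corner frequency in Hz (arbitrary choice)
for f in [0.1, 1.0, 10.0, 100.0]:
    print(f, round(amplitude_ratio(f, f_c), 3))
# At f == f_c the ratio is 1/sqrt(2) ~= 0.707 -- the conventional limit.
# The output never reaches zero: at 100 Hz it is ~0.1, small but still there.
```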

It is well to remember that in PCT models (as in most connectionist
models), the "frequency" of a signal is not the frequency of firing of
individual neurons, but the frequency at which the firing frequency
changes. A constant firing frequency is treated as a constant value of a
neural signal. If this firing frequency starts to rise and fall in a
slow and regular way, we would say that the signal has started to show
variations at some low frequency. The highest frequency that can be
represented in a neural signal is the frequency at which the spacing
between impulses varies between long and short on successive pairs of
impulses where the least interval is the least possible recovery time
between impulses: ** ** **. Between zero frequency and this maximum
possible frequency, we have the range of frequencies that can be
represented as neural signals: the bandwidth of the neural signal
considered in terms of its envelope.
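
Rough numbers make the limit concrete (the figures below are illustrative assumptions, not measurements): if the recovery time sets the shortest interspike interval, one full cycle of the fastest envelope needs at least one short and one long interval.

```python
# Illustrative arithmetic, with assumed numbers: the refractory period sets
# a minimum interspike interval, and the fastest representable envelope
# alternates short and long intervals, so one envelope cycle spans two
# interspike intervals.

refractory_s = 0.001                     # ~1 ms minimum recovery time (assumed)
max_firing_rate = 1.0 / refractory_s     # ~1000 impulses/s ceiling

long_interval = 0.003                    # an arbitrary "slow" interval, 3 ms
cycle_time = refractory_s + long_interval
max_envelope_freq = 1.0 / cycle_time
print(max_firing_rate, round(max_envelope_freq, 1))   # 1000.0 250.0
```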

Bandwidth is closely related to time-domain analysis, in which we
consider signals in terms of their actual waveforms rather than in terms
of their frequency content. The higher the bandwidth, the more
accurately can the system represent instantaneous changes in its input
from one level to another level. All real systems have bandwidth limits,
which translates in the time domain into a minimum transition time
between one signal level and another (measured in terms of frequency of
firing). Confusing? Yes, but these are common distinctions in the world
of signal-handling systems, and they can be sorted out without too much
difficulty.

     I think a source of confusion might be that the models and the real
     organisms already have their "delays" optimized, the real systems
     through evolution, and the model systems through parameter
     adjustment to fit data from real systems.

I think it highly likely that living control systems can adjust their
parameters to fit the circumstances of the current environment. When you
pick up a pencil with a mass of 0.01 kilogram, you can wiggle it around
under very tight control. But if the control system retained exactly the
same parameters when you picked up a 0.5 kg soldering iron, the whole
system would start to become unstable, oscillating spontaneously. What I
think happens is that higher systems alter the damping coefficients in
the spinal control loop to provide the highest degree of control
consistent with stability.
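
The mass-dependence of that stability margin is easy to see in a toy second-order sketch (gains and masses below are assumptions chosen for illustration, not fitted values):

```python
import math

# Toy sketch with assumed parameters: a proportional-plus-derivative loop
# tuned for a light load becomes badly underdamped when the load mass grows
# and the gains stay fixed. For m*x'' = kp*(ref - x) - kd*x', the damping
# ratio is kd / (2*sqrt(kp*m)).

def damping_ratio(mass, kp=400.0, kd=4.0):
    return kd / (2.0 * math.sqrt(kp * mass))

print(round(damping_ratio(0.01), 2))  # pencil, ~0.01 kg: 1.0 (well damped)
print(round(damping_ratio(0.5), 2))   # soldering iron, 0.5 kg: 0.14 (ringing)
```

With the heavier load the same gains leave the loop ringing near its natural frequency; restoring good control means readjusting the damping, which is the adjustment attributed above to higher systems.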

While delay or transport lag is largely determined by the physics and
chemistry of neural conduction, the effects of transport lags depend
greatly on the (adjustable) sensitivity of control loops and on the
nature of external loads being moved around. Some compensation for lag
is needed even with light loads; paradoxically, it is achieved through
the _integral_ lag found as viscosity of muscles. When loads become more
massive, less of this stabilizing "viscosity" effect is needed, and the
neural damping component can be increased as well. Words are not really
a very good medium for discussing these quantitative relationships.
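
Still, one effect of transport lag can be shown numerically (a toy sketch; every number here is an assumption): with a fixed lag in the loop, there is a loop gain above which control turns into growing oscillation.

```python
# An integrating control loop with a pure transport lag in the feedback path.
# Modest gain controls well; a higher gain with the same lag oscillates with
# growing amplitude. All parameters are illustrative.

def simulate(gain, lag_steps=10, steps=400, reference=1.0):
    outputs = [0.0] * (lag_steps + 1)
    for _ in range(steps):
        delayed = outputs[-1 - lag_steps]     # the system senses old output
        outputs.append(outputs[-1] + gain * (reference - delayed))
    return outputs

stable = simulate(gain=0.03)
unstable = simulate(gain=0.5)
print(round(stable[-1], 2))                        # settles at the reference: 1.0
print(max(abs(v) for v in unstable[-50:]) > 10.0)  # growing oscillation: True
```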

     It might be instructive to construct a control model using
     simulated biological neurons, with biologically-realistic delays,
     instead of the formal approach now used.

Yes, this can be done and would be very useful. However, neural models
leave a great many parameters open for the modeler to adjust, so the
question comes down to what input-output function you want to model, not
what the inherent properties of neurons are. Our approach in effect
defines the functions that we want to model; given those functions, you
can adjust the parameters of a neural model to approximate them, so you
haven't really learned anything new. All you've done is to prove that
your neural model can be made to have a preselected input-output
function. The question remains open as to what functions we should
model. The answer to that question can be found only by fitting models
to real behavior, to see what sorts of functions are needed to explain
what we observe. Once we know those functions, we can always design a
neural network that will reproduce them.
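
The circularity can be made concrete (this is an illustrative sketch, not a real neural model): pick a target input-output function, then search the free parameters of a simple "neural" unit until it matches. The match tells you nothing you did not build in.

```python
# Illustrative sketch: a leaky-integrator unit with free parameters is tuned
# to reproduce a target input-output function chosen in advance. This shows
# feasibility, not discovery -- the target was built from the same family.

def leaky_unit(inputs, gain, leak, dt=0.01):
    state, out = 0.0, []
    for u in inputs:
        state += dt * (gain * u - leak * state)
        out.append(state)
    return out

target_gain, target_leak = 2.0, 5.0
step_input = [1.0] * 500
target = leaky_unit(step_input, target_gain, target_leak)

# Crude grid search over the unit's free parameters:
best = min(
    ((g, k) for g in [1.0, 2.0, 3.0] for k in [2.0, 5.0, 8.0]),
    key=lambda p: sum((a - b) ** 2 for a, b in
                      zip(leaky_unit(step_input, *p), target)),
)
print(best)   # recovers (2.0, 5.0) -- only because the target was built from them
```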

If there were one and only one kind of neuron, and if the parameters of
neurons had one and only one set of values, the story would be
different. In that case, we could deduce the kinds of functions that are
possible, and that would put a constraint on our models at more global
levels of organization. But that is not the case: even individual
neurons can have a wide range of properties, which are reflected in the
adjustable parameters of neural models. So we are quite justified in
approaching the problem at an intermediate level of complexity, where we
decide on behavioral-experimental grounds what functional relationships
should exist between neural signals, and then -- if anyone wants to --
construct a neural net that will imitate those functional relationships.
I repeat: we don't learn anything new from doing this. At best, we
demonstrate that the functional model is neurologically feasible.

     When I told my neurologist collaborator, Vinod Deshmukh, who has
     had an interest in control systems for some time, about PCT, and
     how PCT considered the nervous system to be a hierarchy of simple
     control systems, one of the first things he said was, "But this is

What Dr. Deshmukh may not realize is that there are sharp differences of
opinion about exactly what constitutes a control system. The term
control system is commonly used (especially in the literature where you
find the writings of Kelso) to mean commanding outputs that are
calculated to have specific environmental effects. This is worlds apart
from the PCT model of control, in which there is no calculation of
outputs and no computing of the effects that certain outputs should
have. In fact, in the literature of motor control, there is no sign that
the contributors realize that there is a kind of control system that can
work without any of the elaborate computations that are assumed
necessary and that, what's more, works under a much wider variety of
circumstances, including circumstances where the motor-control model
will not work at all, and does so faster and more accurately. The lack of
awareness of this type of closed-loop control is not the only problem;
there are also myths being passed around in the motor control literature
that make the closed-loop solution seem unfeasible -- for example, the
myth that straight-through command-driven systems are faster than
(negative feedback) control systems, and less complex. The opposite is
true.

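The difference between the two meanings of "control" fits in a few lines (a toy sketch; the plant and all numbers are assumptions): a computed-output scheme calculates the command from an internal model, while a negative-feedback scheme keeps acting on the sensed result.

```python
# Hedged comparison sketch: a command-computing ("inverse model") controller
# versus a negative-feedback controller, on the same toy plant with an
# unanticipated constant disturbance. All numbers are assumptions.

def plant(u, disturbance):
    return u + disturbance          # toy environment

reference, disturbance = 10.0, 3.0

# Computed-output scheme: the command comes from an internal model that
# assumes the plant is y = u and knows nothing of the disturbance.
u_ff = reference
y_ff = plant(u_ff, disturbance)

# Feedback scheme: keep adjusting output based on the sensed result.
u_fb = 0.0
for _ in range(200):
    y_fb = plant(u_fb, disturbance)
    u_fb += 0.5 * (reference - y_fb)

print(y_ff)                 # 13.0 -- misses the reference by the disturbance
print(round(y_fb, 3))       # 10.0 -- feedback absorbs the unmodeled push
```
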
This sorry state of affairs reflects the fact that workers in the field
of motor control have no personal acquaintance with the field called
control engineering, and thus have only a limited idea of what control
systems are and what they can do. This is not entirely their fault,
because basic control engineering seems not to be understood in many
modern undertakings that bill themselves as "control system analysis."
This situation is made even worse by the fact that these modern
investigators somehow have the idea that they have improved on control
theory and have a far more sophisticated concept of it than the old
engineers of the 40s and 50s had. The truth is that they have forgotten
or have never learned much of what these old-time engineers worked out,
and many of the modern models are marvels of unnecessary complexity.
Many of them are also simply poor designs for achieving a given purpose.

So just because someone says he is using "control theory," don't
conclude that it is PCT he is talking about, or even control theory as
many control engineers understand it.

Just so we'll know how much explaining to do, what have you read from
the PCT literature?

Bill P.