[Martin Taylor 2014.12.09.23.05]
[From Rick Marken (2014.12.09.1930)]
RM: ... As I've
said before, I think PCT is a discipline, like calculus,
that has to be learned. And the best way to learn PCT is by
reading the texts, running the models, and doing the demos. Like calculus, I think
one has to have a grasp of at least algebra to really
understand how the PCT model works. And like calculus, there
are right and wrong answers to questions about how the PCT
model works and how it applies to behavior.
One thing Bill said from
time to time was that when control systems become more than a
simple loop, it is often quite hard to intuit what they will do.
Algebra isn’t enough to understand anything much more than
steady-state, asymptotic, or near-equilibrium conditions in linear
systems. You need dynamic analysis, but to get closed-form
solutions for non-linear systems (which real biological systems
almost surely are) is next to impossible except in the simplest
cases. So, although in principle I support Rick in what he says
here, there is still a lot of possible variation right at the
heart of PCT.
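To show what the algebra does give you: here is a minimal sketch (in
Python, assuming a hypothetical unit-gain environment) of the
steady-state solution of a simple linear loop, p = o + d with
o = G(r - p), which solves to p = (Gr + d)/(1 + G):

```python
# Steady-state algebra for a simple linear control loop.
# Assumed environment: p = o + d (unit gain); output: o = G * (r - p).
# Solving the two equations gives p = (G * r + d) / (1 + G).

def steady_state_p(G, r, d):
    """Algebraic steady-state perception for loop gain G,
    reference r, and constant disturbance d."""
    return (G * r + d) / (1 + G)

# With high loop gain, p tracks r and the disturbance is nearly cancelled.
print(steady_state_p(G=100.0, r=5.0, d=3.0))   # close to 5
# With low loop gain, control is poor.
print(steady_state_p(G=1.0, r=5.0, d=3.0))     # 4.0
```

Note that this says nothing about how the loop gets to that steady
state, which is exactly where the algebra runs out.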
Demonstrations and parametric model comparisons with observable
behaviour are an approach that can help intuition, and Bill and
Rick, among others, have a lot of experience with them, but again
largely in very simple structures. So far as I know, Arm2 is the
largest set of parameters to have been reorganized by the e-coli
method, and there we are dealing with tens of parameters rather than
the millions or trillions of weights in a human brain.
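As a toy illustration of the method itself (the error surface, step
size, and parameter count here are all invented, and this is far
simpler than Arm2): keep moving in the current random direction in
parameter space while error falls, and "tumble" to a fresh random
direction when it rises.

```python
import random, math

def ecoli_reorganize(error_fn, params, step=0.05, iters=2000, seed=1):
    """Toy sketch of E. coli-style reorganization: continue in the
    current random direction while error decreases; tumble to a new
    random direction when a trial step would increase it."""
    rng = random.Random(seed)
    n = len(params)

    def random_direction():
        d = [rng.gauss(0.0, 1.0) for _ in range(n)]
        norm = math.sqrt(sum(x * x for x in d)) or 1.0
        return [x / norm for x in d]

    direction = random_direction()
    best = error_fn(params)
    for _ in range(iters):
        trial = [p + step * dx for p, dx in zip(params, direction)]
        e = error_fn(trial)
        if e < best:                 # error fell: keep swimming this way
            params, best = trial, e
        else:                        # error rose: tumble
            direction = random_direction()
    return params, best

# Hypothetical error surface: squared distance from a target parameter set.
target = [0.7, -1.2, 2.0]
err = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
final, e = ecoli_reorganize(err, [0.0, 0.0, 0.0])
print(e)   # residual error after reorganization
```

With tens of parameters this works tolerably; whether it scales to
brain-sized parameter counts is exactly the open question.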
Even using the "neural current on a wire" simplification used in all
the diagrams, there are many largely unexplored directions of
variation for even a single control loop. For example, is there a
standard form of the output of a comparator as a function of (r-p)?
Is there normally a tolerance zone within which the error output is
zero for |r-p| small enough? Is the error output linear,
logarithmic, or even non-monotonic, in |r-p|? It takes
high-precision experimentation to distinguish these possibilities
with ordinary tracking studies (as I know from experience), and
although such questions have been raised, and non-linear errors with
a tolerance zone seem likely, how will we learn whether they occur
always, usually, often, or never in living systems?
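The candidate shapes can be sketched as alternative comparator
functions. The forms and the tolerance value below are purely
illustrative, not claims about real neurons:

```python
import math

# Three hypothetical comparator output functions of diff = r - p.

def linear_error(diff):
    """Error output proportional to r - p."""
    return diff

def dead_zone_error(diff, tolerance=0.5):
    """Zero output while |r - p| lies inside the tolerance zone;
    linear beyond it."""
    if abs(diff) <= tolerance:
        return 0.0
    return diff - math.copysign(tolerance, diff)

def log_error(diff):
    """Logarithmic compression of large errors, sign preserved."""
    return math.copysign(math.log1p(abs(diff)), diff)

for d in (-2.0, -0.3, 0.0, 0.3, 2.0):
    print(d, linear_error(d), dead_zone_error(d), round(log_error(d), 3))
```

Distinguishing these by behavioural data alone is the hard part: all
three predict nearly the same tracking when errors stay large.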
Then let's ask about the output function. In most demo studies the
environmental feedback path that contains the output function has
been assumed to be linear, and the output function a simple linear
leaky integrator. But the real world contains its own integrators,
in the simplest case as when an output force only accelerates a
free-moving weight. Our friends who have been building real live
robots have had to consider this. The environmental feedback
function is almost always nonlinear: for example, once static
friction has been overcome, the dynamic friction is less.
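A minimal sketch of such a loop: a leaky-integrator output function
driving an environment that itself integrates (a force accelerating a
damped mass). All the constants are invented for illustration, and the
friction nonlinearity is left out:

```python
# Leaky-integrator output function in a loop whose environment is
# itself an integrator: the output is a force applied to a damped,
# free-moving mass, and position is the controlled variable.
# All parameter values are illustrative, chosen to keep the loop stable.

def simulate(steps=3000, dt=0.01, gain=5.0, leak=1.0,
             mass=1.0, damping=10.0, r=1.0):
    o = 0.0          # output (force)
    x = 0.0          # position = controlled environmental variable
    v = 0.0          # velocity
    for _ in range(steps):
        p = x                               # perception of position
        e = r - p
        o += dt * (gain * e - leak * o)     # leaky integration of error
        a = (o - damping * v) / mass        # environment integrates force
        v += dt * a
        x += dt * v
    return x

print(round(simulate(), 3))   # settles near the reference of 1.0
```

Note that with the extra integration in the environment this loop is a
third-order system: crank the gain up or the damping down and it goes
unstable, which a purely algebraic treatment would never reveal.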
The environmental feedback function has its own dynamics. Bill
invented the Artificial Cerebellum, which could serve as a general
purpose output function that would adapt to a non-white
environmental feedback spectrum and quasi-periodic disturbances. Are
all output functions of this type? Do biological systems adapt to
ringing environmental feedback and periodic disturbances by using
some component of that kind? If so, how would we discover its
properties? By analyzing neuron maps, by modelling learning
behaviour in systems with non-white environmental feedback
functions? How would such a system interact with the non-linearity
of the error function, if it is nonlinear?
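To make the structural idea concrete: as I understand it, the
Artificial Cerebellum represents the output function as an adjustable
impulse response applied to the recent error signal. Below is only the
skeleton of that form, with fixed, arbitrary weights; the adaptation
rule that reshapes the weights from the experienced error record is
the heart of the AC and is deliberately omitted here:

```python
from collections import deque

# Skeleton of an output function as an impulse response convolved with
# the recent error history. In an Artificial-Cerebellum-style system
# the weights would be adapted from experience; here they are fixed
# and arbitrary, purely to show the form.

class ImpulseResponseOutput:
    def __init__(self, weights):
        self.weights = list(weights)
        self.history = deque([0.0] * len(weights), maxlen=len(weights))

    def step(self, error):
        """Push the newest error sample and return the convolution of
        the error history with the impulse-response weights."""
        self.history.appendleft(error)
        return sum(w * e for w, e in zip(self.weights, self.history))

out = ImpulseResponseOutput([0.5, 0.3, 0.1])   # arbitrary kernel
print(out.step(1.0))   # 0.5
print(out.step(1.0))   # 0.8
print(out.step(0.0))   # 0.4
```

A simple gain is the special case of a one-tap kernel; the interesting
question is what shapes the longer kernels take in living systems.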
I am deliberately spelling out a little of the detail we don't know
about biological control systems, and of how uncertain it is what
basic model of even a single simple control loop should be used in
modelling, in order then to go to the opposite extreme: all this
might matter in the end, but one can do a lot working with less
precise approaches, just as one can do high-school chemistry without
being able to solve the Schroedinger equation for the system.
There are general things one can say about control systems. For
example, loop transport lag determines how wide a bandwidth of
disturbance can be countered. No matter what happens in the
structure of the control loop, if adding the output to the
disturbance doesn’t reduce the variability of the perception,
control fails. If the transport lag is too long and the disturbance
changes unpredictably over that time span, the disturbance will have
changed too much for the output to counter. So, one expects evolved
control systems to be structured to minimize transport lags when
they are concerned with countering fast disturbances.
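This bandwidth limit is easy to exhibit in simulation. Below, a
hypothetical integrating controller with a 0.1-second transport lag
in the loop handles a slow sinusoidal disturbance well but fails on,
and indeed slightly amplifies, a fast one; all the parameters are
invented for the illustration:

```python
import math
from collections import deque

def rms_error(dist_freq, lag=0.1, gain=5.0, dt=0.01, t_end=50.0, r=0.0):
    """Integrating controller whose output reaches the controlled
    variable after `lag` seconds: p = delayed output + disturbance.
    Returns the RMS error over the second half of the run."""
    n_lag = int(round(lag / dt))
    buf = deque([0.0] * n_lag, maxlen=n_lag) if n_lag else None
    o = 0.0
    errs = []
    t = 0.0
    for _ in range(int(t_end / dt)):
        d = math.sin(2 * math.pi * dist_freq * t)
        delayed_o = buf[0] if buf else o    # output from `lag` seconds ago
        p = delayed_o + d
        e = r - p
        o += dt * gain * e                  # integrating output function
        if buf is not None:
            buf.append(o)
        t += dt
        if t > t_end / 2:                   # skip the initial transient
            errs.append(e)
    return math.sqrt(sum(x * x for x in errs) / len(errs))

slow = rms_error(0.05)   # disturbance slow relative to the lag
fast = rms_error(2.0)    # disturbance fast relative to the lag
print(round(slow, 3), round(fast, 3))
```

The slow disturbance is almost entirely countered; the fast one
changes too much within the lag, and the delayed output arrives out
of phase.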
One can say that if there is some kind of filter that smooths out
fast disturbances before they influence the CEV that corresponds to
the controlled perception, control will be better than if the
disturbance comes through unfiltered. One can say that by providing
more accurate perception (e.g. by using a microscope) one can
control more finely. There’s a lot one can say if one understands
the principles of control. Rick and others have designed
interfaces to equipment using these principles, but AFAIK not using
detailed mathematics. Kent has used both simple demonstrations and
the principles of control to theorize about the structures and
problems of society, which mathematically lie far outside the
legitimate range of extrapolation of the demos. The demos do,
however, suggest principles that seem as though they should apply,
and when they are applied, they seem to predict phenomena seen in
real societies.
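The point about filtering can be shown the same way: with a
hypothetical integrating controller, a fast disturbance that is
low-pass filtered before it reaches the CEV is far better controlled
than the same disturbance arriving unfiltered (all values here are
illustrative):

```python
import math

def rms_err(filter_tc, dist_freq=2.0, gain=5.0, dt=0.001, t_end=20.0):
    """Integrating controller, p = o + d, reference r = 0. If
    filter_tc > 0, the disturbance passes through a first-order
    low-pass filter (time constant filter_tc seconds) before it
    influences the controlled variable."""
    o = 0.0
    d_f = 0.0
    errs = []
    for i in range(int(t_end / dt)):
        t = i * dt
        d_raw = math.sin(2 * math.pi * dist_freq * t)
        if filter_tc > 0:
            d_f += dt * (d_raw - d_f) / filter_tc   # smooth the disturbance
            d = d_f
        else:
            d = d_raw
        p = o + d
        e = -p                       # error relative to r = 0
        o += dt * gain * e           # integrating output function
        if t > t_end / 2:            # skip the initial transient
            errs.append(e)
    return math.sqrt(sum(x * x for x in errs) / len(errs))

unfiltered = rms_err(0.0)
filtered = rms_err(0.8)   # ~0.8 s time constant smooths the 2 Hz ripple
print(round(unfiltered, 3), round(filtered, 3))
```

The filter does part of the controller's work by removing disturbance
power the loop is too slow to oppose.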
So, yes, I agree with Rick that one should at least work through the
equations in their algebraic form, and if possible go further to see
how the dynamics of at least linear systems function. One should
study all of Bill’s and Rick’s (and anyone else’s) demonstrations.
One can get to understand control pretty well without doing those
things, I guess, but doing them is an easy way to get into a position
in which one can reason in one's own mind about what control might do
in different situations.
Martin