[From Bill Powers (970415.0710 MST)]
Bruce Abbott (970414.1835 EST)--
You may recall a similar model I put together and published on CSGnet, in
which error in the bacterium's on-board nutrient level ("hunger") altered
the _gain_ of the lower-level system. This two-level system became more
"interested" in going up the nutrient gradient as "hunger" increased, and
when on-board nutrient level exceeded reference, the animal actively avoided
an increasing nutrient gradient, as if repelled by the food (satiety?).
Yes, that was an interesting model. It's another way to add a second level
of control to E. coli. In fact, in the "E. coli" method of reorganization
that has evolved over the last few years, it seems that the best basis for
varying the interval between random changes is the first derivative of the
error-signal-squared: d/dt (e^2), which is 2*e*(de/dt), the "2" being
absorbed into the overall gain. Note that this makes the gain applied to the
rate of change of the controlled variable depend on the magnitude of the
error, so the gain is zero at zero error. This gives the reference signal two
roles: it sets the desired rate of change, and it also makes the gain depend
on the error.
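In case a concrete toy helps, here is a rough sketch of that timing rule in
Python. Every name and constant is invented for the demonstration; this is
not code from any of our working models:

def next_interval(errors, base=1.0, k=0.5):
    # Wait time before the next random reorganizing change, driven by
    # d/dt(e^2) = 2*e*(de/dt): growing error shortens the wait, shrinking
    # error lengthens it, and at e = 0 the drive vanishes entirely.
    prev = errors[0]
    for e in errors[1:]:
        de = e - prev
        prev = e
        yield max(0.05, base - k * 2.0 * e * de)

for w in next_interval([1.0, 1.2, 1.1, 0.8, 0.4, 0.1, 0.0]):
    print(round(w, 3))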
I've always avoided getting into the subject of high-level control through
variation of parameters, primarily because the math gets extremely hard
(nonlinear), but also because it obscures the organization of the basic
model, making it much harder to teach. Obviously (to me, anyhow), using
higher-level systems to control the performance of lower-level systems by
changing their parameters is a way of achieving what looks like adaptation
(it's really not, since it involves a higher system with a fixed
organization). This would mean that a higher-level system senses something
that is not just a function of the lower perceptual signal, but something
that indicates how the lower system is performing -- the mean error signal,
for one simple example, or the rate of change of error signal, or even more
complicated functions such as the degree of damping. This might even involve
changing the sensitivity of the perceptual function or the gain of the
output function. Anything you can imagine is possible; the question is, what
is _necessary_ to explain what we observe in real behavior?
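Just to make the idea concrete, here is the kind of arrangement I mean,
reduced to a toy in Python: a higher system senses a leaky average of the
lower system's squared error and adjusts the lower system's output gain to
keep that average near a reference. None of the numbers or names is taken
from a worked-out model:

def run(disturbance, r=0.0, gain=1.0, mse_ref=0.01, k_adapt=0.5, dt=0.01):
    out, mse = 0.0, 0.0
    for d in disturbance:
        p = out + d                 # perception of the controlled variable
        e = r - p                   # lower-level error signal
        out += gain * e * dt        # integrating output function
        mse += 0.1 * (e * e - mse)  # leaky mean of squared error
        # Higher system: push the lower gain up while the mean squared
        # error stays above its reference, down when it falls below
        gain += k_adapt * (mse - mse_ref) * dt
    return round(gain, 2), round(mse, 5)

print(run([1.0] * 5000))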
The only real attempt to use this kind of adaptive model was the "reversal"
experiment Rick and I did, in which a higher system could switch the sign of
the lower system's output function between positive and negative. If the
sign of the external feedback function was suddenly reversed, this
higher-level system reversed the output function's sign to stop the runaway
that otherwise would occur. Even here, we did not work out a real _control_
model that would always make the appropriate change in sign. To do that, it
would be necessary to sense the _relationship_ between direction of handle
movement and direction of change of the perception, so the higher system
could sense the sign of the feedback. Then a reference signal could specify
that negative feedback is to be maintained, and the sign of the lower
system's output function would then always be set correctly. To build this
model, you'd have to specify where this system got its information, with
appropriate pathways drawn into the diagram, and how it did the required
computations.
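For anyone who wants to try it, here is one way such a sign-controlling
system might be wired, offered purely as a sketch and not as the model Rick
and I actually ran. The higher system senses the product of recent output
changes and perceptual changes, which gives it the sign of the feedback, and
flips the lower output function's sign whenever the loop has gone positive:

def sgn(x):
    return (x > 0) - (x < 0)

def track(n=6000, flip_at=3000, dt=0.01):
    out, sign = 0.0, 1.0       # lower system's output and its settable sign
    evidence = 0.0             # leaky estimate of the feedback sign
    prev_p, prev_out = None, 0.0
    for i in range(n):
        fb = 1.0 if i < flip_at else -1.0  # experimenter reverses the display
        p = fb * out + 1.0                 # perception = feedback + disturbance
        e = 0.0 - p                        # reference is zero
        new_out = out + sign * 20.0 * e * dt
        if prev_p is not None:
            # Relationship between direction of output change and direction
            # of perceptual change: its sign is the sign of the feedback
            evidence += 0.2 * (sgn(new_out - prev_out) * sgn(p - prev_p) - evidence)
            if evidence * sign < -0.5:     # net loop feedback has gone positive
                sign = -sign               # flip the output sign to restore control
                evidence = 0.0
        prev_p, prev_out, out = p, new_out, new_out
    return sign, round(p, 4)

print(track())

After the simulated reversal, the output runs away for a few samples until
the evidence accumulates, then the sign flips and the error is brought back
toward zero -- the same abrupt restoration we saw in the human subjects.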
There seemed to be such a higher-order parameter-control system in the real
human controllers. After a reversal in the external feedback path, the human
subject's handle position would start to run away, along a positive
exponential. After about 0.4 seconds, control would be abruptly restored and
the error would start declining swiftly toward zero. The only way for this
to happen would be for some function inside the controller to have reversed
its sign, restoring negative feedback. We assumed it was the output function
that changed sign, and modeled accordingly.
To me, the problem is never that of finding A model that will achieve a
given performance. There's always some way to do it, and often 10, if you
put no limits on the connections you can draw in the diagrams or the
complexity of the computations you assume the system to be carrying out.
What I'm always looking for is the model that does the job in such a simple
and direct way that I feel "Yes, that's just GOT to be how it's really done
-- there's no simpler way." This is why I really like the Little Man's
kinesthetic control systems, which are simply copies of the way the reflexes
actually are wired up. There's not an extra pathway or an unnecessary
computation in that model; it just couldn't be any simpler. Yet it
stabilizes the jointed arm in three dimensions in a way that would be very
hard to figure out using Jacobians and all that stuff. Obviously, in the
spinal reflexes, there's no place to put the machinery that would be needed
to compute Jacobians or all the inverse kinematic and dynamic computations
that are needed to do it the way some people think it must be done. These
systems work with the brain chopped off at the thalamic level!
All this may explain why I feel sort of resentful about models that merrily
assume the most fantastically complex computations and bypass all questions
of how they get the necessary input of information. When you put no
restrictions on the complexity, all connection with the real system gets
lost. If you can solve the equations, you've got a model! To me, that seems
like just plain cheating: it's doing the easy part without considering
implementation at all. What's most frustrating is that you have to admit
that the model would work, if all those computations could actually be done
and all the required information were actually available with infinite
precision, and if the brain were so large that the organism would have to
carry it around on a fork-lift.
I think that any PhD program in PCT should involve a full set of courses on
circuit design, with analog being given at least as much attention as
digital. The problem with modeling is that when you're just learning how to
do it, coming up with even ONE model seems like a major triumph. But after
you've designed and built a lot of circuits, you begin to see that this is
only a minor part of the problem, especially when you're reverse-engineering
an existing system. There is a whole array of designs that would do the job;
the real problem is to pick a design that handles the problem simply,
efficiently, and in a way that could be expanded without starting over from
scratch. If you need a Cray (or Fujitsu) computer to simulate limb movements
in real time, you've got the wrong model, no matter how proud you are of
having solved the problem. There ain't no Cray computer in the spinal cord.
Just because you can write an equation doesn't mean that there is anything
in there solving that equation.
Best,
Bill P.
···
The behavior of the system was interesting to watch; it appeared as though
the bacterium became interested in "exploring" its environment when it had
had its fill of nutrient. When two nutrient sources were placed some
distance apart on the screen, this exploratory behavior often allowed the
bacterium to "discover" the second source and take advantage of it when
on-board nutrient level fell and the organism was once again in need of
food. One might have thought that a rather complex system would be required
to explain such a pattern. It is quite possible that a similar mechanism
may be at work in animals that under one set of conditions are, e.g.,
phototropic, and under another, photophobic. Also, one characteristic of
human hunger is that inputs such as the smell of dinner cooking on the stove
become far more attractive as hunger level goes up. Perhaps this, too, may
be explained in terms of an increase in system gain.
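In case it helps, here is a toy one-dimensional version in Python of the
sort of mechanism I have in mind. Every constant and name is invented for
the illustration; this is not my actual program:

import random

REF_LEVEL = 100.0    # reference for on-board nutrient ("satiation point")
BASE = 1.0           # mean tumbling interval when there is no bias
K = 0.05             # converts hunger * sensed gradient into an interval bias

def concentration(x):
    # Illustrative 1-D nutrient gradient peaking at x = 0
    return max(0.0, 50.0 - abs(x))

def run(steps=4000, dt=0.05):
    x, direction, stored = 40.0, 1.0, 50.0
    prev_c, clock = concentration(x), 0.0
    for _ in range(steps):
        c = concentration(x)
        dc = (c - prev_c) / dt       # sensed rate of change of nutrient
        prev_c = c
        hunger = REF_LEVEL - stored  # higher-level error signal ("hunger")
        # Hunger sets the lower system's gain; when satiated (hunger < 0)
        # the bias reverses, so a rising gradient now hastens tumbling
        bias = K * hunger * dc
        clock += dt
        if clock >= BASE + bias:     # a favorable bias postpones the tumble
            direction = random.choice([-1.0, 1.0])
            clock = 0.0
        x += direction * 5.0 * dt
        stored += (0.1 * c - 0.5) * dt   # nutrient intake minus metabolism
    return round(x, 1), round(stored, 1)

print(run())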
Regards,
Bruce