2 kinds of `model-based' control

[Avery Andrews 950627, 10:06 Eastern Oz Time]
(various postings on `Hans' Model')

It strikes me that one way of looking at Hans' model is to see it as
incorporating a `smart output function' which looks at more than just
the error signal for the perception that it's controlling; in
particular, it looks at the reference signal as well. Suppose W is the
function (determined by the environment, including disturbances, and
the perceptual capacities of the system) mapping output onto the
perception x which is to be controlled; then the system tries to find
F s.t. x_ref = x = W(F(x_ref, x_ref - x)). The `smartness' has two
aspects: first, that F looks at more than just the error signal;
second, that it is adaptive.
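
To fix ideas, here is a rough C++ sketch of the distinction (C++ only
because that's what my own fish model is in; the names and the
particular form of F are mine, invented purely for illustration, and
shouldn't be read as Hans' actual scheme):

    // `dumb' output function: sees only the error signal
    double dumb_output(double gain, double x_err) {
        return gain * x_err;
    }

    // `smart' output function: sees the reference signal as well, and
    // carries adjustable internal state, the idea being that over time
    // it comes to satisfy x_ref = x = W(F(x_ref, x_ref - x))
    struct SmartOutput {
        double k;                        // adaptable parameter(s) of F
        SmartOutput() : k(0) {}
        double operator()(double x_ref, double x_err) {
            // (how k gets adjusted over time is the adaptive part)
            return k * x_ref + x_err;
        }
    };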

I think that `smart output functions' should be carefully distinguished
from `smart input functions', which may be difficult because both
arrangements have been referred to as `model-based control'. Smart
output functions try to produce a mathematical model of the relationship
between output and perceived effect, while smart input functions try to
generate ersatz perceptions in cases where the best evidence is
unavailable or too expensive to maintain continuously (e.g., detect info
about the motion of an object, and use it to generate an ersatz
perception of its position, until you can manage to look at it again).
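
Here is the same kind of sketch for a smart input function; again the
dead-reckoning rule and all the names are my own inventions, just to
make the idea concrete:

    // `smart input function': pass the real perception through while the
    // object can actually be seen; while it can't, substitute an ersatz
    // perception dead-reckoned from the last good look.
    struct ErsatzPosition {
        double last_pos;   // position when the object was last seen
        double last_vel;   // velocity estimated at that time
        double since;      // time elapsed since the last real look

        ErsatzPosition() : last_pos(0), last_vel(0), since(0) {}

        double perceive(bool visible, double seen_pos, double seen_vel,
                        double dt) {
            if (visible) {             // real evidence: use it, reset model
                last_pos = seen_pos;
                last_vel = seen_vel;
                since = 0;
                return last_pos;
            }
            since += dt;               // no evidence: extrapolate the model
            return last_pos + last_vel * since;
        }
    };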

It seems to me to be a naive prediction of the standard PCT model, using
`dumb output functions' where output depends solely on error, that when
a large disturbance is being opposed, the error will be consistently
larger than when a small one is being opposed. This appears to be true
for some of the tasks studied by motor control people in the late '70s:
the subjects were supposed to push levers to attain a (previously
learned) position, and the greater the disturbing force, the greater the
deviation from the target position. Unfortunately, the researchers
didn't realize that this is just what you'd expect of a feedback system
with dumb output functions, as long as the gain isn't cranked up too
high. (The source for this is, I think, Stelmach (1980), _Tutorials in
Motor Behavior_, but I may be wrong about that; I don't have the
material here.)
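
The algebra behind the prediction is simple: if the output is
o = g*x_err and the perception is (to take the simplest possible
environment, my invention) x = o + d, then at steady state
x_err = (x_ref - d)/(1 + g), so the error grows in proportion to the
disturbance being opposed, and only a high gain keeps it small. A toy
simulation, with made-up numbers:

    #include <stdio.h>

    // Proportional-only (`dumb') output function: the residual error
    // left after the loop settles scales with the disturbance.
    int main() {
        double x_ref = 10.0;           // reference (target lever position)
        double gain  = 10.0;           // gain of the output function
        double slow  = 0.05;           // sluggishness of the environment
        double dists[3] = {-1.0, -10.0, -100.0};    // opposing forces

        for (int i = 0; i < 3; ++i) {
            double d = dists[i], x = 0.0;
            for (int t = 0; t < 2000; ++t) {        // let the loop settle
                double x_err = x_ref - x;
                double o = gain * x_err;            // dumb output function
                x += slow * ((o + d) - x);          // x -> o + d, slowly
            }
            printf("disturbance = %7.1f   residual error = %7.3f\n",
                   d, x_ref - x);
        }
        return 0;
    }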

On the other hand, I find that I can keep the corner of a very light
object, such as the cardboard box that Doom 2 arrived in, visually
aligned with something else with pretty much the same precision as a
heavier object, like the big edition of the Liddell & Scott Greek
Lexicon, suggesting that things are different with a visually-controlled
task.
I wonder if some experiments could be or even have been done that would reveal
something about what happens when people have to exert significant force to
maintain a reference against disturbances; if a smart output function
is involved, one might be able to see its `footprint', in that there
might be a quick & approximate adjustment phase, while the ordinary
feedback loop stabilizes, followed by a slower phase where the smart
part of the output function begins adapting.

Avery.Andrews@anu.edu.au

[Avery Andrews 950728]
  (Avery Andrews 950727)

Thinking a bit more about what I posted yesterday, plus Bill Powers'
diagram of Hans' model, I've become skeptical of the utility of
feeding x_ref into output functions. What I'd suggest instead
for total error-correction against a constant disturbance is:

                          |
                          | x_ref
                          v
                 -------> C ---------
                 |                  | x_err
                 | x_perc           O    O(x_err) = g*x_err + K
                 |                  |
                 |                  v
         - - - - P - - - - - - - -  E - - -

Where P is the perceptual function, E the effector, and O the smart
output circuit, whose smartness is supposed to consist of a capacity to
slowly adjust K until x_err vanishes.
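
Here is that loop as a toy simulation (the environment x = o + d and
all the numbers are my own inventions again, chosen only to show the
shape of the behaviour):

    #include <stdio.h>

    // The loop in the diagram: O(x_err) = g*x_err + K, with K crawling
    // slowly until the error against a constant disturbance vanishes.
    int main() {
        double x_ref = 10.0, g = 10.0, slow = 0.05;
        double d = -50.0;                 // constant opposing disturbance
        double K = 0.0, adapt = 0.01;     // adaptation rate for K
        double x = 0.0;

        for (int t = 0; t <= 5000; ++t) {
            double x_err = x_ref - x;
            double o = g * x_err + K;     // smart output function O
            K += adapt * x_err;           // slow adjustment of K
            x += slow * ((o + d) - x);    // environment, as before
            if (t % 1000 == 0)
                printf("t = %5d   x_err = %8.4f   K = %8.3f\n",
                       t, x_err, K);
        }
        return 0;
    }

The run shows the `footprint' I speculated about in the earlier
posting: the ordinary feedback part settles quickly, leaving a residual
error of roughly (x_ref - d)/(1 + g), and then K creeps up until the
error is gone. (Slowly accumulating the error into K is, in effect, a
very slow integrating term in the output function.)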

Yesterday I was thinking that feeding x_ref into O might be useful
for dealing with changes in x_ref, but overnight reflection makes me
doubt that this would actually be true.

I think it might help if the models and sims people produced came with
some more info about the real-world problem that they were addressing;
features of Hans' approach might look quite odd to people who are mostly
concerned with motor control problems, but might seem quite sensible if
one knew more about the actual context in which his system operates
(which means I will not criticise it without reading his paper).
What the system above is supposed to be modelling is, say, holding an
object in a fixed position against gravity, or maintaining a fixed position
or velocity while swimming/flying against a current.

I do have a simulated `fish' model that some of this could be explored
in, but unfortunately it's in C++ (Turbo C++ 3.0).

Avery.Andrews@anu.edu.au