Feedforward complexities

[Martin Taylor 931115 17:20]
(Bill Powers 931115.0740)

Points of agreement:

(1) If one were to try to create an open-loop structure that performed
almost any behaviour as accurately as a human does, one would have to
live an awfully long time generating the design--in other words, we
agree that it usually can't be done.

(2) Models that have been attempted for open-loop operation have been
either very complex or very faulty, or more commonly both.

(3) There might possibly exist some behaviours that are executed open-loop,
but if any are, we don't know of them. (In one posting, I think I said
something like we could not know of them, but I would modify that to say
that we could know of them, but only from the viewpoint of an outside
observer.)

Point of disagreement:

I argue (abstractly, not abstractedly) that the simplicity or otherwise
of a structure can and should be considered independently of the simplicity
of the components of the structure. Therefore I argued that IF an open-loop
system could do a job using components no more complex than those used by
a control system to do the same job, then the open-loop system would
have to be considered simpler. I did NOT imply that such an open-loop
system could be built for any particular behaviour. See the points of
agreement.

So, I reject the implication you impute to me in:

>I deny your implicit claim that in the single-
>function system, the function can be just as simple as those in a
>control system that produces the same behavior.

Another few points of agreement.

(4) >>That does not mean that there exist no circumstances in which
>>open-loop elements exist, with or without closed-loop support.

They can exist with closed-loop support but not without it.

(5) >All these behaviors can be (and often are) described in a way
>that makes it seem that a simple open-loop system could easily
>accomplish them. But that's because of confusing outcomes with
>outputs, and because of forgetting about the kinds of
>disturbances that are always present in the real world.

(I might quibble about "always." Does a fish perceive the existence
of water?)

(6) >The only meaningful criterion here is the precision with which
>real human behavior occurs. I claim that real human behavior,
>even that which looks approximate at first glance, requires
>control in order to be anywhere near as precise as it is.

(7) >...it doesn't follow that you can gradually increase the
>amount of feedforward and gradually dispense with the feedback,
>and end up with a system that can do ANYTHING as well as -- or no
>more poorly than -- a real organism does the same thing.

(8) >As to the flying suitcase, I can imagine this happening once or
>twice, but not as a matter of course.

I agree with this because I also agree with

(9) >Even the vestibular reflex is continually being recalibrated, with a
>time constant of something like 15 minutes. And it is only an
>approximate aid to fast control of the direction of gaze, useless
>by itself for fixating on objects.

Precisely what the feedforward component of the Lang-Ham structure is
supposed to do. "Approximate" is much better than nothing. And as
I have said, I presume that recalibration happens in the same way as
the topologically continuous reorganization of perceptual function that
you have previously modelled. There's no need for complex special
mechanisms, which seem to bother you.
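A recalibration of that kind needs nothing elaborate. As a minimal sketch (my own construction, not a model Powers has published; the gains and the required value are invented for illustration), a feedforward gain can simply drift, with a 15-minute time constant, toward whatever value would null the correction the feedback loop keeps having to supply:

```python
# Minimal sketch of slow recalibration (illustrative numbers, mine):
# feedforward gain g drifts toward the value g_required that would null
# the persistent feedback correction, with a ~15-minute (900 s) time
# constant.

def recalibrate(g=0.6, g_required=1.0, tau=900.0, dt=1.0, seconds=3600):
    """Return the feedforward gain after `seconds` of slow adaptation."""
    for _ in range(int(seconds / dt)):
        # first-order drift: the residual mismatch (g_required - g)
        # is bled into the feedforward gain at rate 1/tau
        g += (dt / tau) * (g_required - g)
    return g
```

After an hour (four time constants) the gain has closed all but about two percent of the original mismatch, which is one way "approximate, but much better than nothing" can come about.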


======================

Lang-Ham:

I started this discussion on Lang-Ham as a matter of curiosity, with no
commitment to whether a like structure might exist in biological control
systems. As much as anything I wanted to hear from Hans or one of the
other "real" control engineers whether the idea had been further developed
or had been dropped as worthless. But in our interactions, especially
those over the weekend, I have been coming around to the idea that it
really might be a useful structure that has some biological relevance.

Now, you write:

>Also, as I told you in a private communication,
>I modeled the Lang-Ham system and found that the feedforward
>factor CANNOT be adjusted to provide the entire required output
>by itself; that leads to a drastic overshoot when the feedback
>connection is present.

Two comments, one of which I also made to you privately.

(1) What you modelled was a modification of the Lang-Ham structure in
which B() = 1. What Lang and Ham did was to put into B() a model of
the "expected" behaviour of the world in response to a reference step
applied to A(). They make a big point of this (Page 3 para 1), which
I ignored when I agreed with your previous analysis. What it does is
to decouple the feedforward and feedback characteristics. It ensures
that if the world behaves as "expected" there will be no error signal,
and if A() happens to provide the right actions to get the perception
to the reference, there never will be an error signal.

Since the effect of B(t) must go to unity after a long enough
time, this factor will not affect the control part of the loop--its
reference level will come to the same value as would a simple control
system. Any disturbances will be controlled in the normal way throughout
the period when B(t) differs from unity, and the feedforward connection
will not affect that. By setting B()=1, the reference signal applied
to the comparator is effective immediately, and if the world or the
output function impose any delay or finite duration impulse response
there will be an inappropriate perceptual error signal that will generate
unwanted corrections. I think that is what you modelled. (This is why
I wrote to Rick this morning that the question was still unanswered. I
expected you to try the simulation with non-unity B() before bringing
this up publicly.)

(2) In the Lang-Ham formulation, A(optimum) = 1/EP. When E is a delay,
this condition is impossible to achieve, since A would be non-causal.
So A() must be something else, inverting aspects of the world that are
time-translatable. All A() needs to do is to provide some kind of output
that is nearer correct than to do nothing would be. But B() must model
what A() does. B() is AEP, whatever A might be. To the extent that
B's model of EP is wrong, to that extent will there be spurious error
signals in the control loop, slowing the achievement of effective control.
B() CAN incorporate any transport lag in E.
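To make comments (1) and (2) concrete, here is a toy discrete-time simulation of the structure as I read it (my own construction; the plant, delay, and gains are illustrative assumptions, not Lang and Ham's values). The world is a transport lag E followed by a first-order lag P; A() is just a unity gain on the reference; B() is either a copy of E*P applied to the reference, or unity:

```python
# Toy Lang-Ham simulation (illustrative plant and gains, my assumptions
# throughout). The reference r drives the output directly through A()
# (unity gain here) and drives the comparator through B(). The world is
# a transport lag E followed by a first-order lag P. B() either models
# E*P applied to r, or is set to 1.

def simulate(model_b, steps=600, dt=0.01, delay=20, tau=0.1, ki=4.0):
    """Step r from 0 to 1; return (peak perception, peak |error|)."""
    r = 1.0
    p = 0.0                     # perceived variable (world output)
    integ = 0.0                 # integral feedback term
    out_pipe = [0.0] * delay    # transport lag E on the action
    ref_pipe = [0.0] * delay    # B()'s copy of the same lag, on r
    b = 0.0                     # B()'s first-order state (models P)
    peak_p, peak_e = 0.0, 0.0
    for _ in range(steps):
        u = r + integ                      # A()(r) plus feedback output
        out_pipe.append(u)
        u_del = out_pipe.pop(0)            # action after transport lag E
        p += (dt / tau) * (u_del - p)      # world P: first-order lag
        if model_b:                        # B() models E*P applied to r
            ref_pipe.append(r)
            b += (dt / tau) * (ref_pipe.pop(0) - b)
            expected = b
        else:                              # B() = 1: r reaches the
            expected = r                   # comparator immediately
        e = expected - p                   # error signal
        integ += ki * e * dt
        peak_p = max(peak_p, p)
        peak_e = max(peak_e, abs(e))
    return peak_p, peak_e
```

With the modelled B() the error signal stays at zero when the world behaves as "expected," so the feedback branch never fires and the output approaches the reference without overshoot. With B()=1 the error is large for the whole transport lag, the integrator winds up, and the perception overshoots well past the reference, which I take to be what you modelled.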

So I disagree with:

>With the Lang-Ham model, the model MUST be switched if the
>reference signal is to produce the right behavior by itself
>(without disturbances -- it can't do anything correctly in the
>presence of disturbances). The error signal must be cut off at
>the instant that the feedback is lost, meaning that we need a
>separate mechanism to detect the loss of perceptual signal and
>institute the switching.

============

>We still do not have ANY model of a single simple control system
>that will behave correctly when its feedback signal is lost.

Is one conceivable? I should have thought it a contradiction in terms,
rather than a problem of inadequate theory.

More agreement

(10) >Remember that we have separated "disturbance" (the external
>influence or set of influences tending to make the CEV change)
>from "fluctuation" (the actual change in the CEV). If an arm
>position control system detects a fluctuation in a position-CEV
>to the left, it can't know whether that fluctuation was caused by
>a single vector force aimed in that direction, or by two or more
>simultaneously applied vector forces aimed in other directions.
>There is no information in the fluctuation about the composition
>of the disturbing forces, or what is causing each one.

How we know that a disturbance is present is exactly the problem.
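A trivial numeric illustration of the point (the numbers are mine): two quite different compositions of disturbing forces produce the identical fluctuation, so nothing in the fluctuation itself can tell them apart.

```python
# Illustrative numbers, mine: the control system sees only the
# resultant fluctuation, which cannot distinguish these compositions.

def resultant(forces):
    """Vector sum of 2-D disturbing forces acting on a position-CEV."""
    return (sum(fx for fx, _ in forces), sum(fy for _, fy in forces))

single_push = [(-3.0, 0.0)]               # one force, straight left
two_pushes = [(-5.0, 4.0), (2.0, -4.0)]   # two oblique forces
# both produce the same leftward fluctuation: (-3.0, 0.0)
```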

>The control
>system, as presently conceived in our design or the Lang-Ham
>design, can't know WHY the perceptual signal has suddenly become
>zero. It will simply act, in proportion to the amount of error,
>in the direction that tends to make the perceptual signal more
>positive.

Right. I was wrong to suggest otherwise.

============

On getting to the bed in the dark. Are you arguing, as you seem to be,
that you can CONTROL the perception "relation between me and the bed"
when you can't detect whether or how the bed has been moved? That's
the only point at issue.

>I disagree first with the idea that you must perceive both the
>bed and your position in order to bring the perceived difference
>to zero. You can do it by affecting either component of the
>relationship.

I didn't argue that. What I questioned was whether you could control
the perceived difference. At the very worst, you could randomly move
around the room until your tactile senses allowed you to perceive the
distance to be zero. But it wouldn't do much good to go to where the
memory model of the room said the bed was, and to throw yourself on it,
if someone had moved it. Your relation perception is not controlled.
But the CEV is a fact of the world, and when it is zero, you can safely
get into or onto the bed.

"The" CEV? You're talking about manipulating many CEVs at one
level to achieve a goal-CEV at another level (or of another
kind). As soon as your orientation relative to the bed comes
close enough for you to touch the bed, from then on the control
is taken over by nonvisual systems, the same ones that get you
between the sheets even when the lights are on.

My point exactly. There's only one of these many whose control is
in dispute--the one for which the contemporaneous perceptual signal
does not exist, and which must be manipulated using imagination and
memory.

>We're back to my main point: when you look into the
>HOW, you find problems that would be immensely hard to handle
>without any feedback processes.

Forgetting the dozens or hundreds of perceptual signals that are still
present and controlled when the lights go out, the question is just
WHAT feedback processes complete the closed loop "present relative
position -> perception of relative position -> comparison to desired
relative position -> action that affects relative position -> present
relative position" if there are disturbances to the unseen position of
one of the two entities that are "relatively positioned"?

Martin