feed-back and feed-forward - equivalent!

[From Robert Kosara, 970414.2130 MET DST]

  What would a control system look like that consisted of both feedback
and feed-forward circuits? I want to show that a feedback controller can
contain feed-forward parts, without a change in its principal design.

  Say we have a system that can't react fast enough to stimuli (I know
you don't like that word, but what can I use instead?) in feedback
mode, so there has to be feed-forward. But we want to see what the
fed-forward action has led to, in order to control the whole system
using a higher level negative feedback loop.
  A feed-forward circuit can be built of a feedback circuit which gets a
constant value (0) as the feedback signal. The comparator will generate a
signal that changes with the input only: o = i-c, where o is the output,
i the input, and c the constant at the negative input of the comparator.
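
  A minimal sketch (in Python; the names and numbers are made up purely
for illustration) of how one comparator can play both roles, depending on
what is wired to its negative input:

  def comparator(input_signal, negative_input):
      # o = i - c: the output is the input minus whatever arrives at the
      # negative input of the comparator.
      return input_signal - negative_input

  # Closed-loop use: the negative input carries the perceived value of the
  # controlled variable, so the output is an error signal.
  error = comparator(5.0, 3.0)

  # 'Feed-forward' use: the negative input is held at a constant (0), so
  # the comparator simply passes the input on to the next stage.
  forwarded = comparator(5.0, 0.0)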

  But we want to have negative feedback for the higher level control
still. What do we have to do? We feed the input directly to the higher
level system, which can then generate new output values. But it can do
more: It can try to predict inputs and send appropriate actions ahead of
time, so that they will be performed at the time of the predicted changes
in the outside world.

  Let me give you an example: There is a large stone lying in front of
me and, for some reason, I want to lift it. So I reach down for it, and
apply a certain force to move it. But this special stone is different
from the ones I have lifted before: It's made of plastic, hollow, and
mainly consists of air. So the force I apply is too large. It takes me
some time to realize that, and I have lifted it over my head before I can
decrease the force to the amount appropriate for this special
'stone'. The applied force has clearly been planned, and it took the
higher level negative feedback control some time to react (plus time
for the signals travelling to and from the higher-level control system). The
next time
I want to lift a stone, I will take this experience into consideration,
and will try the smaller force first.

  This may be common knowledge anyway, but it's certainly new to me ...

  Robert

***************************************************************************
Remember: All things being equal, you are bound to lose.
***************************************************************************
   _ PGP welcome! email: rkosara@wvnet.at
  /_)_ / __ __ 7 //_ _ __ __ __ or: e9425704@student.tuwien.ac.at
/ \(_)/_)(- / / /\(_)_\(_// (_/ http://stud2.tuwien.ac.at/~e9425704/
*** Student of Computer Science and Medicine in Vienna, Austria, Europe ***
***************************************************************************

[From Bruce Gregory (970414.1610 EST)]

Robert Kosara, 970414.2130 MET DST

  Let me give you an example: There is a large stone lying in front of
me and, for some reason, I want to lift it. So I reach down for it, and
apply a certain force to move it. But this special stone is different
from the ones I have lifted before: It's made of plastic, hollow, and
mainly consists of air. So the force I apply is too large. It takes me
some time to realize that, and I have lifted it over my head before I can
decrease the force to the amount appropriate for this special
'stone'. The applied force has clearly been planned, and it took the
higher level negative feedback control some time to react (plus time
for the signals travelling to and from the higher-level control system). The
next time
I want to lift a stone, I will take this experience into consideration,
and will try the smaller force first.

I follow your example, but don't see any need for
"feed-forward". All you would in practice do is set a lower
reference level for the tension in your biceps and proceed
via the usual closed-loop negative feedback.

Bruce Gregory

[Avery Andrews 970414.1406PDT]
  (Robert Kosara, 970414.2130 MET DST)

I don't think there's any need for `feed-forward' in the stone-lifting
case. The lifting effort will be produced by an error-signal w.r.t the
stone's actual and desired positions; the force will be determined by
circuit-elements that map this error-signal into reference levels for
muscle-tensions; I think the observation is that this mapping is to
some extent influenced by observation of the nature of the object
being lifted. Of course you might want to call adjustments of gain
on the basis of perceptions of the environment a form of `feed-forward'
(I sometimes call them `smart output functions'), but this is maybe
an illustration to the effect that the term `feed forward' is probably
too vague to be useful.

  Avery.Andrews@anu.edu.au

[From Bill Powers (970414.1423 MST)]

Robert Kosara, 970414.2130 MET DST --

What would a control system look like that consisted of both feedback
and feed-forward circuits? I want to show that a feedback controller can
contain feed-forward parts, without a change in its principal design.

The first question I would ask is "what is a feed-forward circuit," and the
second question is "Why would you want to use such a thing?"

Say we have a system that can't react fast enough to stimuli (I know
you don't like that word, but what can I use instead?) in feedback
mode, so there has to be feed-forward.

For "stimuli" you could use "changes in the perceptual variable." In the
normal PCT control system, the perceptual input function is connected
directly to the comparator, which in turn is connected directly to the
output function. So the speed of the reaction to changes in the input
variable is determined by how long signals take to go from input to output.
How is a feed-forward connection going to improve on that, except perhaps by
eliminating the comparator (a millisecond or so delay)? You will need a
perceptual input function and an output function in any case, so you can't
shorten any delays they might involve. Just what saving of reaction time can
be accomplished?

But we want to see what the
fed-forward action has led to, in order to control the whole system
using a higher level negative feedback loop.
A feed-forward circuit can be built of a feedback circuit which gets a
constant value (0) as the feedback signal. The comparator will generate a
signal that changes with the input only: o = i-c, where o is the output,
i the input, and c the constant at the negative input of the comparator.

I presume you're using the term "input" where we use "reference signal" in
PCT -- rather than "sensory input", which is where we normally use the term
"input." I know that control engineers use "input" to mean reference signal,
but that's confusing because there is also a sensory input. It would be a
good idea to draw a diagram and label its parts.

But we want to have negative feedback for the higher level control
still. What do we have to do? We feed the input directly to the higher
level system, which can then generate new output values. But it can do
more: It can try to predict inputs and send appropriate actions ahead of
time, so that they will be performed at the time of the predicted changes
in the outside world.

Does this prediction function save any time when there is an unexpected
change in the perceptual variable? In a PCT system, since the sensory input
is connected to the output by almost the shortest possible path, no extra
computation time is required, whereas if some sort of prediction had to be
made, we would expect a longer time to be required in order to make it.

Let me give you an example: There is a large stone lying in front of
me and, for some reason, I want to lift it. So I reach down for it, and
apply a certain force to move it. But this special stone is different
from the ones I have lifted before: It's made of plastic, hollow, and
mainly consists of air. So the force I apply is too large. It takes me
some time to realize that, and I have lifted it over my head before I can
decrease the force to the amount appropriate for this special
'stone'. The applied force has clearly been planned, and it took the
higher level negative feedback control some time to react (plus time
for the signals travelling to and from the higher-level control system). The
next time
I want to lift a stone, I will take this experience into consideration,
and will try the smaller force first.

This is a very cognitive control system you're describing, and it's
certainly not a good way to lift objects. Your spinal control systems
already supply the fast loops needed to exert position control, which
automatically adjusts the force-control systems in a matter of 10
milliseconds or so, so there should be no problem with loads of varying
weights and masses.

If, however, you consciously decide that you're going to apply a specific
upward _force_, then you have to cognitively estimate the force and set a
reference signal for the sensed force that appears as you pull upward on the
object. If you have overestimated the required force, you will accelerate
the object upward without limit; if you underestimate it, the object won't
rise at all. It's much better to pick a _position_ to which you want to
raise the object, and then let the automatic lower-level systems provide the
necessary force all by themselves. They don't need to be told how much force
to exert (as long as it's not more than they can produce). You can't devise
a feedforward system that will work any faster than these control systems
already work.

The problem here is that you're dealing with a made-up example in which
imaginary data are used. If you did any experiments with lifting identical
boxes containing different weights of contents, you would quickly find that
the result you imagine doesn't happen. People who suggest this result are
really deducing what must happen according to the model of behavior they
believe in; they're not talking about actual observations.

If you want to see what actually happens, go to a place where they handle a
lot of packages of varying weights, like an airport baggage-handling
department, or a package-sorting department such as they have at United
Parcel Service (or its Viennese equivalent). Here people work all day
picking packages off a conveyor belt with no idea of whether they will be
heavy or light, and quite successfully move them from one belt to another or
onto a cart, never flinging the light ones over their heads and seldom
dropping the heavy ones (unless they belong to me). They use position
control (or, if they're throwing the packages, velocity control), not force
control.

It's only under a cognitive, top-down, command-driven model that we would
imagine computing forces in advance of lifting something. In a PCT model, no
such thing is required. Once the position reference is given, the error
signal enters the force-control systems, its magnitude being adjusted in a
matter of milliseconds to make the sensed position be the reference
position. No higher system has to specify the force. The weight of the
object determines the force, through the local feedback path.

If you're going to use PCT, why not use it from the beginning? When you try
to mix it with other approaches, you invariably start making assumptions
appropriate to other models, and imagining phenomena that don't actually
happen, even though other models lead you to expect them to happen. If you
start with the idea of a hierarchical control model, you can handle all the
phenomena -- even so-called feed-forward phenomena -- that other models
handle, and a lot more that they can't handle or that they predict incorrectly.

How long has it been since you picked up a box, thinking it was full, and
threw it over your head because it was actually empty? Have you EVER done
that? Or does it just sound like something that ought to happen?

While there are cases where we actually prepare for action ahead of time, I
don't know of any that can't be handled perfectly well in a model that
uses a hierarchy of negative feedback control systems. Why give up on that
model before you've even tried it?

Best,

Bill P.

[Martin Taylor 970415 11:30]

[Bill Powers (970414.1423 MST)]

While agreeing with most of what you say, I must demur on one point.

How long has it been since you picked up a box, thinking it was full, and
threw it over your head because it was actually empty? Have you EVER done
that? Or does it just sound like something that ought to happen?

Yes, it has happened to me (not over the head--the control systems correct
fast enough to avoid that, but enough to launch the case quite violently).
And a similar event happened last week. I was in the process of starting
to push on a door I wanted to open, when someone opened it from the other
side. I stumbled forward and had to be quite adroit not to crash into the
other person.

It doesn't happen very often. And it doesn't happen if one is uncertain
as to the weight of the box or whatever.

While there are cases where we actually prepare for action ahead of time, I
don't know of any that can't be handled perfectly well in a model that
uses a hierarchy of negative feedback control systems. Why give up on that
model before you've even tried it?

I don't see why the notion that reference signals are supplied to a level
from the output at a higher level, coupled with the notion that higher
level controls are slower than lower level controls, is "giving up on"
the model of a hierarchy of negative feedback control systems. That
structure seems to have the right performance, doesn't it?

Martin

[From Bill Powers (970415.0936 MST)]

[Avery Andrews 970414.1406PDT]

writing to Robert Kosara, 970414.2130 MET DST --

I don't think there's any need for `feed-forward' in the stone-lifting
case. The lifting effort will be produced by an error-signal w.r.t the
stone's actual and desired positions; the force will be determined by
circuit-elements that map this error-signal into reference levels for
muscle-tensions; I think the observation is that this mapping is to
some extent influenced by observation of the nature of the object
being lifted. Of course you might want to call adjustments of gain
on the basis of perceptions of the environment a form of `feed-forward'
(I sometimes call them `smart output functions'), but this is maybe
an illustration to the effect that the term `feed forward' is probably
too vague to be useful.

This is essentially my position. The problem is that the term "feed-forward"
isn't really defined. What direction is "forward?" If we try to deduce what
is meant by analogy with feed-"back", it would seem that feedforward would
require a chain of processes running counter to the direction of effects in
the feedback loop. In other words, from the output to the comparator to the
input and back to the output again.

The meaning of "forward" and "backward" could be interpreted in terms of a
flow between central processes and peripheral ones. A signal going from the
center toward the periphery would be a "feedforward" signal, while one going
from the periphery toward the center would be a "feedback" signal. The
problem with this interpretation is that even in a single negative feedback
system we can find signals going both ways. The signal from the outside
world through the input function to the comparator is clearly a "feedback"
signal, but the signal going from the comparator to the output function (or
from the reference signal through to the output) is equally clearly a
"feedforward" signal. Under this interpretation, therefore, ALL control
systems use both "feedback" and "feedforward."

What are usually referred to as "feedforward" effects are really just
actions of higher-level feedback control systems. When you prepare to pick
up an object that you judge might be heavy, you don't decide on a force to
apply; what you do is position your body so it won't be unbalanced if the
load is heavy, and you choose whether to do the lifting with your arms or,
from a squat, with your legs. Those are higher-level control processes
concerned with controlling the side-effects of lifting something that might
prove to be heavy. You won't know how much force you're actually going to
apply until you actually start lifting. Then, as you lift the load through a
series of positions, or at some specific velocity, you will find your
muscles producing whatever amount of force is required by the real load.

The only way you can foul up this nice multileveled control process is to
use a high-level system to compute how much force to apply, and then apply
that amount of force. This amount of force will, except by the wildest of
improbable chances, be either too much or too little. If too little, the
load won't move at all. If too much, the load will accelerate upward
according to the equation A = F/M - g. Then, if you have greatly
overestimated the required force, you _will_ fling the load over your head.

Best,

Bill P.

[Robert Kosara, 970417.2030 MEST]

[Avery Andrews 970414.1406PDT]

I don't think there's any need for `feed-forward' in the stone-lifting
case. The lifting effort will be produced by an error-signal w.r.t the
stone's actual and desired positions; the force will be determined by
circuit-elements that map this error-signal into reference levels for
muscle-tensions; I think the observation is that this mapping is to
some extent influenced by observation of the nature of the object
being lifted.

  But this observation is not in a closed-loop relation with the object!
This is what I would call a feed-forward circuit (see below).

... the term `feed forward' is probably too vague to be useful.

  and [Bill Powers (970414.1423 MST)] wrote

The first question I would ask is "what is a feed-forward circuit," and
the second question is "Why would you want to use such a thing?"

  Let me try to answer the first question with a simple block diagram:
This is my picture of a PCT-type closed-loop negative feedback circuit:

  Reference Level ---->CMP-------> Real World (CV)
                        ^              |
                        |              |
                        +--------------+

  The comparator CMP passes the difference between the Reference Level and
the CV to the next level (or a device that affects the real world).

This can be changed into an open-loop 'feed-forward' circuit:

  Reference Level ---->CMP-------> Real World ('CV')
                        ^
                        |
                        constant (maybe 0)

  The first point of my last posting was that a negative feedback loop can
act as if it were a feed-forward circuit, and is thus the more general
concept. Please note that I am not trying to use the feed-forward circuit
for anything other than forwarding the value to the next stage (which is
why it's called 'feed-forward' ;-)

  The second point (answering your second question) was that there is
planning in human behaviour, and it has a purpose. When you learn
something, or do it for the first time, you will be very slow at it.
Typing on a keyboard, for example. You probably never waste a thought on
how your fingers hit the correct keys in the right sequence, when you
write your postings. But there was a time when I (and every one of us)
had to think of every single key, find it, hit it, think of the next
letter in the word being typed, etc. We learned to type while thinking,
and we got quite fast, because we don't have to go through that process
any more. We are able to plan. When I am about to type in my password
(which I have been using for over six months now, and which I type in so
fast that people think I'm pulling their legs by hitting all the keys at
once ;-), my hands take on a certain posture in anticipation of the
sequence of movements to go through. This makes the every-day things we do
more efficient, and we can use our brain-power for other things!

For "stimuli" you could use "changes in the perceptual variable."

  Excuse me for writing this 'I don't know what to use instead of
stimulus'-nonsense ... I know a bit more about PCT now than my use
of language suggests ... at least I make myself think I do ...

How is a feed-forward connection going to improve on that, except perhaps
by eliminating the comparator (a millisecond or so delay)? You will need a
perceptual input function and an output function in any case, so you
can't shorten any delays they might involve. Just what saving of
reaction time can be accomplished?

  The delay will become longer, of course, in the situation where the
predicted input (and the output that was thought to be appropriate) is
wrong, as in the case of the stone-lifting example. But I am not saying
that planning is a part of each and every behaviour. All I am saying is
that there is planning in _some_ of human behaviour, and this can be an
improvement. But it can be a disadvantage as well, and these are the cases
that make us ask ourselves, why we did that. When walking down a dark
staircase, for example, I once took a step expecting another step (of the
staircase), but the cellar floor was already there, and my foot hit it
quite hard. The opposite can happen as well: expecting this
next step to be the last, but it isn't ... it feels like you're falling
into a deep hole for a moment. You cannot explain this with simple
position- or speed-control without planning (can you?)!

  So the delay, as I said, will become longer. But there is also an
advantage: The speed of the actions carried out can be higher, since the
planning can be done in less time (tighter circuits), and can be sent to
the effectors at that speed. And when the planning enables you to
anticipate a change in the CV and act _before_ the actual change occurs,
the longer initial delay will have no effect at all.

  In the password-example from above, I would never be able to hit the
keys so fast that not even I myself am able to follow the typing. I know
that I have made a mistake only several key-strokes after the wrong one!
This is like an assembly line: It takes the same time for a single car to
be built (or even longer), but during the same time-span, more cars leave
the assembly line than when each would be built separately. (and it takes
longer until the first car leaves the assembly line after switching it on.
But as soon as the first one is finished, the rate of cars coming out of
the factory is very large) And if you start the assembly line early
enough before you need the first car, you will have an advantage.

Does this prediction function save any time when there is an unexpected
change in the perceptual variable?

  Of course not! This is why a model that requires planning for every kind
of behaviour must fail. But for situations where the changes in CV can be
expected to be predictable (or at least deviate little from the predicted
value), planning will save time.
  It does not, when something unexpected happens, of course, which is why
I lift the hollow plastic 'stone' over my head: It takes time until the
error is recognized and corrected. But this delay (and the examples above)
made me think of planning in the first place!

  I only exchanged the object being lifted, by the way. This example (with
a fake laptop computer made of cardboard (a friend of mine got it at a
computer fair), but which looked so real, it completely fooled me)
actually happened to me. And the other cited examples did as well. It's
not made up.

Why give up on that model before you've even tried it?

  I am not giving up on PCT! I am, in fact, doing some work in that
direction at the moment (as you will see in a couple of weeks).

  What I was trying to do (although not very successfully) was to show
that feedback can be used to create feed-forward circuits as well, and is
thus the more general concept. And there are situations where planning
seems to be part of our behaviour.

  Regards,

  Robert


[From Bill Powers (970420.0100 MST)]

Robert Kosara, 970417.2030 MEST --

This can be changed into an open-loop 'feed-forward' circuit:

Reference Level ---->CMP-------> Real World ('CV')
                       ^
                       |
                       constant (maybe 0)

The first point of my last posting was that a negative feedback loop can
act as if it were a feed-forward circuit, and is thus the more general
concept. Please note that I am not trying to use the feed-forward circuit
for anything other than forwarding the value to the next stage (which is
why it's called 'feed-forward' ;-)

This concept assumes that the reference signal can be converted into a
desired effect in the Real World without any feedback. A lot of people think
this is a feasible model, but I don't. If you say that the reference signal
is a specification for the _sensed_ state of the real world, you don't have
to worry about how "CMP" can compute the inverse kinematics and dynamics
that would be involved in converting a neural command into a specific
real-world effect, or how such a system could anticipate disturbances and
correctly compensate for them. As Hans Blom has shown us, it is possible for
such things to be done, but they involve very complex calculations of high
precision and (if they are to compete with the negative feedback
arrangement) _very_ high speed. Unless you assume that calculations by the
nervous system require no time, the negative feedback circuit will always
work faster than the open-loop arrangement you show.

The second point (answering your second question) was that there is
planning in human behaviour, and it has a purpose. When you learn
something, or do it for the first time, you will be very slow at it.
Typing on a keyboard, for example. You probably never waste a thought on
how your fingers hit the correct keys in the right sequence, when you
write your postings. But there was a time when I (and every one of us)
had to think of every single key, find it, hit it, think of the next
letter in the word being typed, etc. We learned to type while thinking,
and we got quite fast, because we don't have to go through that process
any more. We are able to plan. When I am about to type in my password
(which I have been using for over six months now, and which I type in so
fast that people think I'm pulling their legs by hitting all the keys at
once ;-), my hands take on a certain posture in anticipation of the
sequence of movements to go through. This makes the every-day things we do
more efficient, and we can use our brain-power for other things!

I agree with all you say. But planning does not necessarily consist of
planning outputs that are carried out blindly. In fact, I think it NEVER
does. When we plan something we are really planning perceptions, not
actions. Think about it. "I'm going to pick up the pencil and put it next to
the eraser." We plan perceived results, not the actions that will bring them
about. What are the actions that will result in the pencil being picked up?
What actions will result in its being moved next to the eraser? This
depends a great deal on where the hand, the pencil, and the eraser are. We
don't plan these actions; the actions automatically occur once the reference
condition (pencil picked up) is specified.

While a plan is being carried out, we monitor the perceived results and
compare them against the plan. If someone picks up the eraser, we change the
plan: move the pencil next to the calculator, or ask the person to put the
eraser back where it was. The plan is being executed closed-loop; each
element specified in the plan is brought into perception closed-loop.

All I am saying is
that there is planning in _some_ of human behaviour, and this can be an
improvement.

I can agree that some kind of plan is involved in _most_ human behaviors.
The point I am trying to make is that plans are prescriptions for a series
of perceptions, not a series of actions.

But it can be a disadvantage as well, and these are the cases
that make us ask ourselves, why we did that. When walking down a dark
staircase, for example, I once took a step expecting another step (of the
staircase), but the cellar floor was already there, and my foot hit it
quite hard. The opposite can happen as well: expecting this
next step to be the last, but it isn't ... it feels like you're falling
into a deep hole for a moment. You cannot explain this with simple
position- or speed-control without planning (can you?)!

No, it requires a plan as you say (even if erroneous). But what is planned
is the feeling of the foot touching the step, or the sensed position of the
foot (or both), not the muscle tensions that will accomplish this result. In
fact, the upset that occurs when you miscalculate occurs because the desired
degree of contact is not achieved when planned, and the lower-level systems
produce extremes of output in trying to achieve it. Also, if the plan calls
for perceiving your foot on the next step down, and it hits the floor
instead, the leg muscles can exert very large forces as they try to force
the foot to a position eight inches below the floor level.

So the delay, as I said, will become longer. But there is also an
advantage: The speed of the actions carried out can be higher, since the
planning can be done in less time (tighter circuits), and can be sent to
the effectors at that speed.

That's a brave assertion, but I think it's wrong, at least the way it came
out. There's no significant difference in speed between the open-loop and
closed-loop systems you drew. And higher-level systems must, in general, be
slower than lower-level systems to preserve dynamic stability. I think your
later remarks correct this assertion.

And when the planning enables you to
anticipate a change in the CV and act _before_ the actual change occurs,
the longer initial delay will have no effect at all.

I agree with this. But what is planned is for a certain _perception_ to
occur at the critical time. If the reference signal is issued at the right
moment, the perception (and the actual event) will occur just when needed.
There's no need to plan the _action_.

In the password-example from above, I would never be able to hit the
keys so fast that not even I myself am able to follow the typing. I know
that I have made a mistake only several key-strokes after the wrong one!

Yes, this also happens with very fast piano-playing. There is nothing to
prevent a higher-level system from issuing changes in reference signals
faster than a lower-level system can accurately match them with perceptions.
But control is poor; a disturbance occurring during a fast action simply
can't be corrected. Also, the reference signals have to be exaggerated in
magnitude, because the lower system would otherwise not experience a large
enough error to produce the action fast enough. Fast movements like these
are executed just at the edge of the maximum possible speed of control, with
accuracy sacrificed.

Of course "you" can't follow the fast typing consciously; "you" are normally
conscious from a higher-level, and slower, point of view. But your
lower-level systems can work much faster than "you" can.

This is like an assembly line: It takes the same time for a single car to
be built (or even longer), but during the same time-span, more cars leave
the assembly line than when each would be built separately. (and it takes
longer until the first car leaves the assembly line after switching it on.
But as soon as the first one is finished, the rate of cars coming out of
the factory is very large) And if you start the assembly line early
enough before you need the first car, you will have an advantage.

This is how I see it, too. At the sequence level we can reel off the
reference signals for lower systems in this assembly-line fashion; actually
we can achieve very high speeds if _different_ lower-level systems are being
used (different fingers, when typing or playing the piano). One system can
be given its reference signal before the previous system has finished
correcting its error. This is why, in either application, making repeated
strokes on the same key is much slower than using different finger-control
systems in sequence. If your password were eeeepppp, you couldn't type it
nearly as fast as epepepep, and you couldn't type that as fast as qwepoi.
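
A toy calculation along these lines (Python; the issue and settling times
are made-up numbers, and each distinct key is treated as if it had its own
finger-control system):

  def typing_time(word, issue_interval=0.06, settle_time=0.15):
      # A new reference can be handed to a given key's control system only
      # after that system has settled from its previous stroke; different
      # keys can overlap.  All times are arbitrary illustrative units.
      finger_free_at = {}
      t = 0.0
      for key in word:
          start = max(t, finger_free_at.get(key, 0.0))
          finger_free_at[key] = start + settle_time
          t = start + issue_interval
      return max(finger_free_at.values())   # done when the last stroke settles

  for w in ("eeeepppp", "epepepep", "qwepoi"):
      print(w, round(typing_time(w), 2))    # slowest first, fastest last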

Does this prediction function save any time when there is an unexpected
change in the perceptual variable?

Of course not! This is why a model that requires planning for every kind
of behaviour must fail. But for situations where the changes in CV can be
expected to be predictable (or at least deviate little from the predicted
value), planning will save time.

I hope you see now that if you think of planning as planning of perceptions
instead of actions, there need be no big difference between these
situations. The lower-level systems can handle disturbances much faster than
the higher ones can, so the planning system really doesn't have to take them
into account.

This example (with
a fake laptop computer made of cardboard (a friend of mine got it at a
computer fair), but which looked so real, it completely fooled me)
actually happened to me. And the other cited examples did as well. It's
not made up.

I stand corrected. How far over your head did you throw it? Next time, try
position control!

What I was trying to do (although not very successfully) was to show
that feedback can be used to create feed-forward circuits as well, and is
thus the more general concept. And there are situations where planning
seems to be part of our behaviour.

Planning and feed-forward are not, as I hope you can see now, synonymous.

Best,

Bill P.

[Hans Blom, 970421b]

(Robert Kosara, 970417.2030 MEST)

This is my picture of a PCT-type closed-loop negative feedback
circuit:

Reference Level ---->CMP-------> Real World (CV)
                       ^              |
                       |              |
                       +--------------+

To show equivalence between feedback and feedforward, first change
the diagram to

  Reference Level ---->CMP-------> Real World ----> CV
                        ^                            |
                        |                            |
                        +----------------------------+

Then collapse the closed loop part to a single element

  Reference Level ---->Some black box ----> CV

You now have an "open loop" system; at least, the loop is hidden.
That there is feedback _within_ the black box is not considered
important in the last diagram. The "some black box" could, for
instance, be an audio amplifier. That it has internal feedback stages
is not the important thing; that it delivers a CV accurately
resembling the (amplified) reference level is.

The difference between open-loop and closed-loop is sometimes in the
amount of detail :-).
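
A rough numerical version of this point (Python; the loop gain and the
one-line stand-in for the Real World are arbitrary assumptions): looked at
only from reference in to CV out, a working closed loop is just a box
whose output tracks its input.

  def black_box(reference, steps=1000, dt=0.001, gain=200.0):
      # Internally this is the closed loop of the first diagram; from the
      # outside only 'reference in, CV out' is visible.
      cv = 0.0
      for _ in range(steps):
          error = reference - cv            # CMP
          cv += dt * (gain * error - cv)    # simple stand-in for the Real World
      return cv

  for r in (0.5, 1.0, 2.0):
      print(r, "->", round(black_box(r), 3))   # CV comes out close to the reference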

This can be changed into an open-loop 'feed-forward' circuit:

Reference Level ---->CMP-------> Real World ('CV')
                       ^
                       |
                       constant (maybe 0)

This is not an equivalent diagram: it will behave differently from
the feedback diagram except when the Real World gain is zero, which
is hardly an interesting case.
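
The difference shows up as soon as anything disturbs the CV. A rough
sketch (Python; the leaky-integrator 'Real World', the gain, and the
disturbance are all made-up values): the only change between the two runs
is whether the CV or the constant 0 reaches the negative input.

  def run(feedback_connected, steps=2000, dt=0.01):
      reference = 1.0      # Reference Level
      cv = 0.0             # state of the 'Real World'
      gain = 50.0          # output function gain (arbitrary)
      disturbance = 0.4    # steady push on the CV from outside (arbitrary)
      for _ in range(steps):
          negative_input = cv if feedback_connected else 0.0
          error = reference - negative_input       # CMP: o = i - c
          cv += dt * (gain * error + disturbance - cv)
      return cv

  print("closed loop CV:", round(run(True), 2))    # settles close to 1.0
  print("open loop CV:  ", round(run(False), 2))   # ends up nowhere near 1.0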

Greetings,

Hans

[From Robert Kosara, 970424.2115 MEST]

[From Bill Powers (970420.0100 MST)]

Unless you assume that calculations by the
nervous system require no time, the negative feedback circuit will always
work faster than the open-loop arrangement you show.

  Not when typing on a keyboard, for example, where the number of possible
perceptions is very limited, and can be predicted very accurately. It
takes a lot of time to 'recover' from a typing error, and people usually
only interrupt their typing several keystrokes after the wrong one!

While a plan is being carried out, we monitor the perceived results and
compare them against the plan. If someone picks up the eraser, we change the
plan: move the pencil next to the calculator, or ask the person to put the
eraser back where it was. The plan is being executed closed-loop; each
element specified in the plan is brought into perception closed-loop.

  In this example, many different perceptions are involved on a number of
different levels. In a more simple example --- typing on the keyboard
again --- when I hit the wrong key, there is a difference between the
planned perception and the actual one. But it takes time until that is
recognized, a new plan developed, and the first signal of the new plan
reaches the fingers. During this time, I hit another two keys, and then my
fingers suddenly stop (no more signals in the queue). This is just an
ad-hoc explanation, of course.

I can agree that some kind of plan is involved in _most_ human behaviors.
The point I am trying to make is that plans are prescriptions for a series
of perceptions, not a series of actions.

[...]

No, it requires a plan as you say (even if erroneous). But what is planned
is the feeling of the foot touching the step, or the sensed position of the
foot (or both), not the muscle tensions that will accomplish this result. In
fact, the upset that occurs when you miscalculate occurs because the desired
degree of contact is not achieved when planned, and the lower-level systems
produce extremes of output in trying to achieve it. Also, if the plan calls
for perceiving your foot on the next step down, and it hits the floor
instead, the leg muscles can exert very large forces as they try to force
the foot to a position eight inches below the floor level.

  No problem with that, but the result is equivalent, no matter if
perceptions are planned, or actions (as far as the outside world is
concerned).

> So the delay, as I said, will become longer. But there is also an
>advantage: The speed of the actions carried out can be higher, since the
>planning can be done in less time (tighter circuits), and can be sent to
>the effectors at that speed.

That's a brave assertion, but I think it's wrong, at least the way it came
out. There's no significant difference in speed between the open-loop and
closed-loop systems you drew. And higher-level systems must, in general, be
slower than lower-level systems to preserve dynamic stability. I think your
later remarks correct this assertion.

[Me: not being able to follow my own movements when typing in a password]

Yes, this also happens with very fast piano-playing. There is nothing to
prevent a higher-level system from issuing changes in reference signals
faster than a lower-level system can accurately match them with perceptions.
But control is poor; a disturbance occurring during a fast action simply
can't be corrected.

  But why not? Because it takes the perceptual signals considerable time
to reach the higher levels, which run at a speed too high to allow waiting
for the perception before issuing the next signal! That is
what I meant by 'tight circuits': Ones that don't involve the delays that
occur in the spinal cord and the afferent and efferent nerves (from the
spinal cord to the muscles).

Also, the reference signals have to be exaggerated in
magnitude, because the lower system would otherwise not experience a large
enough error to produce the action fast enough. Fast movements like these
are executed just at the edge of the maximum possible speed of control, with
accuracy sacrificed.

  Not necessarily. When feedback is very slow, there still will be the
same errors: DOS games, for example, often presented several 'welcome' and
'option' screens before the actual games started. You had to go through
them every time, and so, after a few times, you knew what keys to press,
and did that before the actual screen appeared (during the fade-out of one
and the fade-in of the next), thus reducing the startup time. But when a
friend came to visit me, and we wanted to play together, I still hit 's'
for 'single player' ... and I am talking of turn-around times of a second
or so.

The lower-level systems can handle disturbances much faster than
the higher ones can, so the planning system really doesn't have to take them
into account.

  And when the higher-level system does not have to wait for the updated
perception, it can proceed faster (but with less accuracy, of course)!

> This example (with
>a fake laptop computer made of cardboard (a friend of mine got it at a
>computer fair), but which looked so real, it completely fooled me)
>actually happened to me. And the other cited examples did as well. It's
>not made up.

I stand corrected. How far over your head did you throw it? Next time, try
position control!

  That is the problem: I can't control these things consciously (at least
not in a real situation)! I did not throw the cardboard laptop over my
head, but the movement sure looked strange (my friends couldn't stop
laughing).

Regards,

  Robert


[From Robert Kosara, 970424.2030 MEST]

  Sorry for the delay, but I have been pretty busy in the last few days
(and still am).

[Hans Blom, 970421b]

  Reference Level ---->CMP-------> Real World ----> CV
                        ^                            |
                        |                            |
                        +----------------------------+

Then collapse the closed loop part to a single element

  Reference Level ---->Some black box ----> CV

You now have an "open loop" system; at least, the loop is hidden.

  You lost me there ... the Real World (the physical world) is a part
of the black box, in your diagram. Where does the feedback value for the
comparator come from, how is it different from the CV?

That there is feedback _within_ the black box is not considered
important in the last diagram. The "some black box" could, for
instance, be an audio amplifier. That it has internal feedback stages
is not the important thing; that it delivers a CV accurately
resembling the (amplified) reference level is.

  But the audio amplifier does not contain any loops that involve the
current position of the loudspeaker diaphragm, for example. The feedback
only controls its amplification, and is a technical means for minimizing
noise!
  Consider this example: a stepper motor is used to move a theodolite
(guess where I got that idea from ;-), and I want it to point in a certain
direction. So I send a number of impulses to the motor, which then turns
the theodolite accordingly. This is open-loop. The current position of the
theodolite is 123 (degrees, for example), the new reference level is 456.
The difference, 333, is what the comparator (software, in this example)
passes to the motor. That is what I consider open-loop. Now if the
theodolite only moved 200 steps instead of 333 (because I wasn't finished
taking the cover off, and got in its way), the open-loop system will not
know, since the Real World is _not_ part of the loop! The comparator only
knows its output, and that was correct (333 impulses), but it doesn't know
how many of these impulses actually had any effect on the theodolite.
The CV is only a certain aspect of this real world, and cannot be
considered separate. In my opinion, anyway.
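
  A toy version of this in Python (the step counts and the 'blocked'
impulses are stand-ins, not measurements): the open-loop software counts
the impulses it sent, not the steps that actually happened.

  def move_theodolite(current, target, lost_impulses=(), closed_loop=False):
      # One impulse = one degree.  Impulses whose index is in lost_impulses
      # hit the obstruction (the cover being taken off) and move nothing.
      position, sent = current, 0
      planned = target - current                     # 456 - 123 = 333
      while (target - position if closed_loop else planned - sent) > 0:
          sent += 1
          if sent not in lost_impulses:
              position += 1
      return position, sent

  print(move_theodolite(123, 456, lost_impulses=range(1, 51)))
  # -> (406, 333): the software 'believes' 456, the theodolite is short of it
  print(move_theodolite(123, 456, lost_impulses=range(1, 51), closed_loop=True))
  # -> (456, 383): sensing the position, it keeps stepping until it matches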

Regards,

  Robert


[From Bill Powers (970425.0130 MST)]

Robert Kosara, 970424.2115 MEST--

Unless you assume that calculations by the
nervous system require no time, the negative feedback circuit will
always work faster than the open-loop arrangement you show.

Not when typing on a keyboard, for example, where the number of possible
perceptions is very limited, and can be predicted very accurately.

Yes, even in that case -- the control system works faster than the open-loop
system.

Let me explain how this can be true -- it is obviously counterintuitive for
most people, although that's only because they imagine a control system to
be something hard to understand and therefore complex. What is _really_
complex is a system that has to predict things, especially if it has to
predict them very accurately. But that's not the only reason control systems
are faster. The other reason is that control systems can reach a given final
state faster than open loop systems of equal complexity.

Suppose you consider just a single keystroke while typing. How do you move
the finger against the key and exert the right pressure to depress it,
without banging on it so hard that your finger hurts or you damage the keyboard?

The simple open-loop way depends on using a calibrated signal that is
calculated to produce the correct final force. There is a conflict between
speed and the requirement that the force not be too great. A force large
enough to generate a high speed will also be a force large enough to hurt or
damage something when the finger hits the keyboard. So the signal must be
set to just the magnitude that will produce the required final force, and
that requirement limits the speed of its movement.

In a control system, when the reference signal is set suddenly to a new
state, there is initially a very large error signal. This accelerates the
finger very rapidly to a high speed. But, with rate feedback of the kind
that exists in every motor control loop, as the error begins to get smaller
(as it immediately does), the driving force immediately starts to decline,
and in fact, due to the rate component of the feedback signal, actually goes
negative, starting to decelerate the finger. When the finger touches the
key, a new source of negative feedback appears at the spinal level -- the
touch receptors are wired directly to the spinal motor neurons, with a
time-delay of around five or ten milliseconds. This greatly increases the
deceleration, so the final bump, when the key hits bottom, is far less than
it would have been if the initial force had been maintained. This means that
in order to achieve the SAME final bump, the control system can start with a
far larger acceleration of the finger toward the key than the open-loop
system can afford to do. And this is why the control system can operate
faster than the simple open-loop system.
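
In rough numbers (a Python sketch with a unit mass, made-up gains, and
arbitrary distances; it only illustrates the rate-feedback point, it is
not a model of a real finger):

  def closed_loop_press(dt=0.001):
      # Finger as a unit mass; key surface at 1.0; reference = key bottomed at 1.2.
      # Driving force = gain on the position error minus rate (velocity) feedback.
      kp, kd = 400.0, 40.0
      x = v = t = 0.0
      while x < 1.0:                           # until the finger reaches the key
          force = kp * (1.2 - x) - kd * v
          v += dt * force
          x += dt * v
          t += dt
      return t, v

  def open_loop_press(force, dt=0.001):
      # The same finger driven by a single pre-computed constant force.
      x = v = t = 0.0
      while x < 1.0:
          v += dt * force
          x += dt * v
          t += dt
      return t, v

  t_cl, v_cl = closed_loop_press()
  t_ol, v_ol = open_loop_press(force=v_cl ** 2 / 2)   # chosen to arrive at about the same speed
  print("closed loop: time %.2f, speed at key %.2f" % (t_cl, v_cl))
  print("open loop:   time %.2f, speed at key %.2f" % (t_ol, v_ol))

The closed loop starts with a much larger force (kp times the full error)
yet arrives at the key no harder, and in a fraction of the time.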

An open-loop system can be devised to have the same characteristics, but it
can no longer be a _simple_ open-loop system. Instead of just issuing a
command of constant magnitude, the system producing the command now must
issue a _timed pattern_ of commands. First it must generate a large brief
signal to get the finger moving rapidly. After a coasting period, while the
finger approaches the key, the signal must decline and go negative, creating
a large deceleration just in time to keep the finger from hitting the key
hard enough to split the skin or crack the plastic, but hard enough to
bottom the key. The magnitude and timing of these signal changes must be
produced fairly accurately, because the finger is, after all, a physical
object subject to physical laws -- you can't issue just any old signal.

The control system actually ends up issuing force signals that are patterned
just like the force signals from the ideal complex open-loop system -- but
the patterns are generated naturally by the effects of negative feedback and
do not have to be specifically computed. Whenever a computation is involved,
a little extra time is involved.

It takes a lot of time to 'recover' from a typing error, and people usually
only interrupt their typing several keystrokes after the wrong one!

This shows that higher-level perceptions are slower than lower-level
perceptions. It takes longer to detect a wrong sequence than a wrong
configuration (letter on the screen or paper). But we can also see another
fact here that is implied by the HPCT model, the fact that there is more
than one control system acting at more than one level.

When you type, there are several variables being controlled at the same
time. One is spelling -- you are watching the result on the screen, and
checking to see if the word is spelled corrrrectly -- that is, if the
sequence that is perceived is the intended sequence. Still another is
grammar or syntax -- is this good English, or French, or German that is
appearing on the screen? And another still is meaning: as I read what I am
writing, does it evoke the perceptions I am trying to communicate? All these
control processes are going on at the same time.

And of course they are hierarchically related. A sequence error can be
corrected only by a higher-level system issuing another sequence command --
erasing the letters back to the mistake, repositioning the hands, and
repeating the same sequence if the error occurred at a lower level like that
of finger positioning, or altering the sequence if the cause was a
higher-level mistake in spelling. If the correct sequence is perceived
(i.e., it matches the reference sequence), another system might decide that
the wrong word has been chosen, choose a different word, and initiate a
correction cycle so that a _different_ sequence can be produced. Or a still
higher system can decide that the words on the screen imply something
unwanted, or fail to convey the wanted meaning, and the whole sentence is
deleted in favor of a completely different one.

Understand that this is a very rough approximation. A study of typing errors
would be a very nice way to study levels of control, but nobody has done
that with PCT in mind yet. There's probably a literature on this subject,
although it may be pretty frustrating to read.

In this example, many different perceptions are involved on a number of
different levels. In a more simple example --- typing on the keyboard
again --- when I hit the wrong key, there is a difference between the
planned perception and the actual one. But it takes time until that is
recognized, a new plan developed, and the first signal of the new plan
reaches the fingers. During this time, I hit another two keys, and then my
fingers suddenly stop (no more signals in the queue).

This depends on the cause of the error. If the lower-level system fails to
move the finger to the vertical position that is requested, you are in big
trouble, because something has happened to a low-level system that will not
be easy to fix. If you've been joggled by somebody (or by a body movement of
your own like a sneeze), moving the hands to the wrong position over the
keyboard, it can take a while for the sequence-control level to perceive
that the wrong letters are appearing on the screen and stop, and for a still
higher system to initiate the erasure sequence, and restart the sequence.
Remember that perception of _sequence_ is not like perception of a single
letter-configuration: it involves a _set_ of letters received in a certain
order. To detect an error in sequence requires perceiving more than one letter.

If there has been a mistake in controlling the sequence of letters, the
wrong sequence will appear on the screen even though the finger positioning
systems are working perfectly. You will type "hte" instead of "the" not
because your finger slipped but because the sequence system reversed the
order of two reference signals. The fingers simply produced the letters they
were told to produce. If you type "the the" instead of "and the", the
sequence level is generating exactly the two sequences requested of it, but
the request was in error at the output of the next level up: the sentence
construction system got ahead of itself, issued the wrong sequence command,
and before a higher system could react to the unwanted duplication of words,
re-issued it. (Or something).

This might be a good time for Rick Marken to comment on the basis of his
experiments with perceptions of various levels.

No problem with that, but the result is equivalent, no matter if
perceptions are planned, or actions (as far as the outside world is
concerned).

Not so. If you command an action and it has the wrong effect because of a
disturbance, you will need a higher order system to detect the error and
re-issue the command (it's altogether too easy to forget about
disturbances). But if there is feedback at the lower level, the lower-level
system can correct the error without any change in the command from the
higher system. In a control system, it is the _result_, not the action, that
is specified by the reference signal. When disturbances occur they affect
the result, not the action, and it is the effect on the result that is
sensed and corrected immediately through the effect on the error signal. But
in the open-loop system, disturbances can't be detected at the level where
they have their immediate effects, because only the action is being specified.

When you plan perceptions, your actions will change as needed to
oppose the effects of disturbances; you will seldom see the same actions
occurring twice, although they will produce the same results again and
again. When you plan actions, the external observer will see repeated
actions, but varying results because of the disturbances.

The fastest possible reaction to disturbances occurs in a hierarchy of
control systems, because the disturbance is always opposed at the lowest
possible level of control, where the fastest possible sensing is done.

... it takes the perceptual signals considerable time
to reach the higher levels, which run at a speed too high to allow waiting
for the perception before issuing the next signal! That is
what I meant by 'tight circuits': Ones that don't involve the delays that
occur in the spinal cord and the afferent and efferent nerves (from the
spinal cord to the muscles).

The long delays you speak of (like waiting for the game program to get
through its introductory delay) are in the environment, not in the
perceptual systems. It takes only milliseconds for the lowest level
perceptual signals to reach the highest levels of the cortex, counting only
signal propagation speed. The delays that occur in the brain are due to the
processing time needed to _recognize_ a perception of a given kind, and to
recognize _changes_ in the perception. These can be shorter than the delays
in the environment, or longer. If they are shorter, then it is the
environment that holds up the control process. If the environment reacts
fast enough, then we see the minimum delays in the perceptual systems.

[I typed "instantly," and then, on rereading, changed that to "fast enough".]

A farmer growing wheat can imagine the process of planting, cultivating, and
harvesting in a second or two -- but doing it requires waiting for the
environment to carry out its part of the control loop at its own speed.

[I typed "waiting the the environment...". Happens more often as you get old.]

When feedback is very slow, there still will be the
same errors: DOS games, for example, often presented several 'welcome' and
'option' screens before the actual games started. You had to go through
them every time, and so, after a few times, you knew what keys to press,
and did that before the actual screen appeared (during the fade-out of one
and the fade-in of the next), thus reducing the startup time. But when a
friend came to visit me, and we wanted to play together, I still hit 's'
for 'single player' ... and I am talking of turn-around times of a second
or so.

Yes, this was an error in a higher-level system, wasn't it? You selected the
sequence appropriate to a single player, when there were several. With
practice you could probably learn not to do this.

The lower-level systems can handle disturbances much faster than
the higher ones can, so the planning system really doesn't have to take
them into account.

And when the higher-level system does not have to wait for the updated
perception, it can proceed faster (but with less accuracy, of course)!

It can proceed faster up to its own maximum speed. There is no reason it
should be less accurate as long as it is not approaching its speed limit.

I did not throw the cardboard laptop over my
head, but the movement sure looked strange (my friends couldn't stop
laughing).

The slowness with which we correct such mistakes shows that the problem is
in a higher-level system, not in the force-control system. If you expect to
lift a heavy load, you approach the lift very differently, and you may well
use force control in addition to position control. But if this happens very
often, you soon cease to have this problem (as in the case of a package
sorter). It isn't that you become clairvoyant so you can anticipate how
heavy each new package will be; you change the variable you're controlling,
so you plan a target position and let the force be whatever it is, instead
of planning how hard to lift, and letting the position be whatever it is.

Best,

Bill P.

[Hans Blom, 970428f]

(Robert Kosara, 970424.2030 MEST)

  Reference Level ---->CMP-------> Real World ----> CV
                        ^                            |
                        |                            |
                        +----------------------------+

Then collapse the closed loop part to a single element

  Reference Level ---->Some black box ----> CV

You now have an "open loop" system; at least, the loop is hidden.

You lost me there ... the Real World (the physical world) is a part
of the black box, in your diagram.

Yes! Just like the Real World (the physical world) is a part of _us_
(we have it "internalized"). The open loop diagram shows that we
realize our _internal_ goals as _external_ perceptions. The
perceptual illusion: our world is as we see it. The diagram hides how
we do that: through control.
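
To make that concrete, here is a minimal sketch (Python; the gain, step
size and loop length are arbitrary, assumed values) of a closed negative
feedback loop wrapped inside a function so that, seen from the outside, it
is just an open-loop box mapping a reference level onto a CV:

# A closed negative-feedback loop hidden inside a function.  From the
# outside it is just "reference in, CV out"; the loop is hidden, not absent.
def closed_loop_box(reference, steps=200, gain=5.0, dt=0.01):
    cv = 0.0          # controlled variable out in the "Real World"
    output = 0.0      # integrator state of the control system
    for _ in range(steps):
        perception = cv                  # immediate, noise-free perception (assumed)
        error = reference - perception   # the comparator (CMP)
        output += gain * error * dt      # integrating output function
        cv = output                      # environment: output maps one-to-one onto CV
    return cv

print(closed_loop_box(3.0))   # prints a value very close to 3.0

The caller never sees the comparator or the perception; it only sees that
the CV comes to match the reference, as if o = i.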

Where does the feedback value for the comparator come from, how is
it different from the CV?

It still is the CV. Nothing changed:

That there is feedback _within_ the black box is not considered
important in the last diagram. The "some black box" could, for
instance, be an audio amplifier. That it has internal feedback
stages is not the important thing; that it delivers a CV
accurately resembling the (amplified) reference level is.

But the audio amplifier does not contain any loops that involve the
current position of the loudspeaker diaphragm, for example. The
feedback only controls its amplification, and is a technical means
for minimizing noise!

An audio amplifier has its own particular goal, which usually is
having an output voltage identical to its input voltage amplified.
There are (or have been, at least) amplifiers that did control the
position of the loudspeaker diaphragm, although that was difficult
except at low frequencies. But why _that_ goal? What is so important
about the position of a loudspeaker's diaphragm?

Anyway, feedback can never be a means for minimizing (white) noise:
noise (pure randomness) cannot be controlled away.

Consider this example: a stepper motor is used to move a theodolite
(guess where I got that idea from ;-), and I want it to point in a
certain direction. So I send a number of impulses to the motor,
which then turns the theodolite accordingly. This is open-loop.

A fine approach, except that it has its limitations as well, such as
skipping steps when the impulse rate becomes too high or if the
output torque exceeds a limit. Within these limitations, however,
stepper motors are extremely accurate over extremely long time
periods, and they do not need feedback. When the limits are exceeded,
however, the "model" (one-to-one mapping of impulses to number of
revolutions) breaks down, resulting in erratic behavior.

Simple rule: you can trust a model only if it accurately represents
reality. If you exceed its bounds of applicability, you're in
trouble.
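
A toy version of that rule for the stepper-motor case (Python; the step
angle, rate limit and skipping behaviour are invented stand-ins, not data
about any real motor):

# Open-loop positioning of the theodolite.  Within its limits the "model"
# (one impulse = one step) is exact; past an assumed maximum impulse rate,
# steps are skipped and the real angle silently drifts from the commanded
# one.  Without feedback, nothing in the system notices.
STEP_ANGLE = 1.8      # degrees per step (a typical figure, assumed here)
MAX_RATE = 1000       # impulses per second the motor can follow (assumed)

def point_theodolite(target_deg, impulse_rate):
    commanded = round(target_deg / STEP_ANGLE)
    if impulse_rate <= MAX_RATE:
        executed = commanded                                # model holds
    else:
        executed = int(commanded * MAX_RATE / impulse_rate) # crude stand-in for skipping
    return commanded * STEP_ANGLE, executed * STEP_ANGLE

print(point_theodolite(90.0, 500))    # (90.0, 90.0): within the limits, trustworthy
print(point_theodolite(90.0, 4000))   # roughly (90.0, 21.6): the model broke down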

Greetings,

Hans

[Hans Blom, 970428g]

(Robert Kosara, 970424.2115 MEST)

Bill Powers:

Unless you assume that calculations by the nervous system require
no time, the negative feedback circuit will always work faster
than the open-loop arrangement you show.

Not when typing on a keyboard, for example, where the number of
possible perceptions is very limited, and can be predicted very
accurately. It takes a lot of time to 'recover' from a typing error,
and people usually only interrupt their typing several keystrokes
after the wrong one!

I'm with you here, Robert. Feedback is limiting and degrades
response speed whenever significant delays are present. A technical
example can be found in the processor cache of modern processors like
the Pentium. These machines operate on a number of processes in
parallel: instruction fetch, decode and execute. A problem arises
when one process has to wait for the result of another process, such
as in conditional jumps/branches where (in this case _two_) different
results are possible. If one is able to predict what the result will
be, no time is lost, and the parallel fetch will get the right next
instruction. Otherwise the already (pre)fetched and/or decoded
instructions have to be discarded (flushed from the cache) and new
instructions have to be fetched from (slow) memory and stored into
the cache before processing can continue. In such cases of
information exchange between parallel processes, full parallelism
must break down. Lots of effort is therefore expended on how best to
predict which branch will be taken; a great many different algorithms
have been invented. Yet, parallelism hardly ever reaches speed gain
factors of more than 10 or 20, unless great effort is expended in the
design of the parallel streams -- except for trivial cases where
there is no information exchange between parallel streams.

The same applies to industrial (control) processes with a significant
delay in the loop. Say a step disturbance arises. It is sensed after
some delay T. During this time T no corrective action is, of course,
possible: the disturbance has not been perceived yet. As soon as it
is perceived, control action ensues, but what its effect will be can
only be noticed after an additional delay T, making a total delay of
2*T from the start of the disturbance. Thus, if the delay is large,
control must necessarily be slow. It is in these cases where open
loop and feedforward help most. That this has not been appreciated in
PCT is, maybe, due to the fact that delays in the loop have not yet
received attention -- at least not that I'm aware of.
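
Here is a small simulation of that timing argument (Python; the delay T,
the gain and the step sizes are assumed values). The reference is zero, a
step disturbance of +1 starts at t = 0, the disturbance is first perceived
at t = T, and the effect of the correction only shows up in the perception
at t = 2*T:

# One transport delay T in the perceptual path.
from collections import deque

dt, T, gain = 0.01, 0.5, 1.5
sense_pipe = deque([0.0] * int(T / dt))   # what is perceived now happened T ago

output = 0.0
for step in range(int(3.0 / dt)):
    t = step * dt
    cv = output + 1.0                     # CV = effect of action + step disturbance
    sense_pipe.append(cv)
    perception = sense_pipe.popleft()     # a T-second-old view of the CV
    error = 0.0 - perception
    output += gain * error * dt           # integrating output function
    if step % 50 == 0:
        print(f"t={t:4.2f}   cv={cv:6.3f}   perceived cv={perception:6.3f}")

In the printout the CV sits uncorrected until T has passed, and the
perceived CV does not begin to improve until 2*T after the disturbance.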

Yet, the case of the brain with its small dimensions/distances and
concomitant high processing speed is much like the Pentium whose
on-chip calculations proceed much faster than if it must communicate
with the periphery along "long-distance" and hence slow paths. With
the lesson, maybe, to try to do as much as possible locally (in the
brain) before "commands" are "slowly" sent to the periphery and the
results of the actions are even more "slowly" received.

I recently had a student who designed a controller around a heart-
lung machine, with the goal to keep the arterial oxygen content
constant by manipulating the oxygen percentage of the gas supply to
the membrane oxygenator. This process is dominated by delays. The
main finding was that the properties of the controller had little
influence on how errors were corrected, and in particular that the
maximum error after a step disturbance hardly varied, whether the
controller was fast and aggressive or slow and conservative. In this
case, by the way, the design resulted in a PCT-like servo-system (a
PID-controller); more sophistication just did not pay.

Is this heresy, Bill? :wink:

Greetings,

Hans

[From Bill Powers (970428.1700 MST)]

Hans Blom, 970428f --

Replying to Robert Kosara, 970424.2030 MEST, you said:

Anyway, feedback can never be a means for minimizing (white) noise:
noise (pure randomness) cannot be controlled away.

If you apply a white noise disturbance to the CV of a PCT control system,
the system will try to correct the random variations. It will be able to
cancel the low-frequency variations, but not the high-frequency part of the
spectrum. If the perceptual input function is given a low-pass
characteristic, such that it includes the entire bandwidth of good control,
the high-frequency component of the noise will not get through (much) to the
perceptual signal, and of course the _very_ high frequency components will
not appear at all. So from the standpoint of the control system, the
controlled perception will not be much disturbed.
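
A sketch of that point (Python; a slow and a fast sinusoid stand in for the
low- and high-frequency parts of the noise, and the integrating gain is an
assumed value):

# Reference = 0.  The disturbance has a slow part (0.1 Hz, amplitude 1.0)
# and a fast part (20 Hz, amplitude 0.2).  With an assumed integrating gain
# of 30 the control bandwidth is a few hertz: the slow part is almost
# entirely cancelled, the fast part passes through nearly untouched.
import math

dt, gain = 0.001, 30.0
output, residual = 0.0, []
for step in range(20000):                 # 20 seconds
    t = step * dt
    disturbance = math.sin(2 * math.pi * 0.1 * t) + 0.2 * math.sin(2 * math.pi * 20 * t)
    cv = output + disturbance
    output += gain * (0.0 - cv) * dt      # integrating output, perception = CV
    if t > 10.0:                          # skip the start-up transient
        residual.append(abs(cv))

print(max(residual))   # about 0.2: the slow component (amplitude 1.0) is gone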

Best,

Bill P.

[From Bill Powers (970428.1708 MST)]

Hans Blom, 970428g --

I'm with you here, Robert. Feedback is limiting and degrades
response speed whenever significant delays are present.

We have explored the effects of delays in PCT control systems. The output
gain of the integrating control system is simply adjusted for (relatively)
optimum performance. I am sure that there are ways of calculating this
optimum gain from advanced control theory, but since we use simulations the
easiest way is to raise the gain until error corrections occur as quickly as
possible without overshoot.
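
The procedure itself can be sketched in a few lines (Python; the transport
lag, the gain increment and the overshoot threshold are assumed values, not
taken from any of our models):

# Raise the gain of an integrating controller with a transport lag in its
# output path until the step-disturbance response first crosses below the
# reference (overshoot), then back off.
from collections import deque

def overshoots(gain, lag_steps=20, dt=0.01, duration=5.0):
    pipe = deque([0.0] * lag_steps)       # output reaches the CV lag_steps later
    output = 0.0
    for _ in range(int(duration / dt)):
        cv = pipe.popleft() + 1.0         # +1 step disturbance, reference is 0
        output += gain * (0.0 - cv) * dt
        pipe.append(output)
        if cv < -0.01:                    # crossed below the reference
            return True
    return False

gain = 0.5
while not overshoots(gain):
    gain += 0.5
print("first gain that overshoots:", gain, "-- so use a little less than that")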

A technical
example can be found in the processor cache of modern processors like
the Pentium. These machines operate on a number of processes in
parallel: instruction fetch, decode and execute. A problem arises
when one process has to wait for the result of another process, such
as in conditional jumps/branches where (in this case _two_) different
results are possible. If one is able to predict what the result will
be, no time is lost, and the parallel fetch will get the right next
instruction.

I believe that this is not a prediction of what the result will be, but
simply a process of following both possible branches as far as possible. The
actual decision as to which branch to continue on is made after the result
of the conditional test is known -- but there is no prediction of the outcome
of the conditional test.
...

Yet, the case of the brain with its small dimensions/distances and
concomitant high processing speed is much like the Pentium whose
on-chip calculations proceed much faster than if it must communicate
with the periphery along "long-distance" and hence slow paths. With
the lesson, maybe, to try to do as much as possible locally (in the
brain) before "commands" are "slowly" sent to the periphery and the
results of the actions are even more "slowly" received.

In computer-based models, one is always constrained by the computer
metaphor. There are many tricks for getting around the limitations of
sequential processors, especially when there is only one CPU. One of the
problems you run into is that high-level processes must communicate with
low-level processes, and this can produce just the problems you describe,
because all levels run at the same speed.

In the HPCT model, higher systems know essentially nothing of what is going
on at lower levels. They deal with perceptions that change only slowly, and
they act on the same time-scale. Lower systems deal with faster variations,
with the lowest of all (tendon reflex) involving true transport lags of only
about 9 milliseconds. This is one of the great advantages of the
hierarchical organization. If you look into the possibility of a
hierarchical MCT model, this is one feature of the hierarchical organization
that you might find interesting.

I recently had a student who designed a controller around a heart-
lung machine, with the goal to keep the arterial oxygen content
constant by manipulating the oxygen percentage of the gas supply to
the membrane oxygenator. This process is dominated by delays. The
main finding was that the properties of the controller had little
influence on how errors were corrected, and in particular that the
maximum error after a step disturbance hardly varied, whether the
controller was fast and aggressive or slow and conservative. In this
case, by the way, the design resulted in a PCT-like servo-system (a
PID-controller); more sophistication just did not pay.

Is this heresy, Bill? :wink:

Not with respect to PCT! I think there's a basic limit to how fast control
can be if there is a significant transport lag in the system, and just about
any approach will end up at the same limit. A step disturbance is simply
going to disturb the controlled variable, until the transport lag allows the
first corrective action. No way around that, if you can't predict exactly
when the step is going to occur.

Unfortunately, discussions of lag in the psychological/neurological
literature don't distinguish between true transport lags (pure delays
independent of frequency response) and integrative lags (frequency response
falls off at 6 db per octave). The speed limits are far higher for
integrative lags. In the case of the spinal reflexes, the main "lag" is the
roughly 50-millisecond integrative lag in the muscle, and that does not mean
that the time constant of a response to a disturbance is 50 milliseconds!
The actual limit is set by the 9-millisecond transport lag.
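
A sketch of the first half of that point (Python; the loop gain and the
purely proportional output function are assumptions made only for the
illustration), showing that a 50-ms first-order lag as the only slow
element still allows correction in well under 50 ms:

# The slow element is a 50-ms first-order ("integrative") lag standing in
# for the muscle; the output is a simple proportional gain.  A 9-ms
# transport lag, in contrast, would by definition allow nothing at all to
# happen for the first 9 ms.
dt, tau, gain = 0.0001, 0.050, 200.0
lagged = 0.0                               # state of the first-order lag
for step in range(2000):                   # 200 ms
    t = step * dt
    cv = lagged + 1.0                      # +1 step disturbance, reference is 0
    drive = gain * (0.0 - cv)              # proportional output
    lagged += (drive - lagged) * dt / tau  # 50-ms lag dynamics
    if abs(cv) < 0.1:
        print(f"error reduced to 10% after {t * 1000:.1f} ms")
        break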

Best,

Bill P.

[Hans Blom, 970429d]

(Bill Powers (970428.1700 MST))

Anyway, feedback can never be a means for minimizing (white) noise:
noise (pure randomness) cannot be controlled away.

If you apply a white noise disturbance to the CV of a PCT control
system, the system will try to correct the random variations. It
will be able to cancel the low-frequency variations, but not the
high-frequency part of the spectrum.

White noise is defined as containing all frequencies equally. If the
noise power is finite, mathematics says that there is "no" noise
power in a finite frequency band. So there is "nothing" to cancel.
Reality says, however, that white noise does not exist. Much to our
luck. Real noise can often be considered to have some bandwidth. If
that is true, an implication is that part of the noise is predictable
and can be modeled.

If the perceptual input function is given a low-pass characteristic,
such that it includes the entire bandwidth of good control, the
high-frequency component of the noise will not get through (much) to
the perceptual signal, and of course the _very_ high frequency
components will not appear at all.

That is the sensible approach, I agree. That was also why the Kalman
Filter was invented: don't take the perception too seriously, and do
not consider (filter out) its randomness.
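
In its simplest scalar form that idea looks like this (Python; the noise
variance, the initial uncertainty and the readings are made-up numbers, and
process noise is left out entirely):

# A constant quantity seen through noisy perceptions.  The filter weights
# each new perception by how much it currently trusts perception relative
# to its own running estimate.
R = 1.0                               # assumed variance of the perception noise
estimate, uncertainty = 0.0, 100.0    # vague initial belief
for perception in [2.1, 1.7, 2.4, 1.9, 2.0]:
    k = uncertainty / (uncertainty + R)        # Kalman gain
    estimate += k * (perception - estimate)    # don't take the perception too seriously
    uncertainty *= (1.0 - k)
    print(f"perception={perception:.1f}   estimate={estimate:.2f}   gain={k:.2f}")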

So from the standpoint of the control system, the controlled
perception will not be much disturbed.

Depends on what you consider "disturbed" to mean. Note that you do
not compare reference with the "pure" perception now, but with a
filtered version of the perception. If the "error" is the difference
between reference and _filtered_ perception, that error might be
small indeed. But then the next question is: why _that_ error? And
how to choose/tune that filter?

Greetings,

Hans

[Hans Blom, 970429e]

(Bill Powers (970428.1708 MST))

A technical example can be found in the processor cache of modern
processors like the Pentium. These machines operate on a number of
processes in parallel: instruction fetch, decode and execute. A
problem arises when one process has to wait for the result of
another process, such as in conditional jumps/branches where (in
this case _two_) different results are possible. If one is able to
predict what the result will be, no time is lost, and the parallel
fetch will get the right next instruction.

I believe that this is not a prediction of what the result will be,
but simply a process of following both possible branches as far as
possible.

I know of no existing algorithm following this approach. Its problem
is that a conditional branch may be followed immediately by another
one, and another one, etc. And thus it would require an exponentially
increasing number of parallel caches. The "prediction" that is
implemented in actually used algorithms is of a different sort, and
far simpler. One approach, for instance, is based on the observation
that most branches are backwards -- most programs have frequent
"repeat until" or "for" loops and few (forward) goto's. This
"prediction" may turn out to be right in only, say, 80% of the
actually occurring branches. Far from infallible, but quite an
improvement in overall speed.
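
A toy version of that heuristic (Python; the flush penalty and the branch
trace are invented figures, not measurements of any real processor):

# Static "backward taken, forward not taken" prediction versus predicting
# "never taken".  A wrong guess costs a fixed flush penalty.
FLUSH = 10    # cycles lost when the prefetched instructions must be discarded

def run(trace, predict):
    cycles = 0
    for is_backward, was_taken in trace:
        cycles += 1 + (0 if predict(is_backward) == was_taken else FLUSH)
    return cycles

# a program dominated by loops: the backward branch is taken far more often than not
trace = [(True, True)] * 80 + [(True, False)] * 5 + \
        [(False, True)] * 5 + [(False, False)] * 10

backward_taken = lambda is_backward: is_backward    # the heuristic described above
never_taken = lambda is_backward: False             # no prediction worth the name

print("backward-taken heuristic:", run(trace, backward_taken), "cycles")   # right 90% of the time
print("never-taken:             ", run(trace, never_taken), "cycles")

Far from infallible, but far fewer flushes on the loop-heavy trace.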

The actual decision as to which branch to continue on is made after
the result of the conditional test is known -- but there is no
prediction of the outcome of the conditional test.

Well, you may want to call it something other than "prediction".
Others do.

It is not the "purpose" of a model to be perfect. The "purpose" is to
have a better overall performance than without a model -- or with a
different model. That is much more modest than what you seem to have
"read into" the model-based approach, it appears. Models need not be
perfect. They are there to improve performance.

Greetings,

Hans

[From Bill Powers (970429.0953 MST)]

Hans Blom, 970429d--

Anyway, feedback can never be a means for minimizing (white) noise:
noise (pure randomness) cannot be controlled away.

If you apply a white noise disturbance to the CV of a PCT control
system, the system will try to correct the random variations. It
will be able to cancel the low-frequency variations, but not the
high-frequency part of the spectrum.

White noise is defined as containing all frequencies equally. If the
noise power is finite, mathematics says that there is "no" noise
power in a finite frequency band. So there is "nothing" to cancel.

Are you sure you're saying this correctly? That's weird! If you record a
noise signal, what you see are quantitative variations in a physical
variable. If these are variations in a disturbance variable and all lie well
within the bandwidth of good control, they will simply be cancelled (mostly)
by the action of the control system. The control system doesn't care how you
classify them. If you were to run an electronic noise signal through a
resistor, the resistor would get hot, wouldn't it?

I hope Martin will argue with you. He understands the mathematics better
than I do.

Reality says, however, that white noise does not exist. Much to our
luck. Real noise can often be considered to have some bandwidth. If
that is true, an implication is that part of the noise is predictable
and can be modeled.

Maybe the problem here is that the mathematical definition associated with
"white" implies an infinite bandwidth -- equal power at _all_ frequencies,
which is physically meaningless. And of course if the white noise signal had
finite power, then the power in any finite bandwidth would have to be zero.
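
In symbols, with a flat two-sided spectral density S_0 at every frequency:

    P_total = \int_{-\infty}^{\infty} S_0 \, df    (finite only if S_0 = 0),
    P_band  = S_0 \cdot B = 0    for any finite bandwidth B.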

Of course we could simply change the definition of white noise to something
sensible, such as equal power at all frequencies within a given bandwidth.
That's what we're talking about, isn't it?

Now that we're back to reality --

If the perceptual input function is given a low-pass characteristic,
such that it includes the entire bandwidth of good control, the
high-frequency component of the noise will not get through (much) to
the perceptual signal, and of course the _very_ high frequency
components will not appear at all.

That is the sensible approach, I agree. That was also why the Kalman
Filter was invented: don't take the perception too seriously, and do
not consider (filter out) its randomness.

So from the standpoint of the control system, the controlled
perception will not be much disturbed.

Depends on what you consider "disturbed" to mean.

Changed. Altered. Affected.

Note that you do
not compare reference with the "pure" perception now, but with a
filtered version of the perception. If the "error" is the difference
between reference and _filtered_ perception, that error might be
small indeed. But then the next question is: why _that_ error? And
how to choose/tune that filter?

Not my problem. I'm trying to understand a system that already exists, not
design a system. Maybe the filtering simply represents an unavoidable
feature of neural transmission, or of synaptic and intracellular processes.
Or maybe it has a function shaped by evolution or a Divine Engineer. That
would make no difference. If the bandwidth turns out to be variable, that
would make a difference: if the bandwidth of perception turns out to be
under the control of a higher system. We won't know if it's variable until
we find out that it's variable. And then we'll deal with it.

Best,

Bill P.