Feedforward yet again

[From Bill Powers (2009.12.30.0730 MST)]

Rick Marken (2009.12.29.2210) --

a simple integral control system -- with no
prediction or feedforward -- controls better in a pursuit tracking
than in a compensatory tracking experiment with the exact same
disturbance in both cases. What's that about? Bill? Help!

Exact same disturbance? I thought that in pursuit tracking you disturb the position of the target, while in compensatory tracking you disturb the position of the cursor relative to the mouse.

Best,

Bill

[From Bill Powers (2009.12.30.0745 MST)]

Bjorn Simonsen (2009.12.29.2245 EUST) –

you will find why Tom Bourbon thinks “feedforward” is not a PCT concept.
Many thanks for those samples of the 1994 discussions. Yes, I
think Tom Bourbon pretty much disposed of the idea that feedforward has
anything to do with the past or future. He didn’t discuss the case in
which a disturbing variable in the environment is sensed and the sensory
signal is sent through some sort of computing function to the muscles, or
lower-order control systems. That’s a different subject.
The idea of “predictive control” came up even fifteen years
ago, with predicted and anticipated perceptions getting mixed up with it
just as they are today. Predictive control, just to be clear about this,
involves a prediction of the effect of an action assuming that the action
and other factors will continue as they are going now. The value
currently predicted is the controlled variable in present time, so
ordinary feedback control is involved, as Tom was trying to make clear.
Maybe Tom might clarify to make sure we’re talking about the same thing.
Predictive control does not “speed up” control; it simply makes
it possible when the feedback function is too complex.
Martin Taylor appears to be proposing that adding a derivative to a
perceptual signal amounts to prediction and can be used to speed up
control. In fact it will put a lag into the control process, because it
makes the error appear to be less than it is and thus reduces the amount
of action being used to correct the error.
I just tried it with a simple control system. Any derivative feedforward
puts a lag into the controlled variable when tracking a sine-wave
reference signal. Without any output lag I can’t get it to run at all –
it immediately oscillates. If there is already a lag (due to an output
integration) it just gets longer as the amount of derivative increases
from zero. When the amount of derivative added gets too large the
loop goes into runaway oscillation unless the gain is reduced, which
makes the lag even greater.
So derivative feedforward, as I suspected, does not work to speed up control
as one might intuitively suppose. I can see now that true predictive
control using calculations of future states would be a quite delicate
process, with instability rapidly approaching as the control gain gets
larger. No wonder the shuttle takes so long to make rendezvous with the
space station! The control process has to be slowed way down to
keep it stable. But the prediction helps because orbital dynamics are
apparently too tricky for human brains to handle without some
help.

Maybe Bruce Abbott or Rick Marken will do their own testing to see if
they get the same result I did. First set up a simple control system that
tracks a sine-wave reference signal, with whatever slowing factor is
needed for stability at a given gain, and dt set to 1/60 second. Then add
derivative, which I did by saying

p = v + Kd*(v - vlast)/dt.

v being the controlled variable, and Kd being used to vary the amount of
derivative added. Vlast is the value of v from the previous
iteration.
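Bill's test is easy to reproduce. Below is a minimal Python sketch (not Bill's original code; the 0.5 Hz sine reference, amplitude 10, and gain 20 are assumed values) of an integral-output loop with an adjustable amount Kd of perceptual derivative added, exactly as in the equation above. One way to see where the extra lag comes from: substituting p = v + Kd*dv/dt into the loop equations shows that the derivative term effectively divides the loop gain by (1 + Gain*Kd), slowing the correction.

```python
import math

def rms_tracking_error(kd, gain=20.0, dt=1/60, n=3600, freq=0.5, amp=10.0):
    """Integral-output control loop tracking a sine reference.

    kd is the amount of perceptual derivative added, as in
    p = v + Kd*(v - vlast)/dt.
    """
    v = v_last = out = 0.0
    sq, count = 0.0, 0
    for i in range(n):
        r = amp * math.sin(2 * math.pi * freq * i * dt)  # sine reference
        p = v + kd * (v - v_last) / dt  # perception with added derivative
        e = r - p                        # apparent error
        out += gain * e * dt             # integrating output function (the lag)
        v_last = v
        v = out                          # controlled variable follows the output
        if i >= 100:                     # skip the initial transient
            sq += (r - v) ** 2           # actual tracking error
            count += 1
    return math.sqrt(sq / count)
```

With these assumed numbers, rms_tracking_error(0.01) comes out larger than rms_tracking_error(0.0), and raising kd much further makes the discrete loop oscillate unless the gain is reduced, matching the behavior Bill describes.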

···

=====================================================

I’m becoming pretty certain that there was very little valid motivation
for introducing the idea of feedforward, or at least for naming it that
way. I think it was named to show that those smart-alecks over on the
engineering side of the campus weren’t the only ones who could
think up explanations for control processes. Considering what it actually
amounts to, which is sensing a disturbance and reacting directly, and
blindly, to it, it looks mostly like an attempt to preserve the old
causal view of behavior, or perhaps to explain control phenomena without
having any understanding of control theory. It’s 18th Century
engineering. Actually, if you don’t know about control theory, it’s
pretty hard to explain how the driver of a car keeps it on the road in a
crosswind. I’ve seen explanations that used “cues” in the
environment such as moving treetops or smoke or blowing scraps of paper
to explain where the stimulus for the steering response comes from. The
whole idea of cues always struck me as arm-waving, especially
“subtle cues” meaning cues we don’t actually observe. It’s
especially violent arm-waving when used to explain a process requiring
quantitatively accurate behavior -- truly an example of “… and then
a miracle occurs.”

Best,

Bill P.

[From Rick Marken (2009.12.30.0850)]

Bill Powers (2009.12.30.0730 MST)--

Rick Marken (2009.12.29.2210) --

a simple integral control system -- with no
prediction or feedforward -- controls better in a pursuit tracking
than in a compensatory tracking experiment with the exact same
disturbance in both cases. What's that about? Bill? Help!

Exact same disturbance? I thought that in pursuit tracking you disturb the
position of the target, while in compensatory tracking you disturb the
position of the cursor relative to the mouse.

I mean it's the same disturbance waveform (I've tried both filtered
Random and Sine disturbances). In pursuit tracking the disturbance
waveform is applied only to the target; in compensatory tracking the
same disturbance waveform is applied only to the cursor (target
stationary). In both cases the control system controls a perception of
the difference between target and cursor position relative to a
reference of zero. Here are the results in terms of RMS error.

                 Disturbance
               Random    Sine
Pursuit         13.08    7.61
Compensatory    19.63    11.4

These results were obtained for the following control model:

output = output + (Gain * (target-cursor) - Damping * output) * dt

with Gain set to 20 and Damping set to .01

I believe that human subjects also do better in pursuit than in
compensatory tracking. So the results of this little exercise seem to
show that this can be explained without any appeal to the ability to
predict future target positions in the pursuit case. Apparently, the
superiority of pursuit over compensatory tracking is just the way a
simple feedback control system works. I just don't understand why. It
seems like the pursuit and compensatory cases are mathematically
equivalent; except that the value of target is variable in the pursuit
case and constant in the compensatory case. Any idea why I got the
results I did?

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2009.12.30.1150 MST)]

Rick Marken (2009.12.30.0850)--

RM: I mean it's the same disturbance waveform (I've tried both filtered
Random and Sine disturbances). In pursuit tracking the disturbance
waveform is applied only to the target; in compensatory tracking the
same disturbance waveform is applied only to the cursor (target
stationary). In both cases the control system controls a perception of
the difference between target and cursor position relative to a
reference of zero. Here are the results in terms of RMS error.

                 Disturbance
               Random    Sine
Pursuit         13.08    7.61
Compensatory    19.63    11.4

These results were obtained for the following control model:

output = output + (Gain * (target-cursor) - Damping * output) * dt

with Gain set to 20 and Damping set to .01

Where was the disturbance added? Are these the RMS errors between model and real performance, or RMS tracking errors for just the computer model? I assume the latter. Did you use the same recorded table of disturbances in all four cases, or generate a new disturbance table for each run?

The difference between random and sine errors is easy to explain -- the random disturbance includes higher-frequency components than the sine waves. If you increase the frequency of the sine waves without changing the model, the tracking error will increase. You should be able to make the sine-wave error as large as the random error, or larger. The difference between pursuit and compensatory is harder to explain, especially for the sine-wave, if you used the same disturbance table throughout. Where was the disturbance added?
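Bill's frequency point can be checked directly. Here is a hedged Python sketch (standing in for the original Excel/Delphi code; the amplitude and frequencies are assumed) using Rick's model equation to track a pure sine target: raising only the frequency raises the RMS tracking error.

```python
import math

def rms_for_freq(freq, gain=20.0, damping=0.01, dt=1/60, n=3600, amp=10.0):
    """Rick's model equation tracking a pure sine target (pursuit form)."""
    cursor = out = 0.0
    sq, count = 0.0, 0
    for i in range(n):
        target = amp * math.sin(2 * math.pi * freq * i * dt)  # sine target
        err = target - cursor
        # output = output + (Gain*(target-cursor) - Damping*output) * dt
        out += (gain * err - damping * out) * dt
        cursor = out
        if i >= 50:  # drop the initial transient, as in the posts
            sq += err ** 2
            count += 1
    return math.sqrt(sq / count)
```

With these assumed values, a 2 Hz sine produces a substantially larger RMS error than a 0.5 Hz sine, because the integrating output's gain falls off with frequency.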

It's possible that the cursor position is scaled to fit the screen, so the number representing cursor position is not the same as the number indicating full-screen displacement. The ratio of (pursuit error)/(compensatory error) is 0.6663 for the random case, and 0.6675 for the sine case, a difference of 0.0012, which looks like rounding error and suggests that exactly the same ratio exists for random and sine, 2/3. I deduce that the target movements are different from the cursor movements under the same disturbance, by that factor. Is this my code you're using?

Best,

Bill P.

[From Rick Marken (2009.12.30.1330)]

RM: Here are the results in terms of RMS error.

                 Disturbance
               Random    Sine
Pursuit         13.08    7.61
Compensatory    19.63    11.4

Bill Powers (2009.12.30.1150 MST)--

Where was the disturbance added?

In compensatory, the disturbance is added to the output so that cursor
= o + d and target = 0.
In pursuit, the disturbance is added only to the target so that cursor
= o and target = d

Are these the RMS errors between model and
real performance, or RMS tracking errors for just the computer model?

Tracking errors for the computer model.

I assume the latter.

Excellent.

Did you use the same recorded table of disturbances in
all four cases, or generate a new disturbance table for each run?

I used the same recorded table of disturbance in all four cases; the
random disturbance was tabled and used for both pursuit and
compensatory; the sine disturbance was also tabled and used for both
pursuit and compensatory.

The difference between random and sine errors is easy to explain -- the
random disturbance includes higher-frequency components than the sine waves.

Right!

The difference between
pursuit and compensatory is harder to explain
especially for the sine-wave,

Tell me about it ;-)

if you used the same disturbance table throughout.

Same (tabled) disturbance for both pursuit and compensatory (see above).

Where was the disturbance added?

See above.

It's possible that the cursor position is scaled to fit the screen, so the
number representing cursor position is not the same as the number indicating
full-screen displacement. The ratio of (pursuit error)/(compensatory error)
is 0.6663 for the random case, and 0.6675 for the sine case, a difference
of 0.0012, which looks like rounding error and suggests that exactly the
same ratio exists for random and sine, 2/3. I deduce that the target
movements are different from the cursor movements under the same
disturbance, by that factor.

OK. But why? I have an idea but I'll have to do some analysis to see
if I can figure it out.

Is this my code you're using?

Yes. I'm using your code for the model:

output = output + (Gain * (target-cursor) - Damping * output) * dt

The random disturbance is taken from your tracking task, so a 1 minute
tracking trial is 3600 samples long and dt is 60/3600 sec.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2009.12.30.1505 MST)]

Rick Marken (2009.12.30.1330) --

Yes. I'm using your code for the model:

output = output + (Gain * (target-cursor) - Damping * output) * dt

If you introduced the disturbance here, you would have

output = output+(Gain*(target-cursor+disturbance)-Damping*output)*dt

for both pursuit and compensatory tracking, with target set to 0 in both cases. A positive target excursion gives the same error as a negative cursor excursion of the same size. I don't think a change of sign of the disturbance makes any difference.

So if you plot error against time using your current program, you should see the same waveform for either pursuit or compensatory tracking, one of them perhaps upside down. If not, you can probably figure out what makes the difference.

Best,

Bill P.

[From Rick Marken (2009.12.30.2300)]

Bill Powers (2009.12.30.1505 MST)--

So if you plot error against time using your current program, you should see
the same waveform for either pursuit or compensatory tracking, one of them
perhaps upside down. If not, you can probably figure out what makes the
difference.

Yes. What makes the difference is that the output in the compensatory
task is phase shifted (looks like 180 degrees) relative to the
disturbance; there is no phase shift in the pursuit case. This is why
the RMS error for the simple control model is always greater for
compensatory as opposed to pursuit tracking. Apparently, the fact that
the perceived cursor variations in the compensatory task are the
combined result of disturbance and output is what creates the phase
shift in this case; there is no phase shift when the perceived cursor
variations result only from output variations, as is the case in
pursuit tracking.

This means that pursuit tracking would be expected to be better than
compensatory tracking, given equivalent disturbances, even when the
disturbance is not patterned and, thus, not controllable in the
pursuit case by a higher level system. This will have to be taken into
account when trying to determine whether "prediction" or "higher level
control" is involved in pursuit tracking with a patterned disturbance
(target movement).

Best

Rick

PS. Copies of the Excel program that demonstrates this are available
upon request.

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2009.12.31.0400 MST)]

Rick Marken (2009.12.30.2300) --

Yes. What makes the difference is that the output in the compensatory
task is phase shifted (looks like 180 degrees) relative to the
disturbance; there is no phase shift in the pursuit case.

But then why isn't the error signal just as large? The RMS error should be the same.

The phase shift occurs because a positive change in the target position with the cursor stationary gives the same error as a negative change in the cursor position with the target stationary. Try just disabling the output (set the gain to zero); you should see equal peak-to-peak amplitude of the error whether you disturb the target or the cursor.

How about sending me all the equations you use in the model? Maybe I'll see something. I still think there's something about the scaling of the mouse movements relative to the target movements. Please don't make me figure it out by reading the equations off the spreadsheet. A7*B11 doesn't exactly stimulate my recognition circuits.

Best,

Bill P.

[From Rick Marken (2009.12.31.0910)]

Bill Powers (2009.12.31.0400 MST)--

But then why isn't the error signal just as large? The RMS error should be
the same.

Yes, I see that the phase shift would not affect RMS error. There must
be another explanation. My intuition is still that it has something to
do with the fact that cursor position is the sum of disturbance and
output in the compensatory case while cursor position is based on just
output in the pursuit case.

How about sending me all the equations you use in the model? Maybe I'll see
something. I still think there's something about the scaling of the mouse
movements relative to the target movements. Please don't make me figure it
out by reading the equations off the spreadsheet. A7*B11 doesn't exactly
stimulate my recognition circuits.

OK. Here's the visual basic program. I've annotated it a bit; I use
the spreadsheet itself as just a visible 3600x10 matrix for the data.
The cells in this matrix are addressed as Cells (Row#, Column#).

Gain = Cells(2, 2)      ' Gain set to 50 from Cells(2, 2) in spreadsheet
Damping = Cells(3, 2)   ' Damping set to .01 from Cells(3, 2) in spreadsheet
dt = 60 / 3600

Ccursor = 0             ' Starting values for cursor (Ccursor) and output (Coutput)
Coutput = 0             ' in the compensatory case
Pcursor = 0             ' Starting values for cursor (Pcursor) and output (Poutput)
Poutput = 0             ' in the pursuit case

For i = 3 To 3601       ' Loop through the next 3599 samples of disturbance

CTarget = Cells(i, 7)   ' Cells(i, 7) are all 0, the target position in the compensatory case
PTarget = Cells(i, 8)   ' Cells(i, 8) are the sine values for the target position in the pursuit case
CDist = Cells(i, 8)     ' Cells(i, 8) are also the disturbance (CDist) in the compensatory case

CErr = CTarget - Ccursor  ' Compensatory (CErr) and pursuit (PErr) errors
PErr = PTarget - Pcursor

' Now compute outputs based on error

Coutput = Coutput + (Gain * CErr - Damping * Coutput) * dt
Poutput = Poutput + (Gain * PErr - Damping * Poutput) * dt

' Compute new cursor position values

Ccursor = Coutput + CDist
Pcursor = Poutput

' Store cursor and output values in the spreadsheet

Cells(i, 9) = Ccursor
Cells(i, 10) = Pcursor

Cells(i, 16) = Coutput
Cells(i, 17) = Poutput

Next i

···

----
When program (loop) ends RMS error for cursor variations is computed
in the spreadsheet.

RMS error for the compensatory case is sqrt(mean((Ccursor - CTarget)^2)),
where CTarget is always 0 and the mean is taken over samples 50-3600 (to
eliminate any possible initial "ringing").

RMS error for the pursuit case is sqrt(mean((Pcursor - PTarget)^2)), where
the mean is again taken over samples 50-3600.

Best

Rick
--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

Hi, Rick --

Got the program. I'll put it into Delphi form and try it later today.

Bill

[From Rick Marken (2009.12.31.1500)]

Bill Powers said:

Hi, Rick --

Got the program. I'll put it into Delphi form and try it later today.

No need. I just figured it out. I put the computation of the cursor
position at the wrong point in the program. The relevant code should
look like this:

Ccursor = Coutput + CDist
Pcursor = Poutput

CErr = CTarget - Ccursor
PErr = PTarget - Pcursor

Coutput = Coutput + (Gain * CErr - Damping * Coutput) * dt
Poutput = Poutput + (Gain * PErr - Damping * Poutput) * dt

When you do it this way, the RMS error for compensatory and pursuit
tracking is exactly the same.
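The fix can be condensed into a self-contained sketch (Python here rather than the original Visual Basic; the sine disturbance table is an assumed stand-in for the recorded one). With the cursor updated before the error is computed, the compensatory error is the exact mirror image of the pursuit error at every step, so the two RMS values agree.

```python
import math

def simulate(gain=20.0, damping=0.01, dt=60/3600, n=3600):
    # Assumed sine disturbance table (stand-in for the recorded table)
    d = [10.0 * math.sin(2 * math.pi * 3 * i / n) for i in range(n)]
    c_out = p_out = 0.0
    c_sq = p_sq = 0.0
    for i in range(n):
        # Corrected ordering: compute the cursors first ...
        c_cursor = c_out + d[i]   # compensatory: disturbance acts on the cursor
        p_cursor = p_out          # pursuit: cursor is output alone
        # ... then the errors, from the freshly updated cursors
        c_err = 0.0 - c_cursor    # compensatory target is fixed at 0
        p_err = d[i] - p_cursor   # pursuit target carries the disturbance
        c_out += (gain * c_err - damping * c_out) * dt
        p_out += (gain * p_err - damping * p_out) * dt
        if i >= 50:               # skip initial transient, as in the posts
            c_sq += c_err ** 2
            p_sq += p_err ** 2
    m = n - 50
    return math.sqrt(c_sq / m), math.sqrt(p_sq / m)
```

Running simulate() returns identical RMS errors for the compensatory and pursuit cases, confirming that the earlier pursuit/compensatory difference was an artifact of the update ordering.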

Sorry about that. Now on to research on "predictive" control and
"feedforward". Boy it's good to have peer review, if a little
embarrassing;-)

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2009.12.31.1905 MST)]

Rick Marken (2009.12.31.1500) --

Bill Powers said:

> Hi, Rick --
>
> Got the program. I'll put it into Delphi form and try it later today.

...

No need. I just figured it out. I put the computation of the cursor
position at the wrong point in the program. When you do it this way, the RMS error for compensatory and pursuit tracking is exactly the same.

Sorry about that. Now on to research on "predictive" control and
"feedforward". Boy it's good to have peer review, if a little
embarrassing;-)

No need for embarrassment. Finding bugs in programs is pretty routine for me, and it's always nice when you find your own bugs instead of publishing them for someone else to find.

So now everything makes sense just as it ought to. Very satisfying.

Best,

Bill P.

[From Bill Powers (2009.12.26.0300 MDT)]

Martin Taylor 2009.12.25.10.46 –

BP earlier: So if forward is the
opposite of back, tell me what feedforward is.

MT: OK, I will. It’s whatever the user of the term intends the
hearer/reader to understand by it.

BP: Sorry, I should have said “So if we agree that forward is the
opposite of back, tell me what definition of feedforward we can all agree
upon.” As you know, without consensus there is no possibility
of language, communication, or for that matter, science. We have to
decide here whether we’re just having a slightly boozy argument over a
few beers, or are trying to reach some kind of common understanding of a
technical subject. If it’s science we want to do (and I do), we need a
technical vocabulary that is as internally consistent as we can make it,
and stable so that when we use a term more than once, we mean the same
thing every time.

MT: Different folks, different
intentions. I’ve understood people to intend quite a few different
meanings in the course of this rather dictionary-oriented discussion, and
I have thought of other ones in private correspondence. Here are three
(plus subtypes):

1. The use of prediction: (a) of the future course of the controlled
perception, (b) of the effect of the output on the controlled perception
at some time in the future, (c) of the effect of a disturbance to one
controlled perception perceived by the perceptual input of a different
control system.

2. Acting in an environment stable enough that the effect of an output is
likely to bring some variable to a desired state without the need to
observe the effect (a.k.a. “fire-and-forget”).
Examples: setting off a timer-controlled explosive (e.g. a suitcase
bomb). Throwing a glass into the fireplace after a toast (with the intent
that the glass should break). Pulling the trigger of a gun (with the
intent that the bullet should leave the barrel). All of these might
occasionally fail to produce the desired effect, but usually they work,
and the effect cannot be corrected at that level of control if they don’t
work. You can’t retrieve the suitcase bomb and re-set it, because the
target politician would not be there when it went off, though at a higher
level you can still retarget the politician for assassination by some
other means. You can’t pull the glass out of the fire if it failed to
break, and you might well not be able to tell whether it was your glass
that failed to break or that of one of your drinking buddies. And you
certainly don’t want to try pulling the trigger again if the bullet
failed to leave the barrel!

3. Putting up a shield against a disturbance that you anticipate might or
might not happen in future, so that when and if it
happens you will not need overt output to counter the disturbance. Skin
and cell membranes are shields developed by evolution. Insulating a house
is a more modern example, a shield against fluctuations in the
environmental temperature, reducing the range of outputs necessary to
control against those fluctuations.

BP: All of these I would classify as perfectly good negative feedback
processes, aimed at controlling different present-time perceptions.
Turning on the timer immediately produces, in imagination, an explosion
at some later time. Throwing the glass into the fireplace results
immediately in an imagined broken glass. Pulling the trigger of the gun
results in an immediate flinch and forward push against the imagined
report and kick from the gun. All control processes take place in present
time, no matter how delayed their physical effects might be in some later
present time. We control perceptions, not objective events in the
external world. The relationship between the perceptions and external
events is quite variable, as well as unverifiable.

The main difference when the delay is long is how rapidly disturbances
can be corrected if the resulting perception does not match the reference
perception. If the delay is a transport lag and the result is a brief
event, corrections can’t be made more than once per delay time, so the
longer the delay is, the worse the control will be. If it’s an integral
lag, adding rate of change to the perceptual function can greatly improve
the control of the perception but will also slow the correction of real
errors and increase the high-frequency noise level in the
system.

MT: Let me add another plausible
meaning that seems to incorporate much of the intent of these and other
possibilities.

In a normal elementary control loop, we conventionally use the term
“feedback path” to be the part of the loop that passes through
the environment. So the obvious name for the rest of the loop from
sensory input to functional output is “feedforward path”, and
any signals that use that path would be “feedforward” signals.
That’s simply a normal use of complementary words, in the same way that
the use of “in” implies the possibility of “out”.

BP: Yes, this is what I was suggesting by the question above. Control
systems already use feedforward. Feedback and feedforward go in the same
direction around the loop.

MT: If you define
“feedforward” so that it doesn’t exist when any other complete
control loop is also involved, then of course feedforward never exists.
If you define it as the part of any control loop that is not through the
environment, then it always exists. Neither of these is a very useful way
of looking at the considerable range of situations in which the word has
been used in this discussion and elsewhere. But if you use
“feedforward path” to refer to the signal pathways not in the
environment, and “feedforward” to signals in that pathway not
(yet) influenced by one or both of an anticipated disturbance or feedback
through the environment, then I think the term might be useful. It also
would cover most or all of the cases that have come to my
mind.

BP: Unfortunately it also encourages us to think of all the different
forms of feedforward as if they’re examples of the same thing, the way we
use the word “learning.” But feedforward in the output function
will increase the speed of output responses to actual errors, while
feedforward in the input function will slow the output response by making
the system think it has corrected the error before that has actually
happened. Feedforward in the feedback function (“quickening”)
will slow the response but improve control. So calling all these
different arrangements by the same name can only lead to confusion and
tempt people to draw invalid conclusions about “the”
effects of feedforward.

MT: Yes, like the gun-aiming, a
higher-level control system adjusts its setting of references for the one
to which “feedforward” could be said to apply. A more
extreeeeme example of feedforward with reference influenced by a
higher-level learning system might be the old upper-class English
tradition of setting aside a bottle (or a case) of port for a newborn
son, to be opened on his 21st birthday. The controlled perception is that
the son will have a pleasant experience 21 years later (when at that
time, in England, he attained the age of majority and the parent might
quite probably be dead). Yes, there are feedback loops involved, not
least importantly one analogous to the gun-aiming loop, in which the
“aim” is the choice of port, which has been found to age well,
as opposed to, say, a Beaujolais Nouveau, which
doesn’t.

BP: The longer the delay, the poorer the control will be. I don’t need to
elaborate on that.

BP earlier: Of course all anyone has to do to change my mind is show me a
working model that matches real behavior better with feedforward (whatever
that means) than without it. Fifty bucks to the first person who does it
(all I can afford to lose).

MT: If you accept my comment that the whole issue is a dictionary
squabble, you have lost already!

BP: I don’t accept it. Feedforward has different effects depending on
what is being fed forward and where in the control loop it is happening.
It’s simply a different phenomenon in each case, and using the same term
for all cases implies a nonexistent similarity.

MT: Actually, I think Rick won
some time back in the early 1990s, when he showed that using the
derivative of the perception to alter the reference value improved
tracking of a sine wave (Feedforward class “Prediction type
a”).

BP: That might improve tracking, but would adding this feature to a model
make the model match real behavior better, or make the fit worse? I can
easily design a model that will control far better than a real person
will control in the same tracking task. All I have to do is eliminate the
transport lag in the input function, and the model’s tracking error can
then go essentially to zero. But that’s not how the real person behaves.
To meet my challenge you must show that adding feedforward to a model
(any way you want to define it) makes the model fit real behavior better
than the same model without it, just as adding transport lag to the
tracking model makes it fit real behavior better (as you can easily
verify by using demo 4-1 in LCS3).

MT: My bottom line: if
“feedback” is a useful word, then so is
“feedforward”, and its natural meaning is the complement of
“feedback”. To argue simultaneously that the meaning of
feedforward is unknown, and that it doesn’t exist, is to leave
permanently open the escape hatch that THIS (whatever it might be) is not
“feedforward” when someone offers a model that purports to
satisfy your challenge.

BP: OK, then, let’s also add FeedSideways, since that goes naturally with
forward and back, and we can make FeedSideways more specific by
introducing FeedVertically and FeedHorizontally. Grooving on the sounds
of the words does not seem to me like a good reason to introduce
ambiguous and fuzzy terminology, but if you let one camel into the tent
you might as well welcome all the others crowding in behind it.

What really puzzles me is this: Why does criticizing the use of this
word feedforward elicit such an energetic defense? Is there
something so appealing about this word that we should be willing to adopt
it even without agreeing on any one meaning for it? Could it be that this
is just a defense against a criticism of some popular idea, as in the
case of the word “control”? What do you suppose might be said
if I pointed out how useless the term “intelligence” has proven
to be, or how misleading the term “aggression?” Is it that hard
to let go of the past?

Best,

Bill P.

[From Rick Marken (2009.12.26.1310)]

It looks like this may have bounced so I'm sending it again. Sorry if
it's a repeat.

···

[From Rick Marken (2009.12.25.2315)]

[Martin Taylor 2009.12.25.16.56]

Rick Marken (2009.12.25.0825)

Bill Powers (2009.12.25.0430 MDT)

Of course all anyone has to do to change my mind is show me a working
model
that matches real behavior better with feedforward (whatever that means)
than without it. Fifty bucks to the first person who does it (all I can
afford to lose).

I'll add $150 to that. So $200 to the first person who does it.

You should be pretty safe, since I think you are yourself the winner for
your 1993? demonstration. I can't think of anything earlier than that.

Thanks. But that was not a feedforward model (whatever that is). It
was a two level feedback control model; the higher level system was
controlling for making a sine wave pattern of back and forth movement
with the cursor by sinusoidally varying the reference for cursor
position; the lower level system was controlling for keeping the
cursor at this varying reference position. It's feedback all the way,
yo ho ho.

Best

Rick

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2009.12.27.11.25]

[From Rick Marken (2009.12.25.2315)]
[Martin Taylor 2009.12.25.16.56]
Rick Marken (2009.12.25.0825)

Bill Powers (2009.12.25.0430 MDT)

Of course all anyone has to do to change my mind is show me a working
model
that matches real behavior better with feedforward (whatever that means)
than without it. Fifty bucks to the first person who does it (all I can
afford to lose).
I'll add $150 to that. So $200 to the first person who does it.
You should be pretty safe, since I think you are yourself the winner for
your 1993? demonstration. I can't think of anything earlier than that.
Thanks. But that was not a feedforward model (whatever that is). It
was a two level feedback control model; the higher level system was
controlling for making a sine wave pattern of back and forth movement
with the cursor by sinusoidally varying the reference for cursor
position ; the lower level system was controlling for keeping the
cursor at this varying reference position. It's feedback all the way,
yo ho ho.

Then I win, for misunderstanding your model and diagram, and using what
I thought you had done, based on what you wrote in
[From Rick Marken (950413.1145)]:

Modafinil_Model_small1.jpg


http://www.mmtaylor.net/PCT/modafinil.tracking.doc

Bill !

BP : Very well: I have changed my mind since I wrote those earlier things
you cited.

BH : I must say this is a really clear answer. Thank you. And I'm glad that
PCT is advancing on the basis of the conversations here on CSGnet. It seems
to me like we are all reorganizing very quickly, although I'm a little
surprised. Some posts are only a week or two old.

BP : What really puzzles me is this: Why does criticizing the use of this word
feedforward elicit such an energetic defense? Is there something so
appealing about this word that we should be willing to adopt it even without
agreeing on any one meaning for it? Could it be that this is just a defense
against a criticism of some popular idea, as in the case of the word
"control"? What do you suppose might be said if I pointed out how useless
the term "intelligence" has proven to be, or how misleading the term
"aggression?" Is it that hard to let go of the past?

BH : You can't imagine how confused and puzzled I am by such fast changes
in the PCT explanations of feedforward. But I'm glad we're slowly coming
to explanations which suit experience more than those theoretical
discussions before.

BP : So if we agree that forward is the opposite of back, tell me what
definition of feedforward we can all agree upon."

BP : The direction of all signals inside the organism from sensory organs
to muscles is forward, toward the output. Only imagination runs the
other way.

BH : Can we assume that imagination could be a process "opposite of forward"?
Could imagination run on the basis of feed-forward? Could we say that ....

BP : Then I realized that I had probably been doing that all along, though
there were some cases where I really did turn and run, then turn around,
locate the ball, and catch it. But I also realized that those were the cases
in which I couldn't do it the other way because the ball was too high and
fast and I couldn't run backward fast enough while keeping my eye on the ball.

BH : …you were running in control mode, then you switched to imagination
mode, and when you turned back into control mode you caught the ball.
I'm interested in how PCT explains forming a reference signal in imagination.
How is the imagined pathway to some future state of affairs constructed?

Best,

Boris

[Martin Taylor 2009.12.28.10.50]

[From Rick Marken (2009.12.26.1310)]

It looks like this may have bounced so I'm sending it again. Sorry if
it's a repeat.
   
It didn't bounce. There was a hiatus in the operation of CSGnet. I answered this message one way, but my own message seemed to bounce (though it has since appeared). Here I'm answering it in a different way.
   

[From Rick Marken (2009.12.25.2315)]

[Martin Taylor 2009.12.25.16.56]
       

  Rick Marken (2009.12.25.0825)
         

Bill Powers (2009.12.25.0430 MDT)
           
Of course all anyone has to do to change my mind is show me a working
model
that matches real behavior better with feedforward (whatever that means)
than without it. Fifty bucks to the first person who does it (all I can
afford to lose).

I'll add $150 to that. So $200 to the first person who does it.

You should be pretty safe, since I think you are yourself the winner for
your 1993? demonstration. I can't think of anything earlier than that.
       

Thanks. But that was not a feedforward model (whatever that is). It
was a two level feedback control model; the higher level system was
controlling for making a sine wave pattern of back and forth movement
with the cursor by sinusoidally varying the reference for cursor
position ; the lower level system was controlling for keeping the
cursor at this varying reference position. It's feedback all the way,
yo ho ho.

Let's suppose that what you had proposed was what you say here. Some planning system knew that the disturbance would be a sine wave, and moreover knew that it would have a particular frequency, phase, and amplitude. Wouldn't all the arguments made in the Powers and Bourbon paper "Models and their Worlds" apply? To avoid the Powers-Bourbon problem of blind planning, this hypothetical higher-level system would actually have to be three scalar control systems, one controlling for the sine-wave's frequency, one for its phase, and one for its amplitude. The outputs of these control systems would have to be numbers corresponding to a sine wave that should be generated by the output of the lower control system -- the one tracking the target. Somewhere there would have to be a function generator that took these three outputs and generated the sine wave reference signal for the tracking controller.

Yes, that would be feedback all the way. But it's very complicated, and it's not at all clear why a system that would be useful only for tracking sine waves would have evolved in a biological system.
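The generator-plus-tracker part of this hypothetical architecture can be sketched in a few lines (a toy sketch of my own, not any poster's model; the three parameter-controlling systems are replaced by fixed outputs `amp`, `freq`, and `phase`):

```python
# A toy sketch (mine, not any poster's model) of the generator-plus-
# tracker part of the hypothetical architecture: three higher-level
# outputs (amplitude, frequency, phase) feed a function generator that
# produces a sine-wave reference for a lower-level integral controller.
import math

def run(duration=20.0, dt=0.01, gain=5.0, amp=1.0, freq=0.2, phase=0.0):
    cursor = 0.0                     # lower-level output (cursor position)
    err_sq = 0.0
    n = int(duration / dt)
    for i in range(n):
        t = i * dt
        target = amp * math.sin(2 * math.pi * freq * t + phase)
        # function generator: reference built from the three outputs
        r = amp * math.sin(2 * math.pi * freq * t + phase)
        e = r - cursor               # tracking-loop error
        cursor += gain * e * dt      # integral output function
        err_sq += (target - cursor) ** 2
    return math.sqrt(err_sq / n)     # RMS cursor-target error

print(run())
```

Even with perfectly known sine parameters, the lower-level integral loop in this sketch still lags the target slightly, which underlines how little the extra machinery buys for its complexity.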

As Bill points out [From Bill Powers (2009.12.26.0300 MDT)] in his mission to define "feedforward" out of existence, all we are ever doing is controlling current perceptions. If those perceptions involve using the derivative of a value to predict its future course, that future course is a current perception, too.

Martin

[Martin Taylor 2009.12.28.11.04]

[From Bill Powers (2009.12.26.0300 MDT)]

Martin Taylor 2009.12.25.10.46 –

BP earlier: So if
forward is the
opposite of back, tell me what feedforward is.

MT: OK, I will. It’s whatever the user of the term intends the
hearer/reader to understand by it.

BP: Sorry, I should have said “So if we agree that forward is the
opposite of back, tell me what definition of feedforward we can all
agree
upon.” As you know, without consensus there is no possibility
of language, communication, or for that matter, science. We have to
decide here whether we’re just having a slightly boozy argument over a
few beers, or are trying to reach some kind of common understanding of
a
technical subject. If it’s science we want to do (and I do), we need a
technical vocabulary that is as internally consistent as we can make
it,
and stable so that when we use a term more than once, we mean the same
thing every time.

A laudable objective, and one with which I wholeheartedly agree. Which
is why I entered this discussion, thinking I might be able to reconcile
the various uses to which I have seen the term “feedforward” put in the
course of this thread (and in private correspondence as well).

MT: Different folks,
different
intentions. I’ve understood people to intend quite a few different
meanings in the course of this rather dictionary-oriented discussion,
and
I have thought of other ones in private correspondence. Here are three
(plus subtypes):

1. The use of prediction: (a) of the future course of the controlled
perception, (b) of the effect of the output on the controlled perception
at some time in the future, (c) of the effect of a disturbance to one
controlled perception perceived by the perceptual input of a different
control system.

2. Acting in an environment stable enough that the effect of an output is
likely to bring some variable to a desired state without the need to
observe the effect (a.k.a. "fire-and-forget"). Examples: setting off a
timer-controlled explosive (e.g. a suitcase bomb). Throwing a glass into
the fireplace after a toast (with the intent that the glass should break).
Pulling the trigger of a gun (with the intent that the bullet should leave
the barrel). All of these might occasionally fail to produce the desired
effect, but usually they work, and the effect cannot be corrected at that
level of control if they don't work. You can't retrieve the suitcase bomb
and re-set it, because the target politician would not be there when it
went off, though at a higher level you can still retarget the politician
for assassination by some other means. You can't pull the glass out of the
fire if it failed to break, and you might well not be able to tell whether
it was your glass that failed to break or that of one of your drinking
buddies. And you certainly don't want to try pulling the trigger again if
the bullet failed to leave the barrel!

3. Putting up a shield against a disturbance that you anticipate might or
might not happen in future, so that when and if it happens you will not
need overt output to counter the disturbance. Skin and cell membranes are
shields developed by evolution. Insulating a house is a more modern
example, a shield against fluctuations in the environmental temperature,
reducing the range of outputs necessary to control against those
fluctuations.

BP: All of these I would classify as perfectly good negative feedback
processes, aimed at controlling different present-time perceptions.

Yes. ALL behaviour is the control of current perceptions. We can agree
on that, I think. Current perceptions include perceptions of the
imagined future states of things, including the possible future states
dependent on different action outputs. I expect it might rain, so I
imagine that if I carry an umbrella I won't get so wet as I would if I
go out without an umbrella. Those are current perceptions, yes. But the
situation is one to which people have applied the term “feedforward”.
So if you ask what “feedforward” means, that’s one possibility.

MT: Let me add another
plausible
meaning that seems to incorporate much of the intent of these and other
possibilities.

In a normal elementary control loop, we conventionally use the term
“feedback path” to be the part of the loop that passes through
the environment. So the obvious name for the rest of the loop from
sensory input to functional output is “feedforward path”, and
any signals that use that path would be “feedforward” signals.
That’s simply a normal use of complementary words, in the same way that
the use of “in” implies the possibility of “out”.

BP: Yes, this is what I was suggesting by the question above. Control
systems already use feedforward. Feedback and feedforward go in the
same
direction around the loop.

MT:If you define
“feedforward” so that it doesn’t exist when any other complete
control loop is also involved, then of course feedforward never exists.
If you define it as the part of any control loop that is not through
the
environment, then it always exists. Neither of these is a very useful
way
of looking at the considerable range of situations in which the word
has
been used in this discussion and elsewhere. But if you use
“feedforward path” to refer to the signal pathways not in the
environment, and “feedforward” to signals in that pathway not
(yet) influenced by one or both of an anticipated disturbance or
feedback
through the environment, then I think the term might be useful. It also
would cover most or all of the cases that have come to my
mind.

BP: Unfortunately it also encourages us to think of all the different
forms of feedforward as if they’re examples of the same thing, the way
we
use the word “learning.” But feedforward in the output function
will increase the speed of output responses to actual errors, while
feedforward in the input function will slow the output response by
making
the system think it has corrected the error before that has actually
happened.

I’m afraid I don’t follow this. Maybe I’m thinking of a different
circuit than you are. I’m thinking of Rick’s addition of the derivative
to the perceptual (or reference) signal to induce the output to oppose
the future disturbance rather than the present disturbance. That
improved performance, according to Rick. It certainly did in my own
experiments.

MT: Yes, like the
gun-aiming, a
higher-level control system adjusts its setting of references for the
one
to which “feedforward” could be said to apply. A more
extreeeeme example of feedforward with reference influenced by a
higher-level learning system might be the old upper-class English
tradition of setting aside a bottle (or a case) of port for a newborn
son, to be opened on his 21st birthday. The controlled perception is
that
the son will have a pleasant experience 21 years later (when at that
time, in England, he attained the age of majority and the parent might
quite probably be dead). Yes, there are feedback loops involved, not
least importantly one analogous to the gun-aiming loop, in which the
“aim” is the choice of port, which has been found to age well,
as opposed to, say, a Beaujolais Nouveau, which
doesn’t.

BP: The longer the delay, the poorer the control will be. I don’t need
to
elaborate on that.

Yes, but one doesn’t compare the level of control with what it would
have been had the delay been shorter. One compares it with what it
would have been in the absence of intent to control. Is it more likely
that the 21st birthday party will be enjoyable if the port was laid
down at the boy’s birth than if there was no port having that family
connection? Is it more likely that it will be enjoyable if the bottle
that was laid down is one believed to mature wonderfully than one
believed to turn to vinegar after such a long delay?

Of course all anyone
has to do
to change my mind is show me a working model that matches real behavior
better with feedforward (whatever that means) than without it. Fifty
bucks to the first person who does it (all I can afford to lose).

If you accept my comment that the whole issue is a
dictionary squabble, you have lost already!

BP: I don’t accept it. Feedforward has different effects depending on
what is being fed forward and where in the control loop it is
happening.
It’s simply a different phenomenon in each case, and using the same
term
for all cases implies a nonexistent similarity.

Wouldn’t that tend to make you agree with my comment, rather than
disagree with it?

MT: Actually, I think
Rick won
some time back in the early 1990s, when he showed that using the
derivative of the perception to alter the reference value improved
tracking of a sine wave (Feedforward class “Prediction type
a”).

BP: That might improve tracking, but would adding this feature to a
model
make the model match real behavior better, or make the fit worse?

Well, as you know, my use of Rick’s model did make the model match real
behaviour better, to the extent that I was able to show (in the figure
I presented) that the degree of reliance on prediction was affected
differently by the different drug conditions over the period of sleep
loss. I think I also mentioned it in the Toronto CSG meeting, though my
actual presentation was concerned not with the data but with the
process of fitting models to data.

I can
easily design a model that will control far better than a real person
will control in the same tracking task. All I have to do is eliminate
the
transport lag in the input function, and the model’s tracking error can
then go essentially to zero. But that’s not how the real person
behaves.
To meet my challenge you must show that adding feedforward to a model
(any way you want to define it) makes the model fit real behavior
better
than the same model without it, just as adding transport lag to the
tracking model makes it fit real behavior better (as you can easily
verify by using demo 4-1 in LCS3).
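Bill's point here is easy to reproduce in a toy simulation (a sketch with assumed numbers of my own, not demo 4-1 itself): the same integral controller controls a disturbed variable noticeably worse once a transport lag is added to its input function.

```python
# A toy demonstration (assumed numbers, not demo 4-1 from LCS3) that a
# transport lag in the input function worsens control, so a model with
# no lag controls "too well" to match a real person.
import math
from collections import deque

def rms_error(lag_steps, duration=20.0, dt=0.01, gain=5.0):
    out = 0.0                        # system output
    # buffer implementing the transport lag in the input function
    buf = deque([0.0] * (lag_steps + 1), maxlen=lag_steps + 1)
    err_sq, n = 0.0, int(duration / dt)
    for i in range(n):
        d = math.sin(2 * math.pi * 0.5 * i * dt)  # smooth disturbance
        qi = out + d                 # controlled quantity
        buf.append(qi)
        p = buf[0]                   # perception, lag_steps samples old
        out += gain * (0.0 - p) * dt # integral output, reference = 0
        err_sq += qi * qi
    return math.sqrt(err_sq / n)

print(rms_error(0), rms_error(25))   # no lag vs. 250 ms transport lag
```

With the lag the loop still stabilizes the variable, but with visibly larger error, which is exactly why adding a realistic lag to a model improves its fit to human tracking data.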

MT: My bottom line: if
“feedback” is a useful word, then so is
“feedforward”, and its natural meaning is the complement of
“feedback”. To argue simultaneously that the meaning of
feedforward is unknown, and that it doesn’t exist, is to leave
permanently open the escape hatch that THIS (whatever it might be) is
not
“feedforward” when someone offers a model that purports to
satisfy your challenge.

BP: OK, then, let’s also add FeedSideways, since that goes naturally
with
forward and back, and we can make FeedSideways more specific by
introducing FeedVertically and FeedHorizontally. Grooving on the sounds
of the words does not seem to me like a good reason to introduce
ambiguous and fuzzy terminology, but if you let one camel into the tent
you might as well welcome all the others crowding in behind it.

Isn’t that a rather silly comment? Language (at least Indo-European
language) thrives on opposites, but not so much on wide-ranging
variants. If you have a concept that is operating within a plane, then
you might be justified in adding sideways; if your concept applies
within a volume, then vertically and horizontally might be suitable
adverbs. But even in those conditions, sideways, vertically and
horizontally don’t have the same linguistic opposition to “back” as
does “forward”. And in any case, we aren’t dealing with a concept that
applies to a plane or a volume. We are dealing with a one-dimensional
system, a ring, and in the ring there are only two directions, forward
and back.

What really puzzles me is this: Why does criticizing the use of this
word feedforward elicit such an energetic defense? Is there
something so appealing about this word that we should be willing to
adopt
it even without agreeing on any one meaning for it?

No, but it’s quite clear that there are phenomena to which people
attach the word “feedforward”. It would be well if each such phenomenon
were to be described by a different agreed word, but since it seemed to
me that each of these phenomena had a similar underlying structure,
there was value in retaining a word that is almost inevitably going to
be used in a context where “feedback” is so prominent.

Could it be that this
is just a defense against a criticism of some popular idea, as in the
case of the word “control”? What do you suppose might be said
if I pointed out how useless the term “intelligence” has proven
to be, or how misleading the term “aggression?” Is it that hard
to let go of the past?

I’m afraid I don’t know where in the past there has been much use of
“feedforward”, so I don’t really think it’s attachment to the past that
leads people to want to use the word. Until my first contribution to
this thread, I can’t remember using it myself. I think that the reason
people want to use the word is more likely to be a desire to deepen
their understanding of control and the necessary use, therefore, of the
word “feedback”. “Feedback” could, of course, be applied to any part of
the loop, but is conventionally used only for the part between the
output to the environment and the input from the environment. The
natural domain of “feedforward” is therefore the rest of the loop,
between input from the environment and output to the environment. As to
precisely what events in this part of the loop might properly be
labelled “feedforward”, I made one suggestion, but other suggestions
might prove more useful.

Martin

[From Rick Marken (2009.12.28.0920)]

[Martin Taylor (2009.12.27.11.25)]

Rick Marken (2009.12.25.2315--
Thanks. But that was not a feedforward model (whatever that is). It
was a two level feedback control model;

Then I win, for misunderstanding your model and diagram, and using what
I thought you had done, based on what you wrote in
[From Rick Marken (950413.1145)]:

Yes, I described it poorly. It was not a feedforward or predictive control model at all; it’s a two level hierarchical feedback control model. The “predictive control” part is really just a cheap way of implementing the 2nd level system, without implementing the function that perceives the rate of change in target position. This second level system sets the reference for the lower level system, specifying a cursor position that would keep the cursor moving at the velocity it was moving in the prior sampling interval. There is no prediction and it was my mistake to say that there was (I was so much older then; I’m younger than that now). The level two system is implicitly controlling for a constant velocity and the level one system is controlling for keeping the cursor at the reference specified by the level two velocity control system. Tracking is improved in the pursuit tracking situation as long as the movement of the target is sinusoidal. If the target jumped around then this model (like the human) would do worse controlling for constant velocity than it would if there were no level two system controlling for constant velocity.

So in the model there is no feedforward. No predictive control. Just hierarchical feedback control. By the way, you can see this kind of two level control in action by having a person track your finger as it moves slowly in a circle; their tracking performance is much better than it would be if you move the finger slowly in an arbitrary pattern. But if you stop your finger when you are making the circle you will see that the subject keeps moving their finger for some time before also stopping. This is the higher level system still specifying the reference for the lower level finger position system. If you stop your finger after arbitrary motion the subject’s finger also stops almost immediately, because the reference for the position of the finger was not being set by a higher level system controlling for a regular pattern of movement, as in the case of the circular movement. By the way, when the finger doesn’t stop immediately after tracking the circular motion, it looks like this is due to predictive control; the finger is going where the subject predicts it should be. But this phenomenon is actually the way a hierarchical feedback control system works.
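The velocity-continuation idea can be sketched as follows (a toy reconstruction with a hand-picked lead constant, not Rick's actual 1993 code): the second level supplies the first level with a reference displaced along the prior-interval velocity, and the first level is an ordinary integral controller for cursor position.

```python
# A toy reconstruction (hand-picked lead constant, not Rick's actual
# 1993 code) of the two-level scheme: level two sets the level-one
# reference displaced along the target's prior-interval velocity, and
# level one is an ordinary integral controller for cursor position.
import math

def track(duration=20.0, dt=0.01, gain=8.0, use_level_two=True):
    cursor, prev_target = 0.0, 0.0
    lead = 1.0 / gain                # hand-picked, roughly the loop lag
    err_sq, n = 0.0, int(duration / dt)
    for i in range(n):
        target = math.sin(2 * math.pi * 0.25 * i * dt)
        v = (target - prev_target) / dt   # prior-interval velocity
        prev_target = target
        # level two: continue the prior velocity; else track target only
        r = target + v * lead if use_level_two else target
        cursor += gain * (r - cursor) * dt  # level one: integral control
        err_sq += (target - cursor) ** 2
    return math.sqrt(err_sq / n)

print(track(use_level_two=True), track(use_level_two=False))
```

As the discussion says, this only helps while the target moves smoothly: on a jumpy target the prior-interval velocity is a bad basis for the level-one reference.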

Best

Rick



Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2009.12.28.14.08]

[From Rick Marken (2009.12.28.0920)]

[Martin Taylor
(2009.12.27.11.25)

Rick Marken (2009.12.25.2315--
Thanks. But that was not a feedforward model (whatever that is). It
was a two level feedback control model;
Then I win, for misunderstanding your model and diagram, and using what

I thought you had done, based on what you wrote in
[From Rick Marken (950413.1145)]:

Yes, I described it poorly. It was not a feedforward or predictive
control model at all; it’s a two level hierarchical feedback control
model. The “predictive control” part is really just a cheap way of
implementing the 2nd level system, without implementing the function
that perceives the rate of change in target position. This second level
system sets the reference for the lower level system, specifying a
cursor position that would keep the cursor moving at the velocity it
was moving in the prior sampling interval. There is no prediction and
it was my mistake to say that there was (I was so much older then; I’m
younger than that now). The level two system is implicitly controlling
for a constant velocity and the level one system is controlling for
keeping the cursor at the reference specified by the level two velocity
control system. Tracking is improved in the pursuit tracking situation
as long as the movement of the target is sinusoidal. If the target
jumped around then this model (like the human) would do worse
controlling for constant velocity than it would if there were no level
two system controlling for constant velocity.

It would be quite interesting to see the results if you were to
actually model what you describe here, specifically: “The level two
system is implicitly controlling for a constant velocity”. What is the
reference value for this constant velocity, and where does it come
from? Does that “constant” reference value get changed in coordination
with the phase of the sine wave, and if so, how? If not, how does it
influence the system that has a reference to keep the cursor-target
difference at zero? How does the output of the velocity-control system
set the reference value for this lower (“tracking”) control system?
Could you show a diagram of what you have in mind?

Your model as originally described, with simple prediction of the
near-future value of the perceptual signal, works well, and fits human
performance better than a model that is the same except for the
omission of prediction (at least that was the case in my sleep-loss
tracking studies, using the model I diagrammed in [Martin Taylor
2009.12.27.11.25] – the same as your original). This model, which
accords to your original description, controls no velocity, though it
could be interpreted as perceiving the varying velocity of the
target-cursor separation. I have my doubts as to whether your newly
described model that controls for constant velocity at the second level
would work as well as your original. But that’s at least open to test.

Incidentally, what you say here is only partially correct: “Tracking is
improved in the pursuit tracking situation as long as the
movement of the target is sinusoidal. If the target jumped around then
this model (like the human) would do worse…” The essential condition
is not that the target movement be sinusoidal. It is that the target
bandwidth be small compared to the inverse of the loop transport lag. A
pure sinusoid has zero bandwidth, and is the extreme of this condition.
“If the target jumped around” the target bandwidth would be large, and
any prediction would be useful for only a very short time, commensurate
with the jumping speed and frequency.
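The bandwidth condition is easy to illustrate with a toy model (my own construction, not the sleep-study model): with a transport lag in the perceptual path, extrapolating the lagged perception along its derivative improves tracking of a low-bandwidth sine, precisely because the sine changes little over the lag interval.

```python
# A toy illustration (my construction, not the sleep-study model) of
# the bandwidth condition: with a transport lag in the perceptual path,
# extrapolating the lagged perception along its derivative helps when
# the target bandwidth is small relative to 1/lag.
import math
from collections import deque

def rms(predict, lag_steps=15, duration=20.0, dt=0.01, gain=5.0):
    cursor, prev_p = 0.0, 0.0
    buf = deque([0.0] * (lag_steps + 1), maxlen=lag_steps + 1)
    err_sq, n = 0.0, int(duration / dt)
    for i in range(n):
        target = math.sin(2 * math.pi * 0.2 * i * dt)  # slow sine target
        buf.append(target)
        p = buf[0]                    # target as perceived after the lag
        dp = (p - prev_p) / dt        # derivative of the perception
        prev_p = p
        # extrapolate the lagged perception across the transport lag
        r = p + dp * lag_steps * dt if predict else p
        cursor += gain * (r - cursor) * dt
        err_sq += (target - cursor) ** 2
    return math.sqrt(err_sq / n)

print(rms(True), rms(False))
```

If the target bandwidth approached the inverse of the lag, the extrapolation would no longer hold across the lag interval and the benefit would shrink or reverse, which is the "jumped around" case.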

Martin