Examples of Feedforward Control?

[From Bruce Abbott (2010.01.02.1400 EST)]

During the 1930s my grandfather
experimented with a system to reduce noise in the radio receivers of the day.
His idea was to use two receivers, one tuned to the frequency of the signal and
the other to another frequency, just off the main frequency, which presumably
would receive much of the same relatively broad-band noise. The phase of the
audio signal from the off-frequency channel was inverted and summed with the
audio signal of the main frequency. Much the same thing is done today in “noise-cancelling”
headphones: a microphone picks up external sounds and, after suitable inversion
and scaling, beats this signal against the one being heard inside the
headphones. (The latter includes the intended audio plus external noise that
leaks through the insulation of the headphone ear cups.)

In both cases, we have noise
mixed in with the intended signal. A sensor picks up what is intended to be the
noise alone. If we label the noise as “disturbance,” then these
systems sense the disturbance to the signal and use that signal to create an
opposing action that cancels out the disturbance. Their ability to function
well depends on adequate sensing of the disturbance and correct calibration
(e.g., the inverted noise signal must be scaled to have the same magnitude as
the noise component of the main signal and opposite phase).
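A minimal numerical sketch of this feedforward cancellation (illustrative numbers only; the gain stands for the calibration these systems depend on):

```python
# Illustrative sketch of feedforward noise cancellation: an estimate of the
# disturbance is inverted, scaled, and summed with the corrupted signal.
# All numbers are made up.

def feedforward_cancel(corrupted, noise_estimate, gain=1.0):
    """Subtract a scaled copy of the sensed noise from the corrupted signal."""
    return [c - gain * n for c, n in zip(corrupted, noise_estimate)]

audio = [0.5, -0.2, 0.8]                  # intended signal samples
noise = [0.1, 0.3, -0.05]                 # broad-band disturbance
corrupted = [a + n for a, n in zip(audio, noise)]

# Perfect calibration recovers the signal exactly...
cleaned = feedforward_cancel(corrupted, noise, gain=1.0)

# ...but a 20% scaling error leaves 20% of the noise in place.
residual = feedforward_cancel(corrupted, noise, gain=0.8)
```

The miscalibrated case is the point: nothing in the open-loop scheme detects or corrects the leftover noise.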

These seem to meet the
definition of feedforward control as given in engineering. Or am I missing
something?

In the case of the
noise-canceling headphones, I can envision a system that would employ feedback
rather than feedforward. A microphone inside the cups of the headphone
would pick up the sound, which would include the audio plus any noise leaking
through the ear cups from outside. This signal could be compared to the
intended audio signal (reference signal) and output adjustments made to
diminish the difference. But this would seem to require a rather complex
device, one capable of adjusting a wide range of frequencies simultaneously,
each to the exact amount required depending on the amplitudes of the various
frequencies present in the disturbance. In this case the feedforward system
would appear to be simpler and more effective.

Bruce

[From Bill Powers (01.02.1440 MST)]

Bruce Abbott (2010.01.02.1400 EST) –

During the 1930s my grandfather
experimented with a system to reduce noise in the radio receivers of the
day. His idea was to use two receivers, one tuned to the frequency of the
signal and the other to another frequency, just off the main frequency,
which presumably would receive much of the same relatively broad-band
noise. The phase of the audio signal from the off-frequency channel was
inverted and summed with the audio signal of the main
frequency.

The key to all these ideas is to think of noise not as just some
amorphous mess, but as a disturbance which has only one amplitude at a
given instant, just like the disturbances we use in the tracking programs,
only 10,000 or so times as fast.

A microphone is basically a pressure sensor. A microphone inside the
headset measures the acoustical pressure very rapidly, equivalent to
perhaps 20K to 40K times per second, and compares the signal representing
that pressure with the signal that is driving the tiny loudspeaker inside
the earphones. The object is to compare the perceptual signal
representing sensed acoustical pressure with the intended acoustical
pressure, and use the error signal to drive the loudspeaker. This will
force the total acoustic pressure measured by the microphone inside the
earpiece to be proportional at all times to the driving electrical audio
signal. Here’s my guess at the design:

7297ff.jpg

I’ve arranged the diagram somewhat like our standard PCT diagram. As you
can see, this design does not use a separate microphone to pick up
external sounds; they would have the wrong acoustics anyway, compared
with the sounds that the small speaker inside the earphone produces. The
system above simply controls the total sound pressure at the microphone
(and entering the ear canal), wave by wave, to make the microphone signal
equal to the electrical audio signal. Doing it this way assures that the
sound from the earphone transducer will deviate from proportionality to
the audio signal just enough to cancel the noise component. This will
also assure that any nonlinearities in the power amplifier or the
earphone transducer will be corrected, since the determining factor is
the quality of the microphone, which would probably be a miniature
condenser mike.

This is the arrangement I thought of back in the 60s – perhaps even in
the 50s – to linearize the acoustical output of loudspeakers.

I forgot to put in the signs: the left-hand terminal of the power op-amp
is the negative input, the audio signal input goes into the positive
input. It’s a combination comparator and output function. The feedback
function is the earphone transducer plus acoustics in the earphone
chamber, the disturbance is the noise.
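A toy discrete-time sketch of that loop (the gain, noise value, and sample-by-sample update are assumptions for illustration; the real loop is analog and much faster):

```python
# Toy sketch of the diagrammed loop: the comparator drives the speaker so
# that sensed pressure tracks the audio (reference) signal, even though the
# loop never measures the noise separately. Gain and noise are made up.

def closed_loop_step(reference, noise, output, gain=0.5):
    sensed = output + noise            # microphone: speaker output + leaked noise
    error = reference - sensed         # comparator (the op-amp's job)
    return output + gain * error       # error accumulates into the speaker drive

reference = 1.0                        # intended acoustic pressure at this instant
noise = 0.4                            # leaked external noise, unknown to the loop
output = 0.0
for _ in range(200):                   # iterate to steady state
    output = closed_loop_step(reference, noise, output)

sensed = output + noise                # pressure at the microphone
# output settles at reference - noise, so the speaker deviates from the
# audio signal by just enough to cancel the noise, as described above
```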

I’m reasonably sure this is how Bose does it.

Your grandfather’s idea might have worked if there had been wide-band
amplifiers capable of amplifying signals of 20 kilohertz and higher. But
the cost of a second receiver would have made the design too expensive,
and differences between the feedforward and feedback systems would have
limited performance. Nowadays, when we can amplify gigahertz signals, my
approach above might work on the incoming radio-frequency
signals.

Best,

Bill P.

[From Carter Cloyd (2010.01.02)]

Happy New Bill


[From Bill Powers (2010.01.02.1755 MST)]

Carter Cloyd (2010.01.02 ...)

Happy New Bill

Well, Happy New Carter, too, my friend. Looking forward to seeing you in the spring.

Best,

Bill P.

[From Bruce Abbott (2010.01.02.2020 EST)]

Bill Powers (01.02.1440 MST) –

Bruce Abbott (2010.01.02.1400 EST)

BA: During the 1930s my
grandfather experimented with a system to reduce noise in the radio receivers
of the day. His idea was to use two receivers, one tuned to the frequency of
the signal and the other to another frequency, just off the main frequency,
which presumably would receive much of the same relatively broad-band noise.
The phase of the audio signal from the off-frequency channel was inverted and
summed with the audio signal of the main frequency.

BP: The key to all these ideas is to think
of noise not as just some amorphous mess, but as a disturbance which has only
one amplitude at a given instant, just like the disturbances we use in the
tracking programs, only 10,000 or so times as fast.

Yes, of course. Why didn’t I see that?

BP: A microphone is basically a pressure
sensor. A microphone inside the headset measures the acoustical pressure very
rapidly, equivalent to perhaps 20K to 40K times per second, and compares the
signal representing that pressure with the signal that is driving the tiny
loudspeaker inside the earphones. The object is to compare the perceptual
signal representing sensed acoustical pressure with the intended acoustical
pressure, and use the error signal to drive the loudspeaker. This will force
the total acoustic pressure measured by the microphone inside the earpiece to
be proportional at all times to the driving electrical audio signal. Here’s my
guess at the design:
7297ff.jpg
BP: I’ve arranged the diagram somewhat like
our standard PCT diagram. As you can see, this design does not use a separate
microphone to pick up external sounds; they would have the wrong acoustics
anyway, compared with the sounds that the small speaker inside the earphone
produces. The system above simply controls the total sound pressure at the
microphone (and entering the ear canal), wave by wave, to make the microphone
signal equal to the electrical audio signal. Doing it this way assures that the
sound from the earphone transducer will deviate from proportionality to the
audio signal just enough to cancel the noise component. This will also assure
that any nonlinearities in the power amplifier or the earphone transducer will
be corrected, since the determining factor is the quality of the microphone,
which would probably be a miniature condenser mike.

BP: This is the arrangement I thought of
back in the 60s – perhaps even in the 50s – to linearize the acoustical
output of loudspeakers.

Did you try to patent it?

BP: I forgot to put in the signs: the
left-hand terminal of the power op-amp is the negative input, the audio signal
input goes into the positive input. It’s a combination comparator and output
function. The feedback function is the earphone transducer plus acoustics in
the earphone chamber, the disturbance is the noise.

BP: I’m reasonably sure this is how Bose
does it.

Yes, it looks right. I found a diagram on the “How Stuff
Works” website that shows the microphone on the inside of the cup,
although the accompanying description doesn’t accurately describe how it
works. (See http://electronics.howstuffworks.com/gadgets/audio-music/noise-canceling-headphone3.htm).

BP: Your grandfather’s idea might have
worked if there had been wide-band amplifiers capable of amplifying signals of
20 kilohertz and higher.

I’ll tell him that when I see him (;-> Evidently it
didn’t work to his satisfaction as nothing further came of it, so far as I
am aware.

BP: But the cost of a second
receiver would have made the design too expensive, and differences between the
feedforward and feedback systems would have limited performance. Nowadays, when
we can amplify gigahertz signals, my approach above might work on the incoming
radio-frequency signals.

In the noise-cancelling headphones, you have a relatively “clean”
audio source to compare against the speaker output; in the noisy radio signal
case, how would you get the equivalent?

Bruce

[From Bill Powers (2010.01.02.1840 MST)]

Bruce Abbott (2010.01.02.2020 EST) –

In the noise-cancelling
headphones, you have a relatively “clean” audio source to compare against
the speaker output; in the noisy radio signal case, how would you get the
equivalent?

Well, after the miracle occurs … uh …
The biggest problem with your grandfather’s way is getting a measure of
the noise without the signal, such that it’s the same noise
amplitude instant by instant. In the 1930s, most of the noise in the
signal came from the vacuum tubes in the first stage of the receiver, so
the two receivers couldn’t be equal. I suspect that this is the problem
that stopped him. Even just a separation of a few feet could alter the
radio-frequency noise, too.

All feedforward methods have at least one problem like this. Occasionally
the environment is calm enough and conditions are repeatable enough that
this solution can be made to work pretty well (as in the case of the
compensated pendulum). All similar methods rely on getting direct
accurate measures of the disturbance, computing the needed effects on the
controlled variable accurately, and converting the computation to actual
physical effects with enough accuracy. Negative feedback control bypasses
most of these needs for precision, since it is the accuracy of the sensor
(and cleverness in picking the right thing to sense) that ultimately
determines the accuracy of control. Cesium clocks work by comparing the
frequency of microwave absorption by cesium with the frequency of a
crystal oscillator, and using the error to adjust the crystal frequency.
The comparison can detect extremely small deviations in frequency, so the
cesium frequency is reproduced within something like 10^-9 of the
mean frequency or better.
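An illustrative sketch of that locking scheme (not real clock hardware; the loop gain and the 2 ppm initial offset are made up):

```python
# Toy sketch of the cesium-clock scheme described above: the error between
# the crystal and the atomic reference steers the crystal frequency.

CESIUM_HZ = 9_192_631_770.0            # the cesium hyperfine transition frequency

def lock_step(crystal_hz, gain=0.1):
    error = CESIUM_HZ - crystal_hz     # comparison against microwave absorption
    return crystal_hz + gain * error   # adjust the crystal toward the reference

crystal = CESIUM_HZ * (1 + 2e-6)       # crystal oscillator starts 2 ppm high
for _ in range(500):
    crystal = lock_step(crystal)

fractional_error = abs(crystal - CESIUM_HZ) / CESIUM_HZ
# the residual shrinks by a factor of 0.9 per step, so the crystal ends up
# reproducing the reference to far better than the 10^-9 quoted above
```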

I once converted a cheap neon laser to a mode-stabilized one (where the
directions of magnetic and electric fields always remained in the same
fixed relationship) by detecting (with a phototransistor) an
audio-frequency beat note between the two modes of lasing, and controlled
for keeping that frequency at zero. The “actuator” consisted of
six 100-ohm, one-watt resistors mounted near one end of the glass laser
tube, and the error signal was converted into a change of length in the
laser tube by running current, 20 or 30 milliamps max, through the
resistors to heat part of the glass tube. Talk about crude output
functions; that one cost about 60 cents. It worked fine for 10 or 15
years, and for all I know is still working 40 years later.

Feedback control is the only way to go, man, the only way to go. Mother
Nature figured that out a long time before we did.

The transition from open-loop compensation to closed-loop control was a
major step in engineering during the 1930s and 1940s, and for quite a
while after that. Psychology missed the bus.

Best,

Bill P.

[From Fred Nickols (2010.01.03.0742 MST)]

···


[From Bill Powers (2010.01.02.1840 MST)]

The transition from open-loop compensation to closed-loop control was
a major step in engineering during the 1930s and 1940s, and for quite
a while after that. Psychology missed the bus.

I really like the contrast of compensation vs. control in the sentence above. If I understand it correctly, an open loop system, especially one involving feedforward, compensates for a disturbance to a targeted variable by assessing the disturbance and then calculating and applying a correction, whereas a closed loop system simply varies its actions so as to maintain correspondence between the perceived and reference states of a targeted variable. Assessing the disturbance itself is unnecessary in a closed loop control system, but it is essential to the functioning of an open loop compensatory system. Do I have that right?

If so, it also seems to me that any adjustments made by a closed loop control system are much closer to instantaneous because there is no requirement to identify or assess the disturbance itself or to calculate and apply a correction.

All that said, I can see how some who favor open loop compensation might argue that the speed of modern computers, etc., voids any objections in non-human systems (the same way digital computers replaced analog computers in fire control systems) but, in human systems, that argument breaks down. People might be capable of computations but they are not computers, and they are most certainly not digital computers.

It does indeed seem that psychology missed the bus on this score. I would guess that sociology did too.

--
Regards,

Fred Nickols
Managing Partner
Distance Consulting, LLC
nickols@att.net
www.nickols.us

"Assistance at A Distance"

[From Bill Powers (2010.01.03.-0925 MST)]

Fred Nickols (2010.01.03.0742 MST) --

FN: I really like the contrast of compensation vs. control in the sentence above. If I understand it correctly, an open loop system, especially one involving feedforward, compensates for a disturbance to a targeted variable by assessing the disturbance and then calculating and applying a correction, whereas a closed loop system simply varies its actions so as to maintain correspondence between the perceived and reference states of a targeted variable. Assessing the disturbance itself is unnecessary in a closed loop control system, but it is essential to the functioning of an open loop compensatory system. Do I have that right?

BP: Exactly. One of the great strengths of closed-loop control is that it doesn't rely on knowing even the nature or location of the disturbing variable or any of the physical laws relating it to the controlled variable. It doesn't have to calculate the physical effects of the disturbing variable on the controlled variable -- or then to compute the exact action that will have an exactly opposing effect, and carry it out without any feedback to tell it if it is succeeding.

I hope somebody is digging into the history of feedforward. I expect to find that it's a product of ignorance about how control systems really work, coupled with someone's experience that he could usually work out how things work without having to crack a textbook. That's a kind of hubris with which I am embarrassingly familiar, though with decreasing frequency in my later years.

FN: If so, it also seems to me that any adjustments made by a closed loop control system are much closer to instantaneous because there is no requirement to identify or assess the disturbance itself or to calculate and apply a correction.

Yes, that is part of what the proponents of open-loop compensation overlook when they claim that compensation is faster and more accurate than feedback control.

All that said, I can see how some who favor open loop compensation might argue that the speed of modern computers, etc., voids any objections in non-human systems (the same way digital computers replaced analog computers in fire control systems) but, in human systems, that argument breaks down. People might be capable of computations but they are not computers, and they are most certainly not digital computers.

Even a computer can't use information that is unavailable. Most of the disturbances we experience are first known to us as an unexpected change in a controlled variable. Either that, or the cause of the change becomes known to us only after we have already counteracted its effects. I wonder if all this isn't known already simply because people don't realize that they're controlling all the time, not just once in a while. Control is so quick, reliable, continuous, and automatic that we just say we're "doing" things; we name our behaviors in terms of their controlled outcomes and don't even notice the continuous accurate adjustments of our actions that are producing those outcomes and making them repeat even when the action has to change to do so.

The control systems that stabilize the guns on a battleship are run by computers, but those computers don't work by analyzing the ocean and computing exactly when the next wave will tilt the ship, and exactly how much effect there will be on the orientation of the gun's barrel, and how many degrees in azimuth and altitude that the gun's mounting has to swivel to keep the gun from moving. Gun control systems just aren't designed that way, and for good reason.

Best,

Bill P.

[From Avery Andrews (2010.01.04 EDT, Australia)]

One way to look at many instances of apparent feedforward is that they
constitute keeping at a low level a perception that some future
disturbance will be severe. E.g. you're sitting on a bodyboard in the
surf and see a big wave approaching, but don't want to ride it in, and so
turn to face it, thereby reducing the likelihood that you'll have a
problem controlling for an upright position on the board.

Or you're at a picnic, and see a wasp crawl into somebody's beer bottle
while they're not looking at it, and so tell them that a wasp just
crawled into their beer, thereby hopefully avoiding major disturbances
that would follow if they drank the beer.

That this kind of thing is rarely if ever completely relied upon to get
a job done doesn't mean that nothing like it ever happens.


[From Richard Kennaway (2010.01.04.0928 GMT)]

Happy New Year!

[From Bill Powers (2010.01.03.-0925 MST)]
I hope somebody is digging into the history of feedforward. I expect
to find that it's a product of ignorance about how control systems
really work, coupled with someone's experience that he could usually
work out how things work without having to crack a textbook. That's a
kind of hubris with which I am embarrassingly familiar, though with
decreasing frequency in my later years.

Did it originate in amplifier design? I was googling for feedforward and found this: "Theory of Feed-Forward Audio Amplifiers - A Survey", describing
a design from 1921 by Harold Black, whose name I'm sure you'll know. (For those who don't know: the inventor of the negative feedback amplifier. Amplifiers and control systems have a lot in common.) Note that (per the Wikipedia article on Harold Stephen Black) this was six years before he came up with the negative feedback amplifier.

It's interesting that the article refers to a couple of issues of "Wireless World" from the 1970s, because I used to read that magazine back then. (It was an electronics magazine aimed at both hobbyists and professionals.) I remember a series of articles that appeared there analysing in detail the design of a particular commercially manufactured audio amplifier, about which there was a controversy: was it a feedback or a feedforward design? The authors mathematically modelled the whole thing, and pointed out that if you arranged the equations like this, it looked like feedforward, and if you arranged them like that, it looked like feedback. Mathematically equivalent descriptions.

Back in the present, I've been reading "Internal models in the cerebellum" (http://learning.eng.cam.ac.uk/pub/Public/Wolpert/Publications/WolMiaKaw98.pdf) which may raise Bill's blood pressure to dangerous levels. Quote:

"Fast and coordinated arm movements cannot be executed under pure feedback control because biological feedback loops are both too slow and have small gains."

It proposes a feedforward scheme using an inverse model to compute the required muscular outputs to execute a desired trajectory, with the parameters of the inverse model tuned by feedback.
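A toy version of that scheme (a one-parameter "arm" and made-up numbers, assumed purely for illustration, not the paper's actual model): the inverse model computes the command for a desired output, and feedback slowly tunes the model's single parameter.

```python
# Toy sketch: feedforward through an inverse model, with the model's
# parameter tuned by feedback, as in the scheme the paper proposes.

PLANT_GAIN = 2.5                       # true "arm" dynamics, unknown to the model

def plant(command):
    return PLANT_GAIN * command        # the actual muscular output

k_est = 1.0                            # the inverse model's initial, wrong estimate
desired = 1.0
for _ in range(300):
    command = desired / k_est          # feedforward: inverse model picks the command
    actual = plant(command)
    k_est += 0.1 * (actual - desired)  # feedback tunes the model parameter
# after tuning, the feedforward command alone produces the desired output
```

Note that the feedback path never disappears: without it, any drift in the plant would leave the inverse model permanently wrong.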

···

--
Richard Kennaway, jrk@cmp.uea.ac.uk, http://www.cmp.uea.ac.uk/~jrk/
School of Computing Sciences,
University of East Anglia, Norwich NR4 7TJ, U.K.

[From Richard Kennaway (2010.01.04.0949 GMT)]

Here's another historical account:

http://www.willamette.edu/~fthompso/MgmtCon/Control_Systems.html

"Control systems are intimately related to the concept of automation (q.v.), but the two fundamental types of control systems, feedforward and feedback, have classic ancestry. The loom invented by Joseph Jacquard of France in 1801 is an early example of feedforward; a set of punched cards programmed the patterns woven by the loom; no information from the process was used to correct the machine's operation. Similar feedforward control was incorporated in a number of machine tools invented in the 19th century, in which a cutting tool followed the shape of a model.

"Feedback control, in which information from the process is used to correct a machine's operation, has an even older history. Roman engineers maintained water levels for their aqueduct system by means of floating valves that opened and closed at appropriate levels. The Dutch windmill of the 17th century was kept facing the wind by the action of an auxiliary vane that moved the entire upper part of the mill. The most famous example from the Industrial Revolution is James Watt's flyball governor of 1769, a device that regulated steam flow to a steam engine to maintain constant engine speed despite a changing load."

That paragraph on feedforward extends the concept rather widely -- one might define feedforward as simply the absence of a control system.

···

--
Richard Kennaway, jrk@cmp.uea.ac.uk, http://www.cmp.uea.ac.uk/~jrk/
School of Computing Sciences,
University of East Anglia, Norwich NR4 7TJ, U.K.

[From Bruce Gregory (2010.01.04 12:02 GMT)]

[From Richard Kennaway (2010.01.04.0928 GMT)]

Back in the present, I've been reading "Internal models in the cerebellum" (http://learning.eng.cam.ac.uk/pub/Public/Wolpert/Publications/WolMiaKaw98.pdf) which may raise Bill's blood pressure to dangerous levels. Quote:

"Fast and coordinated arm movements cannot be executed under pure feedback control because biological feedback loops are both too slow and have small gains."

It proposes a feedforward scheme using an inverse model to compute the required muscular outputs to execute a desired trajectory, with the parameters of the inverse model tuned by feedback.

I have always wondered how a system "using an inverse model to compute the required muscular outputs" might be expected to evolve.

[From Richard Kennaway (2010.01.04.1224 GMT)]

[From Bruce Gregory (2010.01.04 12:02 GMT)]
I have always wondered how a system "using an inverse model to compute the required muscular outputs" might be expected to evolve.

I think they suggest that the feedback system came first, and was then improved by adding the feedforward system.

Doesn't the vestibulo-ocular reflex work like that, though? I seem to recall Bill himself describing it as a feedforward system whose accuracy is maintained by continually being tuned over a timescale of tens of minutes.

···

--
Richard Kennaway, jrk@cmp.uea.ac.uk, http://www.cmp.uea.ac.uk/~jrk/
School of Computing Sciences,
University of East Anglia, Norwich NR4 7TJ, U.K.

[From Bruce Gregory (2010.01.04.1256 GMT)]

[From Richard Kennaway (2010.01.04.1224 GMT)]

[From Bruce Gregory (2010.01.04 12:02 GMT)]
I have always wondered how a system "using an inverse model to compute the required muscular outputs" might be expected to evolve.

I think they suggest that the feedback system came first, and was then improved by adding the feedforward system.

Doesn't the vestibulo-ocular reflex work like that, though? I seem to recall Bill himself describing it as a feedforward system whose accuracy is maintained by continually being tuned over a timescale of tens of minutes.

At the risk of sounding like a proponent of Intelligent Design, I have never been able to figure out how an inverse model that computes muscular outputs could have evolved. It reminds me of the metaphor in which a dog carries out a series of complex Newtonian calculations to decide how to catch a frisbee. I'm very fond of my dogs, but I have seen no signs that they are latent physicists.

[From Richard Kennaway (2010.01.04.1345 GMT)]

[From Bruce Gregory (2010.01.04.1256 GMT)]
At the risk of sounding like a proponent of Intelligent Design, I have never been able to figure out how an inverse model that computes muscular outputs could have evolved. It reminds me of the metaphor in which a dog carries out a series of complex Newtonian calculations to decide how to catch a frisbee. I'm very fond of my dogs, but I have seen no signs that they are latent physicists.

Ah, I was thinking about the feedforward aspect instead of the internal model. The authors of that paper would probably say that when the feedforward system has been tuned so as to successfully perform the task, then it must constitute an inverse model, and that conversely, the system is best understood as adapting to maintain an internal model.

I think it's interesting to compare this with the adaptive control systems in chapters 7 and 8 of LCS III. These are purely feedback control systems, no feedforward, but they also undergo a process of adaptation, using the E. coli method to converge to a set of parameters that produce successful control. The question is, are those parameters a model of the task?
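The E. coli method mentioned above can be sketched in a few lines. This is my own minimal reconstruction, assuming the form usually described: keep moving the parameters in the same random direction while error decreases, and "tumble" to a new random direction when it increases; the error function and target gains here are hypothetical.

```python
# Minimal sketch of E. coli-style parameter adaptation (my reconstruction,
# not code from LCS III): persist in an improving random direction, tumble
# to a fresh random direction when the error grows.
import random

random.seed(1)  # reproducible run

def ecoli_tune(error_fn, params, step=0.05, iters=2000):
    direction = [random.uniform(-1, 1) for _ in params]
    best = error_fn(params)
    for _ in range(iters):
        trial = [p + step * d for p, d in zip(params, direction)]
        e = error_fn(trial)
        if e < best:
            params, best = trial, e                              # keep going
        else:
            direction = [random.uniform(-1, 1) for _ in params]  # tumble
    return params, best

# Hypothetical task: squared error minimized at gains (3, -1).  Note that
# nothing in the procedure builds any representation of the task itself.
error = lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2
gains, final_error = ecoli_tune(error, [0.0, 0.0])
```

The procedure converges without ever forming anything one could call a model of the error surface, which is the point at issue.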

Not in any sensible use of the word "model". In general there are many sets of parameters that will work. Putting the mathematics very briefly, there is a matrix A that describes the task, and a matrix B of control parameters, and the adaptation process has to find a B that makes the system control well. If the only such B was the inverse of A (or close to it), then it would be reasonable to say that the system had learned an inverse model, but in fact any B such that AB has a complete set of eigenvalues with positive real part will do. This is too large a cloud of possibilities to call B an inverse model of A. It would be like saying that the solution to a problem is a model of the problem.
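The non-uniqueness claim is easy to check numerically. In this sketch (my own, with a made-up 2x2 task matrix A and the stated criterion that every eigenvalue of AB have positive real part), the inverse of A passes the test, but so does a matrix nowhere near the inverse:

```python
# Illustration that inv(A) is only one of many B's satisfying the stated
# criterion.  A is a hypothetical "task" matrix, not from any real system.
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 1.0]])

def controls_well(B):
    """True if all eigenvalues of AB have positive real part."""
    return bool(np.all(np.linalg.eigvals(A @ B).real > 0))

B_inverse = np.linalg.inv(A)        # the inverse model: AB = I
B_other = np.array([[1.0, 0.0],
                    [0.3, 2.0]])    # a quite different matrix

print(controls_well(B_inverse))     # True
print(controls_well(B_other))       # True, though B_other is far from inv(A)
```

Since the set of workable B's is an open region rather than a single point, calling any particular B an "inverse model" of A seems unwarranted.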

BTW, I've found that when demonstrating even a purely non-adaptive control system, such as my walking robot simulation, the moment I show people the structure of the whole thing and point out all the stuff that isn't there -- learning, modelling, and so on -- people start redefining the word "model" to make it have a "model", until "is a model of" means "has some arbitrary causal connection with". But that is not what the word means in any other context.

···

--
Richard Kennaway, jrk@cmp.uea.ac.uk, http://www.cmp.uea.ac.uk/~jrk/
School of Computing Sciences,
University of East Anglia, Norwich NR4 7TJ, U.K.

[From Bruce Gregory (2010.01.04.1408 GMT)]

[From Richard Kennaway (2010.01.04.1345 GMT)]

Not in any sensible use of the word “model”. In general there are many sets of parameters that will work. Putting the mathematics very briefly, there is a matrix A that describes the task, and a matrix B of control parameters, and the adaptation process has to find a B that makes the system control well. If the only such B was the inverse of A (or close to it), then it would be reasonable to say that the system had learned an inverse model, but in fact any B such that AB has a complete set of eigenvalues with positive real part will do. This is too large a cloud of possibilities to call B an inverse model of A. It would be like saying that the solution to a problem is a model of the problem.

BTW, I’ve found that when demonstrating even a purely non-adaptive control system, such as my walking robot simulation, the moment I show people the structure of the whole thing and point out all the stuff that isn’t there – learning, modelling, and so on – people start redefining the word “model” to make it have a “model”, until “is a model of” means “has some arbitrary causal connection with”. But that is not what the word means in any other context.

I find science education filled with faulty uses of the term model. The idea that students have a faulty model of some physical phenomenon suggests that they somehow look inside their heads, manipulate a representation of the world, and then report on the result of their manipulations. It has always seemed much simpler to me to think that students are arguing by analogy (“the earth must be closer to the sun in summer because when you are closer to a source of heat you feel warmer”). Models are pretty sophisticated ways to look at the world; I doubt that they come naturally to anyone.

[From Bruce Gregory (2010.01.04.1516 GMT)]

One more note on models. I suspect it would be easier to teach science if students really did rely on mental models. One would simply point out the problems with the model they were using and describe to them a better model. Correcting a model seems likely to be easier than getting someone to use models in the first place.

[From Richard Kennaway (2010.01.04.1345 GMT)]

[From Bruce Gregory (2010.01.04.1408 GMT)]
I find science education filled with faulty uses of the term model. The idea that students have a faulty model of some physical phenomenon suggests that they somehow look inside their heads, manipulate a representation of the world, and then report on the result of their manipulations. It has always seemed much simpler to me to think that students are arguing by analogy ("the earth must be closer to the sun in summer because when you are closer to a source of heat you feel warmer"). Models are pretty sophisticated ways to look at the world; I doubt that they come naturally to anyone.

What's the difference between an analogy and a model? A mathematical model consists of a set of mathematical variables that correspond to some physical quantities, and mathematical relationships between them that match the way the physical quantities behave. Someone imagining a long elliptical orbit for the Earth with summer at the closest point to the Sun seems to me to be applying a model, even if they aren't doing mathematical calculations. Just a wrong model. ("So why is it winter in Australia at the same time as summer in the US?")
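The two "models" of the seasons can be made concrete in a toy calculation. This is my own illustration (the orbital numbers are rough approximations): a distance-only model necessarily predicts the same season in both hemispheres, while a tilt model predicts opposite ones, which is exactly what the Australia question probes.

```python
# Toy comparison of two models of the seasons (my sketch, approximate numbers).
import math

def distance_model(day, hemisphere):
    # Insolation depends only on Earth-Sun distance (perihelion ~Jan 3),
    # so the prediction is identical for both hemispheres.
    d = 1.0 - 0.0167 * math.cos(2 * math.pi * day / 365.25)  # distance in AU
    return 1.0 / d**2

def tilt_model(day, hemisphere):
    # Insolation depends on the axial tilt toward the Sun, whose sign is
    # opposite in the two hemispheres: opposite seasons.
    tilt = math.radians(23.4) * math.cos(2 * math.pi * (day - 172) / 365.25)
    sign = 1 if hemisphere == "N" else -1
    return 1.0 + sign * math.sin(tilt)

june, december = 172, 355
# distance_model: same value for "N" and "S" on any day (wrong prediction).
# tilt_model: northern warmth in June, southern warmth in December.
```

Both are models in the mathematical sense sketched above: variables standing for physical quantities, with relationships between them; only one matches the observations.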

···

--
Richard Kennaway, jrk@cmp.uea.ac.uk, http://www.cmp.uea.ac.uk/~jrk/
School of Computing Sciences,
University of East Anglia, Norwich NR4 7TJ, U.K.

[From Fred Nickols (2010.01.04.0828 MST)]

Hmm. I don't see either of those examples as "feedforward." It seems to me that the punched cards and the model served as reference conditions or standards for the loom and the cutting tool respectively. The loom and the cutting tool simply did what they were told to do, so to speak. Operating conditions were no doubt standardized, and so (a) no variance from the "program" occurred and (b) no disturbances existed, hence no need for feedback from the process. A "program" would work. If that's "feedforward," then it seems to me that "feedforward" is basically a matter of issuing commands to a compliant system that can do nothing except follow orders.

···

--
Regards,

Fred Nickols
Managing Partner
Distance Consulting, LLC
nickols@att.net
www.nickols.us

"Assistance at A Distance"
  
-------------- Original message ----------------------
From: Richard Kennaway <jrk@CMP.UEA.AC.UK>

[From Richard Kennaway (2010.01.04.0949 GMT)]

Here's another historical account:

http://www.willamette.edu/~fthompso/MgmtCon/Control_Systems.html

"Control systems are intimately related to the concept of automation
(q.v.), but the two fundamental types of control systems, feedforward
and feedback, have classic ancestry. The loom invented by Joseph
Jacquard of France in 1801 is an early example of feedforward; a set
of punched cards programmed the patterns woven by the loom; no
information from the process was used to correct the machine's
operation. Similar feedforward control was incorporated in a number
of machine tools invented in the 19th century, in which a cutting
tool followed the shape of a model.

"Feedback control, in which information from the process is used to
correct a machine's operation, has an even older history. Roman
engineers maintained water levels for their aqueduct system by means
of floating valves that opened and closed at appropriate levels. The
Dutch windmill of the 17th century was kept facing the wind by the
action of an auxiliary vane that moved the entire upper part of the
mill. The most famous example from the Industrial Revolution is James
Watt's flyball governor of 1769, a device that regulated steam flow
to a steam engine to maintain constant engine speed despite a
changing load."

That paragraph on feedforward extends the concept rather widely --
one might define feedforward as simply the absence of a control
system.

--
Richard Kennaway, jrk@cmp.uea.ac.uk, http://www.cmp.uea.ac.uk/~jrk/
School of Computing Sciences,
University of East Anglia, Norwich NR4 7TJ, U.K.

[From Bill Powers (2010.01.04.0830 MST)]

Avery Andrews (2010.01.04 EDT (Australia)) --

Great to hear from you, Avery. I'll try you on Skype one day pretty soon.

AA: One way to look at many instances of apparent feedforward is that they
constitute keeping at a low level a perception that some future
disturbance will be severe. E.g. you're sitting on a bodyboard in the
surf and see a big wave approaching, but don't [want] to ride it in, and so
turn to face it, thereby reducing the likelihood that you'll have
problem controlling for an upright position on the board.

BP: I think this reminds us of some sort of boundary that separates basic design of a system from simply the way a particular system is being used. In PCT we have a lot of levels, at least proposed levels, above configuration, event, and relationship control. One of them is a level at which we use learned rules to manipulate statements -- I refer to it loosely as the "logic level." At the logic level we would expect all sorts of reasoning processes to happen, the sort we refer to as "cognitive." So it doesn't surprise me that we might solve some problems by reasoning, calculating, or in general thinking about them in the imagination mode. If I see a large wave coming toward me, I will prepare myself to meet it, or would if I had any idea of how to do that, which I don't. By thinking, I am still trying to control what is happening to me, and over the long term the feedback is still negative, not positive or missing. If I make a change in the way I meet the wave this time and that results in an error being less than I was afraid it might be, I will probably go on making more of that change before meeting new waves until it stops making things better. Then I will try something else, and keep trying until the error is as small as possible, or too small to bother with any more. Not a sign of anything but negative feedback there, as far as I can see. And the result is a control system that sometimes controls well and sometimes fails, but on the average works as well as possible.

What makes some examples of feedback control look like feedforward is that they take place over many samples of a given behavior. When you get too close to them, you lose sight of the fact that there is a slow adjustment going on, trial after trial, and that without this adjustment, which depends on negative feedback, there's no chance at all of success. And it's only the fact that the lower-level "behaviors" -- actually, perceived outcomes -- are under strong negative feedback control that makes it possible to repeat "doing the same thing" with or without adjustments.

When professionals bowl in a tournament, they throw a few practice balls to determine the state of the lanes they will be using. The pattern of oiling in particular varies from one game to another, so the ball's spin has more or less effect at different points along the path. Even during play, players note that the ball missed the aiming point by a little or a lot, and they make small adjustments in the delivery until they find a pattern that seems to minimize errors. The delivery is under such good control that if the player doesn't vary the reference pattern, the same pattern will occur again and again so its consequences at another level can be correctly perceived.

Talking that way makes sense only if we think of the controlled variable as something that changes slowly over successive releases of the ball, not something that is controlled or not controlled on each trial. The higher-level systems work over longer time spans than the lower ones do. They use average measures of variables, not values at a given instant. Any system above the event level sees control as a pattern of repetitions, as in the way a basketball player controls "dribbling" of the ball as he runs downcourt or a tennis player adjusts his "form" each time he prepares for a backhand return.

HPCT gives us concepts and a vocabulary for dealing with observed behavior that are missing in other ways of explaining what we observe. The simplistic idea of feedforward is replaced by a much more detailed and elegant picture of what is going on -- as well as an explanation that really does explain rather than just describing the same phenomenon in different words. The idea of feedforward doesn't really tell us much, does it?

Best,

Bill P.