Ashby and Control Theory

[ From Bill Powers (2009.11.30.1520 MDT)]

I have claimed now and then that W. Ross Ashby, the prominent cyberneticist, didn't know what control is. Today I was looking in Design for a Brain, and accidentally opened the book to page 155, where I found this:

"14/3. The factual content of the concept of one variable 'controlling' another is now clear. A 'controls' B if B's behavior depends on A, while A's does not depend on B."

Compare that with the definition I offer:

A controls B if, for every disturbance D acting on B, the effect of A on B is opposite to and substantially equal to the effect of D on B.

Note that under Ashby's definition, it's not necessary for A to be influenced by any knowledge of B; it's necessary only that the state of B depend on the state of A, objectively. My definition, however, will work only if A can know the state of B at all times, and can compare it with some reference state. My definition works only for negative feedback control systems, while Ashby's also works for a paperweight or a stick of dynamite.
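A minimal simulation sketch of the difference (an illustration only; the gain, time step, and disturbance values are invented, not from the original post) shows what the negative-feedback definition demands: A's output must settle at very nearly the equal-and-opposite value of the disturbance, and it can do so only because A senses B:

    # A controls B: B is influenced additively by A's output and by a
    # disturbance D; A senses B and integrates the error from its reference.
    gain, dt, r = 50.0, 0.01, 0.0
    a_out, b = 0.0, 0.0
    for step in range(5000):
        d = 10.0 if step > 2500 else -10.0   # step disturbance D
        b = a_out + d                        # B depends on both A and D
        a_out += gain * (r - b) * dt         # A opposes deviations of B
    print(round(a_out, 2), round(b, 4))      # a_out ~ -10.0, b ~ 0.0

Under Ashby's definition the sensing line could be deleted and A would still "control" B; under the negative-feedback definition, that line is what does the work.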

Best,

Bill P.

[From Bruce Abbott (2009.12.01.0920)]

Bill Powers (2009.11.30.1520 MDT) --

BP: I have claimed now and then that W. Ross Ashby, the prominent
cyberneticist, didn't know what control is. Today I was looking in
Design for a Brain, and accidentally opened the book to page 155,
where I found this:

BP: "14/3. The factual content of the concept of one variable
'controlling' another is now clear. A 'controls' B if B's behavior
depends on A, while A's does not depend on B."

BP: Compare that with the definition I offer:

BP: A controls B if, for every disturbance D acting on B, the effect of A
on B is opposite to and substantially equal to the effect of D on B.

BP: Note that under Ashby's definition, it's not necessary for A to be
influenced by any knowledge of B; it's necessary only that the state
of B depend on the state of A, objectively. My definition, however,
will work only if A can know the state of B at all times, and can
compare it with some reference state. My definition works only for
negative feedback control systems, while Ashby's also works for a
paperweight or a stick of dynamite.

The Ashby definition of "control" given above conforms to the definition
typically given in science and everyday usage. Consider an example of the
latter: the setting of a rheostat is said to "control" the speed of a motor,
although all that it actually does is alter the flow of electrical current to
the motor. There is no feedback: place the motor under load and it will slow
down. Yet the rheostat provides the MEANS of control (in the negative
feedback sense) when coupled with a human operator: if the motor slows under
load the operator can turn the knob to a higher setting and compensate.

What you define as "control" I believe Ashby would define as "regulation."

I have the second edition of Design for a Brain, the second half of which
was almost entirely rewritten. The section in which your quote appears no
longer exists and thus far I have not been able to find the quote. But what
I have just (re)read in this edition leaves no doubt in my mind that Ashby
understood how negative feedback systems achieve stability against
disturbances.

Bruce A.

[From Bill Powers (2009.12.01.0835 MDT)]

Bruce Abbott (2009.12.01.0920) --

The Ashby definition of "control" given above conforms to the definition
typically given in science and everyday usage. Consider an example of the
latter: the setting of a rheostat is said to "control" the speed of a motor,
although all that it actually does is alter the flow of electrical current to
the motor. There is no feedback: place the motor under load and it will slow
down. Yet the rheostat provides the MEANS of control (in the negative
feedback sense) when coupled with a human operator: if the motor slows under
load the operator can turn the knob to a higher setting and compensate.

What you define as "control" I believe Ashby would define as "regulation."

I have the second edition of Design for a Brain, the second half of which
was almost entirely rewritten. The section in which your quote appears no
longer exists and thus far I have not been able to find the quote. But what
I have just (re)read in this edition leaves no doubt in my mind that Ashby
understood how negative feedback systems achieve stability against
disturbances.

Earlier in the book that seemed to be the case, but not later. Did he also correct his model in Introduction to Cybernetics? In that book, he had the controller working by sensing the disturbance and computing an output that would just compensate for the effect of the disturbance -- the same model people are using today.

But thanks for the correction concerning Design for a Brain.

Best,

Bill P.

[From Dag Forssell (2009.12.01.1030 PST)]

You may find these links of interest:

http://pespmc1.vub.ac.be/ASHBBOOK.html

and

http://pespmc1.vub.ac.be/books/IntroCyb.pdf (2 MB).

(I discover that I downloaded the same file in 2004.)

The PDF file is not password protected and is fully searchable, because it is
not an image file but a fully formatted vector graphics and text file.

“negative” (referring to feedback) occurs in the middle of page 80,
on page 237, at the bottom of page 239, and in the index on page 292.

Looking for DESIGN FOR A BRAIN on the same basis, I find

http://pespmc1.vub.ac.be/CSBOOKS.html

Best, Dag


[From Rick Marken (2009.12.01.1110)]

Bill Powers (2009.12.01.0835 MDT)

Bruce Abbott (2009.12.01.0920) --

But what I have just (re)read in this edition leaves no doubt in my mind that Ashby
understood how negative feedback systems achieve stability against
disturbances.

Earlier in the book that seemed to be the case, but not later. Did he also
correct his model in Introduction to Cybernetics?...

But thanks for the correction concerning Design for a Brain.

This reminds me that I got a copy of Ashby's "Introduction to
Cybernetics" hwile I was an undergrad at UCLA, several years before I
ran into "Behavior: The control of perception" (B:CP). I had no idea
what Ashby was talking about but it looked pretty impressive. And I
certainly had no idea how cybernetic ideas applied to psychology.
B:CP, while difficult, still was a lot clearer to me, especially in
terms of how cybernetics (qua control theory) relates to psychology.
Indeed, I think what's most important about B:CP is that it clearly
showed how control theory "maps" to behavior. The title says it all:
behavior _is_ the control of perception. What this means is that
behavior is organized around the control of perceptual variables:
controlled variables. Understanding behavior means discovering the
perceptual variables that organisms control and how they control them.
The apparent causal influence of stimuli or reinforcers on behavior is
an illusion created by ignoring the fact that behavior (the actions
that seem to be caused or selected) is always aimed at keeping some
perceptual variables under control.

So whether or not Ashby correctly understood control seems to me to be
somewhat unimportant. What Ashby clearly did not understand (or if he
understood it he never clearly described it) is that behavior is the
control of perception -- and all that that implies.

Best

Rick


--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2009.12.1.1603 MDT)]
Dag Forssell (2009.12.01.1030 PST) –
Thanks for the links, Dag. I found this in the reference for Introduction to Cybernetics:

LAW OF REGULATORY MODELS

every good regulator of a system must be (contain) a model of that system. (Conant and Ashby, 1982)


This is not true.

Best,

Bill P.

[From Bruce Abbott (2009.12.01.2005 EST)]

Dag Forssell (2009.12.01.1030 PST) --

DF: You may find these links of interest: http://pespmc1.vub.ac.be/ASHBBOOK.html and http://pespmc1.vub.ac.be/books/IntroCyb.pdf (2 MB). (I discover that I downloaded the same file in 2004.) The PDF file is not password protected and is fully searchable because it is not an image file but a fully formatted vector graphics and text file.

DF: “negative” (referring to feedback) occurs in the middle of page 80, on page 237, at the bottom of page 239, and in the index on page 292.

DF: Looking for DESIGN FOR A BRAIN on the same basis, I find http://pespmc1.vub.ac.be/CSBOOKS.html

Thanks, Dag. I’ve seen the Ashby website and had already downloaded the PDF file of Introduction to Cybernetics, but perhaps others will also find them of interest. I was able to locate an online text of the first edition of Design for a Brain; it’s badly formatted and omits any figures, unfortunately. You can find it at http://www.archive.org/stream/designforbrain00ashb/designforbrain00ashb_djvu.txt

Bruce A.

[From Bruce Abbott (2009.12.01.2015 EST)]

Rick Marken (2009.12.01.1110) --

RM: This reminds me that I got a copy of Ashby's "Introduction to
Cybernetics" hwile I was an undergrad at UCLA, several years before I
ran into "Behavior: The control of perception" (B:CP). I had no idea
what Ashby was talking about but it looked pretty impressive. And I
certainly had no idea how cybernetic ideas applied to psychology.
B:CP, while difficult, still was a lot clearer to me, especially in
terms of how cybernetics (qua control theory) relates to psychology.

Interesting: I also had read Ashby's "Introduction to Cybernetics" before
spotting Bill's "Behavior: The Control of Perception" in the BGSU bookstore.
I think it helped prepare me to appreciate what B:CP had to offer.

Bruce A.

[From Bruce Abbott (2009.12.01.2100 EST)]

Bill Powers (2009.12.1.1603 MDT) --

Dag Forssell (2009.12.01.1030 PST) --

BP: Thanks for the links, Dag. I found this in the reference for Introduction to Cybernetics:

LAW OF REGULATORY MODELS

every good regulator of a system must be (contain) a model of that system. (Conant and Ashby, 1982)

BP: This is not true.

That depends on what you take to be a “good” regulator. You and I may believe that regulation without a model can be excellent; others may disagree. Here’s what Conant and Ashby (1970) have to say about the difference between error-controlled and “cause-controlled” (i.e., disturbance-controlled) regulation:

The distinction may be illustrated by a simple example. The cow is homeostatic for blood-temperature, and in its brain is an error-controlled centre that, if the blood temperature falls, increases the generation of heat in the muscles and liver -- but the blood-temperature must fall first. If, however, a sensitive temperature-recorder be inserted in the brain and then a stream of ice-cold air driven past the animal, the temperature rises without any preliminary fall. The error-controlled reflex acts, in fact, only as reserve: ordinarily, the nervous system senses, at the skin, that the cause of a fall has occurred, and reacts to regulate before the error actually occurs. Error-controlled regulation is in fact a primitive and demonstrably inferior method of regulation. It is inferior because with it the entropy of the outcomes Z cannot be reduced to zero: its success can only be partial. The regulations used by the higher organisms evolve progressively to types more effective in using information about the causes (at D) as the source and determiner of their regulatory actions. From here on, in this paper, we shall consider ‘regulation’ of this more advanced, cause-controlled type (though much of what we say will still be true of the error-controlled.)

You can find the full paper at http://pespmc1.vub.ac.be/Books/Conant_Ashby.pdf

Bruce A.

[From Bill Powers (2009.12.01.2120 MDT)]

Bruce Abbott (2009.12.01.2100 EST) --

Bill Powers (earlier) (2009.12.1.1603 MDT) --

LAW OF REGULATORY MODELS

every good regulator of a system must be (contain) a model of that system. (Conant and Ashby, 1982)

BP: This is not true.

BA: That depends on what you take to be a “good” regulator. You and I may believe that regulation without a model can be excellent; others may disagree. Here’s what Conant and Ashby (1970) have to say about the difference between error-controlled and “cause-controlled” (i.e., disturbance-controlled) regulation:

The distinction may be illustrated by a simple example. The cow is homeostatic for blood-temperature, and in its brain is an error-controlled centre that, if the blood temperature falls, increases the generation of heat in the muscles and liver -- but the blood-temperature must fall first. If, however, a sensitive temperature-recorder be inserted in the brain and then a stream of ice-cold air driven past the animal, the temperature rises without any preliminary fall. The error-controlled reflex acts, in fact, only as reserve: ordinarily, the nervous system senses, at the skin, that the cause of a fall has occurred, and reacts to regulate before the error actually occurs. Error-controlled regulation is in fact a primitive and demonstrably inferior method of regulation. It is inferior because with it the entropy of the outcomes Z cannot be reduced to zero: its success can only be partial. The regulations used by the higher organisms evolve progressively to types more effective in using information about the causes (at D) as the source and determiner of their regulatory actions. From here on, in this paper, we shall consider ‘regulation’ of this more advanced, cause-controlled type (though much of what we say will still be true of the error-controlled.)

You can find the full paper at

http://pespmc1.vub.ac.be/Books/Conant_Ashby.pdf

BP: Yes, I’ve seen this before. The authors didn’t explain why, if the
brain temperature was being controlled in this way, “the temperature
rises without any preliminary fall”. Rises from what temperature? If
the blood temperature of the brain was really being controlled, it should
have remained the same or changed by only a small amount. Why did it rise
at all if it was supposedly under control? Far more likely that the
temperature rise was due to the temperature error induced at the
periphery by the ice-cold air, which resulted in warming of the blood by
various means. It didn’t occur to the authors that closed-loop
temperature regulation can be based on skin and internal tissue
temperatures as well as blood temperature. Whatever is proposed as an
input to the open-loop system can also be proposed as the controlled
variable of a closed-loop system. If that was a real experiment with a
real cow they were describing, and not a thought-experiment, the
experiment was badly done.
But worse than that, they seem to think that the error needed to cause a
negative feedback control system to act is necessarily large; they speak
qualitatively, as if an error of one millidegree is just as bad as an
error of ten degrees. In fact, negative feedback control systems can be
designed so they keep the error at zero as closely as our best
instruments can measure the variables. An instructor I once had remarked
that control systems can control very accurately because they can use our
best measuring instruments as their sensors. The sensors, not the
actuators, determine the accuracy of a closed-loop controller. In an
open-loop system the accuracy of every stage between input and output has
an equal AND ADDITIVE effect on the overall result. The loop gain of some
control systems I have seen can be as high as one billion. It’s
ridiculous to think that any open-loop system can approach the
performance of a closed-loop system.

The authors seem to forget, or perhaps never knew, that building
open-loop controllers requires balancing macroscopic effects against
macroscopic disturbances with any failures of exact compensation showing
up full sized in the final output. The whole reason for the invention of
negative feedback control was the failure of open-loop compensators to
work with sufficient speed and accuracy. The multiplier of G/(G+1), which
expresses the perceptual signal as a fraction of the reference signal,
shows why negative feedback systems can tolerate large changes in the
properties of their output devices without losing much accuracy. If
the loop gain G is 1000, the perceptual signal (and the controlled
variable with suitable conversion factors) will be 1000/1001 or 0.999
times the reference signal. If the output function loses half its
sensitivity to the error signal, the perceptual signal and controlled
quantity become 500/501 or 0.998 times the reference signal – a change
of 0.1%. In an open-loop system all the functions from input to output
would have to work with errors less than 0.1% to achieve that degree of
accuracy: halving the output function’s sensitivity to driving signals
would also halve the output effect, a change of 50%, or 500 times as much
as the closed-loop system would permit. Closed-loop control is nearly
immune to such changes in gain.
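As a quick check of that arithmetic (a sketch; the numbers are just the ones used above):

    # closed-loop fraction p/r = G/(G+1) for loop gain G
    for G in (1000.0, 500.0):
        print(G, G / (G + 1.0))   # 0.999000..., 0.998003... -- about a 0.1% change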

Model-based control is a clever and logical idea. Make a model of the
external system, the “plant”, and use the model to find the
output that will produce the desired value of a modeled plant variable.
Then send that output to the real plant, which will be affected just as
the model is affected. Unfortunately, this is not a practical idea if you
need any kind of precision of control. It can only be as accurate as the
model’s match to the real “plant,” but that ideal can’t be
approached because even with a perfect model, you still have to issue the
driving signals with enough accuracy, and those signals have to operate
actuators with enough accuracy, and the actuators have to operate on the
plant with enough accuracy, and the plant itself must continue to have
constant parameters within narrow enough limits. Any little changes in
this causal chain would go uncorrected. And of course any unanticipated
disturbances of the plant would not be resisted at all.
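The contrast is easy to put in a toy calculation (a sketch only; the one-gain plant and the 50% actuator change are invented for illustration):

    # Plant: cv = k * output. The designer's model assumes k_model = 1.0,
    # but the real output function has lost half its sensitivity: k_real = 0.5.
    r, G, k_model, k_real = 1.0, 1000.0, 1.0, 0.5

    # Open loop: output computed once from the inverse of the model.
    cv_open = k_real * (r / k_model)                 # 0.5 -> a 50% error

    # Closed loop (proportional): out = G*(r - cv) with cv = k_real*out,
    # so cv = k_real*G / (1 + k_real*G) * r.
    cv_closed = k_real * G / (1.0 + k_real * G) * r  # 500/501, ~0.998
    print(cv_open, cv_closed)

The open-loop controller inherits the actuator change at full size; the closed-loop controller ends up about 0.1% off, the same G/(G+1) figures given above.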

A negative feedback control system has an accuracy of control that
depends almost entirely on the sensing apparatus. It is thus far more
accurate than an open-loop system can be, because the open-loop system’s
accuracy depends not only on the accuracy of the sensing apparatus (which
we can assume is the same as for the closed-loop system), but just as
much on the accuracy of every subsequent step in the process. Of course
if all you know about these systems consists of equations on paper, it
may not occur to you that the real system might not behave as exactly as
the equations do.

Best,

Bill P.

From Arthur Dijkstra 2 dec 2009

AD: Is the choice of control system design not dependent on the
nature of the system to be controlled?

I think in technical systems error control is often sufficient,
if the sensors are precise and the controlling dynamics are relatively fast.
E.g., temperature control in a hospital might not reach the desired accuracy
when it is controlled only by feedback, because the effect of the heaters in
counteracting the measured temperature drop is too slow.

In a management system (a socio-technical system), controlling on
feedback only (thus without a model) is often not effective and might even be
dangerous. Safety management in an organization based only on feedback would
be dangerous. The sensors to detect unsafety as input for feedback control are
problematic.

Is the combination of feedback and feedforward not preferable in
most situations? I imagine a continuum from 100% feedback based via feedback
and feedforward to 100% feedforward.

The combination of feedback and feedforward depends on the
nature of the system and its controller.

Is this an acceptable line of reasoning?

Arthur

[From Rick Marken (2009.12.01.2150)]

Bill Powers (2009.12.01.2120 MDT) --

Bruce Abbott (2009.12.01.2100 EST) --

Bill Powers (earlier) (2009.12.1.1603 MDT) --

LAW OF REGULATORY MODELS

every good regulator of a system must be (contain) a model of that system. (Conant and Ashby, 1982)

BP: This is not true.

BA: That depends on what you take to be a “good” regulator…

The distinction may be illustrated by a simple example. The cow is homeostatic for blood-temperature, and in its brain is an error-controlled centre that, if the blood temperature falls, increases the generation of heat in the muscles and liver -- but the blood-temperature must fall first. If, however, a sensitive temperature-recorder be inserted in the brain and then a stream of ice-cold air driven past the animal, the temperature rises without any preliminary fall...

You can find the full paper at

http://pespmc1.vub.ac.be/Books/Conant_Ashby.pdf

BP: Yes, I’ve seen this before.

And I’m sure we’ve discussed this many times before on CSGNet. But it’s always nice to see your clear, quantitative explanation of why model-based control can’t work, let alone work better than a high gain negative feedback control system.

Best

Rick

--


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2009.12.02.0920 MDT)]

Arthur Dijkstra 2 dec 2009 --

AD: Is the choice of control system design not dependent on the nature of the system to be controlled?
I think in technical systems error control is often sufficient, if the sensors are precise and the controlling dynamics are relatively fast. E.g., temperature control in a hospital might not reach the desired accuracy when it is controlled only by feedback, because the effect of the heaters in counteracting the measured temperature drop is too slow.

I can't imagine that any hospital would rely on open-loop control, if the accuracy of control were critical. In general, a closed loop system will react faster, will correct temperature variations faster, and will keep the final temperature more exactly at the intended level, if properly designed. Moreover, the closed-loop system will counteract the effects of opened and closed windows, heat sources like operating-theater lights going on and off, changes in air circulation when people go in and out through doors a lot, under or over voltage in the electrical supply, changes in the quality of heating fuel, increases or decreases of the number of people present, and any other disturbances whether from known or unknown sources, occurring at unpredictable times, and undetectable except for their effects on inside air temperature. No open-loop system with any practicable design could do those things.

The closed-loop hospital temperature control system can react quickly to temperature changes because it can crank the heater power up not to what is anticipated as the amount needed to achieve the final temperature, but far above that level, causing the temperature to rise very rapidly. But as the temperature approaches the final level, the output power is reduced more and more, so the heater power drops to the final level just as the temperature levels out at the set point. This does not require any complex calculations or arrays of sensors distributed all over the outside walls, or computers stuffed with thermodynamic equations and properties of materials, or accurately adjustable heaters with absolutely unchanging characteristics, or sealed windows and airlock doors. Your own home thermostat works this way by adjusting the duty cycle of the furnace. When you first turn up the reference temperature in the morning, the duty cycle starts out at 100%, with the furnace on continuously. If it stayed on that way, the house would drastically overheat. But as the temperature rises, a simple "anticipation" circuit in the wall box measures the rate of change of temperature and lies to the comparator, telling it that the final temperature has been reached. That turns the furnace off, and after a while the rate of change slows down a little and the anticipation circuit says "oh, my goodness, it's freezing in here" and the furnace turns back on. So the duty cycle gradually changes, reaching its final value just as the room air comes to the desired temperature. This happens automatically and needs no fancy calculations.

Of course you could design an open-loop controller that would reproduce the same changes in duty cycle, but you'd probably have to recalibrate it every day and watch out for unexpected north winds or excessive sunshine, not to mention all the other undetectable causes of errors mentioned above. This controller would probably cost hundreds if not thousands of dollars, compared with the $30 for a digital programmable home thermostat. And it wouldn't work as well.
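The duty-cycle story is easy to caricature in a few lines (a sketch only; the thermal constants and anticipation gain are invented, and real thermostats differ in detail):

    # House temperature T gains heat from the furnace and loses it outside.
    # The comparator sees T plus a rate-of-change term, so it acts as if
    # the final temperature had already been reached ("lies" to it).
    dt, C, R = 1.0, 5000.0, 50.0               # step (s), thermal mass, loss
    P, T_out, setpoint, k_ant = 8.0, 0.0, 20.0, 600.0
    T, dTdt, on_time, steps = 10.0, 0.0, 0, 36000
    for step in range(steps):                  # ten simulated hours
        furnace = (T + k_ant * dTdt) < setpoint
        dTdt = ((P if furnace else 0.0) - (T - T_out) / R) / C
        T += dTdt * dt
        on_time += furnace
    print(round(T, 2), round(on_time / steps, 2))   # T near 20, average duty

The duty cycle starts at 100% while the house is cold and falls toward a small maintenance value as T approaches the set point, with no thermodynamic calculations anywhere.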

In a management system (a socio-technical system), controlling on feedback only (thus without a model) is often not effective and might even be dangerous. Safety management in an organization based only on feedback would be dangerous. The sensors to detect unsafety as input for feedback control are problematic.

I agree, a little planning or attention to design details certainly helps avert disasters at times. But there are severe limits to how much can be accomplished without continuous monitoring of the actual, as opposed to hoped-for or intended, results. A management that simply issues orders to subordinates and waits for the intended results to be achieved is doomed to failure. The real world is not that repeatable, reliable, or disturbance-free, and that includes the people receiving the orders.

In fact, it's possible to create closed-loop systems in which the controlled variable is some continuously-computed future state of the world. An example is an automatic aircraft-landing system. In that case there is a perceptual input function that measures distance to the runway, altitude, speed, flap settings, and engine thrust, and displays to the pilot not where the airplane is, but where on the runway it will touch down if all else remains equal. The pilot, a closed-loop control system, operates the controls to keep the displayed touchdown point stationary at the proper place. The "model", in this case, is built into the perceptual input function, and is not used to calculate the required output, but simply to predict the landing place wherever it is. The calculations get more and more accurate as the touchdown point nears, because the computation errors make less difference in the path of the airplane. The pilot controls the display as if it were any present-time variable, because of course the prediction is occurring, over and over, in present time. An automatic negative feedback control system given the same input information could also land the airplane, but you will notice that the FAA hesitates to approve any such device. That device, though a closed-loop system, still could not detect wind shear or traffic in the air or approaching the runway.

Watch NASA TV when a shuttle docks to the space station. Exactly this kind of predictive control is used to show the shuttle pilot where the shuttle will be in half an hour or an hour, after going through the loops that the orbital dynamics generate. The pilot, in present time, simply operates the controls to move the displayed future point to the place on the screen where he wants it, and keeps it there.
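A stripped-down sketch of such a loop (toy dynamics invented for illustration, not the actual avionics): the model lives in the perceptual input function, which predicts the future position, and an ordinary present-time negative feedback loop controls that predicted perception:

    # The controlled perception is the PREDICTED position x + v*t_look;
    # nothing here computes an inverse model of the vehicle.
    dt, t_look, ref = 0.01, 2.0, 100.0   # look 2 s ahead; aim point at 100
    x, v = 0.0, 0.0
    for _ in range(3000):
        predicted = x + v * t_look       # prediction in the input function
        a = 2.0 * (ref - predicted)      # plain proportional control of it
        v += a * dt
        x += v * dt
    print(round(x, 1), round(v, 3))      # x settles near 100 with v near 0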

Is the combination of feedback and feedforward not preferable in most situations? I imagine a continuum from 100% feedback based via feedback and feedforward to 100% feedforward. The combination of feedback and feedforward depends on the nature of the system and its controller. Is this an acceptable line of reasoning?

Sure, if feedforward can speed things up a little, by all means use it. But it's no way to achieve accuracy and speed. All it can do is fill in the fraction of a second before the controlled variable starts to change, and it can do that only approximately, and at considerable expense for the required sensors and computers.

I'm quite aware of the tradition that has grown up in some areas of control engineering, following the lines first set up by Ashby. I think it's a bad mistake, but who am I to tell them how to spend their money?

Best,

Bill P.

Hi Arthur,

I think you could be right about your proposed combination.

AD: Is the combination of feedback and feedforward not preferable in most
situations? I imagine a continuum from 100% feedback based via feedback and
feedforward to 100% feedforward.
The combination of feedback and feedforward depends on the nature of the
system and its controller. Is this an acceptable line of reasoning?

BP: Sensing the cause of a disturbance and anticipating its effects can
sometimes improve control (feed-forward) but is by no means necessary and
usually it's not even possible. (B:CP, p. 48, 2005).

BH: I agree that feed-forward improves control, but I'm asking myself:
why would evolution "develop" such a sophisticated mechanism as feed-forward
control and "upgrade" classic control, if it wasn't necessary for
organisms to survive?
Little animals start running and hide if they hear the voice of a predator.
Animals start running if they see signs which could mean the presence
of predators, before they see them.

I would say that the combination of feed-back and feed-forward also involves
the complexity of outer conditions (complex disturbances on the system) and
how well the conditions are known to the control system while control is
going on.

For example: if you try to turn right from the main street into a side
street, you can do it easily with a negative control loop. But if you want to
turn right and a pedestrian is approaching the crossing and wants to cross
the road, possibly disturbing your way, then I think prediction is "turned
on" to deal with the problem of where the pedestrian will be when my car
turns right at the crossing. The question that comes to me in such situations
is whether I'll be fast enough to turn right before the pedestrian reaches
the edge of the pavement or not. And if I have to hit the brake suddenly,
there may be some lunatic behind me who bumps into my car from behind because
of my surprising act. I think in this case prediction helps me make a better
decision about what to do: how much to accelerate and maybe turn before the
pedestrian, or start slowing the car and let the pedestrian cross the road
before I do.

I think there are numerous examples in everyday life in which we need
predictions of what will happen, thus improving control.
If conditions are well known and not very changeable, as in sports, I
think there is much room for predictions, because most of the outer
parameters are steady and thus future events are more or less predictable. I
can't imagine playing top tennis or table tennis or basketball, football,
handball, baseball, etc. without feed-forward.

By putting all kinds of vegetables and condiments into water we can (from
past experience) predict what kind of taste the soup will have. But we have
to check the taste from time to time and add some substances to keep tight
control of the taste.

I could agree with you, Arthur: "the combination of feedback and feedforward
is preferable in most situations. I imagine a continuum from 100% feedback
based via feedback and feedforward to 100% feedforward. The combination of
feedback and feedforward depends on the nature of the system and its
controller".
BH: I would add that the complexity of the situations, which produce
disturbances of varying complexity on the control system, and the degree to
which the situations are known, affect the kind of "mixed" control involved.
I could be wrong, but my own experience suggests that you could be right. But
it's also possible that I didn't understand something right about PCT.

Best,

Boris

[From Rick Marken (2009.12.04.0900)]

BH: I agree that feed-forward improves control, but I'm asking myself:
why would evolution "develop" such a sophisticated mechanism as feed-forward
control and "upgrade" classic control, if it wasn't necessary for
organisms to survive?

I think we are talking about several different things when we talk
about "feed-forward" control. One is model based control, where
control output is driven, in part, by the output of a model of the
environment that predicts the future value of disturbances to the
controlled variable. This is the kind of control that Bill (and I)
think is unlikely to exist in organisms since it is computationally
unfeasible and basically unhelpful (for the reasons Bill described in
an earlier post).

The second kind of "feed-forward" control is predictive control, where
the prediction occurs in the perceptual function and so what is
controlled is a perception of the predicted future state of the
controlled variable. This is the kind of control that Bill described
as being used in aircraft landing systems, where the plane's computer
displays to the pilot a constantly updated perception of where the
plane will be in the future given the current outputs of the pilot. I
think something like this kind of predictive controlling can be done
by living control systems; what is controlled in this case is the
imagined future (or current) state of a variable rather than the
actual variable. I'm not sure this necessarily improves control but it
might allow some degree of control in situations where perceptual
information is degraded (i.e., reaching for a glass of water on the
nightstand in the dark).

The third kind of "feed-forward" control is really feedback control of
a higher level perceptual variable. This is the kind of control we see
in a pursuit tracking task when the target moves in a regular pattern,
such as a sine wave. Control, measured as RMS deviation of cursor from
target, is better in this case than when then target moves in an
arbitrary pattern. The improved control with the regular target
pattern can be shown to be the result of controlling for the higher
level movement pattern by adjusting the reference for the position of
the cursor appropriately. The fact that this improvement is not the
result of producing a predicted output (as in model based feed-forward
control) can be shown by applying a disturbance to the cursor as well,
making control of cursor position a compensatory task. The model based
prediction would actually interfere with control in this case.

As far as I know, there is no demonstration of the existence of the
first kind of "feed-forward" control -- model-based control -- in
living organisms. If there is such a demonstration I would like to see
it.

Best

Rick

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2009.12.04.12.19]


[From Rick Marken (2009.12.04.0900)]
BH: I agree that feed-forward improves control, but I'm asking myself:
why would evolution "develop" such a sophisticated mechanism as feed-forward
control and "upgrade" classic control, if it wasn't necessary for
organisms to survive?

I think we are talking about several different things when we talk
about "feed-forward" control. One is model based control, where
control output is driven, in part, by the output of a model of the
environment that predicts the future value of disturbances to the
controlled variable. This is the kind of control that Bill (and I)
think is unlikely to exist in organisms since it is computationally
unfeasible and basically unhelpful (for the reasons Bill described in
an earlier post).
The second kind of "feed-forward" control is predictive control, where
the prediction occurs in the perceptual function and so what is
controlled is a perception of the predicted future state of the
controlled variable....
The third kind of "feed-forward" control is really feedback control of
a higher level perceptual variable.

Very well put!

If I can add to Bill’s example (elided in my quote here), I was shown
demonstrations of the second kind of feed-forward (predictive control)
in a Dutch ship simulator, and at another laboratory in a submarine
simulator. In both cases the operator watched a map display on which
the location of the ship/sub was shown (two displays for the sub, vertical
and horizontal). The predictive display showed a curved line projecting
from the bow to show where the ship/sub would go if the control
surfaces remained unchanged (and there were no changes in the
disturbance values). The probability of the ship/sub hitting some fixed
obstruction such as a harbour entrance (not too hard to do when you are
taking a supertanker into Rotterdam!) was much lower when the predicted
track was shown than when it was not.

In these demonstrations, the prediction was supplied externally to the
perceptual input, but a suitably trained pilot might have been able to
do the same using his well-practiced internal perceptual functions. It
would take a lot of experience with the particular ship, for the pilot
to be able to do it in his head, but the control loop analysis would be
the same. To have it on the display makes it possible for much less
experienced pilots to avoid wrecking a supertanker in a busy harbour.

Martin


[From Bill Powers (2009.12.04. 0955 MDT)]

BH: Hi Arthur,

I think you could be right about your proposed combination.

AD: Is the combination of feedback and feedforward not preferable in most situations? I imagine a continuum from 100% feedback based via feedback and feedforward to 100% feedforward. The combination of feedback and feedforward depends on the nature of the system and its controller. Is this an acceptable line of reasoning?

BP: Sensing the cause of a disturbance and anticipating its effects can sometimes improve control (feed-forward) but is by no means necessary and usually it's not even possible. (B:CP, p. 48, 2005).

BH: I agree that feed-forward improves control, but I'm asking myself: why would evolution "develop" such a sophisticated mechanism as feed-forward control and "upgrade" classic control, if it wasn't necessary for organisms to survive?

The term feedforward is not very well defined. If all you mean is passing
a signal in the forward direction from a sensory receptor or higher
system to an output actuator, the control system model consists entirely
of feedforward components. If the output of the system influences the
signal from the sensory receptor, the whole system becomes a feedback
system. If the feedback is negative and stable, the system is then a
control system.

Feedforward is also used to mean sensing the cause of a
disturbance and producing an output that will oppose the effect of
the disturbance on some controlled variable when that effect occurs. This
allows a system having internal delays to produce an action opposing the
effect of the disturbance at the same time the effect occurs. It also
allows for timely opposition to arise if the cause of the disturbance can
be detected some time before its effects reach the system, so the delay
is in the environment.

If the system has to wait for an error signal before acting, the
controlled variable will not be defended against the effect of the
disturbance for the entire duration of the internal delay, whether the
delay is long or short. So there are some advantages to be gained from
this kind of feedforward.

However, anticipatory feedforward (as we can call the second kind) is
useful only to the extent that the timing and accuracy of the
anticipatory sensing and actions are correct. Reacting too soon will
increase the error, as will reacting too late. Reacting too little or too
much will also leave uncorrected error, and acting in the wrong direction
can make the error worse than it would have been without the
feedforward.
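Those timing claims can be checked with a toy simulation (a sketch; the delay, gain, and disturbance values are invented): a feedback loop with an internal transport delay, plus an anticipatory output that is either well timed or deliberately late:

    # cv is disturbed at a known future moment. Feedback output arrives
    # 'delay' steps late; feedforward injects an opposing output at the
    # moment it predicts the disturbance will land (or at the wrong moment).
    def worst_error(ff_offset=None):
        delay, dt, g = 20, 0.01, 5.0
        out_queue = [0.0] * delay                        # internal delay line
        out, worst = 0.0, 0.0
        for step in range(2000):
            d = 10.0 if step >= 1000 else 0.0            # step disturbance
            ff = -10.0 if (ff_offset is not None
                           and step >= 1000 + ff_offset) else 0.0
            cv = out_queue.pop(0) + ff + d
            out += g * (0.0 - cv) * dt                   # integrating feedback
            out_queue.append(out)
            worst = max(worst, abs(cv))
        return worst

    print(worst_error(None))   # feedback only: full error for the delay (~10)
    print(worst_error(0))      # well-timed feedforward: essentially no error
    print(worst_error(40))     # late feedforward: the transient is back

Mis-timing doesn't just fail to help; the late opposition arrives as a second disturbance for the feedback loop to clean up.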

In order to set up anticipatory feedforward in an optimum way, it is
necessary to monitor the effects of disturbances on the controlled
variable, and adjust the timing, amount, and direction of the sensing and
compensatory output until the effect of the disturbance is minimized.
Doing this, of course, requires a negative feedback control system to be
operating on the feedforward functions.

Once adjusted as well as possible, the anticipatory feedforward system is
limited in its usefulness by the accuracy and stability of (a) its
sensory disturbance-detection system, (b) the computations that take
place, and (c) the conversion of the results of the computation into
compensating actions. It also depends on the local environment remaining
exactly the same, with no disturbances in it that are not taken into
account and no changes in the relationship between the controller and
either the cause of the disturbance or the effects of actuators on the
controlled variable. Feedback control, of course, does not need any
information about the cause of a disturbance at all, since it is based
only on actual changes in the controlled variable.

For anticipatory feedforward to work at all, the control system must be
able to observe the cause of a future disturbance. Standing on the deck
of a ship in heavy seas is a case where the effect of a disturbance
arrives without any warning; watching chaotic waves doesn’t supply enough
information to make feedforward work. On the other hand, the speed with
which the deck rotates and translates is slow enough, and the moment of
inertia of the body is large enough, that ordinary feedback is sufficient
to allow sailors to walk upright while the ship rolls underneath them.
There are many more situations in which invisible causes produce changes
in the controlled variable, so there is no possibility of sensing the
cause. This is probably true in most real-world behavior.

There are few cases in which anticipatory feedforward will give enough
improvement in control to make it worth the expense and complexity it can
entail. For slowly changing disturbances, feedback alone gives
essentially all the resistance to disturbances that is needed. For very
long delays, it is hard to predict accurately the effects that will occur
when they finally get to the control system – too many unpredictable
influences can come into play. For very short delays, small errors in
timing can convert opposition to the disturbance into aiding it, or
adding a new disturbance before or after the effects of the real one.
There is only a small range of delays over which anticipatory feedforward
can, in real systems, offer any improvement in performance.

This is why the “anticipation” in real systems is usually the
kind that can be achieved at minimal expense with simple devices such as
a capacitor-resistor combination that advances the phase of a varying
signal a little, or a small heater that warms a temperature sensor (or
when turned off lets it cool) faster than the air temperature can
normally change. More elaborate kinds of feedforward aren’t worth the
expense because the improvement in performance to be gained is so slight.
Sometimes feedforward is the only way to achieve some degree of control,
as in Boris’s nice examples, but those are rare cases.

As to other kinds of feedforward such as making plans or taking
precautions, those belong to higher-level kinds of control which involve
much slower changes and longer delays, and as Robert Burns said, often go
wrong or “gang aft agley”. We try to anticipate problems, but
the unexpected happens anyway, and we just have to rely on our good old
negative feedback control systems to deal with the many uncorrected
errors that are left when the feedforward has done all it can do. The
safest thing to do is trust the feedforward, but verify it with feedback
control to handle the many situations in which feedforward fails to help
or makes matters worse.

Best,

Bill P.


Hi to all!

Thanks for your answers. It doesn't seem to me that I was trying to define
anything; I just used the definition I found in B:CP about feedforward and
tried some examples (experiences) from everyday life (or with free-living
organisms) to find out whether feedforward corresponds to an evolutionary
necessity in the development of organisms or not.
I offered 4 cases and neither was answered directly. Only some abstract
definitions. I stil think that however we define feedforward (anticipation,
prediction, reflex) it has it's function in maintaining the essential
variables in their limits. So I still think that development of feedforward
was somehow necessary from the perspective of evolution.
I'll still remain with definition of feedforward given in B:CP (p.48, 2005)

1. CASE: NATURE AND FREE-LIVING ORGANISMS. It was about how free-living
organisms in wild nature respond to some signs of a predator with
feed-forward. The problem was how we could describe what's happening inside
the animal, for example an antelope, who senses the predator (probably a
possible actual disturbance to essential variables) and suddenly starts
running away, and there is no attack by the predator (for example a lion), or
the predator attacks anyway after she starts running. I still think that
feed-forward saves the antelope's life. We can see that practically every day
on NG WILD.

If I try to follow the definition in B:CP (2005), can we say that the
antelope is sensing the cause of the disturbance (the lion) and anticipating
its effects (the deadly attack of the lion), thus improving control? What
happens in the antelope? Can we simply say that the antelope is anticipating
(predicting) a future dangerous situation on the basis of smell and
counteracting something that will actually occur or never occurs? Whatever
happens on the basis of feedforward, it is more efficient than pure control.
It keeps the antelope's essential variables stable, which would not be the
case with only classic control. Whatever is happening, it's saving her life.
If she stays and the attack of the predator begins (IN THE ANTELOPE'S CONTROL
MODE), she has less chance to escape and survive, or even no chance at all.

2. CASE: This was about drivers and pedestrians. If the driver is controlling
only the speed and course of the car and turns right at the crossing, he
could run over the pedestrian. If he is controlling the speed of the car and
the speed of the pedestrian, they can both arrive at the crossing at the same
time, and the driver would be forced to stop the car immediately, risking
that some driver behind him bumps into his car.
If the driver is anticipating (predicting) the course and speed of his car
and the course and speed of the pedestrian, he improves control and makes a
safe decision whether to stop, or to accelerate and safely turn at the
crossing. So if the driver is sensing the cause of the disturbance (the
pedestrian) and anticipating its effects (the possibility of running over the
pedestrian), he is improving his control.

3. CASE: In every sport I mentioned, anticipating (predicting) what the
opponent will do is a huge advantage and usually means winning. Better
anticipation by players means more chances of winning. Some actions of
players can, I think, be done only in the presence of feed-forward (a throw,
handing the ball over, etc.). The goal of the game is achieved more
efficiently if the movements of players are anticipated (predicted, planned).
It's called tactics.
So by sensing the cause of the disturbance (the opponent) and anticipating
its effects (the opponent's action), players are improving control.
So feed-forward is somehow necessary, because everybody wants to win, and
most players and teams work out tactics and improve other elements of
training. Training also includes the improvement of feedforward actions, so
as to achieve their goals: to WIN, and thus have more money and a higher
quality of life (survival) -- or, to say it in Ashby's language, to maintain
the essential variables within their limits.

4. CASE: Putting all kinds of vegetables into water is, in my opinion, a
feed-forward act (anticipating the taste of the soup), and tasting it while
cooking is a control act.

I offered only 4 examples, but I think Bill also mentioned a fifth possibility.

BP : As to other kinds of feedforward such as making plans or taking
precautions, those belong to higher-level kinds of control which involve
much slower changes and longer delays, and as Robert Burns said, often go
wrong or "gang aft agley". We try to anticipate problems, but the unexpected
happens anyway, and we just have to rely on our good old negative feedback
control systems to deal with the many uncorrected errors that are left when
the feedforward has done all it can do.

BH: So we agree that building houses, skyscrapers, and bridges with plans
(feed-forward) very much improves control. Plans in constructing anything
(probably also when engineers construct machines) are necessary. So the LAW
says. Despite this, some smart guys build houses without plans (only
control). The consequences are usually visible, especially on muddy ground
like in our country: the houses simply sink into the ground. There are many
things to predict when building houses.

BP: The safest thing to do is trust the feedforward, but verify it with
feedback control to handle the many situations in which feedforward fails to
help or makes matters worse.

BH: So I think we've solved the problem. Can we agree that feed-forward
improves control and is mostly necessary in the activities living organisms
perform, along with feed-back control? And can we agree that some combination
of feed-back and feed-forward is usually present, as Arthur proposed: "Is the
combination of feedback and feedforward not preferable in most situations?"
Can we adjust your definition in B:CP (2005)?

Best,

Boris

Hi Richard!

We haven't talked for a long time. We've probably been too busy :))

If I understood you right, living organisms are likely to produce only one
kind of feed-forward control:
RM : The second kind of "feed-forward" control is predictive control, where
the prediction occurs in the perceptual function and so what is controlled
is a perception of the predicted future state of the controlled variable.
This is the kind of control that Bill described as being used in aircraft
landing systems, where the plane's computer displays to the pilot a
constantly updated perception of where the plane will be in the future given
the current outputs of the pilot. I think something like this kind of
predictive controlling can be done by living control systems; what is
controlled in this case is the imagined future (or current) state of a
variable rather than the actual variable. I'm not sure this necessarily
improves control but it might allow some degree of control in situations
where perceptual information is degraded (i.e., reaching for a glass of water
on the nightstand in the dark).

BH : Oh. I almost forgot. I wanted to ask you something.

RM : I think the E.coli "reorganization" model of evolution can be
considered somewhat Lamarckian inasmuch as it assumes (as I understand it)
that the rate of mutation in a population varies in proportion to the degree
of "error" in individuals in the population (where error is the difference
between reference and actual states of intrinsic variables) with the
_purpose_ of reducing the level of error (developing a phenotype that can
better control the intrinsic variables in the environment in which the
organisms now happen to find themselves).

BH: Whose opinion is this?

Best,

Boris

From Arthur Dijkstra 2009-12-05

What do you think of this use of feedback and feedforward?

http://www.ida.liu.se/~eriho/ECOM_M.htm

Arthur