Devil's Bibliography

[From Rick Marken (930118.0800)]

Here are some entries for the "Devil's Bibliography"

1. Sheridan, T. B. and Ferrell, W. R. (1974) Man-machine
systems. Cambridge: MIT Press

Figure 9.1 shows the "reference input variable" as a variable
outside the human operator where it is combined with an
output variable to produce the input to the subject, e(t),
the "error variable" which causes (via the operator
function Y(H)) the "control variable" that influences the
output of the "plant".

Figure 11.10 (on quasi-linear model of a car driver) shows
the "desired path" (of the car) as an INPUT to the driver.

There are probably more, but I'm going to look for a newer
source. Ah, here's one:

2. Huchingson, R. D. (1981) New horizons for human
factors in design. New York: McGraw-Hill

Figure 5.15 (p. 181) shows a "command" signal being
combined with system output to produce the display
variable that is the input to the "man" component of
the system. The "man" box converts the display input into
an output that is the "input" to the machine that produces
the output that ultimately becomes the display input.

Getting even more recent (and with no obvious signs
of anyone catching on) we find:

3. Boff, K. R., Kaufman, L., and Thomas, J. P. (1986)
Handbook of perception and human performance: Vol. II,
Cognitive processes and performance. New York: Wiley

Chapter 39 (by C. Wickens) is chock full of the "input
blunder" and the "man-machine blunder". Particularly clear
examples are:

Figure 39.1 which shows the error signal (e(t)) as the
input to the human operator.

Figure 39.17, which shows the "optimal control model"
as a sequence of transformations of input yielding a
"control" output; the sole contribution of the operator
is observation and motor noise -- quite an operator, though
not completely unlike some people I've met.

Figure 39.39 shows an input being turned into
a "displayed error" and presented to the operator who
turns it into the output variable.

Finally, the latest source that was nearby:

4. Wiener, E. L. and Nagel, D. C. (1988) Human factors
in aviation. New York: Academic Press

In Chapter 11, "Pilot control" by Sheldon Baron
we find Figure 11.4 which shows altitude and
pitch signals as inputs to the pilot "box" which
transforms these inputs, via the function Y(P),
into outputs that enter the vehicle's machinery.

I could probably find some articles from the 1990s by
searching through recent issues of journals like
"Human Factors" but I like the stuff in these "bibles"
of "human performance" knowledge the best.

Best regards

Rick

[From Bill Powers (930118.1600)]

Rick Marken (930118.0800) --

You spur me to add a few more entries to the "Devil's
Bibliography."

Here is Warren S. McCulloch helping to form the myth that
feedback systems go unstable when their gain exceeds unity:

McCulloch, W. S., Finality and form. In _Embodiments of Mind_
(Cambridge: MIT Press, 1965), pp. 256-275.

"When we change the magnitude of the quantity measured, a reflex
may return the system toward, but not quite to, the original
state, or it may overshoot that state. The ratio of the return to
the change that occasioned it is called the _gain_ around the
circuit. When the return is equal to the change that occasioned
it, then the gain is one. ... if the gain is greater than one at
the natural frequency of the reflex, fluctuations at that
frequency begin and grow until the limitations of the structure
composing the path reduce the gain to one; then, at the level for
which the gain has become one, both the measured quantity and the
reflex activity go on fluctuating." (p. 267)

Even earlier than this confident mangling of closed-loop
properties, we have this:

McCulloch, W. S., Appendix I: Summary of the points of agreement
reached in the previous nine conferences on cybernetics.
_Cybernetics_: circular causal and feedback mechanisms in
biological and social systems. Transactions of the Tenth
Conference, April 22, 23, and 24, 1953, Princeton, NJ. Josiah Macy,
Jr. Foundation, 1955. LCN 51-33199.

"The transmission of signals requires time, and gain depends on
frequency; consequently, circuits inverse for some frequencies
may be regenerative for others. All become regenerative when gain
exceeds one. Regeneration leads to extreme deviation or to
schizogenic oscillation..." (p. 71)

This rules out negative feedback systems with loop gains higher
than unity -- in other words, all actual control systems that
exist in organisms.
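To see how wrong the "gain greater than one" myth is takes about a
dozen lines of simulation. Here is a minimal sketch of my own (it
uses the leaky-integrator output function standard in PCT demos;
the gains, slowing factor, and disturbance are illustrative
numbers, not anything from McCulloch):

def simulate(gain=100.0, slowing=0.001, steps=5000):
    r = 0.0                  # reference signal (inside the controller)
    o = 0.0                  # output quantity
    d = 10.0                 # constant disturbance of the input quantity
    for _ in range(steps):
        p = o + d            # controlled (perceived) input quantity
        e = r - p            # error signal
        o += slowing * (gain * e - o)   # output function: stable at high gain
    return e

print(simulate(gain=100.0))     # ~ -0.099: small steady error, no oscillation
print(simulate(gain=1000.0))    # ~ -0.010: ten times the gain, smaller error

The residual error is about d/(1+G): raising the loop gain well
past one improves control rather than producing runaway
oscillation, provided the output is slowed to suit the loop's
dynamics.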

On explaining how control works:

"Wiener drew a most illuminating comparison between the
cerebellum and the control devices of gun turrets, modern
winches, and cranes. The function of the cerebellum and the
control of those machines is, in each case, to precompute the
orders necessary for servomechanisms, and to bring to rest, at a
preassigned position, a mass that has been put into motion which
otherwise, for inertial reasons, would fall short of, or
overshoot, the mark." (p. 72)

Here we have the germ of the compute-then-execute approach.

There is a certain kind of intellect which can, on hearing the
merest summary of an idea, immediately leap ahead to its most
profound implications and applications, completely unaware, or
unconcerned, that it has a superficial and mostly incorrect
understanding of the idea.

Here's another gaggle of myths, this time from W. Ross Ashby, in
_An Introduction to Cybernetics_ (New York: Wiley, 1966; third
printing, copyright 1963).

"The basic formulation of s.11/4 assumed that the process of
regulation went through its successive stages in the following
order:

  1. A particular disturbance threatens at D;
  2. it acts on R, which transforms it to a response;
  3. the two values, of D and R, act on T _simultaneously_ to
     produce T's outcome;
  4. the outcome is a state of E, or affects E." (p. 221)

E is an essential variable that is to be stabilized by the action
of a regulator R, acting through an environmental function T
which is fixed. Regulation is achieved when the effect of D on T
is precisely cancelled by the response of the regulator R to D,
also acting on T. It is assumed that E depends on T and T alone,
so there are no disturbances acting directly on E that can't be
sensed by the regulator.

If R and T are precisely calibrated and act with infinite
precision, then perfect regulation is possible -- but not
otherwise. Ashby tended to overlook the question of precision,
largely because in examples he tended to use small integers or
decimal fractions accurate to one decimal place to represent the
variables. As a result he greatly overestimated the capacities of
compensating systems, and therefore, by comparison, greatly
underestimated the capacities of control systems.
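The point is easy to put in numbers. A rough sketch (the
two-percent miscalibration and the gain of 100 are my own
illustrative choices, not Ashby's):

d = 10.0                                 # disturbance magnitude

# Compensator: acts from a model of D's effect on T; any
# miscalibration passes straight through to E at full strength.
model_error = 0.02                       # a mere 2% misestimate
compensator_residual = d * model_error   # 0.2 units of error, permanently

# Feedback controller: residual error is about d/(1+G), however
# imprecise the components, since drift merely changes G a little.
G = 100.0
feedback_residual = d / (1.0 + G)        # ~0.099 units

print(compensator_residual, feedback_residual)

And two percent is generous for an open-loop estimate of a
physical effect, while loop gains of 100 and far more are routine.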

"_Regulation by error._ A well-known regulator that cannot react
directly to the original disturbance D is the thermostat-
controlled water bath, which is unable to say "I see someone
coming with a cold flask that is to be immersed in me -- I must
act now." On the contrary, the regulator gets no information
about the disturbance until the temperature of the water (E)
actually begins to drop. And the same limitation applies to the
other possible disturbances, such as the approach of a patch of
sunlight that will warm it, or the leaving open of a door that
will bring a draught to cool it." (p. 222).

Note the implication that a compensating regulator might exist
which, on seeing someone approach with a flask, could deduce that
it contains cold water and is about to be immersed in the bath.
Note also the unspoken assumption that merely from qualitative
knowledge about a flask of cold water, a patch of sunlight, or a
potential draught through an open door, the regulator could be
prepared to act quantitatively: to add heat to the bath that
would exactly compensate for the cooling from the water in the
flask or the evaporation due to the draught, or cooling just
sufficient to prevent any rise in the temperature of the bath.

From qualitative knowledge of the disturbance, the regulator
somehow achieves exact quantitative compensation for the
quantitative effects of the disturbance. If, of course, such a
thing were possible, the compensator would be much superior to
any form of feedback controller. But such a thing is not remotely
possible.

After going through a series of diagrams, Ashby finally diagrams
the true error-driven control system:

" ... we have the basic form of the simple 'error-controlled
servomechanism' or 'closed-loop regulator,' with its well-known
feedback from E to R." (p. 223)

The diagram is:

        D --> T --> E
              ^     |
              |     |
              R <---+

Now we get to a whole fountain of misinformation about control
systems, a series of deductions that is just close enough to
reality to be convincing, and just far enough from it to be utter
nonsense.

"A fundamental property of the error-controlled regulator is that
_it cannot be perfect_ in the sense of S.11/3" (p. 223)

He then goes through a "formal proof" using the Law of Requisite
Variety to conclude

"It is easily shown that with these conditions _E's variety will
be as large as D's_ -- i.e., R can achieve no regulation, no
matter how R is constructed (i.e., no matter what transformation
is used to turn E's value into an R-value)."

"If the formal proof is not required, a simpler line of reasoning
can show why this must be so. As we saw, R gets its information
through T and E. Suppose that R is regulating successfully, then
this would imply that the variety of E is reduced below that of D
-- perhaps even to zero. This very reduction makes the channel

               D -> T -> E

to have a lessened capacity; _if E should be held quite constant
then the channel is quite blocked_. So the more successful R is
in keeping E constant, the more does R block the channel by which
it is receiving its necessary information. Clearly, any success
by R can at best be partial." (pp. 223-224)

This argument has apparently convinced many cyberneticists and
others that the Law of Requisite Variety is more general than the
principles of control, and in fact shows that control systems are
poor second cousins to compensators when it comes to the ability
to maintain essential variables constant against disturbance.

In fact this argument shows how utterly useless the Law of
Requisite Variety is for reaching any correct conclusion about
control systems.

Having swept through this dizzying exercise in proving a
falsehood, Ashby then grudgingly allows feedback control to creep
humbly back into the picture:

"Fortunately, in many cases complete regulation is not necessary.
So far, we have rather assumed that the states of the essential
variables E were sharply divided into "normal" ... and "lethal",
so occurrence of the "undesirable" states was wholly
incompatible with regulation. It often happens, however, that the
system shows continuity, so that the states of the essential
variables lie along a scale of undesirability. Thus a land
animal can pass through many degrees of dehydration before dying
of thirst; and a suitable reverse from half way along the scale
may justly be called "regulatory" if it saves the animal's life,
though it may not have saved the animal from discomfort."

Note the gratuitous "half way along the scale."

"Thus the presence of continuity makes possible a regulation
that, though not perfect, is of the greatest practical
importance. Small errors are allowed to occur; then, by giving
their information to R, they make possible a regulation against
great errors. This is the basic theory, in terms of
communication, of the simple feedback regulator." (p. 224)

The argument then veers off into "Markovian machines" and
Markovian -- stochastic -- regulation. This is billed as the most
important and far-reaching application of the error-controlled
regulator.

Note how the argument relies on qualitative statements to reach
quantitative conclusions. It is perfectly true that if a
compensating regulator affects T equally and oppositely to the
effect of D, E will not be affected at all. But by that same
argument, to the extent that R does not have perfect information
about D (and about the nature of the connection from D to T and
from R to T), T will not be affected equally and oppositely, and
thus to the extent of the imperfection, E will not be perfectly
regulated. Furthermore, if there is any disturbance at all that
is NOT detected by R (for example, a disturbance that acts
directly on E), the effects of that disturbance will not be
compensated at all. If R does not compensate for all
nonlinearities and time-functions in the connection from D to T,
compensation will not occur. When the processes involved are
thought of as real physical processes in a real environment, the
idealized assumptions behind the compensatory regulator are
easily seen to be unrealistic -- they predict regulation that is
far, far better than any that could actually be achieved in this
way.
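The same can be put in a short sketch (my own toy setup, not
Ashby's: T is a simple sum; D1 acts through T, where the
compensator can see it; D2 acts directly on E, where it cannot):

def compensator_E(d1_seen, d2_unseen):
    # R computes its action from its knowledge of D1 alone.
    r_action = -d1_seen
    return d1_seen + r_action + d2_unseen   # unseen part passes through

def feedback_E(d1, d2, gain=200.0, slowing=0.001, steps=5000):
    # R senses only E and acts against whatever error appears there.
    o = 0.0
    E = d1 + d2
    for _ in range(steps):
        E = d1 + o + d2
        o += slowing * (gain * (0.0 - E) - o)
    return E

print(compensator_E(10.0, 3.0))   # 3.0: the direct disturbance untouched
print(feedback_E(10.0, 3.0))      # ~0.065: both disturbances opposed at once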

Note also how the qualitative concept that error-regulated
control must be imperfect is used to imply that it must be _more
imperfect than compensatory regulation_. This non sequitur has
appeared in the literature over and over ever since Ashby. In his
earlier book Ashby was still toying with true feedback control
and continuous systems; the appendix is heaped with rather
aimless mathematics that is oriented in that direction. But in
this second book, Ashby shows that he never understood how an
"error-controlled" regulator works. He didn't know that the
"imperfection" inherent in such systems can be reduced to levels
of error far smaller than the error-reductions that any real
compensating system could achieve -- smaller by orders of
magnitude, in many cases, particularly cases involving human
behavioral systems.

Ashby's entire line of reasoning about feedback control in _An
introduction to cybernetics_ is spurious. Yet Ashby has been
revered in cybernetics and associated fields for 40 years as a
deep thinker and a pioneer. His Law of Requisite Variety has
nothing at all useful to say about control systems -- and in fact
led Ashby to a completely false conclusion about them -- yet it
is still cited as a piece of fundamental thinking. Whether Ashby
originated these misconceptions or simply picked them up from
others I don't know. One thing is certain: he did not get them
from an understanding of the principles of control.

Here's a little test.

I have 200 pounds of ice cubes, and you have 50 gallons of
boiling water. Desired: a nice tub of water for a bath. I get to
throw in the ice cubes (you can see exactly how many I throw in);
you get to pour in the boiling water. As you see me disturbing
the bath with ice-cubes, you estimate how much boiling water to
pour in to arrive at a bath of the right temperature. When I have
exhausted my ice cubes, you finish the process by adding more
boiling water in the amount you think is necessary.

As an alternative, I will let you see a thermometer in the tub,
but will not let you see how many ice cubes I am throwing in. You
must base your additions of boiling water entirely on the
thermometer reading.

Whichever method of filling the tub you elect, when the process
is finished you must then step into the tub and immediately sit
down in it.

Which method would you choose: compensating for known
disturbances, or basing your action on perception of the state of
the essential variable without knowing what the disturbances are?
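For anyone who would rather simulate than scald, here is the tub
as a toy model (the temperatures, batch sizes, and error
magnitudes are all invented for illustration):

import random
random.seed(1)

TARGET = 40.0      # desired bath temperature, degrees C

def bath(use_thermometer):
    temp = TARGET
    for _ in range(20):                      # twenty batches of ice cubes
        cooling = random.uniform(8.0, 12.0)  # true cooling from one batch
        temp -= cooling
        if use_thermometer:
            # Error control: pour to cancel the measured error (+/-0.5 deg).
            reading = temp + random.uniform(-0.5, 0.5)
            temp += TARGET - reading
        else:
            # Compensation: eyeball the cubes; ~30% judgment error per batch.
            temp += cooling * random.uniform(0.7, 1.3)
    return temp

print(bath(False))   # compensation: the batch errors accumulate
print(bath(True))    # error control: within thermometer accuracy of 40

Run it a few times: the cube-counter ends up several degrees off
in one direction or the other, while the thermometer-watcher can
sit down with confidence.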


--------------------------------------------------------------
Best,

Bill P.

[Avery.Andrews 930119.1700]

Who shall be responsible for looking after it? I'll try to integrate,
but all contributors should keep ahold of their own stuff.

My feeling is that there should be a bad guy list, for real trash
by famous people (McCulloch; Arbib, at least on the basis of his
Hand. of Phys. diagrams -- when I reread the accompanying
discussion, it started looking even worse, since he was mangling
concepts presented quite clearly and coherently by Houk and Rymer
in their Handbook article), and a good guy list, for stuff that
is basically sensible (Camhi, Houk & Rymer from what I've seen so
far), although arguably hindered by a bad heritage.

Hmm. I think I remember trying to make sense out of the Ashby book when
I was in school ...

Avery.Andrews@anu.edu.au

[From Bill Powers (930119.0900)]

Today is JAMES WATT'S BIRTHDAY.

Ft. Lewis' computer is down again; the Fine Arts building
collapsed under the weight of a new heavy snow and this has
probably upset everything, although the computer is not in that
building. So I might as well go on with the Devil's Bibliography.

This is Myerson, J. and Miezin, F. M.; The kinetics of choice: an
operant analysis. Psychological Review Vol. 87, No. 2, 160-174
(1980). Quotations are placed between pairs of dashed lines.


------------------------------------------------------------
M&M:
The results of a large number of experiments on concurrent
schedules are well described by the matching law (Herrnstein,
1961),

(1) B1/(B1+B2) = R1/(R1+R2)

where Bn is the rate of response or amount of time allocated to
some behavior _n_ and Rn is the obtained rate of reinforcement
for this behavior. (p. 161)
-------------------------------------------------------------
WTP:
This equation shows a relationship between behavior rate and
reinforcement rate. Without altering the algebraic relationship,
we can simplify it:

1. Multiply by the denominators:

         B1*R1 + B1*R2 = B1*R1 + B2*R1
         B1*R2 = B2*R1

2. B1/R1 = B2/R2, or B1/B2 = R1/R2

The first form of 2. states that the ratio of behavior rate to
reinforcement rate on each schedule is the same. This will be
trivially true for any pair of identical fixed-ratio schedules,
because Bn/Rn is merely the number of presses required to produce
one reinforcement. Equation 2 states that this number is the same
on both schedules. Note that the "rate" aspect drops out because
both numerator and denominator contain inverse seconds as units.

However, the result will be wrong for any pair of fixed-ratio
schedules in which the number of presses per reinforcement is not
the same for both choices. Whether the prediction is _judged_
wrong will depend on how far the prediction must be from the data
while the data are still considered "well-described." There seems
to be no reason to believe that the results will be any better
for variable ratio or for any interval schedules.
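The simplification itself is easy to verify symbolically; a quick
check, assuming sympy is at hand:

import sympy as sp

B1, B2, R1, R2 = sp.symbols('B1 B2 R1 R2', positive=True)
matching   = sp.Eq(B1 / (B1 + B2), R1 / (R1 + R2))  # Herrnstein's Equation 1
simplified = sp.Eq(B1 * R2, B2 * R1)                # the reduced form above
# Both equations give the same solution for B1, so they say the same thing:
print(sp.solve(matching, B1))      # [B2*R1/R2]
print(sp.solve(simplified, B1))    # [B2*R1/R2]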

------------------------------------------------------------
M&M:
In the two-alternative case, reinforcing a response for which
preference is P1 = B1/(B1+B2) decreases preference for the
alternative, P2. The rate at which P2 decreases will be
proportional to the rate R1, at which the first response is
reinforced. Thus

(2) dP2/dt = -kR1P2

where the constant k governs the proportion of change per
reinforcement.

Because preference is defined as _relative probability_, P1 + P2
= 1, and decreasing the preference for one alternative must
therefore increase the preference for the other. Therefore, the
decrease in P2 in Equation 2 implies an equal and opposite
reaction, a compensatory increase in P1:

(3) dP1/dt = +kR1P2 (p. 162)
---------------------------------------------------------------
Probability has nothing to do with it. If Pn = Bn/(B1+B2), then
P1+P2 = 1 because B1/(B1+B2) + B2/(B1+B2) = (B1+B2)/(B1+B2) = 1.
---------------------------------------------------------------
M&M:
If, however, in addition to reinforcing one response at rate R1,
we also reinforce the other response at rate R2, then equation 3
must be supplemented to include the proportional decrease in P1
produced by reinforcement of the second response.
---------------------------------------------------------------
Note that R1 and R2 have suddenly become manipulated independent
variables.
---------------------------------------------------------------
M&M:
Moreover, Equation 2 must be supplemented to include the
compensatory increase in P2 that comes from decreasing P1.
Therefore the complete system is described by

(4a) dP1/dt = kR1P2 - kR2P1

and

(4b) dP2/dt = kR2P1 - kR1P2

This system is represented diagrammatically in Fig. 1a. At
equilibrium,

              dP1/dt = dP2/dt = 0,

whereupon Equations 4a and 4b both lead to

                     B1/(B1+B2)
    R1/R2 = P1/P2 = ----------- = B1/B2
                     B2/(B1+B2)

which is algebraically equivalent to Equation 1 (Baum & Rachlin,
1969). (p. 162)
--------------------------------------------------------------

Note that Equations 4a and 4b are not a "system" of equations.
The two equations are linearly dependent and are just two ways of
writing the same relationship.

So M&M have managed to run this series of permutations of the
original equation around to its starting point, which is a
statement that the ratio of behaviors to reinforcements is the
same on both choices (even if the authors have failed to see that
this is what it states).
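The linear dependence takes two lines to confirm (sympy again
assumed):

import sympy as sp

k, R1, R2, P1 = sp.symbols('k R1 R2 P1', positive=True)
P2 = 1 - P1                          # since P1 + P2 = 1 by definition
dP1 = k*R1*P2 - k*R2*P1              # Equation 4a
dP2 = k*R2*P1 - k*R1*P2              # Equation 4b
print(sp.simplify(dP1 + dP2))        # 0: 4b is just 4a with the sign flipped
print(sp.solve(sp.Eq(dP1, 0), P1))   # [R1/(R1 + R2)]: Equation 1 again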
-------------------------------------------------------------
M&M:
The general kinetic model just described is a hypothesis
concerning the functional relation between the reinforcement
input to the organism and the behavioral output. (p. 163)
-------------------------------------------------------------
It is no such thing. In all operant experiments, reinforcement is
a function of behavioral output; that is what can be observed.
Behavior is the input to the scheduling apparatus; reinforcement
is the output of the apparatus. There may be a simultaneous
dependency of behavioral output of the organism on reinforcement
input to the organism, but to describe that would require a
second basic equation to be solved simultaneously with the first.
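In miniature, a two-equation analysis would look like this (the
environment function below is an ordinary fixed-ratio
contingency; the organism function is a made-up linear stand-in,
since specifying the real one is the whole scientific problem):

import sympy as sp

B, R = sp.symbols('B R', positive=True)
environment = sp.Eq(R, B / 10)     # FR-10: reinforcement depends on behavior
organism    = sp.Eq(B, 100 - 2*R)  # hypothetical: behavior depends on reinforcement
# The observed (B, R) must satisfy both relationships at once:
print(sp.solve([environment, organism], [B, R]))   # {B: 250/3, R: 25/3}

Neither equation alone is "the" input-output function of the
organism-plus-apparatus loop; the data lie at the intersection of
the two.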

By plodding through the rest of the paper one can show that every
step of the development is just as far from real system analysis
as the initial part. The mathematics gets more and more involved
as the paper proceeds, so it becomes more and more tedious to
show that each successive form is merely a transformation of the
preceding forms, extending the tautology (and the self-
contradictions) to ever greater complexity, and that all the
"predictions" of dynamics are simply curve-fitting of arbitrary
mathematical forms to the data. The entire paper is an attempt to
make deductions about a system-environment relationship by taking
into account only one of the two necessary functional
relationships -- and attributing that one incorrectly to the
organism.
-------------------------------------------------------------
M&M:
Powers (1973,1978) has contended that systems analysis represents
a theoretical alternative to behaviorism. However, the present
treatment demonstrates the compatibility of the two approaches
and suggests that the techniques of systems analysis have much to
offer to the mathematical behavior theorist. Powers' analysis
differs from ours in two important ways. First, his is based on a
control theory model. Control theory is a branch of systems
analysis that assumes the existence of control [sic] variables or
reference values [sic sic sic] with which inputs are compared
(Milhorn, 1966). Although such hypothetical reference values have
their uses, they are by no means necessary for the analysis of
feedback systems (Milhorn, 1966; Rosen, 1970). Second, in opting
for a "quasi-static" analysis (Powers, 1978), we believe that
Powers has forsaken perhaps the most important attribute of
systems analysis, its ability to describe both transient-state
analysis and equilibrium behavior. (p. 172)
--------------------------------------------------------------
No comment required in the present company.

Best to all,

Bill P.

[From Bruce Nevin (091100.1033 EDT)]

(Bruce Kodish [000910.1146PDT]) --

Having worked through the simple equations in Powers' and Robertson's
textbook on psychology (an exercise I highly recommend to you and anyone else
reading this missive) I now understand that what I read in my neuro textbooks
about feedback was incorrect. In other words, a serious transmission of
error has continued and become part of the 'knowledge' base in motor control
studies and neuroscience. I think that the whole field of movement science
would make a great leap forward if more researchers in that area understood
behavior as the control of input not output.

In 1992-93 we started a collection of misinformation about negative
feedback control systems. A search for "devil" on the CSG web site turned
up this:

http://www.ed.uiuc.edu/csg/documents/devil's.bib.html

Should this be made a more accessible part of the web site? Maybe brushed
up, made a simpler and more coherent document, added to? I can't volunteer
to do that, but doesn't it seem worth doing?

  Bruce Nevin


At 03:55 PM 09/10/1999 -0400, Bruce Kodish wrote:

[From Bruce Nevin (990912.0732 EDT)]

Rick Marken (990911.1500)--


At 02:57 PM 09/11/1999 -0800, Rick Marken wrote:

I don't know what's up with the CSGNet search engine:

http://www.ed.uiuc.edu/csg/search.html

It doesn't seem to work anymore. If Gary Cziko is reading
this, perhaps he can find out what's up with this.

It worked for me yesterday to search for "devil". It worked just as well
for that today.

  Bruce Nevin

[From Rick Marken (990911.1500)]

Bruce Nevin (091100.1033 EDT) to Bruce Kodish (000910.1146PDT) --

In 1992-93 we started a collection of misinformation about
negative feedback control systems. A search for "devil" on
the CSG web site turned up this:

http://www.ed.uiuc.edu/csg/documents/devil's.bib.html

Should this be made a more accessible part of the web site?

I think it is. It's one of the items listed in the "Best of
the CSGNet" selection at

http://www.ed.uiuc.edu/csg/CSGNet.html

The URL for the "Devil's Bibliography" item is:

http://www.ed.uiuc.edu/csg/documents/devil's.bib.html

I don't know what's up with the CSGNet search engine:

http://www.ed.uiuc.edu/csg/search.html

It doesn't seem to work anymore. If Gary Cziko is reading
this, perhaps he can find out what's up with this.

Best

Rick


--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

[From Rick Marken (990912.0950)]

Bruce Nevin (990912.0732 EDT)--

It [CSG site search] worked for me yesterday to search
for "devil". It worked just as well for that today.

Yes. I see. Such are the benefits of empiricism. I based my
conclusion on my experience from several weeks ago, when it
didn't seem to be working.

Thanks

Rick


--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/