What happened to cybernetics (was RE: The reality of "information")

[From Bill Powers (2009.04.18.0809 MDT)]

AD: Thanks, Bill,

You enriched my view of Ashby’s law of requisite variety. I have never
found such a critical view of his work before; did I look in the wrong
place?

BP: Lots of people think Ashby’s work, particularly his later work on
cybernetics, was wonderful. I went down a different path, developed the
model that he abandoned, and concluded that he made the wrong choice,
with what I see as unfortunate consequences for those who put much time
and effort into following his reasoning.

When I first read Design for a Brain, I thought it was wonderful, too.
The parts of the book in which negative feedback control was explained in
relation to human behavior got me started on the path to PCT. I just sort
of ignored the rest, which made little sense to me. But who knows? Maybe
I’m the one who made the mistake. Others will have to decide that for
themselves.

AD: May I ask your view on Ashby’s concept of an ultrastable system,
which must keep the essential variables within physiological limits
(chapter 7 of Design for a Brain)?

BP: That was a brilliant idea, and I adopted it in the mid-1950s; it
solved a lot of problems about learning and adaptation. Later on, I read
Daniel Koshland’s book on bacterial chemotaxis, in which a principle is
laid out showing how systematic learning can result from random
variations (a super-efficient form of natural selection), and put that
together with Ashby’s idea to form what I now call “E. coli
reorganization theory.” In my new book there are some respectably
complex simulations of control systems getting themselves organized by
this method.
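Here is a minimal sketch of the principle, not the simulations from the
book (the function, names, and numbers are all invented for
illustration): while the error is decreasing, keep changing the
parameters in the same random direction; when it increases,
“tumble” to a new random direction.

```python
import random

def e_coli_reorganize(error_of, params, steps=5000, rate=0.01):
    """Biased random walk: keep the current direction of change while
    error is falling; tumble to a random new direction when it rises."""
    direction = [random.gauss(0.0, 1.0) for _ in params]
    last_error = error_of(params)
    for _ in range(steps):
        # Keep swimming: a small step along the current direction.
        params = [p + rate * d for p, d in zip(params, direction)]
        error = error_of(params)
        if error >= last_error:
            # Things got worse: tumble to a new random direction.
            direction = [random.gauss(0.0, 1.0) for _ in params]
        last_error = error
    return params

# Toy use: reorganize two parameters toward values that minimize "error".
target = [3.0, -1.5]
error = lambda p: sum((pi - ti) ** 2 for pi, ti in zip(p, target))
print(e_coli_reorganize(error, [0.0, 0.0]))  # ends up near [3.0, -1.5]
```

No gradient is ever computed; the only feedback is whether the total
error got better or worse, which is what makes the scheme plausible for
a nervous system.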

AD: I am trying to use these concepts, including the Viable System
Model, for the description of a safety management system, so your
comments (and of course others’) are very welcome.

BP: Tell me something. Suppose you design a system for managing safety, and
it doesn’t work as well as you want. You hire a consultant, and he
reports to you that your system’s output actions don’t have enough
variety to match the variety of the environment you’re trying to control.
What does that tell you about how to make the system work
better?

I have a strong impression that starting with Ashby, a lot of people
found the logic of negative feedback control baffling, while the logic of
the kind of organization Ashby proposed made good common sense. Ashby
himself got the idea of negative feedback control pretty well, but only
superficially; he never really explored systems of that sort, and drew
some conclusions about them that were just wrong enough to throw him off
the track.

One very wrong conclusion was that negative feedback control systems,
being error-driven, necessarily controlled imperfectly, nowhere near as
well as a system that could compute exactly the required action and carry
it out. What he didn’t realize, simply because of limited knowledge about
real error-driven control systems, is that the “imperfections”
in control of this sort can be as small as one part per million, or even
much less. I once built one that positioned a 300-pound carriage on its
ways over a 20-inch range with an accuracy of one tenth of a wavelength
of red light. Actually, most real control systems can achieve accuracies
of a percent or two, which is plenty close for most of the control tasks
anyone performs – how close to the center of your lane do you really
have to keep your car? One inch would be better than necessary, and
that’s about one percent of the lane width.
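The arithmetic behind such figures is ordinary loop algebra. For a
simple proportional loop with loop gain $G$, the fraction of a
disturbance $d$ that survives as steady-state error is

$$e_{ss} = \frac{d}{1+G},$$

so a loop gain of 99 leaves a 1% error, and a gain of $10^6$ leaves one
part per million.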

Also, not being an engineer, Ashby seemed to be under the impression that
just because you can compute the required action perfectly (given a big
enough and fast enough computer), a real system could produce it
perfectly (and instantly). In fact, one reason that negative feedback
control systems were invented was that computed-output systems are very
crude in their actual performance, since they have no way to correct for
unanticipated disturbances, changes in friction, or
changes in the efficiency of their own output mechanisms. Negative
feedback control systems are naturally able to maintain the same output
results even in the face of novel disturbances and even if their output
machinery loses a significant part of its effectiveness. That is why they
work so much better, and are so much simpler and faster, than systems
that work by computing what they need to do and then doing it.
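A minimal numerical sketch of that claim (plant, gains, and numbers all
invented for the example): the computed-output controller inverts its
nominal model once and misses when the actuator has worn, while the
feedback controller, containing no model of the plant at all, still
lands on the target.

```python
# Plant: result = (actuator gain) * drive + disturbance.
nominal_gain = 1.0   # gain the designer believes the actuator has
actual_gain = 0.7    # gain it actually has after wear
disturbance = 2.0    # unanticipated push on the plant
target = 10.0

# Compute and execute: invert the nominal model once, then act.
drive = target / nominal_gain
print("computed-output:", actual_gain * drive + disturbance)  # 9.0 -- 10% off

# Negative feedback: sense, compare, act; no plant model in the loop.
result, drive, dt, gain = 0.0, 0.0, 0.01, 5.0
for _ in range(5000):
    error = target - result        # compare the sensed result with the reference
    drive += dt * gain * error     # keep acting as long as any error remains
    result = actual_gain * drive + disturbance
print("feedback:", round(result, 3))  # ~10.0 despite wear and disturbance
```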

The logic of the system Ashby finally settled on is appealing, especially
to engineers. You analyze the system to see what the effect of some
control input is on the system to be controlled. Given a sufficiently
accurate model of the controlled “plant”, you can see exactly
how any given control output, including magnitudes of input and rates of
change and so on, affects the variables you want to control. Equations
can be found that describe the causes and effects between the control
input and the final effect. This part of the process is called
“system identification.” The success of control will depend
crucially on the accuracy of the equations.

Next, you decide what values of the variables in the plant you want to
control, and how you want them to behave – the “trajectories”
of the variables. Given the desired values and the trajectories, you can
then use the inverses of the equations to calculate the values of the
control variables that will generate the desired end-points (inverse
kinematics) and the ways in which the control variables must change
through time to generate the desired trajectories (inverse dynamics).
Once you have completed those inverse calculations, all that remains is
to manipulate the control variables in the way you have deduced, and the
plant will then generate the desired results.
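For concreteness, here is a toy version of that recipe (the plant model,
the 10% modeling error, and all the numbers are invented): identify a
model, invert it to get the input that should produce the desired
trajectory, then execute open-loop.

```python
import math

# Assumed identified model of the plant: dy/dt = -a*y + b*u
a, b = 2.0, 1.5

# Desired trajectory of the plant variable, and its derivative.
y_d  = lambda t: math.sin(t)
dy_d = lambda t: math.cos(t)

# Inverse dynamics: solve the model for the input that should yield y_d.
u = lambda t: (dy_d(t) + a * y_d(t)) / b

# Execute on the *real* plant, whose gain b has drifted 10% from the model.
b_real, y, dt = 1.35, 0.0, 0.001
for step in range(10000):
    y += dt * (-a * y + b_real * u(step * dt))
print(f"wanted {y_d(10.0):+.3f}, got {y:+.3f}")  # the 10% model error appears in the result
```

With an exact model the tracking would be perfect in principle; with a
10% model error, the result is off by 10%. That is the
compute-and-execute bargain.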

This is a rather complex procedure and involves some challenging
mathematical computations. It requires extensive knowledge about the
physics (and chemistry in some cases) of the plant. The conversion of the
control variable into exactly the computed values of effects on the plant
requires highly precise machinery (often the engineers have to cheat and
use a negative feedback control system to generate precise enough
effects). And since the actual effects will change from time to time
because of disturbances and changes in the environment and wear and tear
on the plant, a large fast computer is needed to update the model and
repetitively recompute all the inverses.

However, computers are small and fast now, and machinery can be made
quite precise, so there is no basic reason why this sort of system can’t
be made to work. And it has the great advantage that it is understandable
without adding anything much to 19th-century engineering knowledge. Anyone
can understand how it works, even if not everyone could design such a
system. The basic idea is very simple: you figure out what you have to do
to get the result you want, and then you do it. Compute and
execute.

Apparently, once a person has this architecture firmly in mind, nothing
can dislodge it. And a negative feedback control system makes no sense at
all in comparison.

The first thing that makes no sense is that the “control
variable” is in the wrong place, and it’s called the
“controlled variable.” Instead of being the variable that is
used to do the controlling, it’s the variable that gets
controlled.

The second puzzle is that the controller contains no model of the plant.
The designer may have a model in mind – probably does – but when the
controller is in operation it does not do any inverse calculations at all
and makes no use of knowledge about how the plant works.

The third weird fact is that the controller can respond to unanticipated
disturbances by adjusting its effects on the plant so as to counteract
those disturbances – which it does not need to sense. And more than
that, if a motor starts to lose torque because of age, or a load is
placed on the plant, or the quality of fuel used in the plant changes,
the action of the control system automatically changes in just the way
needed to cancel 90%, 99%, or, if you wish, 99.9% of the effects of these
changes.

But the worst thing is that this controller mangles cause and effect. The
variable that is controlled is also the variable that causes the action
that is doing the controlling of that variable. Causation runs in a
circle. Every effect in the control loop is part of its own cause. Normal
19th-century thinking simply can’t handle this. Many cyberneticists speak
with delight about “circular causation,” but few of them have
any idea of what that really implies, or how it works.
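The puzzle dissolves when the loop is written as simultaneous equations
rather than a sequence of causes. In standard closed-loop algebra (not a
quotation from anyone), with reference $r$, loop gain $G$, disturbance
$d$, and controlled variable $q$,

$$q = G\,(r - q) + d \quad\Longrightarrow\quad q = \frac{G\,r + d}{1 + G},$$

every variable in the loop is consistent with every other at the same
time, and for large $G$, $q \approx r$ no matter what $d$ does.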

To understand exactly what a negative feedback control system is and what
it can do, you have to go farther into the details and properties of
these systems than Ashby went. That’s what I did. I didn’t learn
everything that control engineers learn, but I found shortcuts and built
enough control systems to get a pretty clear idea of their properties.
That’s how I came to realize that Ashby had bet on the wrong horse. When
I finally came to that conclusion, I didn’t much like it. I felt
disloyal.

“Requisite variety” is the kind of idea that people get when
they don’t really understand a system but want to say something useful
about it. That’s the impression I got about Ashby. He was looking for
generalizations that seemed true and that didn’t require getting into
details about control systems.

He would have done better to focus on the details before
generalizing.

Best,

Bill P.


At 11:21 AM 4/18/2009 +0200, Arthur Dykstra wrote:

Regards,
Arthur

From: Control Systems Group Network (CSGnet)
[mailto:CSGNET@LISTSERV.ILLINOIS.EDU] On behalf of Bill Powers
Sent: Saturday, April 18, 2009 1:28
To: CSGNET@LISTSERV.ILLINOIS.EDU
Subject: Re: What happened to cybernetics (was RE: The reality
of “information”)

[From Bill Powers (2009.04.17.1624 MDT)]

At 09:45 PM 4/17/2009 +0200, Arthur Dykstra wrote:

AD: Dear Bill,

With great interest I am trying to catch up on these interesting
posts.

Can you please elaborate a bit more on what you say about cybernetics and
Ashby in the following part:

BP earlier: I saw this happen in cybernetics, with Ashby’s “Law
of Requisite Variety.”

BP: I assume this is the passage that caught your eye. I was referring to
the concept of requisite variety as a kind of measure of variability,
which is related to uncertainty and the concept of information. Ashby
maintained that the actions of a control system had to have at least as
much “variety” as the environment to be controlled.

For the sake of PCTers not acquainted with this law, here is a bit from

http://en.wikipedia.org/wiki/Variety_(cybernetics)

============================================================================

The Law of Requisite Variety

If a system is to be stable the number of states of its control mechanism
must be greater than or equal to the number of states in the system being
controlled. Ashby states the Law as “only variety can destroy
variety”[4]. He sees this as aiding the study of problems in biology
and a “wealth of possible applications”. He sees his approach
as introductory to Shannon Information Theory (1948) which deals with the
case of “incessant fluctuations” or noise. The Requisite
Variety condition can be seen as a simple statement of a necessary
dynamic equilibrium condition in information theory terms, cf. Newton’s
third law, Le Chatelier’s principle.

Later, in 1970, Conant working with Ashby produced the Good Regulator
theorem [5] which required autonomous systems to acquire an internal
model of their environment to persist and achieve stability or dynamic
equilibrium.

=============================================================================

The idea in that last unfortunate paragraph has steered lots of people
into a blind alley.

While the law of requisite variety may in fact be true (I wouldn’t know),
it’s not sufficient for designing a stable control system, or even a
control system that controls. All it really says is that the control
system must have the same number of output degrees of freedom as the
environment to be controlled. It doesn’t even say they have to be the
same degrees of freedom! If the outputs of the control system can apply
forces to an object’s position in x, y, and z, and the environment
controlled can vary in angles rho, theta, and tau, the number of degrees
of freedom of the output is the same as the number of degrees of freedom
of the environment, but nothing in the environment will be controlled.
Ashby referred to matching the number of “states,” but that
means only that each output variable must have at least the same number
of discriminable states or magnitudes as the corresponding environmental
variable. It still doesn’t say the variables have to correspond in any
particular way. If you match only the number of states, the chances of
creating even a closed loop are pretty small.
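A toy illustration of that gap (everything here is invented): three
output degrees of freedom, three environmental ones, matched counts of
states, and still nothing controlled, because the outputs never reach
the variables that matter.

```python
forces = {"fx": 0.0, "fy": 0.0, "fz": 0.0}        # three output degrees of freedom
angles = {"rho": 0.3, "theta": -0.8, "tau": 1.2}  # three environmental ones
reference = {"rho": 0.0, "theta": 0.0, "tau": 0.0}

for _ in range(100):
    for force, angle in zip(forces, angles):
        error = reference[angle] - angles[angle]
        forces[force] += 0.5 * error  # plenty of output "variety"...
    # ...but the angles do not depend on these forces in any way,
    # so nothing in the environment changes.

print(angles)  # unchanged: {'rho': 0.3, 'theta': -0.8, 'tau': 1.2}
```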

Even if all those conditions are met, you still don’t have a control
system, much less a stable one. To have a control system, you need to
give it the ability to sense the state of the environment in each
independent dimension (a subject Ashby totally ignored, apparently), to
compare what is sensed with a reference condition, and to generate an
output that affects the same variable that is sensed in such a way
that the difference between the sensory signal and the reference
magnitude is minimized and kept small despite unpredictable disturbances
of the environment. The law of requisite variety says nothing helpful
about those fundamental requirements. It’s one of those generalizations
that, while quite possibly true, is useless for designing or
understanding anything.
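Those three requirements fit in a dozen lines. A minimal sketch (numbers
invented; the disturbance changes without warning and is never sensed on
its own):

```python
reference = 5.0              # the reference condition
perceived, output = 0.0, 0.0
dt, gain = 0.01, 20.0

for step in range(4000):
    disturbance = 0.0 if step < 2000 else 3.0  # shifts unpredictably halfway through
    error = reference - perceived              # compare what is sensed with the reference
    output += dt * gain * error                # act so as to keep the difference small
    perceived = output + disturbance           # sense the same variable the output affects
print(f"perceived = {perceived:.3f} (reference = {reference})")  # ~5.000 after the shift, too
```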

In “Design for a Brain” Ashby abandoned the best approach to
control theory and switched to a very bad version in which the variables
are discrete and enumerable. I think this is what gave rise to the
current fad called “modern control theory,” and that the
underlying principle he and his followers adopted is completely
impractical as a model of living systems (or the systems they control).
Ashby thought you could design a system so it would compute how much
action and what kind of action were needed to produce a desired result,
and then execute the action and get the result. He thought this would
provide instantaneous and perfect control, as compared to error-driven
systems which could not even in principle achieve EXACTLY zero error.
That is, of course, not physically possible for any real system no matter
how it’s designed, including the systems Ashby imagined. But the
perfection of the kind of system Ashby finally chose is illusory,
because simply expressing the variable magnitudes as small whole numbers
by no means shows that any real system would change in such infinitely
precise, instantaneous steps. 2 - 2 is zero in the world of integers, but in the real world it’s
anywhere between -0.4999… and +0.4999… . When you add 1 to 1 in the
real world, you get something close to 2, but not right away. Everything
in the real world takes time to happen, and Ashby chose an approach in
which that simple fact is ignored.

All this is a great pity since Ashby was one of my early objects of
admiration, and it took me quite a while to realize that his acquaintance
with real control systems was rather sparse. I think he just had the bad
luck to have an insight that led him straight off the productive path on
which he started. If he had been any kind of engineer he might have
realized his mistake, but he was a psychiatrist and more of a hobbyist
than an engineer. And like many in cybernetics in the early days, he was
engaged in that very popular contest of seeing who could come up with the
most general possible statements. What a coup, to boil it all down to
“Only variety can destroy variety”! Wow! And what a bummer to
be topped by Boltzmann, who shortened that terse generalization by two
whole words, saying “Variety absorbs variety.”

Not my kind of game.

Best,

Bill P.
