What happened to cybernetics (was RE: The reality of "information")

Dear Bill,

With great interest I am trying to catch up on these interesting
posts.

Can you please elaborate a bit more on what you say about
cybernetics and Ashby in the following part:

Determining channel capacity, as you say, is a pretty simple
proposition – no metaphysics needed. But information theory introduces
metaphysics, and that is where IT and I part company. The concept that information
is a reduction in uncertainty comes from confusing an equation used to describe
a phenomenon with the phenomenon itself. I saw this happen in cybernetics, with
Ashby’s “Law of Requisite Variety.” The whole concept of uncertainty
in physics or in casinos is metaphysics. The fact that we sometimes say we are
“uncertain” about something has no meaning outside our private
experiences. It doesn’t mean that there is something in nature called
uncertainty and we are sensing it. And reducing uncertainty can be accomplished
by many means, including getting a good night’s sleep or regaining one’s
self-confidence (justifiably or not).

Thanks,

Arthur Dijkstra

···

From: Control Systems Group Network (CSGnet) [mailto:CSGNET@LISTSERV.ILLINOIS.EDU] On behalf of Bill Powers
Sent: Thursday, 16 April 2009 18:39
To: CSGNET@LISTSERV.ILLINOIS.EDU
Subject: Re: The reality of “information”

[From Bill Powers (2009.04.16.0819 MDT)]

[Rick Marken (2009.04.15.2200)]

Martin Taylor:

The exact same environmental situation can be perceived and controlled in a
literally infinite number of different ways.

RM:
That seems to rule out the idea that perception is a process of
communicating to the mind what is actually out there in the
environment. If the same environmental situation can be perceived in
an infinite number of ways, then there is no information to be
transmitted about it. Information theory assumes that there is a
message to be transmitted and received. The message might be a binary
sequence like 1011010. There are 7 bits of information to be
transmitted in this message. If the received message is 1011010 then
we can say that 7 bits of information have been transmitted. If there
is noise in the transmission channel then the message received might
be 10x101x; only 5 bits were transmitted successfully. In this
situation, measuring the amount of information carried by a
transmission channel makes sense and it might even have practical
value; it can tell us how many times a message should be repeated over
a channel so that we can be sure it was received successfully.
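To make that bookkeeping concrete, here is a minimal sketch using the message and noise pattern from the example above (nothing beyond the example itself):

```python
# Compare the sent binary message with the received version and count
# how many symbols arrived intact; 'x' marks a symbol lost to noise.
sent     = "1011010"
received = "10x101x"

delivered = sum(1 for s, r in zip(sent, received) if s == r)
print(f"{delivered} of {len(sent)} bits arrived intact")  # 5 of 7
```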

BP:
I think you’re getting close to something here. Electrical engineers, or most
people (like me) when they’re being engineers, are naive realists. We assume
that the soldering iron is really there, that the circuit components are really
what they appear to be, and so on. And the communications engineer assumes that
the dots and dashes the telegrapher is sending are really in the sequence that
appears to be happening. Shannon’s job at Bell Labs was to figure out how
faithfully, and how fast, that sequence could be transmitted via some
particular channel to its destination. Fidelity is determined by comparing the
message that was sent against the message that was received. To define
information transfer, or determine channel capacity, you have to know both. If
you receive a message that says “Mary had a libble limb”, for all you
know that is exactly the message that was transmitted, and the channel capacity
was not exceeded. But if the original message was “Mary hab a labble
lamb,” the message was not transmitted faithfully, regardless of what you
expected the original to be. To know what the channel capacity is you have to
have a way of knowing what is really Out There – what message was really sent.
Determining channel capacity, as you say, is a pretty simple proposition – no
metaphysics needed. But information theory introduces metaphysics, and that is
where IT and I part company. The concept that information is a reduction in
uncertainty comes from confusing an equation used to describe a phenomenon with
the phenomenon itself. I saw this happen in cybernetics, with Ashby’s “Law
of Requisite Variety.” The whole concept of uncertainty in physics or in
casinos is metaphysics. The fact that we sometimes say we are
“uncertain” about something has no meaning outside our private
experiences. It doesn’t mean that there is something in nature called
uncertainty and we are sensing it. And reducing uncertainty can be accomplished
by many means, including getting a good night’s sleep or regaining one’s
self-confidence (justifiably or not).
Here is a quote from a Wiki article:
http://en.wikipedia.org/wiki/Information_entropy

Shannon’s entropy represents an absolute limit on the best possible lossless
compression of any communication, under certain constraints: treating messages
to be encoded as a sequence of independent and identically-distributed random
variables, Shannon’s source coding theorem shows that, in the limit, the
average length of the shortest possible representation to encode the messages
in a given alphabet is their entropy divided by the logarithm of the number of
symbols in the target alphabet.
A fair coin has an entropy of one bit. However, if the coin is not fair, then
the uncertainty is lower (if asked to bet on the next outcome, we would bet
preferentially on the most frequent result), and thus the Shannon entropy is
lower. Mathematically, a coin flip is an example of a Bernoulli trial, and its
entropy is given by the binary entropy function. A long string of repeating
characters has an entropy rate of 0, since every character is predictable. The
entropy rate of English text is between 1.0 and 1.5 bits per letter,[1] or as
low as 0.6 to 1.3 bits per letter, according to estimates by Shannon based on
human experiments.[2]
==============================================================================
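For anyone who wants to check the coin-flip figures in that quote, here is a minimal sketch of the standard binary entropy function (a textbook formula, not something from the original post):

```python
import math

def binary_entropy(p):
    """Entropy, in bits, of a coin that comes up heads with probability p."""
    if p in (0.0, 1.0):
        return 0.0  # a certain outcome carries no uncertainty
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(binary_entropy(0.5))  # 1.0 bit: a fair coin
print(binary_entropy(0.9))  # about 0.47 bits: a biased coin has lower entropy
```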

BP:
My immediate reaction to the first sentence is to start looking for exceptions
to this wild generalization. What do you mean, the “best possible lossless
compression of any communication?” Who says you have exhausted all the
possibilities ever known or that will ever be known? You can do this only by
defining some small universe with only a few possibilities so you can be sure
nothing has been left out – and this is exactly what information theory does.

That is why Shannon has to say “in a given alphabet”. As soon as he
said that, I knew two things: (1) information theory is not about the real
world, and (2) neither Shannon nor anyone else had any idea of the size of the
alphabet needed to encode all possible messages.

Channel capacity is a physical property of the transmission channel itself –
it does not change when you change alphabets. For example what is the message
you get if you call someone on the telephone and there is no answer? It doesn’t
matter what alphabet you expect the answer to be written or spoken in: no
answer gives you the information that nobody is answering that telephone. You
don’t know why, but there are endless possibilities, including a mass murder, a
fire, or a fickle friend. Considering all the things that might account for the
lack of an answer, it is clearly impossible to find any finite alphabet in
which every answer could be encoded. So information in the sense of knowledge
about the world is not the same thing as Shannon information. Channel capacity
does not tell you how much information the world has to give us, or how fast it
is generating that information.

An interesting thing happened on the way to the internet. Here’s another
reference, clearly somewhat dated:

http://www.skepticfiles.org/cowtext/comput~1/9600info.htm

And some quotes from it:

The roughly 3000-Hz available in the telephone bandwidth poses few problems
for 300 bps modems, which only use about one fifth of the bandwidth. A full
duplex 1200 bps modem requires about half the available bandwidth,
transmitting simultaneously in both directions at 600 baud and using phase
modulation to signal two data bits per baud. “Baud rate” is actually a
measure of signals per second. Because each signal can represent more than
one bit, the baud rate and bps rate of a modem are not necessarily the same.
In the case of 1200 bps modems, their baud rate is actually 600 (signals per
second) and each signal represents two data bits. By multiplying signals per
second with the number of bits represented by each signal one determines the
bps rate: 600 signals per second X 2 bits per signal = 1200 bps.

In moving up to 2400 bps, modem designers decided not to use more bandwidth,
but to increase speed through a new signalling scheme known as quadrature
amplitude modulation (QAM).

In QAM, each signal represents four data bits. Both 1200 bps and 2400 bps
modems use the same 600 baud rate, but each 1200 bps signal carries two data
bits, while each 2400 bps signal carries four data bits:

600 signals per second X 4 bits per signal = 2400 bps.

- - - - - - - -



ECHO-CANCELLATION

This method solves the problem of overlapping transmit and receive channels.
Each modem's receiver must try to filter out the echo of its own transmitter
and concentrate on the other modem's transmit signal. This presents a
tremendous computational problem that significantly increases the complexity
-- and cost -- of the modem. But it offers what other schemes don't:
simultaneous two-way transmission of data at 9600 bps.

The CCITT "V.32" recommendation for 9600 bps modems includes echo-
cancellation. The transmit and receive bands overlap almost completely, each
occupying 90 percent of the available bandwidth. Measured by computations per
second and bits of resolution, a V.32 modem is roughly 64 times more complex
than a 2400 bps modem. This translates directly into added development and
production costs which means that it will be some time before V.32 modems can
compete in the high-volume modem market.

=================================================================
BP:
… and now we have dial-up modems that run at 56000 bits per second by
compressing the message before transmission and decompressing the received
message. Net-Zero and Juno, I read, can compress text (in the server) to 4% of
its original size and achieve another factor of 25.
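For context on where the raw limit of such a channel sits, the standard Shannon-Hartley formula C = B * log2(1 + S/N) gives the capacity of a band-limited analog channel. A minimal sketch, with an assumed bandwidth and signal-to-noise ratio chosen purely for illustration:

```python
import math

def channel_capacity(bandwidth_hz, snr_db):
    """Shannon-Hartley capacity C = B * log2(1 + S/N), in bits per second."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# About 3000 Hz of telephone bandwidth at an assumed ~35 dB signal-to-noise ratio:
print(round(channel_capacity(3000, 35)))  # about 34,900 bit/s
```

On those assumptions the raw analog channel tops out in the mid-30-kbit/s range, which is why further gains had to come from squeezing redundancy out of the data rather than more bits out of the channel.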

A last quote from http://en.wikipedia.org/wiki/Modem:

================================================================================

List of dialup speeds

Note that the values given are maximum values, and actual values may be
slower under certain conditions (for example, noisy phone lines).[4] For a
complete list see the companion article List of device bandwidths.

Connection                                              Bitrate
Modem 110 baud                                          0.1 kbit/s
Modem 300 (300 baud) (Bell 103 or V.21)                 0.3 kbit/s
Modem 1200 (600 baud) (Bell 212A or V.22)               1.2 kbit/s
Modem 2400 (600 baud) (V.22bis)                         2.4 kbit/s
Modem 2400 (1200 baud) (V.26bis)                        2.4 kbit/s
Modem 4800 (1600 baud) (V.27ter)                        4.8 kbit/s
Modem 9600 (2400 baud) (V.32)                           9.6 kbit/s
Modem 14.4 (2400 baud) (V.32bis)                        14.4 kbit/s
Modem 28.8 (3200 baud) (V.34)                           28.8 kbit/s
Modem 33.6 (3429 baud) (V.34)                           33.6 kbit/s
Modem 56k (8000/3429 baud) (V.90)                       56.0/33.6 kbit/s
Modem 56k (8000/8000 baud) (V.92)                       56.0/48.0 kbit/s
Bonding modem (two 56k modems) (V.92)                   112.0/96.0 kbit/s [5]
Hardware compression (variable) (V.90/V.42bis)          56.0-220.0 kbit/s
Hardware compression (variable) (V.92/V.44)             56.0-320.0 kbit/s
Server-side web compression (variable) (Netscape ISP)   100.0-1000.0 kbit/s

BP:
Entropy is not easy to define. A good discussion is in
http://www4.ncsu.edu/unity/lockers/users/f/felder/public/kenny/papers/entropy.html
Here is a quote:

If I were able to measure the complete, microscopic state of the air
molecules then I would know all the information there is to know about the
macroscopic state. For example, if I knew the position of every molecule in the
room I could calculate the average density in any macroscopic region. The
reverse is not true, however. If I know the average density of the air in each
cubic centimeter that tells me only how many molecules are in each of these
regions, but it tells me nothing about where exactly the individual molecules
within each such region are. Thus for any particular macrostate there are many
possible corresponding microstates. Roughly speaking, entropy is defined for
any particular macrostate as the number of corresponding microstates.
To recap: The microstate of a system consists of a complete description of the
state of every constituent of the system. In the case of the air this means the
position and velocity of all the molecules. (Going further to the level of
atoms or particles wouldn’t change the arguments here in any important way.)
The macrostate of a system consists of a description of a few, macroscopically
measurable quantities such as average density and temperature. For any
macrostate of the system there are in general many different possible
microstates. Roughly speaking, the entropy of a system in a particular
macrostate is defined to be the number of possible microstates that the system
might be in. (In the appendix I’ll discuss how to make this definition more explicit.)
==================================================================================
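A toy version of that counting (a sketch, not the article's air example): for N particles that can each sit in the left or right half of a box, the macrostate "k particles on the left" corresponds to C(N, k) microstates, and the entropy goes as the logarithm of that count.

```python
from math import comb, log

N = 10  # toy system: 10 distinguishable particles, each in the left or right half

for k in range(N + 1):
    microstates = comb(N, k)        # ways to realize the macrostate "k on the left"
    entropy = log(microstates)      # entropy grows as the log of that count
    print(f"k = {k:2d}: {microstates:4d} microstates, entropy {entropy:5.2f}")
```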

BP:
Since the number of possible microstates of the outside world is rather large,
about all that we can conclude is that the entropy of any macrostate is
infinite.

RM:

This is not the way I think perception works. Perception is not a
channel that brings a message about the “true” state of the
environment into the brain. The “true” state of the environment could
be represented as a binary “message” like 1011010. But I don’t think
of this as a real message; it is just the state of a set of physical
variables. If what is perceived is, say, some linear combination of a
subset of the elements of this “message”, then it makes no sense (it
seems to me) to ask how much information about the state of the
environment is communicated by the perceptual signal. It just
doesn’t seem like a relevant question.

BP:
This is an important observation. If the taste of lemonade consists of
temperature, tartness, sweetness, and other sensations, the perception of
lemonade is not about some corresponding entity in the outside world. There is
no “message” about lemonade coming into the brain. Instead, the
perception is to the individual components as density is to the positions of
individual molecules. The world consists of microstates; perceptions are the
macrostates. One level of perception consists of the microstates of the next
level up, which relatively speaking consists of the macrostates. Obviously we
can’t go from the macrostates to the microstates, although by considering many
different macrostates derived from the same microstates outside, we can begin
to build a fuzzy picture of the microstates. And the control process, plus
reorganization, allows us to manipulate microstates in such a way as to give us
control of the macrostates without even knowing what the microstates are.
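A minimal sketch of that micro/macro relation (the weights and environmental values below are invented purely for illustration): a perceptual signal computed as a weighted sum of many lower-order signals, so that many different microstates yield exactly the same perception.

```python
weights = [0.2, 0.5, 0.3]  # a hypothetical perceptual input function: fixed weights

def perceive(environment):
    """One 'macrostate' (a perceptual signal) computed from many 'microstates'."""
    return sum(w * x for w, x in zip(weights, environment))

# Two quite different environmental states...
env_a = [1.0, 2.0, 3.0]
env_b = [4.0, 2.6, 0.0]
# ...yield the same perception, so the perception cannot be inverted to
# recover the particular microstate that produced it.
print(round(perceive(env_a), 6), round(perceive(env_b), 6))  # 2.1 2.1
```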

Well, that is metaphysics, too: it’s one level talking about other levels. I
think the most important point you make here is that we can’t consider
perceptual signals as “messages” passed to higher levels. The higher
levels take whatever inputs they want from the existing lower levels; they
create a new set of perceptions from some set of lower-level perceptions, with the
“taking” being done by the receiving entity, not by some transmitting
entity. The lower levels do not decide what they want to say to the higher
levels. Yet the higher levels can tell the lower ones what they want to receive
from them.

Best,

Bill P.

[From Bill Powers (2009.04.17.1624 MDT)]

AD: Dear Bill,

With great interest I am trying to catch up on these interesting
posts.

Can you please elaborate a bit more on what you say about cybernetics and
Ashby in the following part:

BP earlier:I saw this happen in
cybernetics, with Ashby’s “Law of Requisite
Variety.”

BP: I assume this is the passage that caught your eye. I was referring to
the concept of requisite variety as a kind of measure of variability,
which is related to uncertainty and the concept of information. Ashby
maintained that the actions of a control system had to have at least as
much “variety” as the environment to be controlled.

For the sake of PCTers not acquainted with this law, here is a bit from

http://en.wikipedia.org/wiki/Variety_(cybernetics)


============================================================================

The Law of Requisite Variety

If a system is to be
stable the number of states of its control mechanism must be greater than
or equal to the number of states in the system being controlled. Ashby
states the Law as “only variety can destroy variety”[4]. He
sees this as aiding the study of problems in biology and a “wealth
of possible applications”. He sees his approach as introductory to
Shannon Information Theory (1948) which deals with the case of
“incessant fluctuations” or noise. The Requisite Variety
condition can be seen as a simple statement of a necessary dynamic
equilibrium condition in information theory terms c.f. Newton’s third
law, Le Chatelier’s principle.
Later, in 1970, Conant working with Ashby produced the Good Regulator
theorem [5] which required autonomous systems to acquire an internal
model of their environment to persist and achieve stability or dynamic
equilibrium.

The idea in that last unfortunate paragraph has steered lots of people
into a blind alley.
While the law of requisite variety may in fact be true (I wouldn’t know),
it’s not sufficient for designing a stable control system, or even a
control system that controls. All it really says is that the control
system must have the same number of output degrees of freedom as the
environment to be controlled. It doesn’t even say they have to be the
same degrees of freedom! If the outputs of the control system can apply
forces to an object’s position in x, y, and z, and the environment
controlled can vary in angles rho, theta, and tau, the number of degrees
of freedom of the output is the same as the number of degrees of freedom
of the environment, but nothing in the environment will be controlled.
Ashby referred to matching the number of “states,” but that
means only that each output variable must have at least the same number
of discriminable states or magnitudes as the corresponding environmental
variable. It still doesn’t say the variables have to correspond in any
particular way. If you match only the number of states, the chances of
creating even a closed loop are pretty small.
Even if all those conditions are met, you still don’t have a control
system, much less a stable one. To have a control system, you need to
give it the ability to sense the state of the environment in each
independent dimension (a subject Ashby totally ignored, apparently), to
compare what is sensed with a reference condition, and to generate an
output that affects the same variable that is sensed in such a way
that the difference between the sensory signal and the reference
magnitude is minimized and kept small despite unpredictable disturbances
of the environment. The law of requisite variety says nothing helpful
about those fundamental requirements. It’s one of those generalizations
that, while quite possibly true, is useless for designing or
understanding anything.
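For readers who want that list of requirements in concrete form, here is a minimal sketch (one dimension, made-up gains, not a model of any particular system) of an error-driven loop holding its sensed variable near a reference despite a disturbance it never senses directly:

```python
# A minimal error-driven control loop: output is driven by the difference
# between a sensed (controlled) variable and a reference signal.
reference = 10.0
output = 0.0
gain, dt = 50.0, 0.01        # made-up loop gain and time step

for step in range(1000):
    disturbance = 5.0 if step > 500 else 0.0   # unanticipated, never sensed directly
    controlled = output + disturbance          # environment: qi = o + d
    error = reference - controlled             # comparator
    output += gain * error * dt                # integrating output function

print(round(controlled, 3))  # stays near 10.0 even after the disturbance appears
```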

In “Design for a Brain” Ashby abandoned the best approach to
control theory and switched to a very bad version in which the variables
are discrete and enumerable. I think this is what gave rise to the
current fad called “modern control theory,” and that the
underlying principle he and his followers adopted is completely
impractical as a model of living systems (or the systems they control).
Ashby thought you could design a system so it would compute how much
action and what kind of action were needed to produce a desired result,
and then execute the action and get the result. He thought this would
provide instantaneous and perfect control, as compared to error-driven
systems which could not even in principle achieve EXACTLY zero error.
That is, of course, not physically possible for any real system no matter
how it’s designed, including the systems Ashby imagined. But systems of
the kind Ashby finally chose are illusory, because simply expressing the
variable magnitudes as small whole numbers by no means shows that any
real system would behave in such infinitely precise, instantaneous
steps. 2 - 2 is zero in the world of integers, but in the real world it’s
anywhere between -0.4999… and +0.4999… . When you add 1 to 1 in the
real world, you get something close to 2, but not right away. Everything
in the real world takes time to happen, and Ashby chose an approach in
which that simple fact is ignored.

All this is a great pity since Ashby was one of my early objects of
admiration, and it took me quite a while to realize that his acquaintance
with real control systems was rather sparse. I think he just had the bad
luck to have an insight that led him straight off the productive path on
which he started. If he had been any kind of engineer he might have
realized his mistake, but he was a psychiatrist and more of a hobbyist
than an engineer. And like many in cybernetics in the early days, he was
engaged in that very popular contest of seeing who could come up with the
most general possible statements. What a coup, to boil it all down to
“Only variety can destroy variety”! Wow! And what a bummer to
be topped by Boltzmann, who shortened that terse generalization by two
whole words, saying “Variety absorbs variety.”

Not my kind of game.

Best,

Bill P.

[From Bill Powers (2009.04.17.1731 MDT)]

Replying again to Arthur Dykstra 09:45 PM 4/17/2009 +0200:

Sorry, it wasn't Boltzmann who said variety absorbs variety. It was Stafford Beer.

Best,

Bill P.

Thanks Bill,

You have enriched my view of Ashby’s law of requisite variety. I have
never found such a critical view of his work; did I look in the wrong place?

May I ask your view of Ashby’s concept of an ultrastable system, which must
keep the essential variables within physiological limits (chapter 7 of
Design for a Brain)?

I am trying to use these concepts, including the Viable System Model, in
the description of a safety management system, so your comments (and
others’ of course) are very welcome.

Regards,

Arthur


(Gavin Ritz, 2009.04.18.23.31NZT)

Arthur

You are going up a cul de sac with VSM.

Let me suggest you map out your entire set of organizational business
processes with the accountabilities, in what is called the swim-lane
method; use Visio for this. BPM (business process mapping) will at least
give you a pictorial view of your organisation’s processes with the
accountabilities attached to them.

That gives you a situational analysis of what you have and who is
accountable; then you can fit your safety processes over that, i.e. your
“ends”.

Once you have a picture of your situation and of how you want it to look
(ends), you can devise your “means”.

In your means (you may have new roles), I suggest you specify the
accountabilities for each role, i.e. its authorities. This is called a
TIRR, Task Initiating Role Relationship (i.e. for cross-functional roles).
A safety officer may, for example, have the authority to tell the
production manager to do something or to stop something.

Good luck

Regards

Gavin


[From Bill Powers (2009.04.18.0809 MDT)]

Thanks Bill,

You have enriched my view of Ashby’s law of requisite variety. I have never
found such a critical view of his work; did I look in the wrong place?

Lots of people think Ashby’s work, particularly his later work on
cybernetics, was wonderful. I went down a different path, developed the
model that he abandoned, and concluded that he made the wrong choice,
with what I see as unfortunate consequences for those who put much time
and effort into following his reasoning.

When I first read Design for a Brain, I thought it was wonderful, too.
The parts of the book in which negative feedback control was explained in
relation to human behavior got me started on the path to PCT. I just sort
of ignored the rest, which made little sense to me. But who knows? Maybe
I’m the one who made the mistake. Others will have to decide that for
themselves.

May I ask your view of Ashby’s concept of an ultrastable system, which
must keep the essential variables within physiological limits (chapter 7
of Design for a Brain)?

That was a brilliant idea and I adopted it in the mid-1950s; it solved a
lot of problems about learning and adaptation. Later on, I read Daniel
Koshland’s book on bacterial chemotaxis where a principle is laid out
that shows how systematic learning can result from random variations (a
super-efficient form of natural selection), and put that together with
Ashby’s idea to form what I now call “E-coli reorganization
theory.” In my new book there are some respectably complex
simulations of control systems getting themselves organized by this
method.
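A cartoon of that method (my own toy rendering, not one of the simulations from the book): make a small change in the current direction, keep going while the error keeps shrinking, and pick a new random direction when it grows.

```python
import random

def intrinsic_error(w):
    """A hypothetical 'intrinsic error': distance of one parameter from its best value."""
    return abs(w - 3.7)

w = 0.0
direction = random.choice([-1, 1])
previous = intrinsic_error(w)

for _ in range(2000):
    w += 0.01 * direction            # keep "swimming" in the current direction
    current = intrinsic_error(w)
    if current > previous:           # error grew: "tumble" to a new random direction
        direction = random.choice([-1, 1])
    previous = current

print(round(w, 2))  # ends up hovering near 3.7 without ever computing a gradient
```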

I am trying to use these concepts, including the Viable System Model,
in the description of a safety management system, so your comments (and
others’ of course) are very welcome.

Tell me something. Suppose you design a system for managing safety, and
it doesn’t work as well as you want. You hire a consultant, and he
reports to you that your system’s output actions don’t have enough
variety to match the variety of the environment you’re trying to control.
What does that tell you about how to make the system work
better?

I have a strong impression that starting with Ashby, a lot of people
found the logic of negative feedback control baffling, while the logic of
the kind of organization Ashby proposed made good common sense. Ashby
himself got the idea of negative feedback control pretty well, but only
superficially; he never really explored systems of that sort, and drew
some conclusions about them that were just wrong enough to throw him off
the track.

One very wrong conclusion was that negative feedback control systems,
being error-driven, necessarily controlled imperfectly, nowhere near as
well as a system that could compute exactly the required action and carry
it out. What he didn’t realize, simply because of limited knowledge about
real error-driven control systems, is that the “imperfections”
in control of this sort can be as small as one part per million, or even
much less. I once built one that positioned a 300-pound carriage on its
ways over a 20-inch range with an accuracy of one tenth of a wavelength
of red light. Actually, most real control systems can achieve accuracies
of a percent or two, which is plenty close for most of the control tasks
anyone performs – how close to the center of your lane do you really
have to keep your car? One inch would be better than necessary, and
that’s about one percent of the lane width.

Also, not being an engineer, Ashby seemed to be under the impression that
just because you can compute the required action perfectly (given a big
enough and fast enough computer), a real system could produce it
perfectly (and instantly). In fact, one reason that negative feedback
control systems were invented was that computed-output systems are very
crude in their actual performance, since they have no way to correct for
unanticipated disturbances, changes in friction or other disturbances, or
changes in the efficiency of their own output mechanisms. Negative
feedback control systems are naturally able to maintain the same output
results even in the face of novel disturbances and even if their output
machinery loses a significant part of its effectiveness. That is why they
work so much better, and are so much simpler and faster, than systems
that work by computing what they need to do and then doing it.

The logic of the system Ashby finally settled on is appealing, especially
to engineers. You analyze the system to see what the effect of some
control input is on the system to be controlled. Given a sufficiently
accurate model of the controlled “plant”, you can see exactly
how any given control output, including magnitudes of input and rates of
change and so on, affects the variables you want to control. Equations
can be found that describe the causes and effects between the control
input and the final effect. This part of the process is called
“system identification.” The success of control will depend
crucially on the accuracy of the equations.

Next, you decide what values of the variables in the plant you want to
control, and how you want them to behave – the “trajectories”
of the variables. Given the desired values and the trajectories, you can
then use the inverses of the equations to calculate the values of the
control variables that will generate the desired end-points (inverse
kinematics) and the ways in which the control variables must change
through time to generate the desired trajectories (inverse dynamics).
Once you have completed those inverse calculations, all that remains is
to manipulate the control variables in the way you have deduced, and the
plant will then generate the desired results.

This is a rather complex procedure and involves some challenging
mathematical computations. It requires extensive knowledge about the
physics (and chemistry in some cases) of the plant. The conversion of the
control variable into exactly the computed values of effects on the plant
requires highly precise machinery (often the engineers have to cheat and
use a negative feedback control system to generate precise enough
effects). And since the actual effects will change from time to time
because of disturbances and changes in the environment and wear and tear
on the plant, a large fast computer is needed to update the model and
repetitively recompute all the inverses.

However, computers are small and fast now, and machinery can be made
quite precise, so there is no basic reason why this sort of system can’t
be made to work. And it has the great advantage that it is understandable
without adding anything much to 19th-century engineering knowledge. Anyone
can understand how it works, even if not everyone could design such a
system. The basic idea is very simple: you figure out what you have to do
to get the result you want, and then you do it. Compute and
execute.
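To see the contrast in miniature, here is a hedged sketch with an invented one-variable plant (not anything from Ashby or from actual engineering practice), comparing compute-and-execute with error-driven control when the plant’s gain drifts away from the model used for the inverse calculation:

```python
# A toy one-variable "plant": y = k * u. The designer's model assumes k = 2.0,
# but the real plant has drifted to k = 1.6 (wear, load changes, and so on).
k_model, k_real = 2.0, 1.6
desired = 10.0

# Compute-and-execute: invert the model once, then just apply that action.
u_open = desired / k_model
y_open = k_real * u_open             # 8.0 -- and the error is never noticed

# Error-driven control: keep adjusting the action from the sensed result.
u_fb, gain, dt = 0.0, 5.0, 0.01
for _ in range(1000):
    y_fb = k_real * u_fb
    u_fb += gain * (desired - y_fb) * dt

print(y_open, round(y_fb, 3))        # 8.0 versus roughly 10.0
```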

Apparently, once a person has this architecture firmly in mind, nothing
can dislodge it. And a negative feedback control system makes no sense at
all in comparison.

The first thing that makes no sense is that the “control
variable” is in the wrong place, and it’s called the
“controlled variable.” Instead of being the variable that is
used to do the controlling, it’s the variable that gets
controlled.

The second puzzle is that the controller contains no model of the plant.
The designer may have a model in mind – probably does – but when the
controller is in operation it does not do any inverse calculations at all
and makes no use of knowledge about how the plant works.

The third weird fact is that the controller can respond to unanticipated
disturbances by adjusting its effects on the plant so as to counteract
those disturbances – which it does not need to sense. And more than
that, if a motor starts to lose torque because of age, or a load is
placed on the plant, or the quality of fuel used in the plant changes,
the action of the control system automatically changes in just the way
needed to cancel 90, or 99, or if you wish 99.9% of the effects of these
changes.
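That 90, 99, or 99.9% figure is just standard loop algebra (a worked example, not a quote from the post). If the controlled quantity is q = o + d (output plus disturbance) and the output is o = G(r - q) with loop gain G and reference r, then solving the two equations together gives

q = (G*r + d) / (1 + G)

so the disturbance contributes only d/(1 + G) to the controlled quantity: a loop gain of 9, 99, or 999 cancels 90%, 99%, or 99.9% of its effect.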

But the worst thing is that this controller mangles cause and effect. The
variable that is controlled is also the variable that causes the action
that is doing the controlling of that variable. Causation runs in a
circle. Every effect in the control loop is part of its own cause. Normal
19th-century thinking simply can’t handle this. Many cyberneticists speak
with delight about “circular causation,” but few of them have
any idea of what that really implies, or how it works.

To understand exactly what a negative feedback control system is and what
it can do, you have to go farther into the details and properties of
these systems than Ashby went. That’s what I did. I didn’t learn
everything that control engineers learn, but I found shortcuts and built
enough control systems to get a pretty clear idea of their properties.
That’s how I came to realize that Ashby had bet on the wrong horse. When
I finally came to that conclusion, I didn’t much like it. I felt
disloyal.

“Requisite variety” is the kind of idea that people get when
they don’t really understand a system but want to say something useful
about it. That’s the impression I got about Ashby. He was looking for
generalizations that seemed true and that didn’t require getting into
details about control systems.

He would have done better to focus on the details before
generalizing.

Best,

Bill P.


[From Fred Nickols (2009.04.18.0958 MST)]

Boy! Bill's post below is a keeper. I don't know anyone who has dared challenge Ashby. Way to go, Bill!

···

--
Regards,

Fred Nickols
Managing Partner
Distance Consulting, LLC
nickols@att.net
www.nickols.us

"Assistance at A Distance"
  

Dear Bill:

AD:

I am trying to use these concepts including the Viable System
Model for the description of a safety management system so your (and others of
course) comments are very welcome.

BP:
Tell me something. Suppose you design a system for managing safety, and it
doesn’t work as well as you want. You hire a consultant, and he reports to you
that your system’s output actions don’t have enough variety to match the
variety of the environment you’re trying to control. What does that tell you
about how to make the system work better?

AD:

That would be a very underspecified report and no help at all. I
would ask him in which specific dimensions the system lacks variety. Then I
could think of measures to amplify the system’s variety in those dimensions,
or of ways to attenuate the environmental variety impacting the system, to
recreate homeostasis. Every recursion of the system (in VSM language) must
have requisite variety, and this can be achieved by variety amplification
and attenuation. I guess you are probably more familiar with these concepts
than I am.

I am an airline pilot, and the idea of a crew matching the environmental
variety to keep essential variables (e.g. speed, altitude, direction)
within limits feels like a useful model.

Thanks,

Arthur

Systems Group Network (CSGnet) [mailto:CSGNET@LISTSERV.ILLINOIS.EDU] Namens Bill
Powers
Verzonden: zaterdag 18 april 2009 17:51
“information”)

[From Bill Powers (2009.04.18.0809 MDT)]

···

Van: Control
Aan: CSGNET@LISTSERV.ILLINOIS.EDU
Onderwerp: Re: What happened to cybernetics (was RE: The reality of

At 11:21 AM 4/18/2009 +0200, Arthur Dykstra wrote:

Thanks Bill,
You enriched my view on Ashby’s law of req var. I have never found such a
critical view on his work, did I look in the wrong place ?

Lots of people think Ashby’s work, particularly his later work on cybernetics,
was wonderful. I went down a different path, developed the model that he
abandoned, and concluded that he made the wrong choice, with what I see as
unfortunate consequences for those who put much time and effort into following
his reasoning.

When I first read Design for a Brain, I thought it was wonderful, too. The
parts of the book in which negative feedback control was explained in relation
to human behavior got me started on the path to PCT. I just sort of ignored the
rest, which made little sense to me. But who knows? Maybe I’m the one who made
the mistake. Others will have to decide that for themselves.

May I ask you your view on Ashby’s concept of an
ultrastable system which must keep the essential variables within physiological
limits (in chapter 7 of Design for a brain) ?

That was a brilliant idea and I adopted it in the mid-1950s; it solved a lot of
problems about learning and adaptation. Later on, I read Daniel Koshland’s book
on bacterial chemotaxis where a principle is laid out that shows how systematic
learning can result from random variations ( a super-efficient form of natural
selection), and put that together with Ashby’s idea to form what I now call
“E-coli reorganization theory.” In my new book there are some
respectably complex simulations of control systems getting themselves organized
by this method.

I am trying to use these concepts including the Viable
System Model for the description of a safety management system so your (and
others of course) comments are very welcome.

Tell me something. Suppose you design a system for managing safety, and it
doesn’t work as well as you want. You hire a consultant, and he reports to you
that your system’s output actions don’t have enough variety to match the
variety of the environment you’re trying to control. What does that tell you
about how to make the system work better?

I have a strong impression that starting with Ashby, a lot of people found the
logic of negative feedback control baffling, while the logic of the kind of
organization Ashby proposed made good common sense. Ashby himself got the idea
of negative feedback control pretty well, but only superficially; he never
really explored systems of that sort, and drew some conclusions about them that
were just wrong enough to throw him off the track.

One very wrong conclusion was that negative feedback control systems, being
error-driven, necessarily controlled imperfectly, nowhere near as well as a
system that could compute exactly the required action and carry it out. What he
didn’t realize, simply because of limited knowledge about real error-driven
control systems, is that the “imperfections” in control of this sort
can be as small as one part per million, or even much less. I once built one
that positioned a 300-pound carriage on its ways over a 20-inch range with an
accuracy of one tenth of a wavelength of red light. Actually, most real control
systems can achieve accuracies of a percent or two, which is plenty close for
most of the control tasks anyone performs – how close to the center of your
lane do you really have to keep your car? One inch would be better than
necessary, and that’s about one percent of the lane width.
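
The arithmetic behind such figures is simple. In a proportional negative
feedback loop, roughly 1/(1 + G) of a disturbance survives at the controlled
variable, where G is the loop gain. A few illustrative values (not
measurements of any particular system):

# Residual error in a simple proportional loop: error ~ disturbance / (1 + G).
# The gains below are illustrative assumptions.
for loop_gain in (100.0, 10_000.0, 1_000_000.0):
    residual = 1.0 / (1.0 + loop_gain)    # fraction of the disturbance left
    print(f"loop gain {loop_gain:>11,.0f}: residual error fraction ~ {residual:.1e}")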

Also, not being an engineer, Ashby seemed to be under the impression that just
because you can compute the required action perfectly (given a big enough and
fast enough computer), a real system could produce it perfectly (and
instantly). In fact, one reason that negative feedback control systems were
invented was that computed-output systems are very crude in their actual
performance, since they have no way to correct for unanticipated disturbances,
changes in friction or other disturbances, or changes in the efficiency of
their own output mechanisms. Negative feedback control systems are naturally
able to maintain the same output results even in the face of novel disturbances
and even if their output machinery loses a significant part of its
effectiveness. That is why they work so much better, and are so much simpler
and faster, than systems that work by computing what they need to do and then
doing it.
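
A toy comparison makes the point concrete. In the sketch below (illustrative
numbers only, not a model of any specific device), a compute-and-execute
controller trusts its plant model while the real actuator has lost 20 percent
of its assumed effectiveness, and an unanticipated disturbance arrives halfway
through; a simple error-driven controller faces the same conditions.

r = 10.0                 # desired value of the controlled variable
k_nominal = 1.0          # plant gain the designer assumed
k_actual = 0.8           # plant gain after wear and tear

u_open = r / k_nominal   # computed once from the (wrong) model
u_fb = 0.0               # feedback controller's output, adjusted continuously
adjust = 0.1             # integration rate of the feedback controller

for t in range(200):
    d = 3.0 if t >= 100 else 0.0          # unanticipated disturbance
    q_open = k_actual * u_open + d        # open-loop result
    q_fb = k_actual * u_fb + d            # closed-loop result
    u_fb += adjust * (r - q_fb)           # error-driven correction
    if t in (99, 199):
        print(f"t={t:3d}  open-loop q={q_open:6.2f}   feedback q={q_fb:6.2f}")

The computed output comes out 20 percent low before the disturbance and is
shifted a further three units when the disturbance arrives; the error-driven
controller settles on the reference and stays there, without ever being told
that the actuator weakened or that the disturbance arrived.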

The logic of the system Ashby finally settled on is appealing, especially to
engineers. You analyze the system to see what the effect of some control input
is on the system to be controlled. Given a sufficiently accurate model of the
controlled “plant”, you can see exactly how any given control output,
including magnitudes of input and rates of change and so on, affects the variables
you want to control. Equations can be found that describe the causes and
effects between the control input and the final effect. This part of the
process is called “system identification.” The success of control
will depend crucially on the accuracy of the equations.

Next, you decide what values of the variables in the plant you want to control,
and how you want them to behave – the “trajectories” of the
variables. Given the desired values and the trajectories, you can then use the
inverses of the equations to calculate the values of the control variables that
will generate the desired end-points (inverse kinematics) and the ways in which
the control variables must change through time to generate the desired
trajectories (inverse dynamics). Once you have completed those inverse
calculations, all that remains is to manipulate the control variables in the
way you have deduced, and the plant will then generate the desired results.

This is a rather complex procedure and involves some challenging mathematical
computations. It requires extensive knowledge about the physics (and chemistry
in some cases) of the plant. The conversion of the control variable into
exactly the computed values of effects on the plant requires highly precise
machinery (often the engineers have to cheat and use a negative feedback
control system to generate precise enough effects). And since the actual
effects will change from time to time because of disturbances and changes in
the environment and wear and tear on the plant, a large fast computer is needed
to update the model and repetitively recompute all the inverses.

However, computers are small and fast now, and machinery can be made quite
precise, so there is no basic reason why this sort of system can’t be made to
work. And it has the great advantage that it is understandable without adding
anything much to 19th-century engineering knowledge. Anyone can understand how
it works, even if not everyone could design such a system. The basic idea is
very simple: you figure out what you have to do to get the result you want, and
then you do it. Compute and execute.
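
Here is a bare-bones sketch of that recipe, using a deliberately trivial
first-order plant. The model, the desired trajectory, and all the numbers are
invented for illustration; a real application would involve far more elaborate
identification and inversion.

dt, k, a = 0.01, 2.0, 0.5                 # identified plant model: dq/dt = k*u - a*q

def plant_step(q, u):
    # The real plant, here assumed to match the identified model exactly.
    return q + dt * (k * u - a * q)

# 1. Desired trajectory: ramp from 0 to 1 over 5 seconds, then hold.
n = 1000
desired = [min(t * dt / 5.0, 1.0) for t in range(n + 1)]

# 2. Inverse dynamics: solve the model for the u that produces each step.
u_plan = [(desired[t + 1] - desired[t]) / dt / k + a * desired[t] / k
          for t in range(n)]

# 3. Execute the plan open loop and compare with what was wanted.
q = 0.0
worst = 0.0
for t in range(n):
    q = plant_step(q, u_plan[t])
    worst = max(worst, abs(q - desired[t + 1]))
print(f"worst tracking error with an exact model: {worst:.1e}")   # ~0

Everything hinges on the model being exact; perturb k or a, or add a
disturbance inside plant_step, and the tracking error grows with nothing in
the scheme to notice or correct it.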

Apparently, once a person has this architecture firmly in mind, nothing can
dislodge it. And a negative feedback control system makes no sense at all in
comparison.

The first thing that makes no sense is that the “control variable” is
in the wrong place, and it’s called the “controlled variable.”
Instead of being the variable that is used to do the controlling, it’s the
variable that gets controlled.

The second puzzle is that the controller contains no model of the plant. The
designer may have a model in mind – probably does – but when the controller
is in operation it does not do any inverse calculations at all and makes no use
of knowledge about how the plant works.

The third weird fact is that the controller can respond to unanticipated
disturbances by adjusting its effects on the plant so as to counteract those
disturbances – which it does not need to sense. And more than that, if a motor
starts to lose torque because of age, or a load is placed on the plant, or the
quality of fuel used in the plant changes, the action of the control system
automatically changes in just the way needed to cancel 90%, or 99%, or if you
wish 99.9%, of the effects of these changes.

But the worst thing is that this controller mangles cause and effect. The
variable that is controlled is also the variable that causes the action that is
doing the controlling of that variable. Causation runs in a circle. Every
effect in the control loop is part of its own cause. Normal 19th-century
thinking simply can’t handle this. Many cyberneticists speak with delight about
“circular causation,” but few of them have any idea of what that
really implies, or how it works.
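
One way to see that circular causation is not a paradox is to stop tracing the
loop one event at a time and instead solve its two equations simultaneously,
as for any steady state. A minimal sketch with made-up numbers:

# Two equations describe the loop (my notation, illustrative values):
#
#   environment:  q = k * o + d        (controlled variable)
#   controller:   o = g * (r - q)      (output driven by error)
#
# Substituting one into the other and solving for q gives
#
#   q = (k * g * r + d) / (1 + k * g)
#
# so as the loop gain k*g grows, q tracks r and the disturbance d is divided
# by (1 + k*g), without the controller ever sensing d itself.

k, r, d = 1.0, 10.0, 5.0
for g in (10.0, 100.0, 10_000.0):
    q = (k * g * r + d) / (1.0 + k * g)
    o = g * (r - q)
    print(f"gain {g:>8,.0f}:  q = {q:8.4f}   output o = {o:8.4f}")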

To understand exactly what a negative feedback control system is and what it
can do, you have to go farther into the details and properties of these systems
than Ashby went. That’s what I did. I didn’t learn everything that control
engineers learn, but I found shortcuts and built enough control systems to get
a pretty clear idea of their properties. That’s how I came to realize that
Ashby had bet on the wrong horse. When I finally came to that conclusion, I
didn’t much like it. I felt disloyal.

“Requisite variety” is the kind of idea that people get when they
don’t really understand a system but want to say something useful about it.
That’s the impression I got about Ashby. He was looking for generalizations
that seemed true and that didn’t require getting into details about control
systems.

He would have done better to focus on the details before generalizing.

Best,

Bill P.

Regards,
Arthur

From: Control Systems Group Network (CSGnet) [mailto:CSGNET@LISTSERV.ILLINOIS.EDU]
On behalf of Bill Powers
Sent: Saturday 18 April 2009 1:28
To: CSGNET@LISTSERV.ILLINOIS.EDU
Subject: Re: What happened to cybernetics (was RE: The reality of
“information”)

[From Bill Powers (2009.04.17.1624 MDT)]

At 09:45 PM 4/17/2009 +0200, Arthur Dykstra wrote:

AD: Dear Bill,
With great interest I am trying to catch up on these interesting posts.
Can you please elaborate a bit more on what you say about cybernetics and Ashby
in the following part:

BP earlier:I saw this happen in cybernetics, with
Ashby’s “Law of Requisite Variety.”

BP: I assume this is the passage that caught your eye. I was referring to the
concept of requisite variety as a kind of measure of variability, which is
related to uncertainty and the concept of information. Ashby maintained that
the actions of a control system had to have at least as much
“variety” as the environment to be controlled.

For the sake of PCTers not acquainted with this law, here is a bit from
http://en.wikipedia.org/wiki/Variety_(cybernetics)

============================================================================

The Law of Requisite Variety

If a system is to be stable the number of states of its control mechanism must
be greater than or equal to the number of states in the system being
controlled. Ashby states the Law as “only variety can destroy
variety”[4]. He sees this as aiding the study of problems in biology and a
“wealth of possible applications”. He sees his approach as introductory
to Shannon Information Theory (1948) which deals with the case of
“incessant fluctuations” or noise. The Requisite Variety condition
can be seen as a simple statement of a necessary dynamic equilibrium condition
in information theory terms c.f. Newton’s third law, Le Chatelier’s principle.

Later, in 1970, Conant working with Ashby produced the Good Regulator theorem
[5] which required autonomous systems to acquire an internal model of their
environment to persist and achieve stability or dynamic equilibrium.

============================================================================

The idea in that last unfortunate paragraph has steered lots of people into a
blind alley.

While the law of requisite variety may in fact be true (I wouldn’t know), it’s
not sufficient for designing a stable control system, or even a control system
that controls. All it really says is that the control system must have the same
number of output degrees of freedom as the environment to be controlled. It
doesn’t even say they have to be the same degrees of freedom! If the outputs of
the control system can apply forces to an object’s position in x, y, and z, and
the environment controlled can vary in angles rho, theta, and tau, the number
of degrees of freedom of the output is the same as the number of degrees of
freedom of the environment, but nothing in the environment will be controlled.
Ashby referred to matching the number of “states,” but that means
only that each output variable must have at least the same number of discriminable
states or magnitudes as the corresponding environmental variable. It still
doesn’t say the variables have to correspond in any particular way. If you
match only the number of states, the chances of creating even a closed loop are
pretty small.

Even if all those conditions are met, you still don’t have a control system,
much less a stable one. To have a control system, you need to give it the
ability to sense the state of the environment in each independent dimension (a
subject Ashby totally ignored, apparently), to compare what is sensed with a
reference condition, and to generate an output that affects the same variable
that is sensed in such a way that the difference between the sensory signal and
the reference magnitude is minimized and kept small despite unpredictable
disturbances of the environment. The law of requisite variety says nothing
helpful about those fundamental requirements. It’s one of those generalizations
that, while quite possibly true, is useless for designing or understanding
anything.
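
A small sketch of the difference between counting degrees of freedom and
actually closing loops (the coupling matrices, disturbances, and gain are all
arbitrary assumptions):

def run(coupling):
    # coupling[i][j] is the effect of output j on environmental variable i.
    env = [0.0, 0.0, 0.0]          # e.g. the angles rho, theta, tau
    ref = [1.0, -2.0, 0.5]         # where we want them to be
    dist = [0.4, -0.7, 0.2]        # steady disturbances acting on them
    out = [0.0, 0.0, 0.0]          # e.g. forces along x, y, and z
    for _ in range(200):
        for i in range(3):         # environment: disturbance plus output effects
            env[i] = dist[i] + sum(coupling[i][j] * out[j] for j in range(3))
        for j in range(3):         # sense, compare, act on the same variable
            out[j] += 0.5 * (ref[j] - env[j])
    return max(abs(ref[i] - env[i]) for i in range(3))

no_effect = [[0.0] * 3 for _ in range(3)]   # 3 output d.f., none touching the angles
one_to_one = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]

print("outputs that miss the controlled variables:", round(run(no_effect), 3))
print("outputs that affect what is actually sensed:", round(run(one_to_one), 3))

Both cases satisfy the variety bookkeeping equally well; only the second
controls anything, because only there do the outputs affect the same variables
that are sensed and compared with their references.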

In “Design for a Brain” Ashby abandoned the best approach to control
theory and switched to a very bad version in which the variables are discrete
and enumerable. I think this is what gave rise to the current fad called
“modern control theory,” and that the underlying principle he and his
followers adopted is completely impractical as a model of living systems (or
the systems they control). Ashby thought you could design a system so it would
compute how much action and what kind of action were needed to produce a
desired result, and then execute the action and get the result. He thought this
would provide instantaneous and perfect control, as compared to error-driven
systems, which could not even in principle achieve EXACTLY zero error.
Instantaneous, perfect control is, of course, not physically possible for any
real system no matter how it’s designed, including the systems Ashby imagined.
But systems of the kind Ashby finally chose are illusory, because simply
expressing the variable magnitudes as small whole numbers by no means shows
that any real system could change in that sort of infinitely precise,
instantaneous steps. 2 - 2 is zero in the world
of integers, but in the real world it’s anywhere between -0.4999… and
+0.4999… . When you add 1 to 1 in the real world, you get something close to
2, but not right away. Everything in the real world takes time to happen, and
Ashby chose an approach in which that simple fact is ignored.
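
A tiny illustration of that last point (the half-second time constant is an
arbitrary assumption):

dt, tau = 0.01, 0.5          # simple lag with a 0.5-second time constant
x, target = 1.0, 2.0         # "add 1 to 1": step the input from 1 toward 2
for step in range(151):
    if step % 50 == 0:
        print(f"t = {step * dt:4.2f} s   x = {x:.3f}")
    x += dt * (target - x) / tau   # approaches 2, but not right away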

All this is a great pity since Ashby was one of my early objects of admiration,
and it took me quite a while to realize that his acquaintance with real control
systems was rather sparse. I think he just had the bad luck to have an insight
that led him straight off the productive path on which he started. If he had
been any kind of engineer he might have realized his mistake, but he was a
psychiatrist and more of a hobbyist than an engineer. And like many in
cybernetics in the early days, he was engaged in that very popular contest of
seeing who could come up with the most general possible statements. What a
coup, to boil it all down to “Only variety can destroy variety”! Wow!
And what a bummer to be topped by Boltzman, who shortened that terse
generalization by two whole words, saying “Variety absorbs variety.”

Not my kind of game.

Best,

Bill P.

[From Bill Powers (2009.04.19.0712 MDT)]

AD:
I am trying to use these concepts, including the Viable System Model, for the description of a safety management system, so your (and of course others') comments are very welcome.
BP:
Tell me something. Suppose you design a system for managing safety, and it doesn't work as well as you want. You hire a consultant, and he reports to you that your system's output actions don't have enough variety to match the variety of the environment you're trying to control. What does that tell you about how to make the system work better?

AD:
That would be a very underspecified report and no help at all. I would ask him on what specific dimensions the system lacks variety. Then I could think of measures to amplify that specific system variety, or how to attenuate the environmental variety impacting the system, to recreate homeostasis.

BP: Fine, but you need to know something about control systems to do all that successfully. Behind what you call "amplifying variety" is something much more specific: identifying all the variables that need to be controlled, and providing some means of affecting each of those variables. That's all that "requisite variety" means. And you need to develop some way to monitor the results to see if you're getting the result you want or something else. That's the rest of the control system. In fact, to find those variables and figure out how to control them doesn't require thinking about variety at all, though there's nothing to stop you from doing so if you wish, after you've solved all the real problems. As you say, variety is a rather underspecified concept.

Just trying to eliminate disturbances won't "recreate homeostasis," either. You can't eliminate all disturbances, especially in airplanes. What you need is an actual homeostatic system or, as we call them here, a control system. And it had better not be just a homeostatic system; what you need is a RHEOstatic system (as Mrosovsky calls them), better known as a hierarchy of control systems in PCT circles. You wouldn't want an autopilot that could only keep you, homeostatically, at one heading, speed, and altitude. First you need to be able to maintain each important variable in a specific state, and then you need to be able to vary the state in which each variable is being controlled, so the lower-order control systems can be put into use by more general, higher-order control systems. The means of varying the homeostatic state is what we call a reference signal, and that is how higher-order systems can change what lower systems are doing without coming into conflict with them. The higher systems tell the lower ones what state to maintain, and leave the actual maintaining up to them. It's like the pilot entering the desired heading, airspeed, and altitude into the autopilot. He doesn't tell the autopilot how to manipulate the ailerons, elevators, and throttle -- he just tells it what result to achieve. If he tried to operate the controls he would be fighting the autopilot. There's a reason for making the autopilot cut out if the pilot starts using the controls himself. In living systems, that sort of micromanagement isn't allowed.
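
To make the two-level idea concrete, here is a toy sketch. The 'aircraft' is nothing but a couple of first-order lags, and every gain and limit is invented; the point is only the structure, in which the altitude system acts solely by adjusting the climb-rate system's reference.

dt = 0.1
altitude, climb, elevator = 1000.0, 0.0, 0.0
alt_ref = 1500.0                           # what the pilot asked for

for step in range(int(300 / dt)):
    # Higher-order system: altitude error sets the climb-rate reference
    # (limited to +/- 10 ft/s).
    climb_ref = max(-10.0, min(10.0, 0.05 * (alt_ref - altitude)))

    # Lower-order system: climb-rate error adjusts the elevator.
    elevator += dt * 0.2 * (climb_ref - climb)

    # Crude "physics": elevator deflection drives climb rate, climb rate
    # changes altitude, with a little aerodynamic damping thrown in.
    climb += dt * (2.0 * elevator - 0.5 * climb)
    altitude += dt * climb

print(round(altitude, 1), round(climb, 2))  # ends near 1500 with ~zero climb

The higher system never touches the elevator; it only moves the reference that the lower system is maintaining, which is the relationship between higher- and lower-order systems described above.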

Every recursion of the system (in VSM language) must have requisite variety, and this can be achieved by variety amplification and attenuation. I guess you are probably more familiar with these concepts than I am.

No, actually I don't use those terms at all. What I do in designing control systems or models of them could probably be classified in such abstract terms, but it's not the abstract terms that do the heavy lifting. I design control systems and make them work, and never once even think about variety. So far that hasn't proved to be a serious omission.

I am an airline pilot, and the idea of a crew matching the environmental variety to maintain essential variables (e.g. speed, altitude, direction) within limits feels like a useful model.

Perhaps that works for you, but I also recommend seeing how PCT looks to you as a model of those processes. You're making me wonder if we don't need yet another book, something like "How to use PCT in the real world."

Best,

Bill P.

···

At 10:40 PM 4/18/2009 +0200, Arthur Dykstra wrote:

Dear Bill,
Thanks for your response. I agree that the operationalisation of all these
concepts is a critical and difficult task. I have just bought all your books
and will explore how PCT and VSM might complement each other.
To what extent can the PCT concept be transferred to an organisational
environment like an airline? Has the PCT concept explicitly been used to
model the control structure of a company? It would be helpful to read about
this.
Thanks for any suggestions or links.
Arthur


[From Bill Powers (2009.04.19.0712 MDT)]

(Gavin Ritz 2009.04.20.10.58NZT)

BP says" "Perhaps that works for you, but I also recommend seeing how PCT
looks to you as a model of those processes. You're making me wonder if we
don't need yet another book, something like "How to use PCT in the real
world."

GR says: I would buy this book.

[From Fred Nickols (2009.04.19.1606 MST)]
  

From: Gavin Ritz <garritz@XTRA.CO.NZ>

[From Bill Powers (2009.04.19.0712 MDT)]

(Gavin Ritz 2009.04.20.10.58NZT)

BP says" "Perhaps that works for you, but I also recommend seeing how PCT
looks to you as a model of those processes. You're making me wonder if we
don't need yet another book, something like "How to use PCT in the real
world."

GR says: I would buy this book.

FN says, "Me, too."

Regards,

Fred Nickols
nickols@att.net