Self-Organization

[From Boss Man (2004.05.07)]

Whenever a system is sufficiently complex that it becomes difficult to understand, it appears that
describing it as "self-organizing" works wonders.

[From Boss Man (2004.05.07)]

For those of you yet to be totally enthralled by self-organization, I recommend an article by
David Ruelle in the May 2004 _Physics Today_ entitled "Conversations on Nonequilibrium Physics
with an Extraterrestrial." Since not everyone may have access to this fine periodical, I quote the
caption of an image of a cell.

"The interior of a cell is not a dilute system close to equilibrium. It is densely packed with a
hierarchical organization going all the way from small ions to large structures easily visible with a
light microscope. Complex chemical and mechanical activity, mediated by enzymes and motor
proteins, is far from equilibrium. To make sense of that activity, one uses a combination of
fundamental and phenomenological ideas that is not guaranteed to be fully coherent--for
example, current approaches might not ensure that entropy increases. It would be nice, for both
theoretical and practical reasons, if one had a fundamental understanding of nonequilibrium
systems far from equilibrium before starting to make approximations."

If anyone is interested in actually understanding what complexity is about, I recommend Ruelle's
charming introduction _Chance and Chaos_. The fact that you know nothing about the subject,
however, should not stand in the way of your expressing strong opinions.

[From Bill Powers (2004.05.07.0821 MST)]

Boss Man (2004.05.07) --

"The interior of a cell is not a dilute system close to equilibrium. It is
densely packed with a hierarchical organization going all the way from
small ions to large structures easily visible with a light microscope.
Complex chemical and mechanical activity, mediated by enzymes and motor
proteins, is far from equilibrium. To make sense of that activity, one
uses a combination of fundamental and phenomenological ideas that is not
guaranteed to be fully coherent--for example, current approaches might not
ensure that entropy increases. It would be nice, for both theoretical and
practical reasons, if one had a fundamental understanding of
nonequilibrium systems far from equilibrium before starting to make
approximations."

I appreciate the Prigogine-like sentiment, not being much of a fan of
theory at this level of abstraction but still being unable to shake off
the feeling that there may be something to learn from it. As I probably
said recently, the abstractions are not what make the system work; it's
whatever does make the system work that makes the abstractions hold true.
It's not conservation of energy that makes the water pumped up into the
holding pond create electricity as it falls back down, but the weight and
speed of the water pushing on the turbine blades. You could still get work
out of the water even if energy were not faithfully conserved. It's not an
increase of entropy that prevents as much energy from being removed from
the falling water as was put into it to raise it, but friction and other
dissipative processes. The generalizations of thermodynamics may hold true
over all systems, and it's sort of amazing that they do, but it's the
nature of generalizations to tell us nothing about HOW things work. I'm
much more interested in the HOW, and am willing to let the generalizations
come after we know the details.
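
A rough numerical sketch may make the pumped-storage example concrete. The
figures and efficiency factors below are purely illustrative assumptions, not
anything from the post; the point is only that the shortfall between energy in
and energy out is accounted for by the specific losses at each stage (pump,
pipe friction, turbine, generator) rather than by an entropy statement made in
the abstract.

```python
# Illustrative pumped-storage bookkeeping. All numbers are assumed for the
# sake of the example; the shortfall comes from the concrete dissipative
# stages, not from an abstract entropy balance.

RHO = 1000.0   # density of water, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def pumping_energy(volume_m3, head_m, pump_efficiency=0.85):
    """Electrical energy (J) spent lifting the water into the holding pond."""
    return RHO * G * volume_m3 * head_m / pump_efficiency

def recovered_energy(volume_m3, head_m,
                     pipe_efficiency=0.95,       # friction in the penstock
                     turbine_efficiency=0.90,    # water pushing on the blades
                     generator_efficiency=0.95):
    """Electrical energy (J) recovered as the water falls back down."""
    potential = RHO * G * volume_m3 * head_m
    return potential * pipe_efficiency * turbine_efficiency * generator_efficiency

volume, head = 1000.0, 50.0   # 1000 m^3 raised 50 m (assumed figures)
e_in = pumping_energy(volume, head)
e_out = recovered_energy(volume, head)
print(f"energy in:  {e_in / 3.6e6:.1f} kWh")
print(f"energy out: {e_out / 3.6e6:.1f} kWh")
print(f"round trip: {e_out / e_in:.0%}")
```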

I'm reminded of W. Ross Ashby's concept of "variety," loosely related to
information theory and entropy. Ashby concluded that a control system has
to have an amount of variety equal [at least equal?] to the variety of the
controlled system. This is sort of like saying that a machine has to have
a temperature higher than the temperature of its surroundings to do work on
the surroundings. These generalizations may be true, but they say nothing
about how you distinguish useful from useless variety, nor do they help you
design a system that will exhibit the required variety-matching while also
performing some useful kind of control. It's possible to decrease the
entropy in a system by feeding organization to it, and then let the entropy
increase again while the system does nothing of any interest. The
difference between that system and another one that writes symphonies is
not in the fluctuations of entropy, but in the _kind_ (not just the amount)
of organization represented in it.
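
A toy model (not Ashby's actual formalism, and not anything proposed in the
post) can make the point about kind versus amount of variety concrete. Here
the outcome of regulation is simply taken to be (disturbance + response) mod N:
a regulator with too few distinct responses cannot hold the outcome to a single
value, but a regulator with plenty of variety can still regulate nothing if its
responses are the wrong kind.

```python
# Toy illustration of "requisite variety" and its limits. The outcome model
# (disturbance + response) mod N is an assumption made for the example only.

def outcome_variety(disturbances, policy, n_outcomes):
    """Number of distinct outcomes when the regulator applies `policy`."""
    return len({(d + policy(d)) % n_outcomes for d in disturbances})

N = 5
disturbances = range(N)

def matched(d):       # one distinct response per disturbance, chosen to cancel it
    return (N - d) % N

def mismatched(d):    # the same number of distinct responses, but poorly chosen
    return d

def low_variety(d):   # only two distinct responses available
    return d % 2

print("matched policy, full variety   :", outcome_variety(disturbances, matched, N))
print("mismatched policy, full variety:", outcome_variety(disturbances, mismatched, N))
print("low-variety policy             :", outcome_variety(disturbances, low_variety, N))
```

The matched policy drives the outcome variety down to 1; the mismatched policy,
with exactly the same amount of variety, leaves it at 5; and the low-variety
policy cannot do better than 3.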

As it happens, Ashby chose the wrong design for a brain, though it could be
shown to obey the "laws of variety" just as well as a negative feedback
control system does. He thought that the most effective design for a
controller was one in which the controller sensed the states of all
disturbances of the controlled quantity and computed just the output that
would cancel all their effects. He pointed out that in principle, this kind
of system could achieve zero error, while a negative feedback control
system always had to allow some error to exist, since its output was
error-driven. "In principle," of course, is a long way from "in practice,"
and Ashby, being a psychiatrist, was not enough of an engineer to see how
impracticable his design was. I got a good part of my inspiration for what
is now PCT from the parts of Ashby's book that dealt with the kind of
control system he ultimately rejected.
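
The contrast between the two designs is easy to see in a toy simulation. The
sketch below uses assumed dynamics (the controlled quantity is just output plus
disturbance, and the gains are made up); it is not Ashby's or Powers' actual
model, only an illustration of why "in principle" zero error depends on a
perfect model of the disturbance, while the error-driven controller needs no
such model and merely tolerates a small residual error.

```python
import math

def feedback_controller(disturbances, gain=100.0, slowing=0.01, reference=0.0):
    """Error-driven negative feedback controller (leaky-integrator output)."""
    output, errors = 0.0, []
    for d in disturbances:
        q = output + d                       # controlled quantity
        e = reference - q                    # error signal drives the output
        output += slowing * (gain * e - output)
        errors.append(e)
    return errors

def compensating_controller(disturbances, model_gain=1.0, reference=0.0):
    """Ashby-style design: sense the disturbance directly and compute the
    output meant to cancel it. model_gain=1.0 stands in for a perfect model
    of how the disturbance affects the controlled quantity."""
    errors = []
    for d in disturbances:
        output = -model_gain * d             # no error signal involved
        q = output + d
        errors.append(reference - q)
    return errors

# A smooth, slowly varying disturbance (an assumed stand-in for the real thing).
disturbances = [8.0 * math.sin(2 * math.pi * t / 200) for t in range(400)]

fb  = feedback_controller(disturbances)
ffp = compensating_controller(disturbances, model_gain=1.0)   # perfect model
ffi = compensating_controller(disturbances, model_gain=0.9)   # 10% model error

for name, errs in [("feedback (error-driven)", fb),
                   ("compensator, perfect model", ffp),
                   ("compensator, 10% model error", ffi)]:
    print(f"{name:30s} max |error| = {max(abs(e) for e in errs):.3f}")
```

With a perfect model the compensator's error is exactly zero; with a 10% model
error it does worse than the feedback controller, whose error stays small
without any knowledge of the disturbance at all.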

So maybe you can see why I react to generalizations like the "law of
variety" with something elss than unrestrained enthusiasm.

Best,

Bill P.

[From Boss Man (2004.05.07)]

Bill Powers (2004.05.07.0821 MST)

So maybe you can see why I react to generalizations like the "law of
variety" with something elss than unrestrained enthusiasm.

Thank you for the informative response. For what it may be worth, we apparently see eye to eye on
the general question. The devil, and god, are in the details.