Entropy & Reorganization

[Martin Taylor 2009.08.21.21.57]

(Gavin Ritz 2009.08.22.13.46NZT)

[Martin Taylor 2009.08.21.17.37]

Rick Marken (2009.08.20.1020)

Martin Taylor (2009.08.20.15.01)
Rick Marken (2009.08.20.0900)
  
I don’t fully get it, Martin, but anything living that has some degree of control has limits (within itself and the environment) and is therefore entropy reducing, both exporting entropy (limiting other life forms, through whatever means) and reducing its own.
 
Humans found this out centuries ago: to order society (themselves), they started creating limits (e.g. laws, customs, rights), all of which is entropy reducing. That is control, so maybe control and entropy are the same thing in different forms.
 
Regards
Gavin

I’m not clear what relation you see between limits and entropy. When
you mentioned limits as entropy reducing in an earlier message, I
thought you were referring to limit cycles, and convergence to an
attractor does indeed reduce Boltzmann entropy. Now you seem to be
talking about limits on the distributions of the values of different
perceptions. Certainly cutting the tails of uncertainty distributions
will reduce entropy, but only marginally – unless the limits are
severe. Control, on the other hand, reduces the breadth of the
uncertainty distribution of the controlled variable, whether there are
or are not limits on the permitted values of the variable. That means
significant and continuing entropy export from the controlled variable,
to balance that imported from the disturbance and the reference signal.

Maybe you can explain what you mean by the relation between entropy and
limits. It would be easier for me to understand if you could use the
Boltzmann conceptual background I sketched in my repost of a 1993-6
message [Martin Taylor 2009.08.18.14.41]. What are the macrostates and
microstates, and how does the imposition of limits affect them?

Martin
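Martin’s contrast above, between merely cutting the tails of a distribution and narrowing the whole distribution as control does, can be illustrated numerically. The sketch below is an editorial construction, not anything from the thread: the Gaussian widths and the truncation fraction are arbitrary choices made only to show the qualitative effect.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def discrete_gaussian(sigma, lo=-50, hi=50):
    """A normalized Gaussian-shaped distribution on the integers lo..hi."""
    w = [math.exp(-0.5 * (x / sigma) ** 2) for x in range(lo, hi + 1)]
    total = sum(w)
    return [v / total for v in w]

def truncate(probs, keep):
    """Impose limits: cut the tails, keeping the central `keep` fraction
    of the support, and renormalize."""
    n = len(probs)
    cut = int(n * (1 - keep) / 2)
    kept = probs[cut:n - cut]
    total = sum(kept)
    return [v / total for v in kept]

base = discrete_gaussian(sigma=10)       # broad, weakly constrained variable
mild_limits = truncate(base, keep=0.6)   # tails cut at about 3 sigma
controlled = discrete_gaussian(sigma=2)  # control narrows the whole distribution

h_base = shannon_entropy(base)
h_limits = shannon_entropy(mild_limits)
h_control = shannon_entropy(controlled)
# Cutting the tails reduces entropy only marginally; narrowing the
# distribution, as control does, reduces it substantially.
```

Running this shows the entropy drop from mild truncation is a small fraction of a bit, while narrowing by control removes more than two bits, which is the asymmetry Martin describes.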

(Gavin Ritz 2009.08.22.15.00NZT)

[Martin Taylor 2009.08.21.21.57]

(Gavin Ritz 2009.08.22.13.46NZT)

[Martin Taylor 2009.08.21.17.37]
Rick Marken (2009.08.20.1020)

Martin Taylor (2009.08.20.15.01)
Rick Marken (2009.08.20.0900)
  
I don’t fully get it, Martin, but anything living that has some degree of control has limits (within itself and the environment) and is therefore entropy reducing, both exporting entropy (limiting other life forms, through whatever means) and reducing its own.
 
Humans found this out centuries ago: to order society (themselves), they started creating limits (e.g. laws, customs, rights), all of which is entropy reducing. That is control, so maybe control and entropy are the same thing in different forms.
 
Regards
Gavin

I’m not clear what relation you see between limits and entropy. When you
mentioned limits as entropy reducing in an earlier message, I thought you were
referring to limit cycles, and convergence to an attractor does indeed reduce Boltzmann
entropy. Now you seem to be talking about limits on the distributions of the
values of different perceptions. Certainly cutting the tails of uncertainty
distributions will reduce entropy, but only marginally – unless the limits are
severe. Control, on the other hand, reduces the breadth of the uncertainty distribution
of the controlled variable, whether there are or are not limits on the
permitted values of the variable. That means significant and continuing entropy
export from the controlled variable, to balance that imported from the
disturbance and the reference signal.

Maybe you can explain what you mean by the relation between entropy and limits.
It would be easier for me to understand if you could use the Boltzmann
conceptual background I sketched in my repost of a 1993-6 message [Martin Taylor
2009.08.18.14.41]. What are the macrostates and microstates, and how does the
imposition of limits affect them?

Okay, Martin

Do we agree that, for simplicity, entropy in the Boltzmann sense is really just the “quantity of possible arrangements of elements” (to be sure, both positional and momentum)? This would apply to the microstates of a gas or even arrangements of letters in Shannon’s information theory.

So it’s a philosophical conclusion, then: if one limits something like the walls of the vessel (a limitation), or a separation (a partition, another limitation) between gases, then that diminishes disorder and hence constitutes an increase in order. The problem with gases is then how combinations of limitations affect order. E.g. if I separate oxygen and hydrogen in two separate vessels, then the entropy cannot increase; but if I combine them and they form water, then there is more order (the internal bonding forces of the atoms), also a limitation. The crucial question when it comes to life is what combinations of limitations affect order. Of course, control is one of them.

Limitations could be external, say outside magnetic forces. Or, in the brain, the “Sonic hedgehog” molecule, which supposedly organizes the brain into special functions.

Disorder grows, in the Shannon sense of entropy, if I make no effort to control it. So I limit certain things.

So the point is: control is a limitation and hence will decrease entropy.

I could conclude, then, that PCT is an entropy-reducing theory, and possibly a major breakthrough in explaining how order comes about in living organisms.

Kind regards

Gavin
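Gavin’s gloss of Boltzmann entropy as the “quantity of possible arrangements” can be made concrete with a toy count. The sketch below is deliberately crude and entirely illustrative: distinguishable particles, arbitrary cell and particle counts, and Boltzmann’s constant set to 1.

```python
import math

def boltzmann_entropy(n_arrangements):
    """S = k * ln(W), with k = 1 in these illustrative units."""
    return math.log(n_arrangements)

M = 100  # spatial cells in the vessel (arbitrary)
N = 10   # particles, treated as distinguishable for simplicity

# Without a partition, every particle may occupy any of the M cells.
W_free = M ** N

# A partition (a limitation) confines each particle to one half of the vessel.
W_partitioned = (M // 2) ** N

S_free = boltzmann_entropy(W_free)
S_partitioned = boltzmann_entropy(W_partitioned)
# The limitation removes exactly N * ln(2) of entropy: fewer possible
# arrangements, hence more order.
```

In this toy model the partition cuts the count of arrangements by a factor of 2 per particle, which is the sense in which a limitation decreases entropy.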


(Gavin Ritz 2009.08.22.15.25NZT)

[Martin Taylor 2009.08.21.21.57]

(Gavin Ritz 2009.08.22.13.46NZT)

[Martin Taylor 2009.08.21.17.37]
Rick Marken (2009.08.20.1020)

Martin Taylor (2009.08.20.15.01)
Rick Marken (2009.08.20.0900)
 
Martin
 
My last answer still doesn’t tell me anything about entropy and reorganization; I just have a hunch that the answer to my conundrum lies here. Or maybe I should re-phrase that: what is it about re-organisation that creates more order (or organisation)? It is, after all, organizing something, so there must be a reduction of entropy. It’s in the word RE-organization.
 
Regards
Gavin

[From Rick Marken (2009.08.21.2310)]

Gavin Ritz (2009.08.22.13.42NZT)

Well, there you go, Rick, I think everyone’s reference signals aren’t working.

I think everyone’s reference signals are working just fine; they’re just not being set to where I think everyone should be setting them. But, then, I often think my reference signals are not being set where I think I should be setting them, either. So who am I to talk;-)

Best

Rick



Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

(Gavin Ritz 2009.08.22.18.32NZT)

[From Rick Marken (2009.08.21.2310)]

Gavin Ritz (2009.08.22.13.42NZT)

Well, there you go, Rick, I think everyone’s reference signals aren’t working.

I think everyone’s
reference signals are working just fine; they’re just not being set to where I
think everyone should be setting them. But, then, I often think my reference
signals are not being set where I think I should be setting them, either. So
who am I to talk;-)

You got the first part, but I wasn’t alluding to the second; that’s your addition.

Regards

Gavin

Best

Rick



Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2009.08.22.0710 MDT)]

Martin Taylor 2009.08.21.21.57 –

MT (To Gavin Ritz): Maybe you
can explain what you mean by the relation between entropy and limits. It
would be easier for me to understand if you could use the Boltzmann
conceptual background I sketched in my repost of a 1993-6 message [Martin
Taylor 2009.08.18.14.41]. What are the macrostates and microstates, and
how does the imposition of limits affect them?

I think that marks out your territory very nicely. I shall try not to
encroach on it.

Best,

Bill P.

[Martin Taylor 2009.08.22.14.53]

[From Bill Powers (2009.08.22.0710 MDT)]

Martin Taylor 2009.08.21.21.57 --

MT (To Gavin Ritz): Maybe you can explain what you mean by the relation between entropy and limits. It would be easier for me to understand if you could use the Boltzmann conceptual background I sketched in my repost of a 1993-6 message [Martin Taylor 2009.08.18.14.41]. What are the macrostates and microstates, and how does the imposition of limits affect them?

I think that marks out your territory very nicely. I shall try not to encroach on it.

I don't claim any "territory" when it comes to Science, nor do I acknowledge any other such claim.

What I do claim is some, though always inadequate, knowledge of some fields of Science and engineering, and I acknowledge the same claims in others. I can easily say wrong or stupid things even in my area of supposed expertise, and I imagine others to have similar foibles. When I do say something wrong or silly, I expect to be called out by others with appropriate expertise, or to be questioned by others who might fail to understand because of my inadequacies. Likewise, I expect to do my own questioning or criticising. That's how our knowledge progresses, isn't it?

So don't worry about encroaching, please.

Martin

(Gavin Ritz 2009.08.23.9.00NZT)


[Martin Taylor 2009.08.22.14.53]

[From Bill Powers (2009.08.22.0710 MDT)]

Martin Taylor 2009.08.21.21.57 –

Good point below Martin.

Did my explanation of limits make any sense to you or not?

Regards
Gavin

MT (To Gavin Ritz): Maybe you can explain what you mean by the relation between entropy and limits. It would be easier for me to understand if you could use the Boltzmann conceptual background I sketched in my repost of a 1993-6 message [Martin Taylor 2009.08.18.14.41].
What are the macrostates and microstates, and how does the imposition of limits affect them?

I think that marks out your territory very nicely. I shall try not to encroach on it.

I don’t claim any “territory” when it comes to Science, nor do I acknowledge any other such claim.

What I do claim is some, though always inadequate, knowledge of some fields of Science and engineering, and I acknowledge the same claims in others. I can easily say wrong or stupid things even in my area of supposed expertise, and I imagine others to have similar foibles. When I do say something wrong or silly, I expect to be called out by others with appropriate expertise, or to be questioned by others who might fail to understand because of my inadequacies. Likewise, I expect to do my own questioning or criticising. That’s how our knowledge progresses, isn’t it?

So don’t worry about encroaching, please.

Martin

[From Bill Powers (2009.08.24.0912 MDT)]

Martin Taylor 2009.08.22.14.53 --

I don't claim any "territory" when it comes to Science, nor do I acknowledge any other such claim.

So don't worry about encroaching, please.

One can claim territory by ringing it around with barricades of jargon, special knowledge, and complex webs of background assumptions and analytical methods. This renders what is inside the barricades inaccessible to those outside them, and protects those inside from judgment by those outside. So when I decide not to encroach it is less to avoid offending those inside than to protect my tender feelings from ridicule and censure. I doubt many things which I am unequipped to criticize competently. I think you know pretty much what they are.

If something good comes into PCT by use of concepts like entropy and information I am sure I will hear of it, and am prepared to be impressed. In the meantime I will stick to what I do understand and try to make it more understandable rather than less.

Best,

Bill P.

(Gavin Ritz 2009.08.25.14.45NZT)

[From Bill Powers (2009.08.24.0912 MDT)]

Martin Taylor 2009.08.22.14.53 –

I don’t claim any “territory” when it comes to Science, nor do I acknowledge any other such claim.

So don’t worry about encroaching, please.

One can claim territory by ringing it around with barricades of jargon, special knowledge, and complex webs of background assumptions and analytical methods. This renders what is inside the barricades inaccessible to those outside them, and protects those inside from judgment by those outside.

What a great example of reducing entropy.

So when I decide not to encroach it is less to avoid offending those inside than to protect my tender feelings from ridicule and censure. I doubt many things which I am unequipped to criticize competently. I think you know pretty much what they are.

If something good comes into PCT by use of concepts like entropy and information I am sure I will hear of it, and am prepared to be impressed. In the meantime I will stick to what I do understand and try to make it more understandable rather than less.

Bill, you have just made the very point I was talking about: limits create order. Any living entity by its nature has to ring itself off from the outside, but it is requisitely required to expel waste (thus, as a side effect, increasing the entropy of the environment) whilst internally reducing entropy, or increasing order.

I.e. the separation of order from entropic waste is the first goal of life.

PCT is a great example of a barricading model of human life.

However, my question remains: reorganisation and entropy must be related, but I still don’t know how.

Can you explain to me again when and why reorganisation occurs, and what specifically it is?

Regards

Gavin

[Martin Taylor 2009.08.24.23.20]

(Gavin Ritz 2009.08.25.14.45NZT)

However, my question remains: reorganisation and entropy must be related, but I still don’t know how.

I think I’m beginning to get an idea about that, but it’s still too
vague to propose, and unfortunately I’m still bogged down with other
things – I’m also probably not even going to attend my NATO group
meeting in early October for that reason, even though informational
analysis of networks is likely to be a major topic of discussion.

Here’s the outline of the direction I’m thinking.

The structure of a network can be ordered (low entropy) or disordered
(high entropy), modular (low entropy) or tightly interlinked (high
entropy). The entropy of the network structure interacts with the
entropy associated with the signal flows in the pathways. If the
controlled variables are to be kept in a low entropy state (the nature
of control), the signal entropy of the internal signals of the network
probably will have to be low also, and that is more easily achieved if
the network structural entropy is low, because low structural entropy
implies predictable signal interactions. I know that’s hand-waving, and
I don’t know whether its provable or whether it might have been proved,
but my intuition leads me to look in that direction for a solution to
the question.

Now consider reorganization. I very glibly said the other day that if
the reorganizing system is a control system, then its job was to bring
its controlled variable to a low entropy state, and took the controlled
variable to be the entire perceptual control hierarchy. That’s
obviously oversimplified, and not only because “the perceptual control
hierarchy” is a very high-dimensional variable. What are the controlled
variables of a reorganizing system? Bill has argued that they are of
two kinds, the intrinsic variables of life chemistry that must be kept
within narrow bounds if the organism is to survive, and the error
values within the perceptual control hierarchy. It is the entropy of
these error values that I can see as possibly being implicated with the
structural entropy of the network. The entropy of the life-chemistry
variables, if it is affected at all by the actions of the perceptual
control system, must be lower if the perceptual control error entropy
is low (generally good control) than if it is high (generally poor
control).

None of that deals with reorganization as such. But I think it may
possibly be a direction that could lead to an improved understanding of
reorganization that with luck would complement Bill’s new thinking.
With luck, in due course we shall see.

Martin
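Martin’s conjecture that good control goes together with low entropy of the error signal can be probed in a toy simulation. The sketch below is an editorial construction rather than anything Martin proposed: the integrating loop, the random-walk disturbance, the gains, and the Gaussian entropy estimate (0.5·ln(2πe·σ²)) are all assumptions chosen for illustration.

```python
import math
import random

random.seed(0)  # deterministic run for the illustration

def error_entropy(gain, steps=5000, dt=0.01):
    """Run a one-loop integrating controller against a random-walk
    disturbance; return a Gaussian estimate of the error signal's
    differential entropy, 0.5 * ln(2*pi*e*var)."""
    y, o, d = 0.0, 0.0, 0.0   # controlled variable, output, disturbance
    errors = []
    for _ in range(steps):
        d += random.gauss(0.0, 0.1)  # slowly wandering disturbance
        e = 0.0 - y                  # reference fixed at zero
        o += gain * e * dt           # integrating output function
        y = o + d
        errors.append(e)
    mean = sum(errors) / len(errors)
    var = sum((e - mean) ** 2 for e in errors) / len(errors)
    return 0.5 * math.log(2 * math.pi * math.e * var)

h_good = error_entropy(gain=100.0)  # strong control: narrow error distribution
h_poor = error_entropy(gain=1.0)    # weak control: broad error distribution
# Strong control keeps the error distribution narrow, so its estimated
# entropy is lower than under weak control.
```

This only checks the easy half of Martin’s suggestion (good control implies low error entropy); the link to the structural entropy of the network remains his open question.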

[From Bill Powers (2009.08.25.0715 MDT)]

(Gavin Ritz 2009.08.25.14.45NZT) –

GR: However, my question remains: reorganisation and entropy must be related, but I still don’t know how.

Can you explain to me again when and why reorganisation occurs, and what specifically it is?

BP: My hypothesis is taken from ordinary observations. We have to
learn some new method of control when all the methods we already know
fail. The reason we have to learn is that when we lose control, errors in
the hierarchy grow. The most important errors are those having to do with
the basic conditions that support life; if we lose control of those we
die. And finally, however reorganization works, it has to work even
before there are any organized control systems (other than inherited
ones) in the organism.

Because reorganization has to work from the beginning, it can’t rely on
anything learned, such as mathematical algorithms or logic and reasoning,
and it can’t rely on past experience with the current environment.
Because it is required only when all known methods fail, its actions
can’t be organized by any known principle or procedure. The only
description of its action that remains (when the organism is trying to
understand itself) is “random.”

And because the most important variables that need to be controlled are
those concerning the basic life support systems, I hypothesized that
reorganization starts when the homeostatic systems fail to keep those
variables under control – when there are large chronic error signals in
the homeostatic systems.

In the latest revision of reorganization theory, I have relaxed the
specification that only “intrinsic error” in the homeostatic
systems causes reorganization, and have given up the idea of indirect
reorganization (intrinsic error resulting in reorganization at all levels
in the hierarchy). Now every control system reorganizes locally on the
basis of its own error signals. This is true whether the control system
is in the genome (RNA and DNA), at the organ level (homeostasis), or
behavioral (the neural hierarchy).

How long these revisions will hold up is unpredictable.

I have nothing to contribute concerning how these hypotheses about
reorganization relate to entropy.

Best,

Bill P.
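Bill’s account above, of reorganization as random parameter change invoked when error stays large and organized by no learned principle, is often modelled in the PCT literature with the “E. coli” method: keep changing a parameter in its current direction while error falls, and tumble to a new random direction when error rises. The sketch below is a minimal illustration of that idea, not Bill’s own code; the plant, step sizes, and loop constants are arbitrary assumptions.

```python
import random

random.seed(1)  # deterministic run for the illustration

def run_loop(w, steps=200, dt=0.1):
    """One control episode: an integrating output with weight w acts on a
    plant whose feedback sign is unknown to the controller.
    Returns mean squared error over the episode."""
    r, y, o = 1.0, 0.0, 0.0   # reference, controlled variable, output
    env_gain = -1.0           # hypothetical plant: positive output lowers y
    total = 0.0
    for _ in range(steps):
        e = r - y
        o += w * e * dt
        y = env_gain * o
        total += e * e
    return total / steps

# E. coli-style reorganization: random steps in parameter space, kept
# while error falls, redirected ("tumble") when error rises.
w = 1.0                                  # initial weight -- the wrong sign
direction = random.choice([-1.0, 1.0])
err = run_loop(w)
for _ in range(500):
    candidate = w + direction * 0.1
    new_err = run_loop(candidate)
    if new_err < err:
        w, err = candidate, new_err               # keep going the same way
    else:
        direction = random.choice([-1.0, 1.0])    # tumble
# The loop ends with a working (negative-sign) weight and small error,
# without the system ever "knowing" which way it had to change.
```

The search uses only the system’s own error, exactly in the spirit of the local-reorganization revision Bill describes: no algorithm, no model of the plant, just random change kept when error falls.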

(Gavin Ritz 2009.08.26.10.15NZT)

[From Bill Powers (2009.08.25.0715 MDT)]

(Gavin Ritz 2009.08.25.14.45NZT) –

GR: However, my question remains: reorganisation and entropy must be related, but I still don’t know how.

Can you explain to me again when and why reorganisation occurs, and what specifically it is?

BP: My hypothesis is taken from ordinary observations. We have to learn some new method of control when all the methods we already know fail. The reason we have to learn is that when we lose control, errors in the hierarchy grow.

That would seem like all the time with some people.

The most important errors are those having to do with the basic conditions that support life; if we lose control of those we die. And finally, however reorganization works, it has to work even before there are any organized control systems (other than inherited ones) in the organism.

This is an interesting notion. It opens a whole vista of concepts and questions, ideas like Informed Energy and Potential Orders.

Because reorganization has to work from the beginning, it can’t rely on anything learned, such as mathematical algorithms or logic

That’s if you accept that logic is learned. Requisite Organisation has shown that this may be there (e.g. Informed Energy) before any organized control system exists in the organism.

and reasoning, and it
can’t rely on past experience with the current environment. Because it is
required only when all known methods fail, its actions can’t be organized by
any known principle or procedure. The only description of its action that
remains (when the organism is trying to understand itself) is
“random.”

And because the most important variables that need to be controlled are those
concerning the basic life support systems, I hypothesized that reorganization
starts when the homeostatic systems fail to keep those variables under control
– when there are large chronic error signals in the homeostatic systems.

Okay, so would you say that high tension in, say, a group of control systems would cause reorganization?

In the latest revision of reorganization theory, I have relaxed the
specification that only “intrinsic error” in the homeostatic systems
causes reorganization, and have given up the idea of indirect reorganization
(intrinsic error resulting in reorganization at all levels in the hierarchy).
Now every control system reorganizes locally on the basis of its own error
signals.

This also opens up another vista of ideas, i.e. that high-tension states in a CS module can then cause reorganization.

So how would you give a definition of re-organization then under this scenario? I’m still not fully clear what reorganization is.

This is true whether the control system is in the genome (RNA and DNA), at the organ level (homeostasis), or behavioral (the neural hierarchy).

How long these revisions will hold up is unpredictable.

I have nothing to contribute concerning how these hypotheses about
reorganization relate to entropy.

That’s fair enough.

Regards

Gavin

Best,

Bill P.

[From Bill Powers (2009.08.26.1014 MDT)]

Gavin Ritz 2009.08.26.10.15NZT –

So how would you give a definition of re-organization then under this scenario? I’m still not fully clear what reorganization is.

Reorganization is changing the parameters of a system, as opposed to changing the data with which the system works or the actions it performs.

The “parameters” of a system are measures of the properties
that determine how the system will act when presented with some standard
input or disturbance. I once lived in a place that had an adjustable
doorbell; you could change how loud it was by turning an adjustment screw
next to the chimes. This screw didn’t produce a chime when it was turned
– in fact nothing happened at all until the button next to the door was
pressed. What turning the screw did was change the loudness of the
sound that occurred when the button was pressed.

Reorganization turns the adjustment screws that determine how a control
system will act when its reference input changes, when its perception is
disturbed, or when the physical connection from its actions to its inputs
is altered. It determines how large a perceptual signal will be generated
when the sensory inputs are stimulated to some particular degree. It
determines how much output will be generated per 100 impulses per second
of error signal. It determines how strong the connection will be between
the output signal from a control system’s output function and lower-level
reference inputs to comparators – by setting some output weights to zero
and others to non-zero values, it determines which lower-level control
systems are used as a way of controlling the higher-level system’s
perceptions. The “ArmControlReorg” demo in LCS3 uses that kind
of reorganization. On the perceptual side, it changes which inputs
contribute to a perceptual signal’s magnitude, and how they contribute to
it.

So reorganization isn’t changing what a control system does, not
directly. It changes the properties of a control system that affect
everything it does.

Best,

Bill P.
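Bill’s doorbell analogy separates two things a system has: the signals it processes and the parameters that determine how it processes them. The sketch below is a bare illustration of that separation, with hypothetical class and parameter names, not an implementation from LCS3.

```python
class ControlSystem:
    """A one-loop control system whose parameters (the 'adjustment
    screws') are distinct from the signals flowing through it."""

    def __init__(self, input_weight=1.0, output_gain=1.0):
        self.input_weight = input_weight  # perception per unit sensory input
        self.output_gain = output_gain    # output per unit error

    def step(self, sensory_input, reference):
        """Ordinary operation: signals in, signal out."""
        perception = self.input_weight * sensory_input
        error = reference - perception
        return self.output_gain * error

    def reorganize(self, d_weight, d_gain):
        """Turn the screws: change parameters, not any signal."""
        self.input_weight += d_weight
        self.output_gain += d_gain

cs = ControlSystem()
out_before = cs.step(sensory_input=2.0, reference=5.0)  # output 3.0
cs.reorganize(d_weight=0.0, d_gain=1.0)                 # like the doorbell screw
out_after = cs.step(sensory_input=2.0, reference=5.0)   # same inputs, output 6.0
```

Calling reorganize produces no output by itself, just as turning the screw rang no chime; it only changes what happens the next time the signals flow.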

Wonderful explanation, Bill, and most definitely descriptive of how families work in family therapy. Thanks again for your encouragement for me to write on this. My reorganization processes are churning. Best to you, Bill. Gary Padover

(Gavin Ritz 2009.08.27.9.36NZT)

[From Bill Powers (2009.08.26.1014 MDT)]

Gavin Ritz 2009.08.26.10.15NZT –

So how would you give a definition of re-organization then under this scenario? I’m still not fully clear what reorganization is.

Reorganization is changing the parameters of a system, as opposed to changing
the data with which the system works or the actions it performs.

The “parameters” of a system are measures of the properties that
determine how the system will act when presented with some standard input or
disturbance. I once lived in a place that had an adjustable doorbell; you could
change how loud it was by turning an adjustment screw next to the chimes. This
screw didn’t produce a chime when it was turned – in fact nothing happened at
all until the button next to the door was pressed. What turning the screw
did was change the loudness of the sound that occurred when the button was
pressed.

Reorganization turns the adjustment screws that determine how a control system
will act when its reference input changes, when its perception is disturbed, or
when the physical connection from its actions to its inputs is altered. It
determines how large a perceptual signal will be generated when the sensory
inputs are stimulated to some particular degree. It determines how much output
will be generated per 100 impulses per second of error signal. It determines
how strong the connection will be between the output signal from a control
system’s output function and lower-level reference inputs to comparators – by
setting some output weights to zero and others to non-zero values, it
determines which lower-level control systems are used as a way of controlling
the higher-level system’s perceptions. The “ArmControlReorg” demo in
LCS3 uses that kind of reorganization.

Great, I will have a look at the demo. The demos make a huge amount of sense.

On the perceptual side, it changes which inputs contribute to a perceptual signal’s magnitude, and how they contribute to it.

So reorganization isn’t changing what a control system does, not directly. It changes the properties of a control system that affect everything it does.

Thanks for this, that makes it much clearer.

Regards

Gavin
