Attractors & Evolution [was: Helping; also Ambiguous figures (was Self Interest)]

[From Bruce Nevin (2004.05.11 14:43 EDT)]

Bill Williams 7 May 2004 11:25 PM CST--
(11:48 PM 5/7/2004)
You also asked Peter Small about the evolutionary significance of
attractors. I didn't see any reply. However, much earlier, toward the end
of March, he had said

Peter Small (2004.03.27) and again (30th April 2004) --

Chaos is a phase space that describes all the possible states that a
dynamic system can be in. Most of the states are unstable, but within
this space there are many states that are stable (called attractors).
Evolution is about jumping between these stable states.

Evolution (extracting low entropy) is about a strategy that causes
these jumps to proceed in a direction that decreases the entropy of a
system. This is arranged by allowing a system to be easily disturbed
so that it will jump out of stable states from time to time. A
selection process (survival of the fittest) ensures that the system
is allowed to resettle only into new stable states that have lower
entropy.

Peter, you must have liked this passage a lot, because over a month later,
on the last day of April, you recycled it under the subject heading "Chen's
Manifesto" with the ID "[Peter Small 30th April 2004]" as context for an
attack on Chen's use of Brownian motion as a metaphor.

However, Peter, you never did reply to Bill Powers' (2004.03.27.0556 MST)
question

What does this searching? How does it know if efficiency is increasing or
decreasing? What tells it to seek a higher value of efficiency? Aren't
there some missing mechanisms in this picture?

The closest thing I could find was your (2004.03.29), saying of that paper
by Lewis that:

Bill Powers dismissed it out of hand on the basis that he'd rejected
that stuff fifty years ago.

So I am glad to see you, Bill Williams, raise the question again. It could
be phrased in a different way. The passage that you were questioning was

Peter Small (2004.05.08) --

Brain imaging techniques seem to support the view that perceptions take
the form of attractors rather than being the product of a control system as
visualized in PCT.

Peter, what if the ebb and flow of perceptions in a perceptual control
hierarchy happened to take the form of attractors? Could you tell the
difference? How? Or do you (Peter) assert that perceptions in a perceptual
control hierarchy "as visualized in PCT" could not possibly take the form
of attractors? If so, what is the basis of this claim? I bet that Martin
would debate that with you.

In the April 30 email quoted above, you (Peter) also said in a similar vein

it is not possible to predict the behavior of complex dynamic systems
using mathematical formulae.

So here are two negatives that you have asserted. Can you prove them? Note
that you would have to prove that the interaction of perceptual control
hierarchies, which can be described using mathematical formulae, cannot
also be described as a complex dynamic system. So these are really two
versions of the same question.

It may boil down to the difference between a descriptive model from the
observer's point of view and a generative model from the point of view of
that which is modeled. Both may concurrently be "true". But, to paraphrase
Hank Folson (2004.03.27), the two should not be confused.

         /Bruce Nevin

From [Marc Abrams (2004.05.11.1456)]

[From Bruce Nevin (2004.05.11 14:43 EDT)]

Or do you
(Peter) assert that perceptions in a perceptual control
hierarchy "as visualized in PCT" could not possibly take the
form of attractors? If so, what is the basis of this claim? I
bet that Martin would debate that with you.

Among _many_ other things, can you prove a perceptual hierarchy exists
as stipulated in PCT?

Can you prove that perceptions as conceptualized and defined by Powers
are in fact correct?

Can you prove that the HPCT hierarchy is isomorphic to our own
physiology in any manner?

Can you prove that attractors are not involved with perceptions?

Can you prove that control is involved with cognition? We have a fairly
good idea about motor control, but what about the construction of ideas
and cognition?

What empirical observations and research, beyond introspection, would
you base any answers to the questions above on, and can you provide
citations for them?

Any insights into any of these questions would be welcome, Bruce. I'm not
challenging you here; I'm asking the very questions I would like answered.
Anything that would point me in a constructive direction would be
appreciated.

Marc

Considering how often throughout history even intelligent people have
been proved to be wrong, it is amazing that there are still people who
are convinced that the only reason anyone could possibly say something
different from what they believe is stupidity or dishonesty.

Being smart is what keeps some people from being intelligent.

Thomas Sowell

From[Bill Williams 11 May 2004 3:25 PM CST]

[From Bruce Nevin (2004.05.11 14:43 EDT)]

I think your review and re-asking of questions concerning Peter's claims
might clarify what Peter is claiming, except that I would like to make a
small change to the following rephrased and extended question

In the April 30 email quoted above, you (Peter) also said in a similar vein

>it is not possible to predict the behavior of complex dynamic systems
>using mathematical formulae.

One of the advantages of using a computer algorithm is the possibility of
making predictions for which it would be very difficult or impossible to
construct and compute a mathematical solution.
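
For example, a minimal Python sketch (the logistic map and its parameter
are stand-ins, chosen only because no general closed-form expression for
the nth state is known): the algorithm predicts the trajectory by stepping
it forward.

    # Minimal sketch: prediction by stepping an algorithm forward rather than
    # by evaluating a formula. The map and parameter are illustrative choices.
    def logistic(x, r=3.9):
        return r * x * (1.0 - x)

    x = 0.2
    for n in range(50):
        x = logistic(x)

    # The 50th state is "predicted" by computation, not by a general formula
    # for the nth state.
    print(x)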

I wonder, would Peter be inclined to extend his claim about prediction to
the method of algorithm-based computer simulation?

Bill Williams

From [Marc Abrams (2004.05.11.1628)]

Bruce, one other thought and question. If the tracking task is such
irrefutable proof of control, to what do you attribute the lack of
enthusiasm for PCT?

Marc



[From Peter Small (2004.05.11)]

[From Bruce Nevin (2004.05.11 14:43 EDT)]

Bill Williams 7 May 2004 11:25 PM CST--
(11:48 PM 5/7/2004)
You also asked Peter Small about the evolutionary significance of
attractors. I didn't see any reply.

I'm afraid I don't take Bill Williams's posts very seriously.

Peter Small (2004.03.27) and again (30th April 2004) --

Chaos is a phase space that describes all the possible states that a
dynamic system can be in. Most of the states are unstable, but within
this space there are many states that are stable (called attractors).
Evolution is about jumping between these stable states.

Evolution (extracting low entropy) is about a strategy that causes
these jumps to proceed in a direction that decreases the entropy of a
system. This is arranged by allowing a system to be easily disturbed
so that it will jump out of stable states from time to time. A
selection process (survival of the fittest) ensures that the system
is allowed to resettle only into new stable states that have lower
entropy.

Peter, you must have liked this passage a lot, because over a month later,
on the last day of April, you recycled it under the subject heading "Chen's
Manifesto" with the ID "[Peter Small 30th April 2004]" as context for an
attack on Chen's use of Brownian motion as a metaphor.

Yes, I do like this passage because it is a short but precise description
of the evolutionary process.

However, Peter, you never did reply to Bill Powers' (2004.03.27.0556 MST)
question

What does this searching? How does it know if efficiency is increasing or
decreasing? What tells it to seek a higher value of efficiency? Aren't
there some missing mechanisms in this picture?

I didn't answer because the answer was already contained in the text:
"A selection process (survival of the fittest) ensures that the
system is allowed to resettle only into new stable states that have
lower entropy"

A system doesn't know whether it has settled into a more efficient
state. But the evolutionary process ensures that only the systems
that have settled into the most efficient states are replicated in
the next generation. This is not a search strategy; it is simply a
strategy of selection.
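
To make the distinction concrete, here is a toy sketch in Python; the
"entropy" measure, population size, and disturbance scale are illustrative
assumptions, not part of any model discussed here. Each system is
disturbed at random and resettles wherever it lands; selection then
replicates only the systems whose new states have lower entropy, so the
population improves without any individual system searching.

    import random

    def entropy(state):
        # stand-in measure: lower value = "more efficient" stable state
        return state * state

    population = [random.uniform(-10.0, 10.0) for _ in range(50)]

    for generation in range(100):
        # Each system is easily disturbed and resettles wherever it lands;
        # no system knows whether its new state is better or worse.
        disturbed = [s + random.gauss(0.0, 1.0) for s in population]

        # Selection: only the lower-entropy half is replicated into the
        # next generation (copied twice to keep the population size).
        survivors = sorted(disturbed, key=entropy)[:len(disturbed) // 2]
        population = survivors + survivors

    # Mean entropy falls over the generations even though nothing searched.
    print(sum(entropy(s) for s in population) / len(population))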

The closest thing I could find was your (2004.03.29), saying of that paper
by Lewis that:

Bill Powers dismissed it out of hand on the basis that he'd rejected
that stuff fifty years ago.

So I am glad to see you, Bill Williams, raise the question again. It could
be phrased in a different way. The passage that you were questioning was

Peter Small (2004.05.08) --

Brain imaging techniques seem to support the view that perceptions take
the form of attractors rather than being the product of a control system as
visualized in PCT.

Peter, what if the ebb and flow of perceptions in a perceptual control
hierarchy happened to take the form of attractors? Could you tell the
difference? How? Or do you (Peter) assert that perceptions in a perceptual
control hierarchy "as visualized in PCT" could not possibly take the form
of attractors? If so, what is the basis of this claim? I bet that Martin
would debate that with you.

My impression is that Martin hasn't yet cottoned on to the
concept of attractors as they are applied to biological systems. His
description of attractors is limited to describing patterns of
attractors as they would appear on a computer screen as a result of a
simple mathematical formula. There is no attempt to relate this to
how they might be represented in the brain and how the concept can be
applied to memory and perceptions.

From what I've read of Martin's descriptions, he appears to confuse
attractors with catastrophe theory - which is a totally different
concept.

Chaos theory is about iterating a function repeatedly, where the variable
at each step is the result of the previous iteration, i.e.

Phase 1 = f(x) = a
Phase 2 = f(a) = b
Phase 3 = f(b) = c
Phase 4 = f(c) = d
etc., etc., etc.,

This is visualized as happening in a phase space (i.e. a space that
contains all possible phases). The function will trace a path (a
trajectory) through this phase space until it comes to a place where
it stops (or keeps repeating a sequence of phases). This stopping
place is called the attractor.
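
A minimal Python sketch of that iteration (the particular function and
starting value are arbitrary illustrative choices):

    # Minimal sketch: iterating a function until it reaches an attractor.
    import math

    x = 0.3                      # arbitrary initial phase
    for _ in range(100):
        x = math.cos(x)          # next phase = f(previous phase)

    # The trajectory has stopped moving: it has reached the fixed-point
    # attractor of this map (approximately 0.739).
    print(round(x, 6))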

If the initial value of the function is changed, it will trace a
different trajectory and may reach a different attractor in this space.

A dynamical system can have multiple trajectories and multiple
attractors. The conceptual breakthrough is to see how these
attractors can be regarded as addressable memories in the phase space
- similar to the way the bit space of a computer memory can be
addressed.
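
To illustrate, a toy Python sketch (the double-well map is an assumption
chosen for the example, not a model of the brain): different initial
states converge to different attractors, so which "memory" is retrieved is
determined by where in the phase space the system starts.

    # Toy sketch: a one-dimensional map with two attractors, at -1 and +1.
    def step(x, eta=0.1):
        # one relaxation step on the double-well potential V(x) = (x**2 - 1)**2 / 4
        return x - eta * (x**3 - x)

    def settle(x, iterations=500):
        for _ in range(iterations):
            x = step(x)
        return round(x, 3)

    # Initial states on either side of the basin boundary at 0 "recall"
    # different attractors, as if addressing different memories.
    print(settle(-0.2))   # -1.0
    print(settle(0.7))    #  1.0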

I provided you with a reference to a paper that describes this rather
well and which also provides the history of how this way of thinking
about the brain has evolved through the twentieth century:

Gerhard Werner's paper "Computation in Nervous Systems"
(http://www.ece.utexas.edu/~werner/Neural_computation.html#Neurodynamics)

Interestingly, one of the stages in the history outlined by Werner is
this paragraph:

     "As another signal event, the year 1948 marked the publication of
Norbert Wiener's book "Cybernetics" (139) with the notable subtitle:
'Control and Communication in the animal and machine'. Wiener's
innovative approaches to control theory originated with the war time
effort to develop control systems for missiles and airline steering.
For the development of the mathematical theory of these and related
processes, he took some of his cues from the notion of homeostasis in
Biology, and from experimenting with tremor and ataxia in animals,
with which he familiarized himself in collaboration with the
Physiologist A. Rosenbluth. This collaboration led in 1943 to a
landmark paper entitled "Behavior, Purpose and Teleology" (110)
which delineated a taxonomy of forms of behavior on the basis of
feedback: Teleology -in the sense of Aristotle's "final cause" -
could be reconciled with determinism if governed by negative
feedback, regulated by deviation from a specified goal state."

Isn't this 1940s way of looking at the brain somewhat similar to the
underlying concept of PCT?

Peter Small

Author of: Lingo Sorcery, Magical A-Life Avatars, The Entrepreneurial
Web, The Ultimate Game of Strategy and Web Presence
http://www.stigmergicsystems.com

[From Bruce Nevin (2004.05.11 17:38 EDT)]

Marc Abrams (2004.05.11.1628)--


If the tracking task is such
irrefutable proof of control, to what do you attribute the lack of
enthusiasm for PCT?

Refutation is a matter of logic. Your question seems to presuppose that
human behavior is governed by logic. Do you want to rephrase it?

         /Bruce Nevin

[From Bruce Nevin (2004.05.11 18:30 EDT)]

Peter Small (2004.05.11)--

[…] a short but precise description of the evolutionary process.

[…]

"A selection process (survival of the fittest) ensures that the
system is allowed to resettle only into new stable states that have
lower entropy"

A system doesn't know whether it has settled into a more efficient
state. But the evolutionary process ensures that only the systems
that have settled into the most efficient states are replicated in
the next generation. This is not a search strategy; it is simply a
strategy of selection.

This sounds like a PCT account of how “innate” references get
established and, by extension to the selectionist view of learning, how
other references get established (G. Cziko, Without Miracles). I
can understand a confusion of references with perceptions: when the
latter are controlled, they conform to the former, and psychologists know
as much about the former as they know about control – typically,
nothing.

Bruce Nevin (2004.05.11 14:43 EDT)--

… what if the ebb and flow of perceptions in a perceptual control
hierarchy happened to take the form of attractors? Could you tell the
difference? How? Or do you (Peter) assert that perceptions in a perceptual
control hierarchy "as visualized in PCT" could not possibly take the form
of attractors? If so, what is the basis of this claim?

In the April 30 email quoted above, you
(Peter) also said in a similar vein

it is not possible to predict the behavior of complex dynamic systems
using mathematical formulae.

Are you prepared to prove a negative?

Note that you would have to prove that the
interaction of perceptual control hierarchies, which can be described
using mathematical formulae, cannot also be described as a complex
dynamic system. So these are really two versions of the same
question.

It may boil down to the difference between a descriptive model from the
observer’s point of view and a generative model from the point of view of
that which is modeled. Both may concurrently be “true”. But, to
paraphrase Hank Folson (2004.03.27), the two should not be
confused.

No response to this?

Interestingly, one of the stages in the history outlined by Werner is
this paragraph:

"As another signal event, the year 1948 marked the publication of
Norbert Wiener's book "Cybernetics" […]

Isn't this 1940s way of looking at the brain somewhat similar to the
underlying concept of PCT?

Of course it is. But you need to reach back to the 1930s, before Wiener.
The history is documented in various places in the literature of PCT. It
might be useful to you to become familiar with that literature. Even if
your aim is to persuade us that PCT is all wrong and that we should drop
it and take up what you’re doing, it would be useful to you to know what
we’re talking about. If in fact we are all wrong, there’s no risk to you,
and your persuasion will be more effective.

My further responses may be spotty for a while.

         /Bruce Nevin
