PCT and Chaos (was Helping?)

[Martin Taylor 2004.03.26.1017]

[From Bill Powers (2004.03.26.0609 MST)]

Peter Small (2004.03.26) --

Bill has said quite a lot of what I had intended to say in response
to Peter, so I will deal with other matters.

Before you characterize
PCT, I think it would be a good idea to understand it. So far you haven't
shown very strong indications of that.

It may be helpful to relate my own experience. I once considered
doing graduate work in control theory, but instead moved into
psychology, particularly perception, by way of Operations Research.
During my official working life I alternated between computer
development and psychology research, and in the 80's and 90's
developed a "Layered Protocol" theory of interpersonal and
person-to-computer communication. Before that, I had co-authored a
book chapter on haptic touch, which argued that the perceptions
available through touch were in large part determined by a multi-level
feedback process. In the early 90's I was pointed toward PCT by a
person who read my postings on the system dynamics mailing list. It
took a year or two, but eventually I realized that my LP Theory was
just a special case of PCT.

Now, despite that background, which should have made me an ideal
candidate for quickly understanding PCT, it took at least a year,
maybe longer, of getting into arguments on the predecessor of CSGnet
(mostly with Rick), before I really began to understand PCT to the
extent where I felt I could make a substantive contribution.

The point I'm trying to get across is that it isn't really a very
good idea to criticise PCT on short acquaintance, unless that
criticism is done for the purpose of discovering in what ways one's
understanding of "conventional" PCT differs from how it is normally
understood.

Enhancements and illumination of PCT from other disciplines are
always to be welcomed. PCT is a _functional_ description of what may
be happening when people perceive and act. If PCT is basically
correct (and I think elementary physics says it must be), then it
will have experiential consequences that are worthy of study. And it
must have physical/physiological mechanisms that support it, and that
might help to argue for one or other particular implementation of the
basic idea of PCT--that behaviour is the control of perception, where
"perception" is taken to mean the state of some variable inside the
organism, and "control" means maintaining the state of that variable
near some reference value in the face of influences external to the
organism that might otherwise change it.

If you have noticed these phenomena you will see why PCT is constructed as
it is, and you will understand what PCT is about. I don't know what the
"chaotic oscillators" model is about -- there don't seem to be any
phenomena that relate to it at an observable level (without the aid of a
lot of imagination).

Here, I'm on Peter's side. I do (I think) understand what he is
talking about, though he uses a bit of shorthand. Here's a slightly
more longhand version.

In reasonably simple terms, whenever you have a feedback
process--meaning a situation in which a change in a variable in some
way influences the state of that variable at a later time--you have a
"dynamic". "Dynamic" is just a fancy term for describing all the
different ways that system might behave, given any prescribed
starting state.

If you look at a system with a dynamic, and find it in a particular
starting state, you can trace its future evolution in the absence of
external influences. That state evolution is called an "orbit."
Orbits can behave in one of three ways: (1) An orbit might spiral in
toward some fixed point, (2) it might spiral toward some path that
repeats endlessly, or (3) it might follow some more complex path in
which very slight differences in the starting conditions eventually
result in very different traces for the orbit.

The three possibilities for the eventual destination of an orbit are
called "attractors". They are called: (1) fixed point, (2)
oscillator, and (3) strange. When Peter talks about "Chaotic
oscillators" he is talking about a particular kind of "strange
attractor", which has the property that for a long time, if you look
at the system state, its orbit looks quite like that of a simple
oscillator, but at some unpredictable moment it takes off in some
other direction, quite probably coming to look again as if it is a
simple oscillator, but a different one.
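To make the three cases concrete, here is a minimal sketch in Python (a
toy chosen for brevity, not a PCT model): the logistic map is about the
simplest feedback rule whose attractor is a fixed point, an oscillation,
or a strange set depending on a single parameter.

```python
# A toy illustration (mine, not from any CSG model) of the three attractor
# types, using the logistic map x[n+1] = r * x[n] * (1 - x[n]).

def orbit(r, x0=0.2, steps=400):
    """Trace the orbit of the logistic map from starting state x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

for r, label in [(2.8, "fixed point"),
                 (3.2, "oscillator (period 2)"),
                 (3.9, "strange / chaotic")]:
    tail = [round(x, 4) for x in orbit(r)[-4:]]   # the last few states show the attractor
    print(f"r = {r}: {label}  ->  {tail}")

# Sensitive dependence on starting conditions in the chaotic regime:
a, b = orbit(3.9, 0.2000), orbit(3.9, 0.2001)
print("difference after 400 steps:", round(abs(a[-1] - b[-1]), 4))
```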

All of this may seem very abstract, but I think it is essential for
any serious analysis of PCT, because PCT is essentially based around
the idea of feedback, and feedback systems with nonlinearities that
are greater than a square law, or that are transcendental, are very
likely to have attractors that are "strange" for large ranges of
values of their internal parameter settings.

The "classic" HPCT hierarchy is as liable to exhibiting strange
attractors as is any other complex feedback structure. The different
stages in the development of complex perceptions are necessarily
non-linear, not only because of the impossibility of creating
physical devices with infinite range, but also because there would be
no value of having more than one level if the perceptual functions
were linear. However, HPCT simulations always are done with parameter
settings that result in fixed point attractors. Set the parameters a
bit wrong, and the hierarchy goes wild.
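As a minimal sketch of that sensitivity to parameter settings (a toy of
my own, not one of the published CSG simulations): a single control unit
with the usual leaky-integrator output function and a couple of steps of
transport lag settles to a fixed point at a modest gain, while a larger
gain makes the very same loop run away.

```python
# One control unit, leaky-integrator output, two steps of transport lag.
# Modest gain -> fixed-point attractor; higher gain -> the loop "goes wild".

def run(gain, slowing=0.05, lag=2, steps=400, ref=10.0, disturbance=3.0):
    out, p = 0.0, 0.0
    delayed = [0.0] * lag                      # crude transport lag on the feedback path
    for _ in range(steps):
        p = delayed[0] + disturbance           # perception = lagged output effect + disturbance
        error = ref - p
        out += slowing * (gain * error - out)  # leaky-integrator output function
        delayed = delayed[1:] + [out]
    return p

print("gain  5:", round(run(5.0), 3))   # settles to a fixed point (some steady-state error)
print("gain 20:", run(20.0))            # oscillation of ever-growing amplitude
```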

The fact that the system dynamic is parameterized to give fixed point
attractors is what allows (in fact almost defines) control. What I
mean is that an influence from outside will move the system state to
a new place in the dynamic, and if the dynamic has a fixed point
attractor, the orbit will lead the system state back to that
attractor--unless the disturbance was sufficient to move the system
state into a completely different attractor basin. The new basin
might even have an oscillator as an attractor rather than a fixed
point.

But the ability to control is not limited to dynamics with fixed
point attractors. Oscillators also can be controlled. A disturbance
simply moves the system state to a new point on the dynamic, and its
orbit returns the system back to its original oscillator attractor.
Here we are dealing with what in HPCT is a higher-level control
system, one in which time matters.

When the attractor is "strange", however, things are a little
different. Control is still possible, if the strange attractor has a
quasi-oscillatory form, since orbits do then converge in at least
some regions of the attractor basin. But control is not guaranteed in
the presence of arbitrary disturbances, because the new system state
may be at a point on the dynamic at which the orbit leads directly to
a new quasi-oscillator part of the strange attractor. Might this
sound like reorganization? Might it relate to the shifting ways in
which ambiguous perceptions or perceptions of over-stabilized inputs
are resolved?

I'm not going to pursue this further, here, but non-linear dynamics
does seem to me to offer a fruitful way to view PCT. However fruitful
or fruitless that view may turn out to be, it is certainly not wrong.
Moreover, I think that the chaotic (strange attractor) possibilities
afforded by the interactions of control systems are a very plausible
way of generating not only new perceptions, but new kinds of
perceptions. Parameter refinement, or the introduction of prescribed
disturbances, can shift quasi-oscillator strange attractors into
acting like true oscillators, while allowing the possibilities of
rapid change (of, for example perception) when the occasion arises.

Bottom line: I think both Peter and Bill are right, and I think they
are both wrong, in different ways.

Martin

From [Bill Williams 26 March 2004 10:30 AM CST]

[Martin Taylor 2004.03.26.1017]

>[From Bill Powers (2004.03.26.0609 MST)]
>
>Peter Small (2004.03.26) --

Peter, you are, of course, free to dismiss control theory and argue that
Gintis _et al_ are the wave of the future. However, Martin and Bill Powers
have been generous in responding to your dismissal of their work with what
seem to me to be excellent expositions of a control theory analysis of human
behavior. However, it is my impression that a verbal, or written,
exposition of what Bill Powers calls PCT or HPCT, however well constructed
the exposition, is only very rarely sufficient to communicate the merits of
a control theory analysis of human behavior. Of the people I know who, I
believe, more or less understand control theory, nearly all of them have had
either an engineering background, or have been employed as engineers
devising applications of control theory. I am not saying that this is an
absolute necessity, but as a practical matter, it comes rather close.

If you are interested in learning control theory, there are ways to go about
learning it. If you want to argue, well some of the participants here seem
more than willing to accommodate you, in their various ways. I hope that
you've found my approach to economic applications of control theory, uhm,
well at least novel. Some of the CSGnet people consider at least some of
the models I've developed to be quite persuasive. If you studied them so
might you.

Bill Williams

[From Bjorn Simonsen(2004.03.27.14:35 EuST)]

I am afraid I will write things I should understand better before
writing, but this action helps me to perceive what I control at the time.

[Martin Taylor 2004.03.26.1017]

[From Bill Powers (2004.03.26.0609 MST)]

If you have noticed these phenomena you will see why PCT is constructed
as it is, and you will understand what PCT is about. I don't know what
the "chaotic oscillators" model is about -- there don't seem to be any
phenomena that relate to it at an observable level (without the aid of a
lot of imagination).

Imagination is helpful for me. I went through Marc D. Lewis, Ph.D.,
"Bridging emotion theory and neurobiology through dynamic systems
modelling". I evaluate PCT/HPCT higher than the neuropsychological model
Lewis concludes with.

After reading Lewis and some references I am strengthened in my
confidence in PCT/HPCT.

I also think it is necessary to learn other nomenclatures and accounts
to be in a dialogue with other interesting research programmes. Many of
them should really look to PCT/HPCT.

And I appreciated your comments, Martin. I think your [Martin Taylor
2004.03.26.1017] is a compressed account which is easy to tie my
comments to.

In reasonably simple terms, whenever you have a feedback
process--meaning a situation in which a change in a variable in some
way influences the state of that variable at a later time--you have a
"dynamic". "Dynamic" is just a fancy term for describing all the
different ways that system might behave, given any prescribed starting
state.

PCT has, as we all know, a feedback process, and I appraise p, e, o and
the feedback signal at the first level as a "dynamic". Why not use this
concept in PCT? It would be a sign that we don't reject other accounts.

If you look at a system with a dynamic, and find it in a particular
starting state, you can trace its future evolution in the absence of
external influences. That state evolution is called an "orbit."
Orbits can behave in one of three ways: (1) An orbit might spiral in
toward some fixed point, (2) it might spiral toward some path that
repeats endlessly, or (3) it might follow some more complex path in
which very slight differences in the starting conditions eventually
result in very different traces for the orbit.

In PCT we can trace a future evolution of a loop system. It is the
state of a Reference. I prefer the concepts Reference and Purpose to
"orbit".

  1. In PCT the state of the Reference is the normal future evolution.
  2. When two or more different references take part in controlling a
     perception we say we have an "endless" (not endless) conflict.
  3. In PCT we often have more complex and really endless conflicts,
     which lead to Reorganization.

The three possibilities for the eventual destination of an orbit are
called "attractors". They are called: (1) fixed point, (2) oscillator,
and (3) strange. When Peter talks about "Chaotic oscillators" he is
talking about a particular kind of "strange attractor", which has the
property that for a long time, if you look at the system state, its
orbit looks quite like that of a simple oscillator, but at some
unpredictable moment it takes off in some other direction, quite
probably coming to look again as if it is a simple oscillator, but a
different one.

I don't see any advantages with the concept "attractors", but I'll put
it in my memory. I am, like Peter, absorbed in Reorganization and I have
confidence in the growth trend: 1) endless conflict, 2) production of a
secretion in some glands, 3) molecular biological processes where
ligands send signals to receptors, leading to changes in proteins
(disease).

PCT may demonstrate how to solve the conflict by MOL, and Molecular
Biologists need not develop medicines. The body is able to heal itself.
Sometimes it happens automatically and sometimes we can use MOL.

(Nota bene: I am using the language of causality, with the flow of
activation among neural components, as the Emotion theorists do, but I
look forward to meeting a simulation where GoodsInventory Ref is
substituted with ProteinAInventoryRef. I don't use smileys.)

Peter has a chance to learn PCT/HPCT to a level equivalent to Martin's.
Maybe he will be a bridge to neurologists (not Emotion theorists,
because PCT is better on emotion than the cognitive, cause-effect
oriented Emotion theorists). (Nota bene: I don't have any wishes for
what Peter will control in the future.)

All of this may seem very abstract, but I think it is essential for
any serious analysis of PCT, because PCT is essentially based around
the idea of feedback, and feedback systems with nonlinearities that
are greater than a square law, or that are transcendental, are very
likely to have attractors that are "strange" for large ranges of
values of their internal parameter settings.

So far I have seen other DS theorists talking about feedback systems
with nonlinearities that are greater than a square law. I have seen
that they use a language of causality with the flow of activation among
neural components.

The "classic" HPCT hierarchy is as liable to exhibit strange
attractors as is any other complex feedback structure. The different
stages in the development of complex perceptions are necessarily
non-linear, not only because of the impossibility of creating
physical devices with infinite range, but also because there would be
no value in having more than one level if the perceptual functions
were linear. However, HPCT simulations always are done with parameter
settings that result in fixed point attractors. Set the parameters a
bit wrong, and the hierarchy goes wild.

I appraise your comments in the way that HPCT may explain control of
complex perceptions in a linear way when other DSs have to use
non-linearity in a language of causality.

Set the parameters in a functional way and HPCT doesn't need to go
wild. (Am I on thin ice here?)

But the ability to control is not limited to dynamics with fixed
point attractors. Oscillators also can be controlled. A disturbance
simply moves the system state to a new point on the dynamic, and its
orbit returns the system back to its original oscillator attractor.
Here we are dealing with what in HPCT is a higher-level control
system, one in which time matters.

I still don't master DS well enough, but I also see the conformity.

...some regions of the attractor basin. But control is not guaranteed
in the presence of arbitrary disturbances, because the new system state
may be at a point on the dynamic at which the orbit leads directly to a
new quasi-oscillator part of the strange attractor. Might this sound
like reorganization?

And what the Reference will be is not guaranteed and is not
predictable. But Reorganization leads to a system where perceptions can
be controlled. Now or later.

I'm not going to pursue this further, here, but non-linear dynamics
does seem to me to offer a fruitful way to view PCT. However fruitful
or fruitless that view may turn out to be, it is certainly not wrong.
Moreover, I think that the chaotic (strange attractor) possibilities
afforded by the interactions of control systems are a very plausible
way of generating not only new perceptions, but new kinds of
perceptions. Parameter refinement, or the introduction of prescribed
disturbances, can shift quasi-oscillator strange attractors into
acting like true oscillators, while allowing the possibilities of
rapid change (of, for example perception) when the occasion arises.

Here I miss. We had an exchange of views about RIF. Without putting it
in print I thought of RIFs as boxes where non-linear References were
made. I don't think PCT handles non-linear signals very well. I think
the result is Reorganization where non-linear References become linear.

Is it time to introduce non-linearity and see what it results in? Maybe
(I guess) you have talked about it earlier?

I think PCTers are good on Perception. And I am not satisfied with the
way Lewis explains Perception in 4.2.1. Later I will compare his
account of Attention, Memory, and Planning. Last time we talked about
Planning we came to the result that Planning doesn't exist (?).

At the moment I work with "Nested feedback loops and
self-synchronization", and for the time being I have a feeling that
Bill P described HPCT in 1973 and somebody talks about new knowledge in
the 80s and 90s. Moreover, I am impressed by Bill's humility regarding
HPCT, compared with many DSers' know-it-all behavior.

bjorn

[Martin Taylor 2004.03.27 1000]

[From Bjorn Simonsen(2004.03.27.14:35 EuST)]

I am afraid I will write things I should understand better before
writing, but this action helps me to perceive what I control at the
time.

That’s a good approach, I think. I’m just going to deal with a
couple of little technical points here, without touching the main
thrust of your message.

PCT has as we all know a feedback process and I
appraise a p, e, o and the feed back signal at the first level as a
“dynamic”. Why not use this concept in PCT. It would be a sign that
we don’t reject other accounts.

I guess it's a question of which kind of language is more familiar to
the readers. Talking about the "dynamic" emphasises the time course of
control (the orbits), whereas the more normal language emphasises
quasi-static end states (the attractors). But they are talking about
the same thing, looking from a different starting point.

If you look at a system with a dynamic, and find it in a particular
starting state, you can trace its future evolution in the absence of
external influences. That state evolution is called an "orbit."
Orbits can behave in one of three ways: (1) An orbit might spiral in
toward some fixed point, (2) it might spiral toward some path that
repeats endlessly, or (3) it might follow some more complex path in
which very slight differences in the starting conditions eventually
result in very different traces for the orbit.

In PCT we can trace a future evolution of a loop system. It is the
state of a Reference. I prefer the concepts Reference and Purpose to
"orbit".

Reference and Purpose, to me, are the same thing. If the system
is actually controlling, they determine the attractor, not the orbit.
The orbit is a description of how the system behaves after a
disturbance. The attractor describes the behaviour of the system a
long (ideally infinite) time after the last disturbance.

  1. In PCT the state of the Reference is the normal future evolution.

Not of the evolution, but of the end state of the evolution.

  2. When two or more different references take part in controlling a
     perception we say we have an "endless" (not endless) conflict.

The conflict is between the two or more ECUs providing those
references, not in the one receiving them.

Here I miss. We had an exchange of views about RIF. Without putting it
in print I thought of RIFs as boxes where non-linear References were
made.

RIFs and nonlinearity don’t have much to do with one another. You
need a RIF whenever two or more ECUs send their outputs to the
reference input of another ECU. The RIF just determines how they are
combined. If the single-valued (scalar) reference signal is anything
other than the direct output from one and only one other ECU, you have
a RIF of some kind.
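As a minimal sketch of that combination (the weighted sum and its
weights below are arbitrary illustrations, not anything from a published
CSG model):

```python
# A Reference Input Function on this account: whatever combines the outputs
# of the contributing higher ECUs into one scalar reference signal.  A
# weighted sum is one common, linear choice; the weights here are arbitrary.

def rif(contributing_outputs, weights):
    return sum(w * o for w, o in zip(weights, contributing_outputs))

# Two higher-level ECUs both contribute to the same lower-level reference:
print(rif([4.0, -1.5], weights=[1.0, 0.5]))   # -> 3.25
```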

I don't think PCT handles non-linear signals
very well.

Signals aren’t linear or nonlinear. The processes that transform
the signals are. For example, if the input to a Perceptual Input
Function is x(t), and the output is 3x(t)+7, the PIF is linear, but if
the output is log(x(t)), the PIF is non-linear. If there are two
inputs, x and y, and the output is integral(3x + 4y + 7)dt, the PIF is
linear. If the output is x^2 + 2x*y + y^2, it’s non-linear. Same if we
were talking about RIFs instead of PIFs.
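The same examples can be written out directly (a small sketch; the
function names are mine, and the superposition test is just one way of
showing the distinction):

```python
import math

def pif_affine(x):            # 3x + 7  (linear in the sense used above)
    return 3 * x + 7

def pif_log(x):               # log(x(t))  -- non-linear
    return math.log(x)

def pif_weighted_sum(x, y):   # 3x + 4y + 7  -- linear combination of two inputs
    return 3 * x + 4 * y + 7

def pif_quadratic(x, y):      # x^2 + 2xy + y^2 = (x + y)^2  -- non-linear
    return x**2 + 2 * x * y + y**2

# Superposition holds for the weighted sum (up to its constant term 7) ...
print(pif_weighted_sum(1, 2) + pif_weighted_sum(3, 4) - 7, "==",
      pif_weighted_sum(1 + 3, 2 + 4))
# ... and fails for the quadratic form:
print(pif_quadratic(1, 2) + pif_quadratic(3, 4), "!=",
      pif_quadratic(1 + 3, 2 + 4))
```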

PCT does handle non-linear processes well. HPCT is based on them,
since the effect of a linear transformation is simply a rotation of
the initial data space. It is only nonlinearity of the PIFs that
allows the concept of the hierarchy to make any sense at all,
regardless of whether it is factually what actually happens in real
organisms.

What the essential non-linearity does is to allow the hierarchy
to have parameter settings that could lead its orbits into a chaotic
regime. That seems to be a considerable benefit, as it allows the
structure to adapt quickly and to learn completely new things from
time to time.

Martin

[From Bruce Nevin (2004.03.27 11:32 EST)]

Bjorn Simonsen(2004.03.27.14:35 EuST)–

In PCT we can trace a future evolution
of a loop system. It is the state of a Reference. I prefer the concepts
Reference and Purpose before “orbit”.

Martin Taylor 2004.03.27 1000–

Reference and Purpose, to me, are the same
thing. If the system is actually controlling, they determine the
attractor, not the orbit. The orbit is a description of how the system
behaves after a disturbance. The attractor describes the behaviour of the
system a long (ideally infinite) time after the last
disturbance.

It’s often useful to distinguish the point of view of the outside
observer from a point of view within the control system whose behavior is
being observed.

Orbit and attractor are descriptions from the point of view of an
observer of the behavior. They are not attributes of the system whose
behavior they (in part) describe. What do they describe? They say
something about the stability and instability of the system in context of
disturbances. They do not themselves cause or govern behavior. Settings
of various parameters within the system affect its ability to control
inputs, such that the stability of its control of inputs varies in the
patterns that an outside observer may describe as orbits and
attractors.

    /Bruce Nevin


[From Bill Powers (2004.03.27.1123 MST)]

Bruce Nevin (2004.03.27 11:32 EST) --

Orbit and attractor are descriptions from the point of view of an observer
of the behavior. They are not attributes of the system whose behavior they
(in part) describe. What do they describe? They say something about the
stability and instability of the system in context of disturbances. They
do not themselves cause or govern behavior. Settings of various parameters
within the system affect its ability to control inputs, such that the
stability of its control of inputs varies in the patterns that an outside
observer may describe as orbits and attractors.

This is a nice approach to a very important point. What is it that makes a
system behave as it does? It is all the reciprocal forces or influences
that one part of the system exerts on other parts and its surroundings,
together with the properties that determine how the parts and surroundings
behave when subjected to the forces or influences. It is NOT the
principles, generalizations, descriptions, theorems, or laws that we
formulate in the attempt to describe behavior. Conservation of momentum,
for example, describes what happens when systems exchange momenta, but the
systems do not exchange momenta in that manner because of conservation
laws. Mute Nature determines what happens; we then attempt to describe it
in words or symbols. We should not lose track of where the real causes exist.

Best,

Bill P.

[Martin Taylor 2004.03.27.1453]

[From Bruce Nevin (2004.03.27 11:32 EST)]

Reference and Purpose, to me, are the same thing. If the system is
actually controlling, they determine the attractor, not the orbit.
The orbit is a description of how the system behaves after a
disturbance. The attractor describes the behaviour of the system a
long (ideally infinite) time after the last disturbance.

It's often useful to distinguish the point of view of the outside
observer from a point of view within the control system whose
behavior is being observed.

Very important to do that!

Orbit and attractor are descriptions from the point of view of an
observer of the behavior. They are not attributes of the system
whose behavior they (in part) describe.

Actually, they are precisely attributes of the system. They describe
not just its behaviour; the set of orbits and attractors describes
all its possible behaviours. And yes, they are descriptions from the
point of view of an outside observer.

What do they describe? They say something about the stability and
instability of the system in context of disturbances.

Yes, they do that, and more.

They do not themselves cause or govern behavior.

Correct. They describe it.

Settings of various parameters within the system affect its ability
to control inputs, such that the stability of its control of inputs
varies in the patterns that an outside observer may describe as
orbits and attractors.

Correct again. More to the point, changing the values of any of the
parameters alters the dynamic, changing the orbits and attractors,
and possibly even the topology of the attractor basins themselves.
The totality of the space of possible dynamics attainable by
changing the parameters is called the "superdynamic" of the system.

It is in the superdynamic that interesting bifurcations can be
discovered, such as, to take a simple example, the point on the line
of varying lag where the system ceases to have an attractor at a
fixed local point (i.e. where it ceases to be an effective
controller) and becomes a system in which the attractor is the point
at infinity (i.e. the output grows exponentially or oscillates with
ever-increasing amplitude). In this case, the topology of the system
changed in that the point that for low lag is an attractor becomes a
repellor for sufficiently high lag.
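A minimal sketch of that kind of bifurcation (the same toy loop as
before, with arbitrary gain and slowing; the critical lag depends
entirely on those choices): sweep the transport lag and watch the fixed
point give way to runaway oscillation.

```python
# Sweep the transport lag of a simple control loop and report where the
# fixed-point attractor gives way to runaway growth -- the "attractor at
# the point at infinity".

def settles(gain=10.0, slowing=0.05, lag=1, steps=2000, ref=10.0, dist=3.0):
    out, delayed = 0.0, [0.0] * lag
    for _ in range(steps):
        error = ref - (delayed[0] + dist)
        out += slowing * (gain * error - out)
        delayed = delayed[1:] + [out]
        if abs(out) > 1e6:                     # diverged
            return False
    return True

for lag in range(1, 8):
    verdict = "controls (fixed-point attractor)" if settles(lag=lag) else "runs away (repellor)"
    print(f"lag = {lag}: {verdict}")
```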

I'd like to reinforce Bruce's point about the importance of
distinguishing the view by an outside observer and the view from the
point of view of the control system itself. It's a point that is
raised periodically on CSGnet, when the memory of its importance
seems to have faded.

Martin

···

At 10:23 AM 3/27/2004 -0500, Martin Taylor wrote:

[From Bjorn Simonsen(2004.03.29,15:08 EuST)]

Martin Taylor 2004.03.27 1000

I'm just going to deal with a couple of little technical points here,
without touching the main thrust of your message.

Thank you. Maybe you have to deal with my further uncertainty.

In PCT the state of the Reference is the normal future evolution.

Not of the evolution, but of the end state of the evolution.

RIFs and nonlinearity don't have much to do with one another. You need
a RIF whenever two or more ECUs send their outputs to the reference
input of another ECU. The RIF just determines how they are combined. If
the single-valued (scalar) reference signal is anything other than the
direct output from one and only one other ECU, you have a RIF of some
kind.

OK

>> I don't think PCT handles non-linear signals very well.

Signals aren't linear or nonlinear. The processes that transform the
signals are. For example, if the input to a Perceptual Input Function
is x(t), and the output is 3x(t)+7, the PIF is linear, but if the
output is log(x(t)), the PIF is non-linear. If there are two inputs, x
and y, and the output is integral(3x + 4y + 7)dt, the PIF is linear. If
the output is x^2 + 2x*y + y^2, it's non-linear. Same if we were
talking about RIFs instead of PIFs.

How can you write above "RIFs and non-linearity don't have much to do
with one another..." when in PIFs and RIFs there can be processes that
transform the input variables from, e.g., x(t) to log(x(t))?

I understand/agree with your comments that signals aren't linear or
non-linear. I also understand/agree that PIFs and RIFs are linear or
non-linear, depending on their transformation ability. But I would say
that PIFs (and RIFs) have something to do with non-linearity when they
are able to transform an input x(t) to a perceptual signal log(x(t)).

I hope we agree that 1) signals aren't linear or non-linear, and 2)
PIFs and RIFs can transform an x(t) input value to another function of
x, e.g. to log(x(t)). In this example the PIF is non-linear.

PCT does handle non-linear processes well. HPCT is based on them,
since the effect of a linear transformation is simply a rotation of
the initial data space. It is only nonlinearity of the PIFs that allows
the concept of the hierarchy to make any sense at all, regardless of
whether it is factually what actually happens in real organisms.

When an ECU's Reference is a result of more than one output from other
ECUs there must be a HPCT, because an ECU's output can't be combined
with another ECU except through the Reference. And when a Reference is
dependent on other outputs there is a dependence on at least two
levels.

What the essential non-linearity does is to allow the hierarchy to have
parameter settings that could lead its orbits into a chaotic regime.
That seems to be a considerable benefit, as it allows the structure to
adapt quickly and to learn completely new things from time to time.

Are you here saying that non-linearity/hierarchy may have a parameter
setting (gain, slowing and structure of ECUs) that brings along endless
controlling? In other words, the control of perceptions doesn't lead to
a reference perception. Either the organism will die or, unexpectedly,
there will come a Reference value that makes control of Perception
possible. We call it Reorganization.
bjorn

[Martin Taylor 2004.03.29.1037]

[From Bjorn Simonsen(2004.03.29,15:08 EuST)]

Martin Taylor 2004.03.27 1000

RIFs and nonlinearity don't have much to do with one another.

How can you write above "RIFs and non-linearity don't have much to do
with one another..." when in PIFs and RIFs there can be processes that
transform the input variables from, e.g., x(t) to log(x(t))?

Equally, there can be PIFs and RIFs that simply take an integral,
or make a weighted sum (as in the case of most of the well-known
simulations). A PIF or an RIF can be linear or non-linear. The two
concepts “RIF” and “linear-nonlinear” are
orthogonal.

When an ECU's Reference is a result of more than one output from other
ECUs there must be a HPCT, because an ECU's output can't be combined
with another ECU except through the Reference. And when a Reference is
dependent on other outputs there is a dependence on at least two
levels.

The problem is your use of the word “level”. You have
the same picture as I do when talking only about the set of ECUs that
send their outputs to the Reference input of another ECU. But what if
it so happened that ECU A fed its output to the RIF of ECU B (I’ll say
A -> B to indicate this structure), and B -> C, and C->A?
What is the relative level of A, B, and C?

I’m not trying to say that there is no hierarchy. I’m just saying
that the need for an RIF for each ECU is independent of whether there
is a hierarchy of ECUs at different levels.

What the essential non-linearity does is to allow the hierarchy to have
parameter settings that could lead its orbits into a chaotic regime.
That seems to be a considerable benefit, as it allows the structure to
adapt quickly and to learn completely new things from time to time.

Are you here saying that non-linearity/hierarchy may have a parameter
setting (gain, slowing and structure of ECUs) that brings along endless
controlling? In other words, the control of perceptions doesn't lead to
a reference perception. Either the organism will die or, unexpectedly,
there will come a Reference value that makes control of Perception
possible. We call it Reorganization.

No, nothing like that. Here you and I are talking at
cross-purposes. I think it would take a lot of conceptual interchange
before this one is easy to sort out. We do make contact, though, at
the notion of reorganization, but for the wrong reason.

If you know the concept of a "sandpile avalanche" you understand how
the "on the edge of chaos" state is self-sustaining, and why naturally
evolving systems tend to reach that state (in the case of PCT, through
reorganization). But that wasn't what I was getting at. I was getting
at points such as the ability suddenly to see a new structure even
though the data haven't changed (the Necker Cube is a trivial example
at a very low level). This happens without reorganization, and that's
what I was getting at when I said: "it allows the structure to adapt
quickly and to learn completely new things from time to time".

Martin

[From Bjorn Simonsen(2004.30.03,13:00 EuST)]

Martin Taylor 2004.03.29.1037

When an ECU's Reference is a result of more than one output from other
ECUs there must be a HPCT, because an ECU's output can't be combined
with another ECU except through the Reference. And when a Reference is
dependent on other outputs there is a dependence on at least two
levels.

The problem is your use of the word "level". You have the same picture
as I do when talking only about the set of ECUs that send their outputs
to the Reference input of another ECU. But what if it so happened that
ECU A fed its output to the RIF of ECU B (I'll say A -> B to indicate
this structure), and B -> C, and C->A? What is the relative level of A,
B, and C?

I'm not trying to say that there is no hierarchy. I'm just saying that
the need for an RIF for each ECU is independent of whether there is a
hierarchy of ECUs at different levels.

Would it be better if I had written:

"When an ECU's Reference is a result of more than one output from other
ECUs there must be a HPCT or a network, because an ECU's output can't
be combined with another ECU except through the Reference. And when a
Reference is dependent on other outputs there is a dependence on at
least two levels, or a network."

I will leave the network. The idea is good enough, but I will confine
myself to HPCT (for the time being). Please comment on my speculations.

I tried your A -> B and B -> C, and C->A in Rick's hier.exl. I guided
the Output from an ECU at level 3 to the Reference of an ECU at level
2, and I guided the Output of that ECU to the Reference in another ECU
at level 3. The Output of the last ECU was then guided to the Reference
in the first-mentioned ECU. And it worked (for the time being).

I know there is "no" connection from the output at level n to the
reference signal at level (n+1). But some fantasy and Imagination mode
should make it. I can imagine a glass of beer and stretch my arm for
it. (I know I jump over a level in my imagination, but why not?)

Are you here saying that non-linearity/hierarchy may have a parameter
setting (gain, slowing and structure of ECUs) that brings along endless
controlling? In other words, the control of perceptions doesn't lead to
a reference perception. Either the organism will die or, unexpectedly,
there will come a Reference value that makes control of Perception
possible. We call it Reorganization.

No, nothing like that. Here you and I are talking at cross-purposes. I
think it would take a lot of conceptual interchange before this one is
easy to sort out. We do make contact, though, at the notion of
reorganization, but for the wrong reason.

If you know the concept of a "sandpile avalanche" you understand how
the "on the edge of chaos" state is self-sustaining, and why naturally
evolving systems tend to reach that state (in the case of PCT, through
reorganization). But that wasn't what I was getting at. I was getting
at points such as the ability suddenly to see a new structure even
though the data haven't changed (the Necker Cube is a trivial example
at a very low level). This happens without reorganization, and that's
what I was getting at when I said: "it allows the structure to adapt
quickly and to learn completely new things from time to time".

I don't dare to say I know the concept of a "sandpile avalanche", but I
imagine it as a modulational wave produced in a RIF. Maybe it is built
up to a certain quantity before it is effective.

Maybe you could give me a correct account.

I look at your Necker Cube example in another way. My way is equal to
the way I explained why you took "Who was Jesus" from your bookshelf.

The matrix of perceptual signals leaving the retina continues its way
to two ECUs, the one saying "perceive the Cube from the front", the
other saying "perceive the Cube from above". I am only conscious of one
of the perceptions. But a disturbance and also an imagination can
change what I am conscious of, and the experience changes.

Should I be in charge of changing my understanding?

bjorn

[Martin Taylor 2004.03.30.1008]

[From Bjorn Simonsen(2004.30.03,13:00 EuST)]

Martin Taylor 2004.03.29.1037

The problem is your use of the word "level". You have the same picture
as I do when talking only about the set of ECUs that send their outputs
to the Reference input of another ECU. But what if it so happened that
ECU A fed its output to the RIF of ECU B (I'll say A -> B to indicate
this structure), and B -> C, and C->A? What is the relative level of A,
B, and C?

I'm not trying to say that there is no hierarchy. I'm just saying that
the need for an RIF for each ECU is independent of whether there is a
hierarchy of ECUs at different levels.

I tried your A -> B and B -> C, and C->A in Rick's hier.exl.

Wow–I never would have done that, as I would have assumed that the
result would probably be unstable. I only made the suggestion to
indicate that I didn’t need to assume the existence of a hierarchy in
order to show the need for any ECU to contain a Reference Input
Function.

I guided the Output from an ECU at level 3 to the Reference of an ECU
at level 2, and I guided the Output of that ECU to the Reference in
another ECU at level 3. The Output of the last ECU was then guided to
the Reference in the first-mentioned ECU. And it worked (for the time
being).

I know there is "no" connection from the output at level n to the
reference signal at level (n+1). But some fantasy and Imagination mode
should make it. I can imagine a glass of beer and stretch my arm for
it. (I know I jump over a level in my imagination, but why not?)

Why not, indeed!

I don't dare to say I know the concept of a "sandpile avalanche", but I
imagine it as a modulational wave produced in a RIF. Maybe it is built
up to a certain quantity before it is effective.

Maybe you could give me a correct account.

It's much simpler than you think. Imagine a flat table onto which you
drop one grain of sand at a time from a fixed location some height
above the table. The first grain bounces somewhere. So does the second
and a few more, but eventually there are enough that a new one rests
on earlier ones, and after a while there is a conical pile of sand
grains on the table.

What happens when the next grain arrives? It arrives with some
kinetic energy, and it hits a grain sitting near the top of the pile.
At that point three different things might happen: (1), the grain
might settle onto the tops of some grains at the top of the pile,
distributing its kinetic energy throughout the pile but not
dislodging any earlier grains from their places, (2) the grain might
bounce its way down the slope until it found such a niche lower down,
or (3 -- the interesting case), it might distribute its energy into
one or two other grains, sufficiently to dislodge them and send them
bouncing down the pile, where any one of them might have enough energy
to dislodge one or more others that were precariously supported. This
last is the avalanche.

Why the system develops to a stable “on the edge” state
is the point of the analogy. Imagine a pile that has a rather shallow
slope. When a new grain arrives, it is unlikely to dislodge more than
one existing grain, because each grain is more or less surrounded by
its supporting grains. If it does dislodge another grain, it is
probably because the other grain received quite a large proportion of
the kinetic energy from the new one, but as the grains bounce down the
slope, they are unlikely to dislodge others. The avalanche dissipates
very quickly.

Now imagine a pile very carefully built to be very steep. Most
grains on the outside of the cone are very precariously held in place,
and a tiny nudge would knock them down. The kinetic energy of the new
grain would be likely to dislodge at least one of the just-balanced
grains, and the energy of each would be likely to dislodge others. The
probable number dislodged at each stage is greater than unity, and the
whole carefully built pile collapses until it finds a form in which
all the grains on the surface have a place to sit for which the
potential energy needed to get out of the resting place is greater
than the kinetic energy with which they arrived there.

Between these two states is a state in which the average number
of grains dislodged by another falling down the slope is exactly 1.0.
What happens is that sometimes the first grain dislodges no other,
sometimes it dislodges one, sometimes two, or even three. The
dislodged grains pick up kinetic energy bouncing down the slope, and
perhaps each one dislodges zero, one, two, … other grains. Sometimes
the avalanche stops before it starts, sometimes it stops after one or
two successive dislodgments, and sometimes the entire side of the pile
gives way and slips down.

The point about the sandpile is that no matter how the pile
starts, it always winds up in the same state, between “too
steep”, where the average number of dislodged grains is greater
than 1, and “too flat”, where most newly arriving grains
stick before bouncing to the bottom, and each dislodged grain on
average dislodges less than one more.
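A minimal sketch of that branching-ratio picture (a toy of my own, not
the Bak-Tang-Wiesenfeld sandpile model itself): each dislodged grain
dislodges a random number of further grains with a chosen mean, and the
avalanche-size statistics change character as that mean passes through
1.0.

```python
# Mean offspring < 1 ("too flat") -> avalanches die out at once;
# mean > 1 ("too steep") -> runaways; mean = 1 -> avalanches of every size.

import random

def avalanche_size(mean, cap=2000):
    """Total grains dislodged by one incoming grain (capped to stop runaways)."""
    active, total = 1, 0
    while active and total < cap:
        total += active
        # each active grain dislodges 0, 1 or 2 others (mean offspring = `mean`)
        active = sum(1 for _ in range(2 * active) if random.random() < mean / 2)
    return total

random.seed(1)
for mean in (0.5, 1.0, 1.5):
    sizes = sorted(avalanche_size(mean) for _ in range(1000))
    print(f"mean offspring {mean}: median {sizes[len(sizes) // 2]}, largest {sizes[-1]}")
```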

It’s an analogy for systems in general that have an external
energy source and that are affected by the interactions among their
components as they dissipate that energy. An ecology is of that kind,
and so is an evolved biological organism. A designed robot is not. An
evolved and reorganized network or hierarchy of control systems is.
Such a system is “on the edge of chaos”, unless some outside
controller/designer/manipulator keeps it in some other state.

I look at your Necker Cube example in another way. My way is equal to
the way I explained why you took "Who was Jesus" from your bookshelf.

The matrix of perceptual signals leaving the retina continues its way
to two ECUs, the one saying "perceive the Cube from the front", the
other saying "perceive the Cube from above". I am only conscious of one
of the perceptions.

You have to ask why you are conscious of only one. In some way or
another, perception of one must inhibit perception of the other. They
interact.

Around 1967, Keith Aldridge and I studied this phenomenon, with a
pattern that (unlike the Necker Cube) can be readily seen in only two
ways. It was a plasticene surface dented all over by hitting it with a
table-tennis ball. That surface is seen as dented or as bubbled, but
in no other way.

What we did was to study carefully the timing of changes from one
percept to the other, using four subjects who did the task for a long
time (half an hour?) each day for a week. The timing distributions
followed none of the usually discussed statistics (Poisson, etc.). But
we were able to model them and the way they changed for individual
subjects over time. Our model was essentially the same as what you
suggest, except that rather than there being two perceptual functions,
the model had a well-defined but quite large number. Not having the
paper at hand, I forget the actual numbers, but I think one subject
seemed to be using between 31 and 33 units (changing one unit at a
time over the course of the experiment) and the other a different
range of numbers. Which percept was seen was a kind of majority
decision among the many individual units, but with some history
(hysteresis). As with the total number of units, the numbers of units
that defined the boundaries of the hysteresis region changed by one
from time to time during the study (A change of 1 in this parameter
makes a dramatic difference to the timing curves, and those changes
were fitted very well).
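For concreteness, a hedged sketch of that kind of model (the unit count,
the flipping rule and the hysteresis boundaries below are illustrative
guesses, not the values fitted in the 1967 study): the reported percept
follows a majority vote among the units, switching only when the count
crosses an upper or lower boundary.

```python
# Majority vote with hysteresis among many units; dwell times between
# switches can then be compared with observed reversal-time distributions.

import random

def dwell_times(n_units=32, lower=13, upper=19, steps=200000, seed=3):
    rng = random.Random(seed)
    units = [rng.random() < 0.5 for _ in range(n_units)]   # True = favours percept A
    percept, last_switch, dwells = True, 0, []
    for t in range(steps):
        i = rng.randrange(n_units)                         # one unit reconsiders at a time
        units[i] = rng.random() < 0.5
        count_a = sum(units)
        if percept and count_a < lower:                    # count swings far enough: switch to B
            percept = False
            dwells.append(t - last_switch)
            last_switch = t
        elif not percept and count_a > upper:              # and back to A
            percept = True
            dwells.append(t - last_switch)
            last_switch = t
    return dwells

d = dwell_times()
print(len(d), "reversals; mean time between reversals:", round(sum(d) / len(d), 1))
```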

The point of all this is that I think you are right, and I think
that what you say is at least not inconsistent with the “on the
edge of chaos” notion. Clearly there must be interactions
somewhere that allow one perception to inhibit the other. Our
experiment argues that this kind of interaction happens between as
well as within “levels.”

Other related experiments in which I was involved around the same
time also showed that what one perceives can be strongly affected by
what one is told one is likely to perceive. I think this accounts for
the fact that the Necker Cube is usually reported as being perceived
in only two forms. If you just ask a naive subject, who doesn’t know
the Necker Cube, some of them will report as many as 15 different
forms. The same is true of hearing repeated word strings. They change,
and change again, but only one thing is heard at a time. What kind of
things are heard depends on what the subjects are told to expect, even
though everyone hears exactly the same acoustic signal that repeats
over and over.

But a disturbance and also an imagination can change what I am
conscious of, and the experience changes.

Yep.

Should I be in charge of changing my
understanding?

I’m not at all clear how you
could do that without already knowing the possibilities for new
understanding. That’s where the interactions of attractors and the
notion of the “edge of chaos” are important. They allow for
new, unexpected, understandings.

Martin