Two counter-intuitive properties of feedback

From Mervyn van Kuyen (971022 20:45 CET)

[From Bill Powers (971021.1528 MDT)]

I'm interested in models that simulate the behavior of real organisms, from
bacteria to bugs to people. Show me a model that can do that, and I'll be
very interested. Your "simplification of reality" is too simple, and too
unconnected to any real phenomenon.

So far, I have explored two counter-intuitive properties of a simulated
system - the neural servo - that is required to reduce the mismatch
between its perceptions and its references to the smallest attainable
extent.

If we imagine such a system to be successful, our intuition would tell us
that 'the amount of mismatch' could never become zero, since the network
that produces the references is triggered only by this 'disappearing'
mismatch! Therefore, we expect such a system to 'run dry' whenever it is
perfect, thereby inevitably becoming imperfect again!

Simulation, however, reveals that such a system can become 100%
input-independent while receiving input patterns it has previously been
exposed to. It uses the 'fingerprint' of a particular mismatch pattern as
a 'key' that activates a specific cluster of recurrent connections. Such
a cluster creates references that anticipate perception to a significant
extent. These clusters can use unexpected mismatch to correct their
predictive content (references). In practice, the network turns out to
crystallize into input-independent connection-clusters that have
world-modeling abilities. If we (correctly) recognize such a system as
being 'knowledge-oriented', we realize that we could construct - given
sufficient computational power - a robot that models its environment by
sensing it.

If we imagine such a robot to be successful, we expect that by expanding
the robot with actuators, triggered by the very same network, the robot
would inevitably become exposed to a larger and more complex world,
thereby increasing the complexity of its internal world model.

Simulation, however, reveals that this would only be the case if the
network did not rise above its initial ability to produce random
behavioral acts. Even in what I call a random environment, it does not
exert random control. It has been shown to exert control over a random
source of input: two patterns, alternated either at random intervals *or*
by the act of control. Its action will select one of the two patterns as
a 'preference', enabling its internal structure to encode only one of the
two patterns in detail. This shows us the second counter-intuitive
property of our expanded world-modeling robot: its world model will
become more *simple*, not more *complex*, if it is allowed to exert more
control over its environment.
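The claim can be illustrated with a toy simulation. This is my own minimal reading of the setup described above, not Mervyn's actual network: an environment that flips between two input patterns at random moments, and an agent that may or may not act to restore its preferred pattern. The function name and parameters are hypothetical.

```python
import random

def pattern_counts(control, steps=10000, flip_prob=0.05, seed=1):
    """Count how often each of two input patterns is perceived.

    The environment flips between patterns 0 and 1 at random moments;
    a controlling agent flips it back to its preferred pattern 0.
    """
    rng = random.Random(seed)
    pattern = 0
    counts = [0, 0]
    for _ in range(steps):
        if rng.random() < flip_prob:
            pattern = 1 - pattern     # random disturbance flips the input
        counts[pattern] += 1          # what the agent perceives this step
        if control and pattern != 0:
            pattern = 0               # corrective act, effective next step
    return counts

# Without control, both patterns occur about equally often, so both must
# be modeled in detail; with control, the non-preferred pattern appears
# only transiently, so the world to be modeled is simpler.
```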

So, these ideas are not about specific acts of control, but they do
explore concepts that are important for our ideas about intelligence,
world models, and the role of control in systems that acquire world
models: these simulations prove that control, even in a random
environment, limits the complexity of the world itself and therefore the
required modeling power. An important implication is that even in a very
complex (rather than random) environment, such a system will develop
preferences for contexts in which either the complexity of the world is
low or the amount of exertable control is high. Obviously, our lives take
place somewhere in between. I think that a model taking this or a similar
shape is a powerful means of communicating the amazing properties of
feedback. I expect PCT does a good job as well.

Regards,

Mervyn

[From Bill Powers (971023.0729 MDT)]

Mervyn van Kuyen (971022 20:45 CET) --

So far, I have explored two counter-intuitive properties of a simulated
system - the neural servo - that is required to reduce the mismatch
between its perceptions and its references to the smallest attainable
extent.

If we imagine such a system to be successful, our intuition would tell us
that 'the amount of mismatch' could never become zero, since the network
that produces the references is triggered only by this 'disappearing'
mismatch! Therefore, we expect such a system to 'run dry' whenever it is
perfect, thereby inevitably becoming imperfect again!

In a practical control system, it's most common for the mismatch to be
maintained by the behavior at a small magnitude, just sufficient to keep
the behavior at its present level. To understand how that works, you have
to write the control-system equations (which include the properties of the
external feedback loop) and solve them for the error signal. As far as I
can see, your concepts don't even include the properties of the external
part of the loop, which makes your description look rather naive.
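The loop equations Bill refers to can be sketched numerically. This is a minimal illustration of my own, not Bill's model: a proportional controller with gain G, reference r, and an external feedback path that adds a disturbance d. Solving the loop for the error gives e = (r - d)/(1 + G), which is exactly the "small but nonzero mismatch" described above.

```python
def steady_state_error(G=100.0, r=10.0, d=3.0, slowing=0.01, steps=1000):
    """Iterate the control loop and return the settled error signal.

    Loop equations: p = o + d (external feedback), e = r - p (mismatch),
    and an output o that slews toward G * e via a leaky integrator.
    """
    o = 0.0
    e = r - d
    for _ in range(steps):
        p = o + d                     # perception via the external feedback path
        e = r - p                     # error ("mismatch") signal
        o += slowing * (G * e - o)    # output slewing toward G * e
    return e

# Analytic steady state: e = (r - d) / (1 + G) = 7 / 101, about 0.069.
# The error is kept small by the behavior itself, but never reaches zero.
```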

Simulation, however, reveals that such a system can become 100%
input-independent while receiving input patterns it has previously been
exposed to. It uses the 'fingerprint' of a particular mismatch pattern as
a 'key' that activates a specific cluster of recurrent connections. Such
a cluster creates references that anticipate perception to a significant
extent. These clusters can use unexpected mismatch to correct their
predictive content (references). In practice, the network turns out to
crystallize into input-independent connection-clusters that have
world-modeling abilities. If we (correctly) recognize such a system as
being 'knowledge-oriented', we realize that we could construct - given
sufficient computational power - a robot that models its environment by
sensing it.

This is all put in such vague and general language that I have no picture
of what your system is supposed to accomplish. This is why I keep trying to
get you to express your idea in terms of some real process, some sort of
observable control process like steering a car. It seems to me that you're
just describing how some of your neural networks behave, trying to read
into their behavior something suggestive of real behavior, but without
having anything specific in mind.

If we imagine such a robot to be successful, we expect that by expanding
the robot with actuators, triggered by the very same network, the robot
would inevitably become exposed to a larger and more complex world,
thereby increasing the complexity of its internal world model.

Simulation, however, reveals that this would only be the case if the
network did not rise above its initial ability to produce random
behavioral acts. Even in what I call a random environment, it does not
exert random control. It has been shown to exert control over a random
source of input: two patterns, alternated either at random intervals *or*
by the act of control. Its action will select one of the two patterns as
a 'preference', enabling its internal structure to encode only one of the
two patterns in detail. This shows us the second counter-intuitive
property of our expanded world-modeling robot: its world model will
become more *simple*, not more *complex*, if it is allowed to exert more
control over its environment.

If you're thinking of behavior in terms of "acts" generated by the system,
you're a long way from PCT. A network that has a random effect on its
environment is not controlling anything as we use the word control in PCT.

So, these ideas are not about specific acts of control, but they do
explore concepts that are important for our ideas about intelligence,
world models, and the role of control in systems that acquire world
models: these simulations prove that control, even in a random
environment, limits the complexity of the world itself and therefore the
required modeling power.

Your concept of control doesn't seem to have anything to do with the
phenomena we study in PCT. You say that these ideas are "important" for
understanding various things, but with only your word for it, and no
examples to back you up, why should anyone believe this? You're going to
have to come down to a much more practical level before we have anything to
discuss together.

Best,

Bill P.

From Mervyn van Kuyen (971023 18:00 CET)

[Bill Powers (971023)]

In a practical control system, it's most common for the mismatch to be
maintained by the behavior at a small magnitude, just sufficient to keep
the behavior at its present level.

So, you take the zero-mismatch limit to be really out of reach, whereas my
simulations have focused largely on the emergence of continued prediction
at that very limit (complete input-independence). It is this goal that
gives the network quite strange properties. You seem to be unaware of
this:

[Bill Powers (971023)]

This is all put in such vague and general language that I have no picture
of what your system is supposed to accomplish.

About your further comments: nowhere do I propose random behavior to be
useful. One simulation does include an environment which presents two
alternating images at random intervals. This environment can be made less
'complex' by exerting control: flipping back to one image in order to
decrease the learning task. The fact that this is the simplest thinkable
form of control is a virtue for the sake of any discussion, not a
weakness.

Regards,

Mervyn

[From Bill Powers (971024.0104 MDT)]
I suspect that your model is based on computing the output that will
produce a specific result and then ...

[..]

I repeat the advice I just gave Jim Dix. You're playing a totally different
game. Before you decide that it has something to do with PCT, you should do
some reading and find out what PCT is really about. Otherwise you'll just
get one objection after another, and end up totally frustrated with us.

I _am_ frustrated now. I suspect your shit smells like PCT as well.

Mervyn

[From Bill Powers (971026.0837 MST)]

I repeat the advice I just gave Jim Dix. You're playing a totally different
game. Before you decide that it has something to do with PCT, you should do
some reading and find out what PCT is really about. Otherwise you'll just
get one objection after another, and end up totally frustrated with us.

I _am_ frustrated now. I suspect your shit smells like PCT as well.

Sure does. It seems that you want me to learn about your approach but you
don't want to learn about mine. That doesn't sound like a very good deal.

Best,

Bill P.

In article <3453A1E6.1A67@earthlink.net> Richard Marken,
rmarken@EARTHLINK.NET writes:
<snip>

For my part, I have no interest at all in learning someone else's
theory of behavior unless it's a theory that is based on
1) recognition of the fact that behavior is purposeful and
2) understanding of the nature of purposeful behavior. These are
the two principles on which any theory of behavior must be built.
So far, the only theory of behavior I know of that is built on
these principles is PCT. This is why PCT is not just another
theory of behavior; it is the _only_ theory of purposeful behavior.

<snip>

Jim Dix here. Still trying to hang on.
In consideration of some of the comments I have read, it appears that the
group may be private. Nevertheless, I would like to add perhaps a minor
change to your thoughts, lest we get the impression of a teleological
nature to control systems (or to life, for that matter).

I say this because the theoretical "mission" of living organisms is
survival or reproductive success (or some such thing) but that for any
particular organism, there is a provisional aspect to it; it is "under
evaluation", as it were. Thus, purposeful behavior is not a necessary
feature of a particular organism, since conceivably it could be
mal-adaptive. I take it the theory you are pursuing deals only with
behavior that can be considered purposeful. Alternatively, one must make
the assumption that the behavior is purposeful in order to achieve the
results one gets from it.

Regards, Owleye.

[From Bill Powers (971028.0118 MST)]

Jim Dix (971027) Replying to Marken --

Jim Dix here. Still trying to hang on.
In consideration of some of the comments I have read, it appears that the
group may be private.

Not at all. But there is a main subject-matter, called PCT or perceptual
control theory, that is the reason for existence of this list. If you were
writing to a Cog Sci list, you wouldn't insist on talking (much) about
horticulture, would you?

Nevertheless, I would like to add perhaps a minor
change to your thoughts, lest we get the impression of a teleological
nature to control systems (or to life, for that matter).

If by "teleological" you mean something like the future determining the
present, I would agree. However, control theory shows us how organisms can
in fact autonomously select future states of affairs and then bring them
about by acting on even a variable environment, which is a very different
proposition that doesn't require violating any principles of causation.
It's this principle of behavior that the life sciences failed to recognize
40 or 50 years ago, and that is apparently still not understood by most.

I say this because the theoretical "mission" of living organisms is
survival or reproductive success (or some such thing) but that for any
particular organism, there is a provisional aspect to it; it is "under
evaluation", as it were.

Survival or reproductive success is an outcome of behavioral organization;
the organisms more capable of controlling what happens to them generally
survive better to reproduce (as a species). However, I consider
reproductive success to be an effect rather than a cause; an outcome rather
than a goal. Organisms that fail to control how the environment affects
them are unlikely to survive, but survival does not explain how organisms
achieve the necessary control. Neither does unlimited reproductive success
imply unlimited capacity to survive; there is an optimum rate of
reproduction in any given environment, and exceeding the maximum allowable
rate is no better for the species than falling short of the break-even
rate. If the quality of control is too low, you can't "make it up on
volume" like the guy who beat the competition by selling below cost. A
species that controls poorly will be wiped out by an event that a species
that controls well will survive easily.

Thus, purposeful behavior is not a necessary
feature of a particular organism, since conceivably it could be
mal-adaptive.

If you were better versed in PCT you'd realize that this statement is bound
to raise objections in this milieu. The main thesis is that all behavior is
part of some purposive control process (only control systems are
purposive); that control is the basic principle of life at all levels from
the cellular to the cognitive. It's the principle of negative feedback
control that distinguishes living from nonliving systems.

I take it the theory you are pursuing deals only with
behavior that can be considered purposeful.

That's correct. However, we have yet to find an example of behavior that is
clearly (or even possibly) non-purposive. The only possible exception would
involve the mislabeling of unintended side-effects of control actions as
being part of behavior. We have interactive computer demos that show how to
pick out the intended effect of an action from multiple unintended effects
(see Marken's Web page).

Alternatively, one must make
the assumption that the behavior is purposeful in order to achieve the
results one gets from it.

It isn't necessary to assume that behavior is purposive. We have worked out
empirical tests for finding out what an organism is controlling. If the
organism is not controlling anything the test will be failed. It's a simple
procedure. You postulate that some observable variable is under control by
the organism (say, the speed of the car it's driving). If that variable is
actually under control, disturbances that tend to change it will result in
actions by the organism that strongly oppose the effects of the
disturbances. If you can then rule out chance coincidences, prove that it
is the organism's action that prevents the effect of the disturbance from
occurring, and show that control is lost when the organism can no longer
sense the variable, the proposed variable is in fact under control by the
organism, by the definition of control.
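The procedure just described can be sketched in simulation. This is my own illustration under simple assumptions (the actual interactive demos are on Marken's Web page, and this is not their code): a variable v = action + disturbance, and an organism that either does or does not control v. The function names are hypothetical.

```python
import math
import random

def run_organism(controlling, steps=2000, gain=50.0, slowing=0.02, seed=2):
    """Return (disturbance, action, variable) traces for one run."""
    rng = random.Random(seed)
    d = a = 0.0
    ds, acts, vs = [], [], []
    for _ in range(steps):
        d += rng.gauss(0.0, 0.1)      # slowly drifting disturbance
        v = a + d                     # the observable variable
        if controlling:
            a += slowing * (gain * (0.0 - v) - a)  # oppose deviation from 0
        ds.append(d); acts.append(a); vs.append(v)
    return ds, acts, vs

def corr(x, y):
    """Pearson correlation (0.0 if either trace is constant)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((u - mx) ** 2 for u in x))
    sy = math.sqrt(sum((u - my) ** 2 for u in y))
    if sx == 0.0 or sy == 0.0:
        return 0.0
    return sum((u - mx) * (w - my) for u, w in zip(x, y)) / (sx * sy)

# Under control, corr(disturbance, action) approaches -1 while v barely
# moves; without control, the action is flat and v simply tracks d.
```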

Every action has some effect on the environment or the organism's relation
to it. If no effect of a given kind of action is under control by the
organism, then directly disturbing any effect to make it different will not
result in any change of the organism's action to oppose the disturbance.
That would prove that that behavior is not part of a control process; it is
open-loop behavior: stimulus-response behavior or cognitively-driven
behavior. I don't believe that any such behaviors can be found, but I
haven't tested all possible behaviors. Neither has anyone else, of course.
Most behavioral scientists and theorists don't know how to test for
control, and wouldn't even know what to look for. So we can't draw any
conclusions from the fact that most of them fail to find control in behavior.

Best,

Bill P.

[From Hank Folson (971028)]

(Jim Dix 971026)

>In consideration of some of the comments I have read, it appears that the
>group may be private.

CSGnet is open, Owleye, but we are not a chat group. The direction of the
discussion is focussed, although you would never guess it from the posts.
;-)

>Nevertheless, I would like to add perhaps a minor
>change to your thoughts, lest we get the impression of a teleological
>nature to control systems (or to life, for that matter).

It is accepted in mainstream biology that organisms _contain_ control
systems. PCT goes the next step to recognize that organisms _ARE_ control
systems. In other words, organisms do not simply include control systems,
they are composed entirely of hierarchically arranged control systems, and
the living organism is the result of this structure. This doesn't leave
much room for stimulating intellectual discussion. Intellectual discussion
for its own sake is not very useful simply because the basic premises of
PCT can be disproved by scientific experiment (i.e. showing that organisms
are not control systems). So far, a lot of believers in other theories
have said PCT does not work, but they have never proven it.
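Hank's point that organisms are hierarchically arranged control systems can be sketched with a two-level toy of my own devising (not code from the PCT literature; all names and parameters are hypothetical): the higher loop never acts on the environment directly; its output is the reference signal for the loop below it.

```python
def run_hierarchy(steps=500, k_hi=5.0, k_lo=20.0, slow=0.02):
    """Two stacked control loops; returns the higher-level perception."""
    r_hi = 4.0       # higher-level reference
    r_lo = 0.0       # lower-level reference, set by the higher loop's output
    o_lo = 0.0       # lower-level output, the only action on the environment
    d = 1.0          # constant environmental disturbance
    p_hi = 0.0
    for _ in range(steps):
        p_lo = o_lo + d                        # lower-level perception
        p_hi = 2.0 * p_lo                      # higher perception built from lower ones
        r_lo += slow * k_hi * (r_hi - p_hi)    # higher output = lower reference
        o_lo += slow * (k_lo * (r_lo - p_lo) - o_lo)
    return p_hi

# The higher loop brings its perception to its reference (4.0) purely by
# adjusting what the lower loop is asked to perceive.
```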

>Thus, purposeful behavior is not a necessary
>feature of a particular organism, since conceivably it could be
>mal-adaptive. I take it the theory you are pursuing deals only with
>behavior that can be considered purposeful. Alternatively, one must make
>the assumption that the behavior is purposeful in order to achieve the
>results one gets from it.

Purposeful behavior is not a philosophical part of PCT included because of
some emotional desire of the originators. Under PCT, a hierarchical
control system can only be purposeful. If it is not, it is not
functioning, and will die.

Try studying and understanding the introductory info and the literature on
PCT. If you succeed, you will see that an organism that is a living
hierarchical control system is ideally suited for survival in an uncaring
environment.

Sincerely, Hank Folson

[From Bruce Gregory (971028.1315 EST)]

Rick Marken (971028.0930)

Hank Folson (971028) --

Excellent post, Hank.

Hear, Hear!

Bruce

[From Rick Marken (971028.0930)]

Hank Folson (971028) --

Excellent post, Hank.

Best

Rick

···

--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

[From Fred Nickols (971028.1740 EST)]

Jim Dix (undated and not date stamped)

Jim Dix here. Still trying to hang on.
In consideration of some of the comments I have read, it appears that the
group may be private.

Fred Nickols here. Hang on, Jim. I'm new, too, but I wouldn't characterize
the list as "private." Instead, I'd use "focused." I'd also call it "very
tightly patrolled" -- by Deputy Marken. :-)

Nevertheless, I would like to add perhaps a minor
change to your thoughts, lest we get the impression of a teleological
nature to control systems (or to life, for that matter).

What's wrong with "teleological" when applied to people as living control
systems?

I say this because the theoretical "mission" of living organisms is
survival or reproductive success (or some such thing) but that for any
particular organism, there is a provisional aspect to it; it is "under
evaluation", as it were. Thus, purposeful behavior is not a necessary
feature of a particular organism, since conceivably it could be
mal-adaptive. I take it the theory you are pursuing deals only with
behavior that can be considered purposeful. Alternatively, one must make
the assumption that the behavior is purposeful in order to achieve the
results one gets from it.

Hmm. We must be hanging around some different "organisms." The ones with
whom I'm acquainted are a lot more complex than survival or reproductive
success (although lots of them seem caught up in reproductive attempts).

Anyhow, stick around. I think you'll find digging through the chaff yields
a lot of wheat.

Regards,

Fred Nickols
The Distance Consulting Company
nickols@worldnet.att.net