Error in the Absence of Conflict?

[From Bruce Gregory (2004.0905.1628)]

Error is defined as the difference between a perceptual signal and a
reference signal. Normally we think of these in the context of a
control system. But there seems to be no reason that we cannot
experience errors even when we lack the ability to reduce the
difference between a perceptual signal and a reference signal -- even
when there is no control system in place. We may want to be more
athletic, more attractive, or more intelligent than we are, even though
no conceivable reorganization could bring this about. So persisting
errors need not be associated with conflict. Or have I missed the
conflict?
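The definition above can be sketched as a minimal one-dimensional loop (my own illustration, not from the post; the gain value and update rule are assumptions). With nonzero output gain the loop reduces error; with zero gain, i.e. no control system in place, the error simply persists.

```python
# Minimal sketch of error in a one-dimensional control loop: error is
# the difference between the reference and perceptual signals. With a
# working output function the loop shrinks it; with zero gain -- no
# control system in place -- the error persists unchanged.

def final_error(reference, perception, gain, steps=100):
    """Run the loop and return the remaining error."""
    for _ in range(steps):
        error = reference - perception
        perception += gain * error   # action feeds back on the perception
    return reference - perception

print(final_error(10.0, 0.0, gain=0.1))  # shrinks toward zero
print(final_error(10.0, 0.0, gain=0.0))  # stays at 10.0: persistent error
```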

Bruce Gregory

"Great Doubt: great awakening. Little Doubt: little awakening. No
Doubt: no awakening."

[From Bill Powers (2004.09.05.1444 MDT)]

Bruce Gregory (2004.0905.1628)--

Error is defined as the difference between a perceptual signal and a
reference signal. Normally we think of these in the context of a
control system. But there seems to be no reason that we cannot
experience errors even when we lack the ability to reduce the
difference between a perceptual signal and a reference signal -- even
when there is no control system in place. We may want to be more
athletic, more attractive, or more intelligent than we are, even though
no conceivable reorganization could bring this about. So persisting
errors need not be associated with conflict. Or have I missed the
conflict?

I think this comes under the heading of errors due to lack of a means of
correcting them. As Rick pointed out, this can lead to bad emotions just as
much as conflict does; it's not the conflict that is the problem, but the
error that results from it.

Best,

Bill P.

[From Bruce Gregory (2004.0905.1845)]

Bill Powers (2004.09.05.1444 MDT)

I think this comes under the heading of errors due to lack of a means of
correcting them. As Rick pointed out, this can lead to bad emotions just as
much as conflict does; it's not the conflict that is the problem, but the
error that results from it.

Fair enough. This is consistent with what my guru told me many moons
ago. An upset, he said, is the result of one of three things: (1) a
thwarted intention (a failure of control); (2) an unfulfilled
expectation (an error with no means to correct it); or (3) an
undelivered communication (LPT -- an error with no means to reduce it).

Bruce Gregory

"Great Doubt: great awakening. Little Doubt: little awakening. No
Doubt: no awakening."

[Martin Taylor 2004.09.05.16.40]

[From Bruce Gregory (2004.0905.1628)]

Error is defined as the difference between a perceptual signal and a
reference signal. Normally we think of these in the context of a
control system. But there seems to be no reason that we cannot
experience errors even when we lack the ability to reduce the
difference between a perceptual signal and a reference signal -- even
when there is no control system in place. We may want to be more
athletic, more attractive, or more intelligent than we are, even though
no conceivable reorganization could bring this about. So persisting
errors need not be associated with conflict. Or have I missed the
conflict?

Conflict isn't the only reason for persistent error, as you point out.

Let's go back to first principles.

Think about a simulation hierarchy (to avoid any issues as to whether
HPCT represents biological reality). Initially all the connections are
random. We choose to make persistent and (particularly) increasing
error in any region of the network the one and only criterion for
reorganization (that is the only "intrinsic variable" in this
simulation thought experiment), and we place the hierarchy in an
environment with lots of complex possibilities for feedback from the
actions of the hierarchy back to its sensor systems.

Since all connections are random, the perceptions to be
controlled are arbitrary, and the actions caused by error in any
one control system are also arbitrary. Some of the connections will
give rise to actions that create positive feedback, some to actions
that create negative feedback, but in almost all cases the gain is
very low (see http://www.mmtaylor.net/PCT/Mutuality/intrinsic.html
and following pages for more discussion of this).

Because of the choice of "intrinsic variable", those control systems
that show positive feedback (and those at higher levels that use
their perceptual signals) will tend to reorganize, changing their
connections and/or the weights associated with their connections. In
other words, they may alter what they perceive and how they act to
influence what they perceive.
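The reorganization scheme just described can be sketched in a few lines (my own illustrative construction; all numbers, and the simplification of reorganization to drawing a fresh random output weight whenever error persists, are assumptions, not from the post). A weight that yields negative feedback survives and controls; one that yields positive feedback keeps being reorganized away.

```python
import random

# Sketch of error-driven reorganization: a single unit's output weight
# starts random; as long as control fails, reorganization replaces it
# with a new random weight. Only weights producing stable negative
# feedback stop the process. All parameters are illustrative.

def episode_error(weight, reference=5.0, steps=20):
    """Short control episode; returns the final absolute error."""
    perception = 0.0
    for _ in range(steps):
        error = reference - perception
        perception += weight * error                  # act via environment
        perception = max(-1e6, min(1e6, perception))  # keep values finite
    return abs(reference - perception)

def reorganize(trials=300, seed=1):
    rng = random.Random(seed)
    weight = rng.uniform(-2.0, 2.0)        # initially random connection
    for _ in range(trials):
        if episode_error(weight) < 0.01:   # control achieved: keep weight
            break
        weight = rng.uniform(-2.0, 2.0)    # error persists: reorganize
    return episode_error(weight)

print(reorganize())  # small residual error once a stable weight is found
```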

How long will it take before all systems are able to sustain control
over their perceptual variables? That's a mathematical question, the
answer to which is hard (for me) to analyze. But it does have certain
characteristics that lead me to believe that the answer involves a
phase change. If the available action degrees of freedom are fewer
than the number of different perceptions (i.e. the environment is too
simple), then the answer is never. There will _always_ be conflict
internal to the hierarchy. If the environment offers far more degrees
of freedom for action than there are independent perceptual
variables, then effective control will come about quite quickly,
probably scaling with the number of perceptions to be controlled (or
some polynomial function thereof).
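The degrees-of-freedom point can be made concrete with a linear toy model (my own example; the matrices and reference values are arbitrary assumptions). When independent perceptions outnumber action degrees of freedom, no choice of actions zeroes every error, so some error persists no matter how the system reorganizes; with enough action degrees of freedom, all errors can be driven to zero.

```python
import numpy as np

# Perceptions are linear functions of actions: p = A @ a. Each column
# of A is one action degree of freedom; each row, one perception.
refs = np.array([1.0, 2.0, 0.0])  # three fixed perceptual references

A_scarce = np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [1.0, 1.0]])        # 3 perceptions, only 2 actions
A_ample = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [1.0, 1.0, 1.0]])    # 3 perceptions, 3 actions

results = {}
for A, label in [(A_scarce, "2 action DOF"), (A_ample, "3 action DOF")]:
    best_actions, *_ = np.linalg.lstsq(A, refs, rcond=None)
    residual = np.linalg.norm(A @ best_actions - refs)  # unavoidable error
    results[label] = residual
    print(f"{label}: irreducible error = {residual:.3f}")

# 2 action DOF: irreducible error = 1.732  (sqrt(3): conflict persists)
# 3 action DOF: irreducible error = 0.000  (full control is possible)
```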

For most such situations, there is a critical point, and it is one
found by most evolved systems: the expected time to reach a solution
is infinite, but the probability of finding a solution in finite time
is greater than zero. In other words, the system can luck into a
non-conflicted, full-control reorganization, but it is likely to take
a very long time.
For most of the life of the system, it will have internal conflicts,
with the inevitable associated error. In such cases, error is
persistent, even though theoretically the hierarchy does have the
means to control all its perceptions.

So even in this simple thought experiment, error can persist when
the system does have the resources to exercise full control over all
its perceptual variables. In the biological world, evolution and
adaptation seem normally to lead systems toward the critical
"edge of chaos", the point at which capacities closely match demands.
One would expect, therefore, that biological systems would face an
infinite expected time for reorganization to eliminate internal
conflict, but that for some "enlightened" individuals, it might be
achieved in finite time.

How's that for going out on a flowery limb?

Martin

[From Bruce Gregory (2004.0906.0722)]

Martin Taylor 2004.09.05.16.40

Think about a simulation hierarchy (to avoid any issues as to whether
HPCT represents biological reality). Initially all the connections are
random. We choose to make persistent and (particularly) increasing
error in any region of the network the one and only criterion for
reorganization (that is the only "intrinsic variable" in this
simulation thought experiment), and we place the hierarchy in an
environment with lots of complex possibilities for feedback from the
actions of the hierarchy back to its sensor systems.

If error is the only intrinsic variable, I am having trouble
understanding what error refers to. How do reference levels arise?

Bruce Gregory

"Great Doubt: great awakening. Little Doubt: little awakening. No
Doubt: no awakening."

[Martin Taylor 2004.09.06.10.47]

[From Bruce Gregory (2004.0906.0722)]

Martin Taylor 2004.09.05.16.40

Think about a simulation hierarchy (to avoid any issues as to whether
HPCT represents biological reality). Initially all the connections are
random. We choose to make persistent and (particularly) increasing
error in any region of the network the one and only criterion for
reorganization (that is the only "intrinsic variable" in this
simulation thought experiment), and we place the hierarchy in an
environment with lots of complex possibilities for feedback from the
actions of the hierarchy back to its sensor systems.

If error is the only intrinsic variable, I am having trouble
understanding what error refers to. How do reference levels arise?

Remember, I'm talking specifically about a very simple thought
experiment involving a strict HPCT-style hierarchy. In it, the top
reference levels can be anything you want, but they don't vary during
the course of the experiment. Any reference values at levels below
the top are whatever the outputs of the ECUs at the level above
combine to generate.
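That cascade can be sketched directly (my own illustration; the gains, perceptual values, and output weights are arbitrary assumptions): each lower-level reference is just a weighted combination of the outputs of the elementary control units (ECUs) one level up.

```python
# Sketch of how reference signals arise below the top of an HPCT-style
# hierarchy: top-level references are fixed; each lower-level reference
# is a weighted sum of the outputs of the ECUs at the level above.

def ecu_output(reference, perception, gain):
    """One ECU: output proportional to its error signal."""
    return gain * (reference - perception)

# Fixed top-level references (zero, as in the thought experiment).
top_refs = [0.0, 0.0]
top_percepts = [0.3, -0.4]   # whatever the top level currently perceives
top_outputs = [ecu_output(r, p, gain=2.0)
               for r, p in zip(top_refs, top_percepts)]

# Output weights: how the two top-level outputs combine to set each of
# three lower-level references (weights here are arbitrary examples).
weights = [[1.0, 0.5],
           [0.0, 1.0],
           [-1.0, 2.0]]
lower_refs = [sum(w * o for w, o in zip(row, top_outputs))
              for row in weights]
print(lower_refs)  # three references generated, none stored explicitly
```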

Let's say that the top-level references are all zero and stay that
way. It's irrelevant to the question at issue, which is whether the
system can maintain control in the environment that it senses and
acts upon. That depends a lot on the relation between sensory and
action degrees of freedom (which, of course, means degrees of freedom
per second, not a static value). Action degrees of freedom might be
restricted by the number of effectors available, or by restriction in
the number of feedback paths available through the environment from
effectors to sensors. In a natural environment, those feedback paths
are likely to have complicated spectral properties, which complicates
the matter, but for the thought experiment, I don't think we need to
worry about that.

Martin