[Martin Taylor 2004.09.05.16.40]
[From Bruce Gregory (2004.0905.1628)]
Error is defined as the difference between a perceptual signal and a
reference signal. Normally we think of these in the context of a
control system. But there seems to be no reason that we cannot
experience errors even when we lack the ability to reduce the
difference between a perceptual signal and a reference signal -- even
when there is no control system in place. We may want to be more
athletic, more attractive, or more intelligent than we are, even though
no conceivable reorganization could bring this about. So persisting
errors need not be associated with conflict. Or have I missed the
point?

Conflict isn't the only reason for persistent error, as you point out.
Let's go back to first principles.
Think about a simulation hierarchy (to avoid any issues as to whether
HPCT represents biological reality). Initially all the connections are
random. We choose to make persistent and (particularly) increasing
error in any region of the network the one and only criterion for
reorganization (that is the only "intrinsic variable" in this
simulation thought experiment), and we place the hierarchy in an
environment with lots of complex possibilities for feedback from the
actions of the hierarchy back to its sensor systems.
Since all connections are random, the perceptions to be controlled
are arbitrary, and the actions caused by error in any
one control system are also arbitrary. Some of the connections will
give rise to actions that create positive feedback, some to actions
that create negative feedback, but in almost all cases the gain is
very low (see http://www.mmtaylor.net/PCT/Mutuality/intrinsic.html
and following pages for more discussion of this).
Because of the choice of "intrinsic variable", those control systems
that show positive feedback (and those at higher levels that use
their perceptual signals) will tend to reorganize, changing their
connections and/or the weights associated with their connections. In
other words, they may alter what they perceive and how they act to
influence what they perceive.
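That reorganization rule can be sketched in miniature with a single control unit (Python; the environment gain, the "slowing" factor, the 0.99 persistence threshold, and all other numbers here are my own toy assumptions, not part of the argument):

```python
import random

random.seed(1)

# One control unit whose perception p is fed back from its own action
# through a fixed environment gain (an assumption of this sketch).
# E-coli-style reorganization: whenever the error fails to shrink,
# redraw the output connection weight at random.
ref, env_gain, slow = 1.0, 2.0, 0.1
w = random.uniform(-1, 1)          # random initial output connection
p, prev_abs_e = 0.0, float("inf")
for step in range(2000):
    e = ref - p                    # error = reference - perception
    p += slow * env_gain * w * e   # action feeds back through environment
    if abs(e) >= 0.99 * prev_abs_e:
        w = random.uniform(-1, 1)  # persistent error -> reorganize
    prev_abs_e = abs(e)
print(abs(ref - p))                # near zero: negative feedback was found
```

A weight of the wrong sign produces positive feedback and growing error, so the unit keeps redrawing until it stumbles on a connection that gives negative feedback, after which reorganization stops and the error collapses.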
How long will it take before all systems are able to sustain control
over their perceptual variables? That's a mathematical question, the
answer to which is hard (for me) to analyze. But it does have certain
characteristics that lead me to believe that the answer involves a
phase change. If the available action degrees of freedom are fewer
than the number of different perceptions (i.e. the environment is too
simple), then the answer is never. There will _always_ be conflict
internal to the hierarchy. If the environment offers far more degrees
of freedom for action than there are independent perceptual
variables, then effective control will come about quite quickly,
probably scaling with the number of perceptions to be controlled (or
some polynomial function thereof).
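The degrees-of-freedom point can be made concrete with the smallest possible case (pure Python; the two-unit setup, gains, and references are assumptions of this sketch): two units whose perceptions depend on one and the same environmental variable cannot both succeed no matter how long they run, while giving each its own variable removes the conflict.

```python
# Two elementary control units with references +1 and -1. If "shared",
# both perceptions depend on the single environmental variable v1 and
# both actions push on it (fewer action degrees of freedom than
# independent perceptions); otherwise each unit gets its own variable.
def run(shared=True, steps=2000, k=0.1):
    v1 = v2 = 0.0
    for _ in range(steps):
        p1 = v1
        p2 = v1 if shared else v2
        e1, e2 = 1.0 - p1, -1.0 - p2
        v1 += k * e1
        if shared:
            v1 += k * e2   # both actions fight over the same variable
        else:
            v2 += k * e2
    return abs(1.0 - p1), abs(-1.0 - p2)

print(run(shared=True))    # both errors stuck near 1: irreducible conflict
print(run(shared=False))   # both errors driven to ~0
```

In the shared case the two actions cancel and v1 settles midway between the references, leaving a persistent error of about 1 in each unit forever, which is the "answer is never" regime.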
For most such situations, there is a critical point, and it is one
found by most evolved systems: the expected time to find a solution
is infinite, but the probability of finding one in finite time is
greater than zero. In other words, the system can luck into a
non-conflicted, full-control reorganization, but it is likely to take
a very long time.
For most of the life of the system, it will have internal conflicts,
with the inevitable associated error. In such cases, error is
persistent, even though theoretically the hierarchy does have the
means to control all its perceptions.
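A standard toy process with the same flavor — success reachable in finite time, yet with a divergent expected waiting time — is the first passage of a symmetric random walk (Python sketch; the cap and sample count are arbitrary choices of mine):

```python
import random

random.seed(2)

# First time a symmetric random walk started at 0 reaches +1.
# It gets there with probability 1, but the expected waiting
# time diverges: a few runs take enormously long.
def hitting_time(cap=10_000):
    pos, t = 0, 0
    while pos < 1 and t < cap:
        pos += random.choice((-1, 1))
        t += 1
    return t if pos == 1 else None   # None: not found within the cap

times = [hitting_time() for _ in range(1000)]
found = [t for t in times if t is not None]
print(len(found) / len(times))       # most runs do luck into a solution
print(sorted(found)[len(found) // 2])  # the median wait is tiny
print(sum(found) / len(found))       # the mean is dragged up by rare
                                     # very long waits, and grows with cap
```

That is the shape of the claim above: the typical "enlightened" individual succeeds quickly, but the average over all individuals is dominated by those still searching.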
So even in this simple thought experiment, error can persist when
the system does have the resources to exercise full control over all
its perceptual variables. In the biological world, evolution and
adaptation seem normally to lead systems to approach the critical
"edge of chaos", that point at which capacities closely match demand.
One would expect, therefore, that for biological systems the expected
time for reorganization to eliminate internal conflict would be
infinite, but that for some "enlightened" individuals, it might be
achieved in finite time.
How's that for going out on a flowery limb?