Error signals as intrinsic variables (was Why??)

Bill Powers wrote:

[From Bill Powers (2008.09.21.1031 MDT)]

[Martin Taylor 2008.09.21.10.33]

On CSGnet, it is often assumed (and it is no more than an assumption) that the ability to control one or more perceptions is itself an intrinsic variable, and that persistent inability to control represents error in this intrinsic variable.

This isn't quite how I put it. I propose that error signals themselves are intrinsic variables. The reason is that these are the only signals in a control system that always have the same meaning (like "pain signals"), so it is feasible to think of comparators as inherited structures, even though the meaning of the signals they compare and the effects of the error signal output are not inheritable. It is necessary that any process that can be called PCT reorganization be present from the beginning (in the womb or whenever that is), because it has to work in order to construct everything else that is learned -- all the specific systems that have to be learned at all the levels, including all systematic problem-solving methods.
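The fixed meaning of an error signal can be made concrete with a toy sketch (the function and variable names here are mine, purely illustrative, not from any PCT source code): a comparator is the same inheritable piece of structure no matter what the signals it compares happen to represent.

```python
def comparator(reference, perception):
    """An inherited, fixed piece of structure: whatever the two
    input signals represent, the output always means the same
    thing -- how far perception is from reference."""
    return reference - perception

# The same comparator serves loops whose signals mean very
# different things; only the error signal's meaning is constant.
temperature_error = comparator(reference=37.0, perception=36.2)
position_error = comparator(reference=0.0, perception=1.5)
```

Whatever reorganization does downstream, a signal of this kind is always interpretable the same way, which is what makes it a candidate intrinsic variable.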

I have never thought of it this way, and I would like to follow the idea, considering how "error" might evolve into becoming an intrinsic variable in a complex structure. It is clear that the ability to control arbitrary aspects of the perceived environment is unlikely to enhance the likelihood that an organism will survive to propagate its genes; both the perceptions controlled and the actions used to control them must have consistent influences on the variables that are involved in biochemical survival (and on the physical integrity of the organism). The side-effects of control output on those variables are ultimately what matter. So "error" cannot in itself be a fundamental intrinsic variable leading to reorganization.

However, it seems very reasonable to assert that if the "right" perceptual variables are controlled using the "right" kinds of action, then the more effective and consistent the control of those variables, the more consistent the effects on the "survival" variables are likely to be. From this, it makes sense to guess that if a mechanism evolves that has the statistical effect of reducing overall error (or error in a sub-module of a complex control structure), that mechanism will increase the likelihood of the organism surviving to propagate its genes. This would be true only in a structure that had organized so that its control actions did tend to have a consistent influence on the "survival" variables.

Assuming that e-coli-type reorganization is the way that control structures change, it then makes sense that overall error in a control structure or part of that structure would have the same effect as would a "survival" biochemical variable -- it would be an intrinsic variable, albeit a derived one. Reducing error would not, in itself, be a survival asset, but reducing error would be a survival asset in a control structure whose general objectives served to maintain the "survival" variables near their genetically determined reference values.

If, from this, we conclude that "error" is a very plausible intrinsic variable, nevertheless it is an intrinsic variable of a different kind than the "survival" variables. It would be interesting to consider whether it might act in some way as a modulator of reorganization, or whether it would act in parallel with the others as a rate-driver.

As I say in B:CP, reorganization (later refined to E. coli reorganization) is the most basic kind of learning, preceding all other kinds except perhaps learning in the sense of memorizing. If there are other, more systematic, modes of learning, they are constructed by E. coli reorganization, and from then on (being more efficient) they keep errors too small to start local E. coli reorganization. When the environment changes enough to invalidate the systematic methods, errors grow and E. coli reorganization comes into play again.
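The cycle described here -- errors kept small, then an environmental change, then errors growing and E. coli reorganization resuming -- can be sketched in a few lines (a toy illustration under my own assumptions, not Powers's actual reorganization code; the names and numbers are mine).

```python
import random

def ecoli_minimize(f, x, steps, step=0.05, rng=None):
    """Toy E. coli reorganization: keep changing the parameters in
    the current random direction while total error is not rising;
    'tumble' to a new random direction when it rises."""
    rng = rng or random.Random(0)
    d = [rng.uniform(-1, 1) for _ in x]
    best = f(x)
    for _ in range(steps):
        trial = [xi + step * di for xi, di in zip(x, d)]
        val = f(trial)
        if val <= best:              # error not increasing: keep going
            x, best = trial, val
        else:                        # error increased: tumble
            d = [rng.uniform(-1, 1) for _ in x]
    return x, best

# Total "intrinsic error": squared distance of the system's state
# from the state the environment currently demands.
target = [1.0, -2.0, 0.5]
error = lambda x: sum((xi - ti) ** 2 for xi, ti in zip(x, target))

state, e1 = ecoli_minimize(error, [0.0, 0.0, 0.0], steps=3000)
# The environment changes: errors grow, and reorganization resumes
# from wherever the system currently is.
target = [-1.0, 1.0, 2.0]
state, e2 = ecoli_minimize(error, state, steps=3000)
```

Both runs end with small error: the same blind tumble-and-run rule re-converges after the environment invalidates the old organization.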

That's been the idea since the start; I simply haven't tried to model any of the systematic means of reorganization that might develop. There doesn't seem to be any basis for expecting any particular method to appear and I don't have any models working at a high enough level for reorganization to produce "methods." And anyway, I have my hands full just trying to get any sort of fancy reorganization working.

One fact (in the sense that you often use the word) is that specific errors are often perceptible. I can perceive that my hand moving to pick up a cup is not yet at the handle, and at the same time perceive that I want my hand to be at the handle. If there is a "reorganization engine", it is reasonable to suppose that it can use such perceptions, and is itself a control system or complex control structure whose actions affect not the outer world environment, but the structure of the control system that influences the outer environment.

The development of a reorganization engine must precede, evolutionarily, the development of "error" as an intrinsic variable, and "survival" variables must precede the existence of a reorganization engine (other than evolution itself).

This line of thinking leads to the supposition (and we are getting more speculative the further we go along the train of thought) that the reorganization system is likely to evolve to act in a modular fashion, influencing the structure of the control system primarily in those areas that affect some particular intrinsic variable.

Thank you for correcting my earlier comment.

Martin

[From Bill Powers (2008.10.04.0640 MDT)]

I have never thought of it this
way, and I would like to follow the idea, considering how
“error” might evolve into becoming an intrinsic variable in a
complex structure. It is clear that the ability to control arbitrary
aspects of the perceived environment is unlikely to enhance the
likelihood an organism will survive to propagate its genes; both the
perceptions controlled and the actions used to control them must have
consistent influences on the variables that are involved in biochemical
survival (and on the physical integrity of the organism). The
side-effects of control output on those variables are ultimately what
matters. So “error” cannot in itself be a fundamental intrinsic
variable leading to reorganization.

“Error itself” is an abstraction. Error signals, on the other
hand, are physical variables, and can definitely be intrinsic variables,
as can any physical variable important to the life-support systems. The
way I see it is that perceptual input and output functions (and
interlevel connections) are the entities being reorganized, so what is
acquired is always control over some aspect of the perceived environment
or the organism itself. It’s not necessary that every variable that we
learn to control have some survival value – only that enough of them
do.

Look at it the other way around. With error as an intrinsic variable,
reorganization can bring the control system from a state of zero control
to a state of almost optimal control (see Chapter 7). In particular,
experiment with the ArmControlReorg program of Chapter 8. You can
“train” the 14 control systems using only disturbances
(constant reference signals), then after the errors have all been greatly
reduced, turn the disturbances and reorganization off, and turn on the
patterned variations in the reference signal. Control will be of the same
quality as if learning had been done with the reference signal variations
occurring. So the system is truly not learning behaviors or particular
responses to particular inputs; it is learning to control. I think we can
say that learning to control whatever variable is involved has survival
value (but see below) in the long run – more value than not being able
to optimize any control system.
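A one-dimensional analogue of that experiment can be sketched as follows (this is my own toy version, not the actual ArmControlReorg program; the loop structure and all constants are assumptions): tune a loop's gain against disturbances alone, freeze it, and then let it track a reference pattern it never saw during training.

```python
import math

def run_loop(gain, ref, dist, steps=2000, dt=0.01):
    """One control loop with an integrating output function.
    Toy environment: perception = output + disturbance.
    Returns mean squared error over the run."""
    output, total = 0.0, 0.0
    for t in range(steps):
        error = ref(t) - (output + dist(t))
        output += gain * dt * error
        total += error * error
    return total / steps

zero = lambda t: 0.0
wave = lambda t: math.sin(2 * math.pi * t / 400)

# "Training": constant (zero) reference, disturbances only. The gain
# is adjusted by a one-dimensional E. coli rule: keep the direction
# of change while error falls, reverse it when error rises.
gain, direction, step = 0.0, 1.0, 2.0
best = run_loop(gain, zero, wave)
for _ in range(60):
    trial = gain + direction * step
    mse = run_loop(trial, zero, wave)
    if mse <= best:
        gain, best = trial, mse
    else:
        direction = -direction

# Reorganization off; now the reference varies and the disturbance
# is gone -- conditions never seen during training.
test_mse = run_loop(gain, wave, zero)
```

The tracking error stays small even though the varying reference was never present while the gain was being tuned: what was acquired is the ability to control, not a stored response to particular inputs.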

Finally, reorganizing one level of system so it can control well makes it
possible for higher-level systems to use the lower level for higher
purposes.

However, it seems very
reasonable to assert that if the “right” perceptual variables
are controlled using the “right” kinds of action, then the more
effective and consistent the control of those variables, the more
consistent the effects on the “survival” variables are likely
to be. From this, it makes sense to guess that if a mechanism evolves
that has the statistical effect of reducing overall error (or error in a
sub-module of a complex control structure), that mechanism will increase
the likelihood of the organism surviving to propagate its genes. This
would be true only in a structure that had organized so that its control
actions did tend to have a consistent influence on the
“survival” variables.

Yes, that’s my point, too. But I don’t think that controlling
“neutral” variables (neutral with respect to survival) would
entail so much cost as to negate the value of being able to optimize a
control system.

Assuming that e-coli-type
reorganization is the way that control structures change, it then makes
sense that overall error in a control structure or part of that structure
would have the same effect as would a “survival” biochemical
variable – it would be an intrinsic variable, albeit a derived one.
Reducing error would not, in itself, be a survival asset, but reducing
error would be a survival asset in a control structure whose general
objectives served to maintain the “survival” variables near
their genetically determined reference values.

Agreed. “Intrinsic”, as I use the term, is synonymous with
“internal.” It has no further significance, other than the fact
that some internal variables affect the ability of the organism to stay
alive. Natural selection (which may also work on the E. coli principle
and be a subsystem in the genome) produces the reference signals (and the
implied comparators), as well as the effects of intrinsic error signals
in causing random changes of organization.

If, from this, we conclude that
“error” is a very plausible intrinsic variable, nevertheless it is
an intrinsic variable of a different kind than the “survival”
variables. It would be interesting to consider whether it might act in
some way as a modulator of reorganization, or whether it would act in
parallel with the others as a rate-driver.

Again we’re thinking in the same directions. There is a similar possibility
suggested by experience with MOL. Error seems to attract attention, and I
have hypothesized that this helps localize reorganization where there is
a need for it rather than letting reorganization act to disrupt working
systems. If we say that the rate of reorganization depends on intrinsic
errors of the former physiological or “survival” kind, the
effects of reorganization might be localized by a second level that
detects large error signals and directs any existing reorganizing process
to the area where they are. If there is no underlying intrinsic error,
directing reorganization to error locations will cause no
changes.

I offered David Goldstein a perhaps useful image. If you imagine
reorganizing effects as coming out of a garden hose, it is intrinsic
error that turns the flow of water – reorganizing effects – up and
down, and hierarchical error, via attention, that points the hose to the
place that needs watering. That overall organization, as you suggest,
would be a late evolutionary development.
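The hose image translates directly into a toy update rule (illustrative only; the function name and constants are mine): intrinsic error sets the amount of random change, and the largest hierarchical error signal picks where it is applied.

```python
import random

def reorganize_step(params, local_errors, intrinsic_error, rng):
    """Garden-hose sketch: intrinsic error controls the flow (the
    size of the random change); attention to the largest local
    error signal aims the hose (which parameter gets changed)."""
    target = max(range(len(params)), key=lambda i: abs(local_errors[i]))
    flow = 0.1 * intrinsic_error          # no intrinsic error, no water
    params[target] += flow * rng.uniform(-1, 1)
    return target

rng = random.Random(0)
params = [1.0, 1.0, 1.0]
# With zero intrinsic error, pointing the hose changes nothing:
reorganize_step(params, [0.1, 2.0, 0.5], intrinsic_error=0.0, rng=rng)
# With intrinsic error present, only the worst module is changed:
changed = reorganize_step(params, [0.1, 2.0, 0.5], intrinsic_error=1.0, rng=rng)
```

Directing reorganization at large hierarchical errors while intrinsic error is zero causes no changes, exactly as the paragraph above suggests.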

I think we should drop the “survival” idea. The implication of
it is “survival to reproduce,” but we know now that there is
another way to “mutate” that does not require death to
eliminate unworkable organizations. And more reproduction is not
necessarily good for a species’ ability to control what happens to it.
The process of natural selection may also be an E. coli reorganization
phenomenon under control of the genome. Only E. coli reorganization can
explain periods of evolution during which forms simply change in an
orderly way (such as size) while retaining most features (such as shapes
of bones) identifiably the same. E. coli swimming in a straight line,
between tumbles.

Nature does not literally do any “selecting”. It doesn’t say
“Let me see – which organism shall I choose to let live?” The
selection is being done by the organism, at first simply on the basis of
the effect of disturbances on the accuracy of replication, but later on
the basis of protecting critical variables (Ashby’s term) from
disturbance. See my World Futures article on the origins of
purpose.

The “standard model” of evolution uses gene-swapping to
introduce variations that are then selected by their consequences, as
Skinner would have put it. It has been sufficiently demonstrated that
this method can indeed lead to convergence on improved organizations. But
it seems to require a lot of help from the programmer, who has to arrange
for organisms to survive which only make a partial move toward what is
actually required for survival. Beer’s cockroach model, for example,
allows cockroaches to reproduce on the basis of turning slightly toward
the direction of food, or (I would suppose) partly away from an
approaching danger. Unfortunately, actual survival can’t be achieved by
any individual that way.

What this external aid does is to create a gradient of effects that can
give direction to the changes in organization resulting from
gene-swapping. Without this gradient it’s impossible to impose order on
the process of reorganization, so the results will then depend strictly
on random trials which either succeed or fail with nothing between those
extremes. The E. coli section of Chapter 7 shows the superior
efficiency of the E. coli method over the purely random, live-or-die,
approach: the improvement is almost two orders of magnitude in only two
dimensions, and grows rapidly as more variables are involved. I believe
the reorganization model is going to replace the standard model.
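That comparison can be sketched numerically (a toy version under my own assumptions, not the Chapter 7 demonstration itself): pit a biased random walk against pure succeed-or-fail random trials, with the same number of evaluations and the optimum at the origin.

```python
import math, random

def ecoli(dim, trials, rng):
    """E. coli: keep the current direction while distance to the
    optimum is shrinking; tumble when it grows."""
    x = [1.0] * dim
    d = [rng.uniform(-1, 1) for _ in range(dim)]
    prev = sum(v * v for v in x)
    for _ in range(trials):
        x = [xi + 0.01 * di for xi, di in zip(x, d)]
        cur = sum(v * v for v in x)
        if cur > prev:
            d = [rng.uniform(-1, 1) for _ in range(dim)]
        prev = cur
    return math.sqrt(prev)

def live_or_die(dim, trials, rng):
    """Pure random trials: every candidate is an unrelated random
    organization that simply succeeds or fails, nothing between."""
    best = float(dim)                     # squared distance of [1,...,1]
    for _ in range(trials):
        x = [rng.uniform(-1, 1) for _ in range(dim)]
        best = min(best, sum(v * v for v in x))
    return math.sqrt(best)

rng = random.Random(7)
e_dist = ecoli(5, 2000, rng)
r_dist = live_or_die(5, 2000, rng)
```

With more dimensions the gap widens: a random point in a high-dimensional cube is almost never near the origin, while the biased walk still homes in.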

One fact (in the sense that you
often use the word) is that specific errors are often perceptible. I can
perceive that my hand moving to pick up a cup is not yet at the handle,
and at the same time perceive that I want my hand to be at the handle. If
there is a “reorganization engine”, it is reasonable to suppose
that it can use such perceptions, and is itself a control system or
complex control structure whose actions affect not the outer world
environment, but the structure of the control system that influences the
outer environment.

I have pondered at length the question of what aspects of the control
model are consciously perceivable, including the aspect you suggest. In
every case so far, I’ve been able to realize that I was simply describing
more perceptions. The motion of your hand is a transition-level
perception; “toward” is a relationship, as is
“distance”. What you “want” to be happening is a
perception derived from a reference signal via the imagination
connection. Perceiving a difference between one perception and another
one is not perceiving an error signal. An error signal is a single
signal, not two signals. A difference between two signals is a
relationship, even if one of the signals is imaginary, or both. That’s a
perceptual signal, not an error signal.

So far, I have not been able to find any case that can’t be reduced in a
straightforward way, from the basic definitions of the model, to the
experience of a perceptual signal. Even pain, which surely seems to be a
direct experience of an error signal if any experience is, dissolves into
an information-carrying perceptual signal plus perceptions of attempts to
escape it. One way to tolerate pain, many people have discovered and
written, is to cease trying to escape from it – sort of thinking
“yes, that’s the right level of pain.” Then all that is left is
the information in the perceptual signal, which is not in itself anything
more than information. This works very well for me in the dentist’s
chair. It works for kids sticking rings through their noses, and it
worked for that incredible guy who cut off his hand with a pocket-knife
to avoid dying of thirst with his hand trapped under a huge boulder. It
also explains how masochists can exist.

The development of a
reorganization engine must precede, evolutionarily, the development of
“error” as an intrinsic variable, and “survival”
variables must precede the existence of a reorganization engine (other
than evolution itself).

Yes, I agree with that. I attempted to work out that progression in my
Origins of Purpose paper. But I did not use “survival” as the
criterion; I used “accuracy of replication.” Whatever
reorganization makes the accuracy of replication increase will help the
organism (or molecule) survive longer – almost by definition. It could
hardly be otherwise. An organization that counteracts disturbances that
tend to alter accuracy of replication will retain its form, including the
resistance to disturbances of that kind, longer than one that does
not.

This line of thinking leads to
the supposition (and we are getting more speculative the further we go
along the train of thought) that the reorganization system is likely to
evolve to act in a modular fashion, influencing the structure of the
control system primarily in those areas that affect some particular
intrinsic variable.

There we are: two minds with but a single thought, which makes both of us
half-wits.

Isn’t it strange how people can want everyone to think alike, when it is
only independent thinkers who find their own routes to conclusions who
reassure us when they agree with us?

Best,

Bill P.

···

At 07:35 PM 10/3/2008 -0400, Martin Taylor wrote:

[Martin Taylor 2008.10.04.12.31]

Sorry about forgetting the date stamp on my last...

I am encouraged about the degree of agreement between us on intrinsic variables. But the agreement isn't 100%, yet. Let us hope it never gets to that level, because therein lies complacency.

[From Bill Powers (2008.10.04.0640 MDT)]

It is clear that the ability to control arbitrary aspects of the perceived environment is unlikely to enhance the likelihood that an organism will survive to propagate its genes; both the perceptions controlled and the actions used to control them must have consistent influences on the variables that are involved in biochemical survival (and on the physical integrity of the organism). The side-effects of control output on those variables are ultimately what matter. So "error" cannot in itself be a fundamental intrinsic variable leading to reorganization.

"Error itself" is an abstraction. Error signals, on the other hand, are physical variables, and can definitely be intrinsic variables, as can any physical variable important to the life-support systems. The way I see it is that perceptual input and output functions (and interlevel connections) are the entities being reorganized, so what is acquired is always control over some aspect of the perceived environment or the organism itself.

So far, so good.

It's not necessary that every variable that we learn to control have some survival value -- only that enough of them do.

Again, we agree. But my essential point was the flip side of that, that the control of some perceptions must stabilize survival variables. I restate it in the form of bullet points, but it's really only one concept. I will start with point zero:

(0 -- foundational concepts) The initial form of "life" must be a control system that controls a unidimensional variable. Since it has no hierarchy, it is really just a simple negative feedback loop. The variable that it controls is, by definition, internal to itself. The stability of this variable is what defines the continued existence of this ultimately simple "organism". The actions of this organism, again by definition, are its effects on the environment that oppose some influence from outside the loop which would otherwise alter the value of the variable.

(1) Any perceptual variable in a high-dimensional space of possibilities is almost certainly nearly orthogonal to any other chosen at random. The environment is, we believe, just such a high-dimensional space of possibilities (maybe even infinite-dimensional, though that's unlikely).

(2) The variables that need to be controlled are those that the organism needs if it is to survive at least long enough to reproduce -- not necessarily every single organism of a species, but enough of them that the pattern (structure, genome,...) continues to exist. Call those variables "survival variables".

(3) The influences of the environment on any organism are many and varied, but only relatively few of them would have significant effects on the survival variables if left un-countered. Some, however, would be incompatible with survival if left un-countered. The rates of chemical processes change with temperature, for example, so it is likely that excessive heat or cold would affect the activity of any system based on chemical processes. (Yes, I realize spores can survive the cold of outer space, but they are not very active there).

(4) If an organism controls a random influence on its surface (e.g. a perception based on sensory input), that control, no matter how precise, is almost certain NOT to affect the stability of its survival variables.

(5) If an organism controls an influence on its surface that does happen to affect a survival variable, then control of that variable is likely to improve the stability of the survival variable in question, and hence to increase the likelihood that organisms descended from this will continue to exist (assuming that the propensity to control that extrinsic variable is heritable). Call these usefully controlled extrinsic variables ("perceptions" based on external influences) "survival perceptions".

(6) "Reorganization" of any kind is not required for the foregoing to be true.

(7) However, for there to exist organisms that do control "survival perceptions", either there must be a lot of initial variability among the organisms, or what an organism controls and the means by which it controls it must not only be variable, but the changes must tend to stop when perceptual control does stabilize a survival variable. The former is plausible in the earliest stages of life on Earth, but the latter supposes a mechanism that is unlikely to precede the existence of perceptual controls that by happenstance do stabilize a survival variable.
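Point (1), on which point (4) rests, is easy to check numerically (a quick illustration; the dimensions and sample counts are arbitrary choices of mine): the cosine between two randomly chosen directions shrinks toward zero as dimensionality grows.

```python
import math, random

def cosine(u, v):
    """Cosine of the angle between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

rng = random.Random(42)

def mean_abs_cos(dim, pairs=50):
    """Average |cosine| between random direction pairs in dim dimensions."""
    total = 0.0
    for _ in range(pairs):
        u = [rng.gauss(0, 1) for _ in range(dim)]
        v = [rng.gauss(0, 1) for _ in range(dim)]
        total += abs(cosine(u, v))
    return total / pairs

low = mean_abs_cos(3)      # 3 dimensions: noticeably non-orthogonal
high = mean_abs_cos(1000)  # 1000 dimensions: almost orthogonal
```

In a 1000-dimensional space the typical |cosine| is on the order of 1/sqrt(1000), so a randomly chosen controlled variable almost certainly has next to no projection onto any given survival variable.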

Look at it the other way around. With error as an intrinsic variable, reorganization can bring the control system from a state of zero control to a state of almost optimal control (see Chapter 7).

Yes.

I think we can say that learning to control whatever variable is involved has survival value (but see below) in the long run -- more value than not being able to optimize any control system.

Yes, more value than not being able to optimize any control system, but again, the flip side is that learning to control a random function of environmental variables has a probability near zero of having survival value.

Finally, reorganizing one level of system so it can control well makes it possible for higher-level systems to use the lower level for higher purposes.

Yes. That is critical in a complex control structure.

However, it seems very reasonable to assert that if the "right" perceptual variables are controlled using the "right" kinds of action, then the more effective and consistent the control of those variables, the more consistent the effects on the "survival" variables are likely to be. From this, it makes sense to guess that if a mechanism evolves that has the statistical effect of reducing overall error (or error in a sub-module of a complex control structure), that mechanism will increase the likelihood of the organism surviving to propagate its genes. This would be true only in a structure that had organized so that its control actions did tend to have a consistent influence on the "survival" variables.

Yes, that's my point, too. But I don't think that controlling "neutral" variables (neutral with respect to survival) would entail so much cost as to negate the value of being able to optimize a control system.

I wasn't considering "cost", though it is indeed a consideration, given that control of almost all possible functions of environmental variables has no survival value, whereas control of a very few has considerable survival value.

Among all the functions of environmental variables, the side-effects of controlling a randomly chosen one are probably as likely to disturb a survival variable as they are to stabilize it. However, unless the variable being controlled has a direct influence on the survival variable, it is hard to see how controlling it would tend to destabilize a survival variable, other than if the controlled variable happened to be a part of a loop of influence in which the survival variable took part.

Assuming that e-coli-type reorganization is the way that control structures change, it then makes sense that overall error in a control structure or part of that structure would have the same effect as would a "survival" biochemical variable -- it would be an intrinsic variable, albeit a derived one. Reducing error would not, in itself, be a survival asset, but reducing error would be a survival asset in a control structure whose general objectives served to maintain the "survival" variables near their genetically determined reference values.

Agreed. "Intrinsic", as I use the term, is synonymous with "internal." It has no further significance, other than the fact that some internal variables affect the ability of the organism to stay alive.

I'd say that would be a fairly significant difference. Wouldn't you?

Natural selection (which may also work on the E. coli principle and be a subsystem in the genome) produces the reference signals (and the implied comparators), as well as the effects of intrinsic error signals in causing random changes of organization.

Yes, once the general structure has evolved to a level of complexity that would support that kind of thing. I'm still thinking of the multi-level control structures that exist within the membrane of a single cell. What gets in and out of a single-celled organism is already pretty structured.

If, from this, we conclude that "error" is a very plausible intrinsic variable, nevertheless it is an intrinsic variable of a different kind than the "survival" variables. It would be interesting to consider whether it might act in some way as a modulator of reorganization, or whether it would act in parallel with the others as a rate-driver.

Again we're thinking in the same directions. There is a similar possibility suggested by experience with MOL.

Here we are getting into a different realm of discourse. It's an interesting one, and one about which I have thought (and contributed some thoughts to CSGnet years ago). But it's not one I want to pursue here. Here I'm thinking in the more "mechanistic" mode, that has no concern with consciousness or even humanity. It's about the evolution of control structures and their reorganization capabilities, as such.

If we say that the rate of reorganization depends on intrinsic errors of the former physiological or "survival" kind, the effects of reorganization might be localized by a second level that detects large error signals and directs any existing reorganizing process to the area where they are. If there is no underlying intrinsic error, directing reorganization to error locations will cause no changes.

(I eliminated a sentence where you said this was hypothesised). This is a fair speculation, but I wonder whether the last sentence is supportable. My expectation would be that reorganization of any kind is likely to influence the action output of a control structure. Changes in the action output will alter the side-effects of control. Since these side effects may influence some survival variables, reorganization when there is no underlying intrinsic error (taking that to mean in the survival variables, in this context) is likely to change the stability or the value of the survival variables.

I offered David Goldstein a perhaps useful image. If you imagine reorganizing effects as coming out of a garden hose, it is intrinsic error that turns the flow of water -- reorganizing effects -- up and down, and hierarchical error, via attention, that points the hose to the place that needs watering. That overall organization, as you suggest, would be a late evolutionary development.

A reasonable image. But here we come to a point where I fear we disagree fairly strongly.

I think we should drop the "survival" idea. The implication of it is "survival to reproduce," but we know now that there is another way to "mutate" that does not require death to eliminate unworkable organizations. And more reproduction is not necessarily good for a species' ability to control what happens to it.

I find this concept quite teleological. My view of evolution is quite different. A species has no reference to do anything. "Control" is not involved at that level. All that happens is that those species whose members act so that a recurrence cycle of birth, maturity, reproduction and (usually, though not necessarily) death continues through many cycles are the species that are more likely to exist for a long time. If the cycle changes, we say we see a different species, or if it stops, we say the species went extinct. This doesn't mean that individual members of the species all propagate descendants. It only means that enough do that the same cyclic pattern is maintained.

The process of natural selection may also be an E. coli reorganization phenomenon under control of the genome. Only E. coli reorganization can explain periods of evolution during which forms simply change in an orderly way (such as size) while retaining most features (such as shapes of bones) identifiably the same. E. coli swimming in a straight line, between tumbles.

I hate philosophical arguments that assert "Only" this way is possible. I prefer arguments that say "I can think of only ... ". In this particular case, I don't see how an e-coli process would have the effect you propose, as contrasted to different effects, such as changing bone structure while maintaining size. On the other hand, natural selection among a set of similar organisms (say bigger and smaller members of a species) would have the effect you find problematic, if there were environmental factors making, say, bigger members more likely to get eaten, or more likely to find food in periods of drought.

Nature does not literally do any "selecting". It doesn't say "Let me see -- which organism shall I choose to let live?" The selection is being done by the organism, at first simply on the basis of the effect of disturbances on the accuracy of replication, but later on the basis of protecting critical variables (Ashby's term) from disturbance. See my World Futures article on the origins of purpose.

Where is the World Futures article? However, I'm afraid I don't understand "The selection is being done by the organism" at all. Any individual organism is simply a stage in the recurrence cycle of life. Its actions do affect the likelihood that its progeny will exist and will survive to produce progeny, but "accuracy of replication" is hardly likely to be a perceptual variable under control. For the survival of a species, it's not even a good thing for replication to be too accurate, since that tends to lead to vulnerability of the reproductive cycle in situations of environmental change.

The "standard model" of evolution uses gene-swapping to introduce variations that are then selected by their consequences, as Skinner would have put it.

Actually, that's not all that gene mixing does. E-coli is a hill-climbing optimization technique. It is useful precisely in situations where there can be no immediate measure of the slope of the hill. The entity using the e-coli method, whether it be an organism or a reorganizing structure, has only the current and historic values to go by. If the current value is better than the historic, then a good strategy is to keep going. But if the entity could look around and see the hilltop, it could go straight there.

All hill-climbing techniques have a common vulnerability, and e-coli is no exception. In a complex environment, there are many optima, some better than others. The landscape is more like a mountain range than like a simple hill. Any hill-climbing technique is liable to get caught in a local optimum, from which it can't escape because all directions are downhill. It is true that in a high-dimensional space there are likely to be many situations in which most directions lead downhill while a few escape routes still lead uphill, but the problem of local optima nevertheless persists.
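The trap is easy to demonstrate in one dimension. The following toy (entirely mine, not from the discussion) descends a bimodal landscape and halts in whichever basin it starts in:

```python
def greedy_descent(f, x, step=0.01, max_iters=10000):
    """Plain 1-D hill-descent: take a step only when it improves f.
    Stops as soon as both neighbouring steps are worse, i.e. at the
    bottom of whatever basin the start point happens to lie in."""
    for _ in range(max_iters):
        if f(x + step) < f(x):
            x += step
        elif f(x - step) < f(x):
            x -= step
        else:
            break  # no improving move remains: trapped in this basin
    return x

# A bimodal landscape: local minimum near x = +0.96 (f about -0.71),
# deeper global minimum near x = -1.03 (f about -1.31)
landscape = lambda x: x ** 4 - 2 * x ** 2 + 0.3 * x
```

Started at x = 0.5, it settles near the local minimum around x = +0.96 and never reaches the deeper basin near x = -1.03; a gene-swapping jump, by contrast, could land a descendant in the better basin.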

Gene-swapping avoids the problem of local optima, by allowing the possibility of jumping to a quite different part of the landscape, from which another hill-climb may find an optimum better than the original. Pure gene-swapping has a complementary vulnerability, which is that hopping all over the landscape can lead away from a good optimum as readily as toward it. However, if the (multidimensional) landscape has subspaces (spaces of fewer dimensions) that don't interact very much, then it is quite a good way to develop segments that affect the somewhat independent subspaces somewhat independently. That's why artificial genetic algorithms work well in realistic environments.
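As a concrete sketch of the gene-swapping move, here is one generation of a toy genetic algorithm on bit-string genomes (the function name, the keep-the-top-half selection scheme, and the mutation rate are my assumptions for illustration):

```python
import random

def ga_step(population, fitness, mutation_rate=0.02):
    """One generation of a simple genetic algorithm: fitness-ranked
    selection, single-point crossover (gene-swapping), and occasional
    point mutation."""
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[: len(population) // 2]       # fitter half breeds
    children = []
    while len(children) < len(population):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(a))          # single-point crossover
        child = a[:cut] + b[cut:]
        child = [bit ^ 1 if random.random() < mutation_rate else bit
                 for bit in child]                 # rare point mutations
        children.append(child)
    return children
```

The crossover step is what lets a child land in a quite different part of the landscape from either parent, and it can combine genome segments that optimize nearly independent subspaces nearly independently.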

Combining hill-climbing with gene-swapping, in a locally smooth fitness environment, is quite likely to be a good way of finding really good optima. In this context, an optimum is defined only after the fact. Again, there is no way for an organism to detect whether its own structure is near optimum or not. "Optimum" here means "being structured in such a way that the structure is most likely to survive into the far future". Whether an organism was in fact so structured can only be determined by waiting until the far future has become the present.

It has been sufficiently demonstrated that this method [gene-swapping] can indeed lead to convergence on improved organizations. But it seems to require a lot of help from the programmer, who has to arrange for organisms to survive which only make a partial move toward what is actually required for survival.

It would be better to say that in a simulated world, the influences that affect survival must be provided artificially by the programmer. It's the same as what has been done over the centuries to breed dogs that are better for sheep herding, or for going into holes after weasels, or for winning blue ribbons at dog shows.

Beer's cockroach model, for example, allows cockroaches to reproduce on the basis of turning slightly toward the direction of food, or (I would suppose) partly away from an approaching danger. Unfortunately, actual survival can't be achieved by any individual that way.

True, but the probability of survival for any individual can be affected by such small changes. If there are many individuals, what matters is the probability that some individual like that survives to reproduce, not whether any particular individual survives.

What this external aid does is to create a gradient of effects that can give direction to the changes in organization resulting from gene-swapping. Without this gradient it's impossible to impose order on the process of reorganization, so the results will then depend strictly on random trials which either succeed or fail with nothing between those extremes.

If you go back in the archives of the GA (Genetic Algorithms) mailing list, you will find theoretical and simulation studies of the value of different genetic algorithms under different kinds of landscape. GA can find optima that are impossible for hill-climbing techniques, such as needle points in a flat plain. You are supposing here that GA is best used when there is a smooth gradient, but actually that's where a hill-climbing technique such as e-coli is more likely to be effective.

The E. coli section of Chapter 7 shows the superior efficiency of the E. coli method over the purely random, live-or-die approach: the improvement is almost two orders of magnitude in only two dimensions, and grows rapidly as more variables are involved. I believe the reorganization model is going to replace the standard model.

E-coli requires that there be some immediate measure of the current value of what is to be optimized, and a memory of what that value had been, as well as a memory of what direction was the last move. When the criterion is whether this structure will exist in the far future, such a measure is hard to obtain.

OK. That's an area on which we don't (yet) agree. Now we come to something different.

One fact (in the sense that you often use the word) is that specific errors are often perceptible. I can perceive that my hand moving to pick up a cup is not yet at the handle, and at the same time perceive that I want my hand to be at the handle. If there is a "reorganization engine", it is reasonable to suppose that it can use such perceptions, and is itself a control system or complex control structure whose actions affect not the outer world environment, but the structure of the control system that influences the outer environment.

I have pondered at length the question of what aspects of the control model are consciously perceivable, including the aspect you suggest. In every case so far, I've been able to realize that I was simply describing more perceptions. The motion of your hand is a transition-level perception; "toward" is a relationship, as is "distance". What you "want" to be happening is a perception derived from a reference signal via the imagination connection. Perceiving a difference between one perception and another one is not perceiving an error signal. An error signal is a /single/ signal, not two signals. A difference between two signals is a relationship, even if one of the signals is imaginary, or both. That's a perceptual signal, not an error signal.

Yes, but any perceptual signal is a function of something else, and the perception of the difference between the current hand position and the desired hand position is the same function as is the error signal itself. There may be a structural difference between perceiving an error signal and creating a new perceptual signal by comparing a reference signal with the corresponding perceptual signal, but there is no functional difference between the two concepts. Both violate the principle that the only perceptually apparent signals are derived either from the outside or from the imagination equivalent of the outer environment. In the one case, what is perceived is actually an error signal in the control hierarchy; in the other it is a reference signal in the hierarchy.
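The claim that the two notions are functionally identical can be put in a deliberately trivial sketch (illustrative only; nothing here is standard PCT code, and the names are my own):

```python
def comparator(reference, perception):
    """Error signal inside a control loop: one output signal, r - p."""
    return reference - perception

def relationship_perception(imagined_position, current_position):
    """A higher-level perception of the difference between an imagined
    (reference-derived) hand position and the currently perceived one."""
    return imagined_position - current_position
```

The two functions are term-for-term the same; whatever distinguishes an error signal from a difference-perception must be structural (where the signal sits in the hierarchy), not functional.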

So far, I have not been able to find any case that can't be reduced in a straightforward way, from the basic definitions of the model, to the experience of a perceptual signal.

Yes, we can agree that these are all perceptual signals. The question is only of what these perceptual signals are a function. In a strict HPCT structure, the only inputs to a perceptual function are ultimately derived either from external sensory data or from the imagination loop. Neither permits the direct perception of a reference signal or of an error signal. And yet, we (or at least I) have the impression of perceiving both what I want to achieve (a reference signal) and how far I am from achieving it (an error signal).

The development of a reorganization engine must precede, evolutionarily, the development of "error" as an intrinsic variable, and "survival" variables must precede the existence of a reorganization engine (other than evolution itself).

Yes, I agree with that. I attempted to work out that progression in my Origins of Purpose paper.

Where is that to be found? (I do have LCS 1 and LCS 2.)

But I did not use "survival" as the criterion; I used "accuracy of replication." Whatever reorganization makes the accuracy of replication increase will help the organism (or molecule) survive longer -- almost by definition. It could hardly be otherwise.

It could indeed be otherwise. In fact I think it must be otherwise, because too much accuracy of replication ensures inability to compensate for changes in the environment. Replication is an open-loop phenomenon. There is no way that the accuracy or otherwise of replication can feed back into anything the original organism can do. Only after-the-fact observation of the result of selection can tell which structures were accurately replicated, but whether they were accurately replicated or not, all those that produced surviving (possibly inaccurately replicated) descendants have been successful.

An organization that counteracts disturbances that tend to alter accuracy of replication will retain its form, including the resistance to disturbances of that kind, longer than one that does not.

True, but there can be no perceptual signal that corresponds to "accuracy of replication", and therefore there can be no control. If, during any period of history, accuracy of replication has for a while happened to correspond with likelihood of producing descendants, then accuracy of replication will tend to be a property of the structures observed. But control cannot be involved, at least not directly.

This line of thinking leads to the supposition (and we are getting more speculative the further we go along the train of thought) that the reorganization system is likely to evolve to act in a modular fashion, influencing the structure of the control system primarily in those areas that affect some particular intrinsic variable.

There we are: two minds with but a single thought, which makes both of us half-wits.

Well said!

Isn't it strange how people can want everyone to think alike, when it is only independent thinkers who find their own routes to conclusions who reassure us when they agree with us?

The problem is to know when the routes are in fact independent (consider all the apparently differently sourced arguments against human influence on global warming, which all come ultimately from one or two sources). I know I have been much influenced by your thinking, even when I argue against it. I always have the suspicion that you are likely to be correct, and that my logic is not. But to be convinced is not necessarily to arrive at the same conclusion independently. I find it is as often disconcerting as reassuring when someone simply agrees with me, without showing the rationale for that agreement. I like to see that there are at least two ways of reaching the apparently correct conclusion.

Martin


At 07:35 PM 10/3/2008 -0400, Martin Taylor wrote:

[From Bill Powers (2008.10.04.2043 MDT)]

Martin Taylor 2008.10.04.12.31 --

The World Futures article is attached.

It will take me a while to reply to the rest of your post.

best,
Bill P.

origins1.doc (80 KB)

[Martin Taylor 2008.10.04.22.49]

[From Bill Powers (2008.10.04.2043 MDT)]

Martin Taylor 2008.10.04.12.31 --

The World Futures article is attached.

It will take me a while to reply to the rest of your post.

Thanks. I'll read the article.

When I re-read my posting as it was distributed, I found it rather incoherent. If you can make out what I intended to say, I congratulate you. I'll await your response before trying to rewrite it. I can plead in my defence only that it may have been unwise to try to write such a thing (the latter half of it, anyway) after sharing a bottle of wine with my wife!

Martin

[From Bill Powers (2008.10.04.2213 MDT)]

Martin Taylor 2008.10.04.22.49 --

When I re-read my posting as it was distributed, I found it rather incoherent.

Perhaps I should let you try again before replying, because you will find some things in the article that I said poorly in my post, which may alter things a bit.

Best,

Bill P.

[Martin Taylor 2008.10.04.23.32]

[From Bill Powers (2008.10.04.2213 MDT)]

Martin Taylor 2008.10.04.22.49 --

When I re-read my posting as it was distributed, I found it rather incoherent.

Perhaps I should let you try again before replying, because you will find some things in the article that I said poorly in my post, which may alter things a bit.

Having now read your paper of 1995, I think the most appropriate comment is to quote Bill Powers (2008.10.04.0640 MDT):

"Isn't it strange how people can want everyone to think alike, when it is only independent thinkers who find their own routes to conclusions who reassure us when they agree with us?"

I find that I completely agree with at least 95% of your 1995 paper, rather more than I agree with the posting that cited it. I'm not sure that I would disagree with anything in it, though I might reserve judgment about one or two minor points. One thing I like is that the paper does not have language that makes it seem as though organisms are controlling for their distant future descendants being like themselves. Instead, it suggests the possibility that environmental stressors (disturbances that render control difficult) might alter the mutation rate. Since the environmental stress, however it is conceived, is a current condition, that suggestion is perfectly reasonable, and normal Darwinian selection does, as always, make it likely that if some mutated variety is more suited to the stressing environment, that variety will tend to be found in greater numbers at later times.

Similar ideas (without the control system analysis) must have been in the zeitgeist in the early 90's. In 1991 Ross Pigeau and I had planned on submitting a paper about cognition to Behavioural and Brain Sciences called "Thoughts on the Edge of Chaos", which included quite a bit on cyclic replication and variation, and we did include the notion of feedback loops creating stability (but then so did Prigogine, from whom we got a lot of inspiration). We argued then, as I have done quite a few times on CSGnet, that evolving self-organized systems are with high probability operating near the edge of chaos. Replicating cycles in this regime can go for a long time seeming to be almost purely repetitive, and then shift abruptly into a completely different form (as in punctuated equilibrium evolution).

I think the paper was never finished, because we were rather pre-empted by the publication of a different paper based in chaos theory by Freeman in BBS, and we thought they would not be likely to accept another so soon. Some of what we wrote has been lost, but what remains is at <http://www.mmtaylor.net/Academic/Thoughts.chaos/Thoughts.chaos.1.html>. The diagram on Page 1 and the text of parts 2, 5, the last part of 7, and the first part of 8 seem relevant to the present discussion.

Stuart Kauffman's 1995 "At Home in the Universe" also develops life from stable loops. He starts with the probabilistic inevitability that stable negative feedback catalytic loops will come to exist in a sufficiently complex molecular mix with an energy source. Again, he does not introduce the asymmetry of control, which I came to think when I learned about PCT (and as you described in 1995) is the defining feature of life. It's interesting that you introduce catalysis only after stable replicating loops have evolved, whereas Kauffman takes them to be the reason stable loops come to exist. But that's a minor point compared to the major one that control can, and almost must, eventually develop in a "chemical soup".

Anyway, all I can say about the paper is that I am glad you posted it to CSGnet rather than to me personally, because it might help many threads if people were to read it with care.

Martin

[From Dick Robertson,2008.10.04.0855CDT]

Bill,

This is such good stuff, I hope you’re keeping it in order for IMP II, aren’t you?

Best,

Dick R

[from Tracy B. Harms (2008-10-06 13:55 Pacific)]

Bill Powers (2008.10.04.0640 MDT) wrote:

... So the system is truly not learning behaviors or particular responses to particular inputs; it is learning to control. I think we can say that learning to control whatever variable is involved has survival value (but see below) in the long run -- more value than not being able to optimize any control system.

To my mind this amplifies theories that attribute high importance to play. It seems to me that animals with particularly "tall" hierarchies of control (e.g. simians, cetaceans, parrots) engage in play in order to mature and refine general capabilities. The result, I propose, is greater flexibility for learning.

Tracy