Deficiencies in evolutionary theory

[From Bill Powers (971129.0506 MST)]

I've been re-reading some of Gary Cziko's stuff, and once again find myself
unhappy with the theory of natural selection. I realize that anything I say
against natural selection is bound to be taken as encouragement by those
who believe in divine selection, but that can't be helped. I'll just say
that I think creation theory has far worse problems, and hope that is
enough to rule out one interpretation of my intentions.

The problem with natural selection is the same problem Martin Taylor and
others have pointed out with respect to reorganization theory. As it
stands, reorganization theory says that departures of certain internal
variables from genetically-set reference levels result in random changes of
organization in the brain, which cease only when all intrinsic errors fall
below some threshold or go to zero. Many, including me, see a problem of
specificity here -- how can random changes result in just those
reorganizations that are needed to correct intrinsic error, at the same
time leaving intact the results of all past useful -- essential --
reorganizations?

If this is a problem with brain models, how much more of a problem is it
when we consider the entire body and the genome that results in
construction of the brain? Can random variations in the genome really
accomplish the building of organization that we see in the historical
record? Has there really been enough time for this to happen through purely
random changes?

When I came across the E. coli phenomenon I thought the problem was solved.
Here we have a random process which, through _systematic_ selection of its
results, effectively biased the whole system so that the randomness was
largely overcome. The speed with which E. coli progresses up the gradient
of nutrients is enormously greater than the speed with which the random
changes in direction alone could achieve that progress. In fact, truly
random changes in direction, with no added selection process, would be just
as likely to take E. coli in any other direction. It's only the existence
of an internal criterion -- that the concentration should be increasing --
that accounts for the bias.
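
For concreteness, here is a minimal Python sketch of that bias (the
nutrient field, step length, and tumbling rule are invented for
illustration, not taken from real chemotaxis data). The walker with the
internal criterion tumbles only when the sensed concentration stops
improving; the other tumbles just as randomly but ignores the gradient,
and goes nowhere in particular:

import math
import random

def concentration(x, y):
    # Invented nutrient field: a single smooth peak at the origin.
    return -math.hypot(x, y)

def run(biased, steps=2000, step_len=1.0):
    x = y = 100.0                          # start well away from the peak
    heading = random.uniform(0, 2 * math.pi)
    last_c = concentration(x, y)
    for _ in range(steps):
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
        c = concentration(x, y)
        # Biased walker: tumble only when things stop improving.
        # Unbiased walker: tumble often, with no regard to the gradient.
        if (biased and c <= last_c) or (not biased and random.random() < 0.5):
            heading = random.uniform(0, 2 * math.pi)
        last_c = c
    return math.hypot(x, y)               # final distance from the peak

random.seed(1)
print("biased walker's final distance from peak:  ", run(biased=True))
print("unbiased walker's final distance from peak:", run(biased=False))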

One would be tempted to say that a similar biasing occurs in natural
selection, since it is reproductive fitness that selects for specific kinds
of changes and against others. But as I found in applying the E. coli
principle to adaptation of control systems, there's a hidden requirement
that determines how well this sort of random variation and selective
retention process works. There must be a certain kind of continuity in the
changes.

I think one requirement is that small changes must have smaller effects on
the outcome than large changes have. In the case of E. coli, the
environmental geometry seems to provide the necessary continuity. A small
change in swimming direction has a small effect on the sensed gradient. The
result is that E. coli can tell the difference between a change that calls
for another change right away, and a change that can be ignored.

But this continuity requirement goes deeper than that. After a tumble, E.
coli swims in essentially a straight line. The result of a tumble is not a
random change in the critical variable, but a change in the _direction_ in
which the variable _continuously changes_. What I found in applying the E.
coli principle to reorganizations of multiple control systems was that the
random variations didn't work out right if I simply chose new values of the
parameters at random during each reorganization.

Consider "parameter space," with as many dimensions as there are
parameters. A truly random change of parameters would produce a series of
jumps in parameter space, with a jump to any location in the space being as
likely as a jump to any other location. So, for example, if the range of
possible loop gains was from 0 to 1000, the next loop gain might be 2 or
700. The result would be that if a change made the loop unstable, the
control system would cease to operate and there would be no chance of
recovery. In other words, we would be back to "survival of the fittest,"
with an unfit control system simply being eliminated (probably along with
the organism in which it exists). There would be nothing like a gradual
tuning of the control system to make it work better and better.

I found that what I had to do was imagine a point "swimming" through
parameter space in some specific direction at some specific (slow enough)
speed. When a reorganization occurred, no parameter changed instantly to a
randomly different value. Instead, _the direction of movement through
parameter space_ changed. I found also that convergence was made more
likely if the speed were proportional to the errors that determined the
frequency of reorganizations, but that was a refinement. The main
requirement was that the parameters change _continuously_ (or at least in
very small increments) rather than in large random jumps.
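
A toy Python version of that scheme, with an invented quadratic error
surface standing in for real loop performance: the point swims through a
two-dimensional parameter space at a speed proportional to the current
error, and tumbles to a new random direction whenever the error grows:

import math
import random

def intrinsic_error(params, target=(7.0, -3.0)):
    # Invented stand-in: error is just distance from some "good" setting.
    return math.dist(params, target)

random.seed(2)
params = [0.0, 0.0]
angle = random.uniform(0, 2 * math.pi)    # direction of travel in parameter space
err = intrinsic_error(params)

for _ in range(500):
    speed = 0.05 * err                    # movement slows as error shrinks
    params[0] += speed * math.cos(angle)
    params[1] += speed * math.sin(angle)
    new_err = intrinsic_error(params)
    if new_err >= err:                    # wrong direction: tumble, don't jump
        angle = random.uniform(0, 2 * math.pi)
    err = new_err

print("final parameters:", params, " final error:", err)

With these invented numbers the error falls steadily toward zero; a
version that jumped to random parameter values instead would mostly
thrash without converging.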

I think it's obvious why this is necessary. A large change in parameters
might be necessary in the long run, but paradoxically, if the system is
allowed to change its parameters in large increments, the chances that it
will destroy itself before finding the _right_ large change are overwhelming.

Thus the Principle of Continuity. The changes of organization involved in
reorganization must occur in small increments, so that no parameter can
change very much during a single reorganization. Coupling this with some
systematic way of varying the frequency of reorganization, we have a system
that can detect a change in a wrong direction and make another change
before the deleterious effect of the wrong direction can materially affect
the system. The system can recover from a wrong reorganization.

Now apply these thoughts to natural selection. If mutations were purely
random, we would expect large changes in organization with every mutation.
The individual offspring, most likely, would either survive or die, with
the chances of dying being very much the larger. The organism would get
just one chance to come up with a viable organization, and the less likely
that organization was to arise through a single large change, the less likely
it would be that _any_ mutation would result in survival. The organism does
not have to deal only with changes in its environment; it has to deal with
the effects of changes in its own organization. And if those changes are
too large, there would be no way to select efficiently for a beneficial
change.

If large changes have a low probability of allowing survival, then by the
basic principle of natural selection, organisms in which large changes can
occur would quickly be selected out by their failure to survive. Their
survival rate would be too low. So it stands to reason that species which
do survive over many generations must consist of individuals in which the
effects of mutations on viability occur in small increments. A drosophila
in which the _eyeless_ mutation occurred can't survive at all. But if the
mutation simply altered the placement or size or function of the eye by a
small amount, that drosophila line could propagate for many generations, to
give the effect on fitness a chance to show its magnitude.

What this implies is that the real story of mutation lies in the small
changes, not the large ones. The largest changes, those that produce
effects dramatic enough to be visible to our crude observing techniques,
are not the kind that can lead to systematic tuning of organization. They
are more likely simply to be wiped out by natural selection in a single
generation. They represent the tails of the distribution of changes, their
frequency being what one would expect of a long series of unlucky small
mutations.

The main problem here is that we can currently observe only the largest and
most obvious mutations, especially since we have no good models of
organization that might allow us to see subtler changes. The laws that
apply to large mutations are quite likely to be different from those
involved in small ones.

As an analogy, consider the _crashless_ gene, which we could infer from
watching traffic. A mutation in _crashless_ could be observed by
investigating accidents -- but this would not give us much insight into how
a driver who does not crash manages to stay on the road. The _crashless_
gene idea would give us only the crudest picture of what is going on. For
all we know, the _crashless_ gene is really the _steering loop gain_ gene,
or the _staying awake_ gene.

Suppose that the mutations we can observe are only the extremes, with a
multitude of much smaller mutations occurring continuously and in a way
that is systematically related to internal selection criteria. My proposal
about those criteria is that they are related not to survival, but to
control: to the ability to make effects on the organism be what the
organism wants them to be at some basic cellular level. I can't be any more
specific than that, but I think it's clear that within the broad circle I
have drawn, a basic explanation of evolution is to be found.

Reproductive efficiency above some level is, of course, required for
survival of a species. But if the rate of reproduction becomes _too_ large,
the species can exhaust its resources and fail to survive for that reason.
So reproduction rate can't be the ultimate criterion; there is, under any
specific set of circumstances, an optimum rate of reproduction, with either much
lower or much higher rates being disadvantageous.

I propose that we substitute for the old idea a new definition of fitness:
the capacity to control. It follows that organisms which control better are
more likely to continue reproducing at a sufficient rate for the species to
survive, so we would expect rate of reproduction to be one of the variables
that needs to be controlled, just as it is necessary to control for
breathing and locomotion. But what is critical to maintain is control. One
could say that "survival" is merely a side-effect of good control. If too
high a reproduction rate were interfering with control, it would be the
reproduction rate that changes, not the ability to control. Or the species
could even change itself into a different species, so the original species
would be allowed to die off. What matters is to maintain control, not to
maintain any particular form.

As I said in the World Futures paper, one of the effects of control is to
counteract the effects of disturbances on controlled variables. If some of
the basic controlled variables in all living systems are conditions that
affect accuracy of replication, then organisms will replicate more
accurately as disturbances of those basic variables are better resisted.

This puts "survival" in a different light. Survival can mean only
_continuing to control_. It has nothing to do with survival of a species,
because organisms change their species when disturbances that affect
accuracy of reproduction fail to be resisted. If you focus on a particular
species as that which has to survive, then a change of species is
equivalent to extinction of the former species, but in fact control
continues -- it is control that survives, not species.

Best,

Bill P.

[Martin Taylor 971130 10:30]

Bill Powers (971129.0506 MST)

>I've been re-reading some of Gary Cziko's stuff, and once again find myself
>unhappy with the theory of natural selection.

A long and careful posting, which I'd like to augment, rather than criticize,
since I agree with most of it. (Sorry, Bill, if this leads you to question
what you wrote:-)

As it happens, I gave a talk about a month ago to the University of Toronto
Mathematics Club on what amounts to this very topic, though I approached
it from a different direction. My approach was to link the conceptual
basis of reorganization with the results found by Stuart Kauffman described
in his wonderful book "At Home in the Universe."

My end-point for the talk was the development of the notion of "inside"
and "outside" group membership of control systems. And I think it is
relevant to the issue of "the Observer" that was a thread a little while
ago. But I won't pursue that notion in this posting.

I'd like to waste space by repeating and emphasizing something embedded
in the middle of Bill's posting:

*** The criterion [for retaining mutations is] that they are related not to
*** survival, but to control: to the ability to make effects on the organism
*** be what the organism wants them to be at some basic cellular level.

This seems to me to be one of those "truthsaying" statements.

I suspect it isn't really "natural selection" with which you are unhappy,
so much as an uncritical use of natural selection as the sole mechanism
of evolution. For "natural selection" is what allows those mutations that
help control to be propagated better than those that don't. Individuals
that control best are quite likely to be those that propagate best.

Your posting points up important problems.


----------------

In my talk last month, I started by describing the reorganization
process as having the result not only of controlling intrinsic variables,
but also of orthogonalizing same-level control systems so far as possible.

The orthogonalizing result cannot be complete in a world in which the
possibilities for the environmental feedback function are restricted.
For example, e-coli can move only in a 3-D world. If the gradients are
suitable, it could move so as to control simultaneously three properties,
say temperature, light, and acidity. But it could not add to those the
control of salinity, because the world does not allow it enough degrees
of freedom.

Fully orthogonal control systems do not disturb one another by their
actions. By "fully orthogonal" I mean not only that their Perceptual
Input Functions are linearly orthogonal, but also that they share no
physical variables in their respective Controlled Complex Environmental
Variables.

There are two main places in the control loop where orthogonalization
matters: (1) the Perceptual Input Functions (PIF), and (2) the output.

If the PIFs of two control systems use any of the same input variables,
then, even if they are independently controllable, action that influences
one is likely to be a disturbance to the other. This problem will persist
until one or other PIF changes so that the two become based on disparate
subsets of the sensory inputs from the physical world. This is true even
if the two PIFs are linearly orthogonal. Consider the case in which one
is x+y, the other x-y. If the first is disturbed by a disturbance
that influences x and y equally, the second control system is unaffected.
But the action of the first system in bringing its perceptual signal back
to the reference value might be entirely through a change in x, which
does disturb the second system. The second system might act so as to
change y, which disturbs the first system, and so forth. So even if
the PIFs are linearly orthogonal, the two control systems are likely to
disturb each other if they use the same parts of the physical world in
their environmental feedback functions.
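
Here is a small numerical sketch of that mutual disturbance (the
integrating outputs, unit time step, and gain are my own choices for
illustration). Controller 1 perceives x+y and acts only on x; controller
2 perceives x-y and acts only on y. A step disturbance added equally to
x and y leaves x-y untouched at first, yet the second controller is
dragged into a transient by the first one's action on x before both
settle:

o1 = o2 = 0.0                    # integrator outputs of the two systems
slow = 0.1                       # integration factor per time step
r1 = r2 = 0.0                    # both references at zero

for t in range(100):
    d = 5.0 if t >= 10 else 0.0  # step disturbance added to both x and y
    x = o1 + d
    y = o2 + d
    p1 = x + y                   # PIF of system 1
    p2 = x - y                   # PIF of system 2
    o1 += slow * (r1 - p1)       # system 1 pushes on x
    o2 -= slow * (r2 - p2)       # system 2 pushes on y (dp2/dy = -1)
    if t in (9, 10, 15, 30, 99):
        print(f"t={t:3d}  p1={p1:+8.3f}  p2={p2:+8.3f}")

The transient in p2 is exactly the mutual disturbance described above:
linear orthogonality of the PIFs does not prevent coupling through the
shared physical variables.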

The output has two ways in which control systems can be mutually
disturbing, but they both come down to the output of one affecting the
sensory input of the other. One way is through the Controlled Complex
Environmental Variables sharing aspects of the physical world (actually
the same problem as discussed under the PIF heading). The other is
through side-effects--effects on the physical world that are not part
of its own CCEV, but are aspects of the other's CCEV.

Given, then, that in any environment a sufficient number of control systems
will inevitably disturb one another by their control actions, we have to
come up with some idea of how to minimize the amount of disturbance. That
could be done by reducing the gain of some systems to zero (kill the
opposition), allowing the others to control freely. However, the solution
of "kill the opposition" lacks aesthetic value, no matter how frequently
it is used by humans:-)

If we think of the amount of disturbance that one system can inflict on
another as some kind of a coupling constant between them, we can go a
bit further. The optimum coupling is as low as possible, averaged over
all pairs of control systems. We could call this kind of coupling
"negative" because it reduces the quality of control in the coupled
systems, as measured by, say, the RMS deviation of the perceptual signal
from its reference value. (The "monster" simulation of a few months ago
had this problem in spades).

Now imagine a situation in which the side-effects of the output of one
control system just happen to influence some variable that disturbs another
control system in such a way as to reduce that disturbance: the cook
on a boat dumps unwanted oil overboard, simply to get rid of it; the
oil smooths the waves, reducing the disturbance that affects the
steersman. In a randomly connected world, such side-effect support will
not be common, but it could occur. We could call it "positive coupling"
between the processes, because it enhances the quality of control.
Putting together the negative and positive couplings that occur among
a large set of control systems, optimum control comes when the average
coupling has the largest positive value (or lowest negative value, if
that is the best that can be achieved).

Here we begin to come to Kauffman. Kauffman starts by talking about
catalysis, in which the product of one reaction alters the speed of another.
If there are enough individual kinds of molecule around, a loop may form,
in which the product of A influences the production of more B, which
influences the production of more ...of more...of more A. If such a loop
occurs, it is a positive feedback loop, which will tend to run away, creating
more and more A, B, ... until the supply of source molecules becomes low
enough to create a new stable set of concentrations. (We have discussed
very similar processes as being at the beginning of control system life.)
Even if the chance of any one product catalyzing a specific other reaction
is only one in a million, when the number of molecules available goes
up, the existence of a catalytic loop becomes almost inevitable.
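
A back-of-envelope illustration of that point (the numbers are invented,
not Kauffman's): with N kinds of molecule and probability p that any
given product catalyzes any given reaction, the expected number of
catalytic links grows as p*N^2, so each species averages p*N outgoing
links, and once that average approaches one, closed loops become almost
unavoidable:

p = 1e-6                           # one-in-a-million per product/reaction pair
for n in (1_000, 10_000, 100_000, 1_000_000):
    links = p * n * n              # expected catalytic links in the whole soup
    print(f"N={n:>9,}  expected links={links:12,.1f}  per species={links / n:.4f}")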

If we see the couplings between control systems as analogues of catalysis
and anticatalysis (better or worse control due to mutual influence),
the normal process of reorganization will tend to disrupt negative
couplings (anticatalytic relationships) and to retain positive couplings.
If there are enough of them, a catalytic loop of supportive relationships
can develop, control by A easing control by B easing...easing...control
by A. This is a positive feedback loop. Even if the probability of the
side effects of one control system being helpful to another is only one
in a million, still if there are enough control systems, the existence
of a mutual support loop becomes almost inevitable.

The possible existence of catalytic loops of control does not solve the
basic problem, which is as Bill says:

>As it
>stands, reorganization theory says that departures of certain internal
>variables from genetically-set reference levels result in random changes of
>organization in the brain, which cease only when all intrinsic errors fall
>below some threshold or go to zero. Many, including me, see a problem of
>specificity here -- how can random changes result in just those
>reorganizations that are needed to correct intrinsic error, at the same
>time leaving intact the results of all past useful -- essential --
>reorganizations?

The answer, of course, is that it can't. The question, then, is how to
minimize collateral damage while enhancing the benefits of random variation.

Kauffman has studied the analogue of this question for a variety of
kinds of interacting systems (not including control systems). In particular,
he has studied the evolutionary question when changes in one system can
affect the ways other pairs of systems interact. Two principles emerge:
(1) things _do_ go haywire if on average one system affects too many
others or affects others too strongly (too much coupling); (2) global
optimization is better if the couplings are largely modular, such that
one can consider individual subsystems to have "inside" and "outside"
elements, of which only the outside elements are affected by and affect
the rest of the total system. The book, so far as I remember, does not
deal with hierarchic systems, but it makes sense that the same would be
true on many scales--cell membranes, body parts and organs, individual
persons, social groups,...

And that is what my talk was about--the inevitable(?) emergence of
cooperation in a competitive world, through the interaction of a hierarchy
of mutually supporting control loops. Just incidentally, it is arguable
that the strongest way that one control loop can ease the control problem
of another is if it directly stabilizes one or more of the inputs of the
other at a value the other desires (takes a reference value from the other
and produces a signal equal to that reference value).

We are talking about the development of modular perceptual control
hierarchies as a byproduct of evolving control in an environment of
limited resources.

---------------------

Now, having provided much of the argument, specific comments:

>If this is a problem with brain models, how much more of a problem is it
>when we consider the entire body and the genome that results in
>construction of the brain? Can random variations in the genome really
>accomplish the building of organization that we see in the historical
>record? Has there really been enough time for this to happen through purely
>random changes?

I suppose it depends on what one means by "random." Kauffman explicitly
deals with the genome, and with that question. He deals with the coupling
among genes, which is the critical issue, since if all genes acted
independently, each could be optimized independently rather quickly,
and the result would be an optimum entity. This can't actually happen,
because in the real environment of other evolving entities, the optimum
setting of one variable is affected by the actual setting of others. The
question is how many others, and how strongly affected.

>When I came across the E. coli phenomenon I thought the problem was solved.
>Here we have a random process which, through _systematic_ selection of its
>results, effectively biased the whole system so that the randomness was
>largely overcome. The speed with which E. coli progresses up the gradient
>of nutrients is enormously greater than the speed with which the random
>changes in direction alone could achieve that progress. In fact, truly
>random changes in direction, with no added selection process, would be just
>as likely to take E. coli in any other direction. It's only the existence
>of an internal criterion -- that the concentration should be increasing --
>that accounts for the bias.

>One would be tempted to say that a similar biasing occurs in natural
>selection, since it is reproductive fitness that selects for specific kinds
>of changes and against others. But as I found in applying the E. coli
>principle to adaptation of control systems, there's a hidden requirement
>that determines how well this sort of random variation and selective
>retention process works. There must be a certain kind of continuity in the
>changes.

Yes, again, Kauffman has studied this issue in simulations. For optimum
evolution, changes should not be too big, but they shouldn't be too small,
either. However, I don't remember that he did any simulation of the e-coli
procedure in which a change has a "direction" that can be repeated in the
next generation.

>I think one requirement is that small changes must have smaller effects on
>the outcome than large changes have.

Here's the rub. In a system in which mutual coupling is important, many
small or moderate changes may have only a trivial effect, since the key
feedback loop among the _negative feedback_ systems can still have a
_positive_ loop gain, allowing it to sustain itself in a consistent
organization, or an organization with very little change. But there are
times when a trivial change can have a drastic effect, destroying or
substantially changing the whole loop. This can mean the sudden development
of entirely new functions in the system as a whole, or the extinction
of species. And one such change can affect other loops, in a cascade of
changes and extinctions. But most of the time, that doesn't happen. Small
changes usually have small effects.

>What I found in applying the E.
>coli principle to reorganizations of multiple control systems was that the
>random variations didn't work out right if I simply chose new values of the
>parameters at random during each reorganization.

Consider "parameter space," with as many dimensions as there are
parameters. A truly random change of parameters would produce a series of
jumps in parameter space, with a jump to any location in the space being as
likely as a jump to any other location. So, for example, if the range of
possible loop gains was from 0 to 1000, the next loop gain might be 2 or
700. The result would be that if a change made the loop unstable, the
control system would cease to operate and there would be no chance of
recovery.

Yes. Kauffman made an analogous finding. Changes that are too big lead to
essentially no evolution. The global system stays as disorganized as it
started.

>In other words, we would be back to "survival of the fittest,"
>with an unfit control system simply being eliminated (probably along with
>the organism in which it exists). There would be nothing like a gradual
>tuning of the control system to make it work better and better.

Right on! But, as you say later, this _implies_ (probabilistically)
"survival of the fittest," so I don't think you should make the contrast
so strongly.

>I found that what I had to do was imagine a point "swimming" through
>parameter space in some specific direction at some specific (slow enough)
>speed. When a reorganization occurred, no parameter changed instantly to a
>randomly different value. Instead, _the direction of movement through
>parameter space_ changed.
>...
>Thus the Principle of Continuity. The changes of organization involved in
>reorganization must occur in small increments, so that no parameter can
>change very much during a single reorganization. Coupling this with some
>systematic way of varying the frequency of reorganization, we have a system
>that can detect a change in a wrong direction and make another change
>before the deleterious effect of the wrong direction can materially affect
>the system. The system can recover from a wrong reorganization.

This clearly works for e-coli. It works for Hebbian-style learning in
reorganization.

I'm unclear as to how it can work across generations of individuals in
evolution. (But see below)

What is there in an individual that retains knowledge of the
difference between the genomes of its two parents and its own genome?
If such a difference is incorporated somewhere, where is that, if not
in the genome itself? And given that it is somehow incorporated in the
genome, what is it about the individual that allows the individual's
particular direction of change to be altered (or not) in propagating
(a random half of) the genome to a child?

>If large changes have a low probability of allowing survival, then by the
>basic principle of natural selection, organisms in which large changes can
>occur would quickly be selected out by their failure to survive. Their
>survival rate would be too low. So it stands to reason that species which
>do survive over many generations must consist of individuals in which the
>effects of mutations on viability occur in small increments.
>...
>What this implies is that the real story of mutation lies in the small
>changes, not the large ones.

Kauffman would, I think, agree. But he would probably add "not too small."

>Suppose that the mutations we can observe are only the extremes, with a
>multitude of much smaller mutations occurring continuously and in a way
>that is systematically related to internal selection criteria. My proposal
>about those criteria is that they are related not to survival, but to
>control: to the ability to make effects on the organism be what the
>organism wants them to be at some basic cellular level. I can't be any more
>specific than that, but I think it's clear that within the broad circle I
>have drawn, a basic explanation of evolution is to be found.

That sounds so reasonable and clear that it can't possibly be any more true
than the Theory of Perceptual Control is:-)

What I think you are saying is that the difference between individuals of
successive generations is not stored in the genome, but is developed within
an individual. Each individual then would start a new direction of
modification, like the e-coli shift, regardless of whether the parents
had been moving in a good direction at the time of conception.

Parenthetically, I think you ought to take account of the fact that the
most rapid evolution has taken place since the "discovery" of sex. The
recombination of partial genotypes from the two parents matters
a lot more than does mutation, in most approaches to evolution. And this
is as one might expect, if we think in terms of catalytic (or mutual-support
control) loops. The two parents' genomes have to be very nearly identical
or the catalytic loops will simply not exist in the progeny, and no
child will be produced (cross-species couplings don't usually work).
But they can't be too similar or there won't be the kind of loop changes
that I mentioned above.

There has to be room for larger changes, since hill-climbing does
not find the best optimum in a complex landscape. Kauffman discusses this
issue, looking for how the mix between large and small changes affects
the effectiveness of evolution. From Bill's posting and my coupling-loop
ideas, one might guess that mutation often and most usefully produces
small changes, and the large changes come about from merging and alteration
of the genetic coupling loops through sexual recombination.

-------------------

>Reproductive efficiency above some level is, of course, required for
>survival of a species. But if the rate of reproduction becomes _too_ large,
>the species can exhaust its resources and fail to survive for that reason.

Populations crash under those conditions, but seldom do they go extinct.
They rebuild and crash again, over and over, as a rule.

>I propose that we substitute for the old idea a new definition of fitness:
>the capacity to control. It follows that organisms which control better are
>more likely to continue reproducing at a sufficient rate for the species to
>survive, so we would expect rate of reproduction to be one of the variables
>that needs to be controlled, just as it is necessary to control for
>breathing and locomotion. But what is critical to maintain is control.

Yes, yes, YES!!!

And that's the bottom line.

Martin

[From Bill Powers (971130.1135 MST)]

Martin Taylor 971130 10:30--

>A long and careful posting, which I'd like to augment, rather than
>criticize, since I agree with most of it. (Sorry, Bill, if this leads you
>to question what you wrote:-)

Actually I enjoyed your post very much and found it essentially free of
alarm bells. That makes two of us who are coming to a new view of
evolution, with others possibly travelling the same way. Gary Cziko, how
are you doing with all this?

Best,

Bill P.

[Martin Taylor 971130 17:45]

Bill Powers (971130.1135 MST)

>Martin Taylor 971130 10:30--
>
>>A long and careful posting, which I'd like to augment, rather than
>>criticize, since I agree with most of it. (Sorry, Bill, if this leads you
>>to question what you wrote:-)
>
>Actually I enjoyed your post very much and found it essentially free of
>alarm bells.

I've reread your posting, and mine, and I think I may have initially mis-read
you. I said:

>>Thus the Principle of Continuity. The changes of organization involved in
>>reorganization must occur in small increments, so that no parameter can
>>change very much during a single reorganization. Coupling this with some
>>systematic way of varying the frequency of reorganization, we have a system
>>that can detect a change in a wrong direction and make another change
>>before the deleterious effect of the wrong direction can materially affect
>>the system. The system can recover from a wrong reorganization.

>This clearly works for e-coli. It works for Hebbian-style learning in
>reorganization.
>
>I'm unclear as to how it can work across generations of individuals in
>evolution. (But see below)

>What is there in an individual that retains knowledge of the
>difference between the genomes of its two parents and its own genome?
>If such a difference is incorporated somewhere, where is that, if not
>in the genome itself? And given that it is somehow incorporated in the
>genome, what is it about the individual that allows the individual's
>particular direction of change to be altered (or not) in propagating
>(a random half of) the genome to a child?

and "below" I said:

>What I think you are saying is that the difference between individuals of
>successive generations is not stored in the genome, but is developed within
>an individual. Each individual then would start a new direction of
>modification, like the e-coli shift, regardless of whether the parents
>had been moving in a good direction at the time of conception.

On re-reading, I don't think you did take this neo-Lamarckian position.
Or do you?

Anyway, I am left with either of two questions, depending on your answer.

If you do take the position that micro-mutations within an organism's
lifetime can be propagated, the question is how a mutation that affects
the organism's ability to control (which clearly occur, but affect the
somatic cells) can be expressed in the sperm or eggs.

If you don't take that position, then the question is how "direction" in
the e-coli sense can be maintained across generations.

In other words, I'm happy with the general sense of what you originally
wrote, except that I'm not at all clear as to the applicability of the
e-coli "direction-keeping" property. What I'm most happy about is the
emphasis on quality of control as what matters.

Martin

[Avery Andrews 970112]
(Bill Powers (971129.0506 MST))

I think the big difference between evolution & reorganization is that
evolution proceeds with a population of variants only some of which
have to be viable, whereas reorganization has to avoid killing off the
reorganizee. On this basis, I don't see why random change can't work
for evolution. However, gradual mutation does seem to be sufficient
to produce huge effects very fast, on the geological timescale, so
we don't need `hopeful monsters'.

Avery Andrews

[From Bill Powers (971201.0135 MST)]

Martin Taylor 971130 17:45 --

>I've reread your posting, and mine, and I think I may have initially
>misread you. I said:

>>I'm unclear as to how it can work across generations of individuals in
>>evolution. (But see below)
>>
>>What is there in an individual that retains knowledge of the
>>difference between the genomes of its two parents and its own genome?

I didn't particularly want to quibble about this since we're both guessing.
You assume that in order for all this to work, the individual must retain
knowledge of the parents' genomes. So you must have some particular model
of a mechanism in mind.

My idea is much simpler: that in the DNA and surrounding fluids there are
working control systems concerned with variables that can affect the
accuracy of replication. In all forms of reproduction, more is passed along
than just the genetic code: there are organelles and cytoplasm and complete
working chemical systems that are physically passed from parent to
offspring (mostly through the mother in sexual reproduction). These control
systems operate continuously across the generations.

>>If such a difference is incorporated somewhere, where is that, if not
>>in the genome itself? And given that it is somehow incorporated in the
>>genome, what is it about the individual that allows the individual's
>>particular direction of change to be altered (or not) in propagating
>>(a random half of) the genome to a child?

As you can see, I do not propose that "such a difference" is incorporated
anywhere.

and "below" I said:

>>What I think you are saying is that the difference between individuals of
>>successive generations is not stored in the genome, but is developed within
>>an individual. Each individual then would start a new direction of
>>modification, like the e-coli shift, regardless of whether the parents
>>had been moving in a good direction at the time of conception.

No, the parents do not "move in a good direction". The "movement" is in
characteristics of the DNA, which is the locus of the effects I am talking
about.

>On re-reading, I don't think you did take this neo-Lamarckian position.
>Or do you?

The parents' life experiences are largely irrelevant. What matter are the
side-effects on accuracy of replication. As we know, the old idea that the
germ cells are absolutely isolated from the rest of the system is no longer
accepted, which is fortunate because it is simple to show that they are not
-- just administer a mutagen to the organism by mouth. But the effects that
matter are not those that could be influenced by cutting tails off mice.

>Anyway, I am left with either of two questions, depending on your answer.

>If you do take the position that micro-mutations within an organism's
>lifetime can be propagated, the question is how a mutation that affects
>the organism's ability to control (which clearly occur, but affect the
>somatic cells) can be expressed in the sperm or eggs.

Control WHAT? The ability to drive a car is not relevant to what I'm
talking about. The only controlled variables that matter in my proposal are
those that can reduce accuracy of replication. Those variables have to be
kept under control if the next generation is to resemble the previous one.

>If you don't take that position, then the question is how "direction" in
>the e-coli sense can be maintained across generations.

What would be involved would be a very slow drift in parameters over many
generations. The direction of this drift would depend on weightings of the
factors that produce the drift. A micromutation would alter these factors
at random -- in fact, it would be what I call the variables that affect
accuracy of replication that would randomly vary, because they are not
under perfect control.
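
A loose Python sketch of such a drift, with invented numbers (the
"optimum" parameter value, kick size, and inheritance decay are arbitrary
choices for illustration, not a fitted model): each generation inherits
both a parameter and its slow rate of change, and imperfect control of
replication accuracy injects random kicks into the rate of change, with
kick size tied to the residual error:

import random

random.seed(3)
param, velocity = 0.0, 0.0
OPTIMUM = 10.0                        # setting with least replication error

for generation in range(2000):
    error = abs(OPTIMUM - param)      # residual replication-control error
    # A micromutation perturbs the *rate of change* of the parameter,
    # not the parameter itself; the kick is larger when control of
    # replication accuracy is poorer.
    velocity += random.gauss(0.0, 0.02 * error)
    velocity *= 0.9                   # inherited drift decays slowly
    param += velocity

print(f"after 2000 generations: param = {param:.2f} (optimum {OPTIMUM})")

Because the kicks shrink wherever control is good, the lineage tends to
drift until it settles near the setting at which replication is best
controlled.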

>In other words, I'm happy with the general sense of what you originally
>wrote, except that I'm not at all clear as to the applicability of the
>e-coli "direction-keeping" property. What I'm most happy about is the
>emphasis on quality of control as what matters.

The "direction" hypothesis is an ad-hoc idea picked because of the problem
with simply randomly varying the value of the parameters. There has to be
some way of keeping the random changes from going all over the place. One
way to do this is to propose that it is the rate of change of individual
parameters that varies at random, rather than the parameters themselves. In
fact that's the only way I've been able to think of that I can put in a
model and get to work. I can't justify it on any other grounds. It doesn't
seem likely that there are any data on this.

Best,

Bill P.

Right, that's my point. You have to think about what truly random mutations
would mean: they would mean that your next offspring would be just as
likely to sprout a leg from its forehead as to show any other change. If
there are no constraints at all on the sizes of changes that are possible,
fatal mutations would be far more common than any other kind. The whole
problem is one of efficiency -- fifty million monkeys typing, and all that.

It's not that ordinary natural selection doesn't work. It must have been
the primary selection method early in the prehistory of life. But if you
omit finer selection criteria, all you have left is what I call Darwin's
Hammer: splat, you're dead (an exaggeration, of course). Once internal
selection criteria appeared, the effect would be like that of connecting a
high-gain control system in parallel with a low-gain one. The low-gain one
might still have some small effects, but they would be completely
overshadowed by the operation of the high-gain system.
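
In numbers (the gains here are invented for illustration): two
proportional systems acting on the same variable with different
references settle almost exactly at the high-gain system's reference:

r_high, g_high = 4.0, 100.0   # the "internal selection" system
r_low,  g_low  = 9.0, 1.0     # the "Darwin's Hammer" system
# Static equilibrium of v = g_high*(r_high - v) + g_low*(r_low - v):
v = (g_high * r_high + g_low * r_low) / (1.0 + g_high + g_low)
print(f"settled value: {v:.3f}  (high-gain reference is {r_high})")

With gains 100 and 1 the settled value is about 4.01: the low-gain
system still pulls a little, but its preference is almost invisible.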

Remember, even in the "controlled evolution" model, the changes that occur
are still random. It's the internal selection criteria that make the
difference.

Best,

Bill P.


At 12:21 PM 12/1/97 +1100, you wrote:

>[Avery Andrews 970112]
>(Bill Powers (971129.0506 MST))
>
>I think the big difference between evolution & reorganization is that
>evolution proceeds with a population of variants only some of which
>have to be viable, whereas reorganization has to avoid killing off the
>reorganizee. On this basis, I don't see why random change can't work
>for evolution. However, gradual mutation does seem to be sufficient
>to produce huge effects very fast, on the geological timescale, so
>we don't need `hopeful monsters'.

[Martin Taylor 971201 11:00]

Bill Powers (971201.0135 MST)

>Martin Taylor 971130 17:45 --
>
>>I've reread your posting, and mine, and I think I may have initially
>>misread you. ...
>
>The parents' life experiences are largely irrelevant. What matter are the
>side-effects on accuracy of replication.

OK. Now I'm with you.

>>If you don't take that position, then the question is how "direction" in
>>the e-coli sense can be maintained across generations.

>What would be involved would be a very slow drift in parameters over many
>generations. The direction of this drift would depend on weightings of the
>factors that produce the drift. A micromutation would alter these factors
>at random -- in fact, it would be what I call the variables that affect
>accuracy of replication that would randomly vary, because they are not
>under perfect control.

I like it. And it deals with Avery's comment as well.

Last year or the year before, I read a study somewhere (in "Science"?) that
said the two strands of a DNA molecule are differentially susceptible to
uncorrected mutations, and that in simulation this improved considerably
the speed with which entities could evolve to take advantage of changed
environments while also stabilizing the "species" in stable environments.

The point, I guess, is that hill-climbing by random moves in different
directions among the individuals in a large population can lead to local
optima, whereas moderate jumps in some individuals occasionally lead to
better hills. Kauffman's simulations, at least in "At Home in the Universe"
deal with populations having small or large jumps, but I don't remember
a simulation of a population with both.

The notion that rate of mutation is influenced by the individual's success
in controlling intrinsic variables makes a great deal of sense. It could
(and probably would) affect both strands of DNA.

Also, your point that more than DNA is transferred between generations,
especially from the mother, is a good one to keep in mind.

Martin

[From Bill Powers (971201.1022 MST)]

Every time I type "971201" the irrational thought crosses my mind, "There
goes December."

Martin Taylor 971201 11:00 --

>The point, I guess, is that hill-climbing by random moves in different
>directions among the individuals in a large population can lead to local
>optima, whereas moderate jumps in some individuals occasionally lead to
>better hills. Kauffman's simulations, at least in "At Home in the Universe"
>deal with populations having small or large jumps, but I don't remember
>a simulation of a population with both.

If we're talking about really random effects, there will be a distribution
of sizes, won't there? Maybe we need to make the speed a random variable, too.

Anyway, I don't think that local minima are a fundamental problem. After
all, cockroaches have been in a local minimum for a few hundred million
years, haven't they? Of course all we know about changes in cockroaches is
the form of the external shell; for all we know, 200 million years ago they
were all stimulus-response systems. There's no record of changes in
organization.

>The notion that rate of mutation is influenced by the individual's success
>in controlling intrinsic variables makes a great deal of sense. It could
>(and probably would) affect both strands of DNA.

Yes. It could, of course, be less direct than that: the settings of
intrinsic reference levels might affect variables at an even lower level.
It's hard to see through all the layers.

>Also, your point that more than DNA is transferred between generations,
>especially from the mother, is a good one to keep in mind.

That ought to give the feminists some new ammunition.

Best,

Bill P.