[From Bill Powers (971129.0506 MST)]
I've been re-reading some of Gary Cziko's stuff, and once again find myself
unhappy with the theory of natural selection. I realize that anything I say
against natural selection is bound to be taken as encouragement by those
who believe in divine selection, but that can't be helped. I'll just say
that I think creation theory has far worse problems, and hope that is
enough to rule out one interpretation of my intentions.
The problem with natural selection is the same problem Martin Taylor and
others have pointed out with respect to reorganization theory. As it
stands, reorganization theory says that departures of certain internal
variables from genetically-set reference levels result in random changes of
organization in the brain, which cease only when all intrinsic errors fall
below some threshold or go to zero. Many, including me, see a problem of
specificity here -- how can random changes result in just those
reorganizations that are needed to correct intrinsic error, at the same
time leaving intact the results of all past useful -- essential --
reorganizations?
If this is a problem with brain models, how much more of a problem is it
when we consider the entire body and the genome that results in
construction of the brain? Can random variations in the genome really
accomplish the building of organization that we see in the historical
record? Has there really been enough time for this to happen through purely
random changes?
When I came across the E. coli phenomenon I thought the problem was solved.
Here we have a random process which, through _systematic_ selection of its
results, effectively biased the whole system so that the randomness was
largely overcome. The speed with which E. coli progresses up the gradient
of nutrients is enormously greater than the speed with which the random
changes in direction alone could achieve that progress. In fact, truly
random changes in direction, with no added selection process, would be just
as likely to take E. coli in any other direction. It's only the existence
of an internal criterion -- that the concentration should be increasing --
that accounts for the bias.
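As a minimal sketch of that bias -- my own construction, not Powers's actual model; the nutrient field, starting point, and step sizes are all assumptions -- consider a tumble-and-run walker that keeps swimming while the sensed concentration is increasing and tumbles to a random new heading otherwise:

```python
import math
import random

def concentration(x, y):
    # Assumed nutrient field: concentration increases toward the origin.
    return -math.hypot(x, y)

def swim(steps=2000, biased=True, seed=0):
    """Tumble-and-run walker. With biased=True it tumbles only when the
    sensed concentration stops increasing (the internal criterion);
    with biased=False every step ends in a tumble -- a pure random walk."""
    rng = random.Random(seed)
    x, y = 50.0, 50.0                      # start well away from the peak
    heading = rng.uniform(0.0, 2.0 * math.pi)
    last_c = concentration(x, y)
    for _ in range(steps):
        x += math.cos(heading)             # swim one unit in a straight line
        y += math.sin(heading)
        c = concentration(x, y)
        if not biased or c <= last_c:      # criterion violated: tumble
            heading = rng.uniform(0.0, 2.0 * math.pi)
        last_c = c
    return concentration(x, y)             # higher (closer to 0) is better

print(swim(biased=True))
print(swim(biased=False))
```

The tumbles themselves are just as random in both cases; only the systematic selection of when to tumble produces the climb up the gradient.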
One would be tempted to say that a similar biasing occurs in natural
selection, since it is reproductive fitness that selects for specific kinds
of changes and against others. But as I found in applying the E. coli
principle to adaptation of control systems, there's a hidden requirement
that determines how well this sort of random variation and selective
retention process works. There must be a certain kind of continuity in the
changes.
I think one requirement is that small changes must have smaller effects on
the outcome than large changes have. In the case of E. coli, the
environmental geometry seems to provide the necessary continuity. A small
change in swimming direction has a small effect on the sensed gradient. The
result is that E. coli can tell the difference between a change that calls
for another change right away, and a change that can be ignored.
But this continuity requirement goes deeper than that. After a tumble, E.
coli swims in essentially a straight line. The result of a tumble is not a
random change in the critical variable, but a change in the _direction_ in
which the variable _continuously changes_. What I found in applying the E.
coli principle to reorganizations of multiple control systems was that the
random variations didn't work out right if I simply chose new values of the
parameters at random during each reorganization.
Consider "parameter space," with as many dimensions as there are
parameters. A truly random change of parameters would produce a series of
jumps in parameter space, with a jump to any location in the space being as
likely as a jump to any other location. So, for example, if the range of
possible loop gains was from 0 to 1000, the next loop gain might be 2 or
700. The result would be that if a change made the loop unstable, the
control system would cease to operate and there would be no chance of
recovery. In other words, we would be back to "survival of the fittest,"
with an unfit control system simply being eliminated (probably along with
the organism in which it exists). There would be nothing like a gradual
tuning of the control system to make it work better and better.
I found that what I had to do was imagine a point "swimming" through
parameter space in some specific direction at some specific (slow enough)
speed. When a reorganization occurred, no parameter changed instantly to a
randomly different value. Instead, _the direction of movement through
parameter space_ changed. I found also that convergence was made more
likely if the speed were proportional to the errors that determined the
frequency of reorganizations, but that was a refinement. The main
requirement was that the parameters change _continuously_ (or at least in
very small increments) rather than in large random jumps.
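The contrast can be sketched as follows. This is illustrative code of my own, not Powers's simulations: the three-dimensional parameter space, the quadratic "intrinsic error" function, and the step constants are all assumptions.

```python
import random

def reorganize(error_fn, low, high, dims=3, steps=4000, continuous=True, seed=1):
    """E. coli-style reorganization. In continuous mode the parameter point
    'swims' slowly and a reorganization changes only its direction of motion;
    in jump mode a reorganization replaces the parameters with a fresh random
    point anywhere in the allowed range."""
    rng = random.Random(seed)
    params = [rng.uniform(low, high) for _ in range(dims)]
    direction = [rng.gauss(0.0, 1.0) for _ in range(dims)]
    last_err = error_fn(params)
    for _ in range(steps):
        if continuous:
            # Speed proportional to error (the refinement mentioned above),
            # capped so every change stays a small increment.
            speed = min(0.01 * last_err, 0.5)
            params = [p + speed * d for p, d in zip(params, direction)]
        err = error_fn(params)
        if err >= last_err:                # error not improving: reorganize
            if continuous:
                direction = [rng.gauss(0.0, 1.0) for _ in range(dims)]
            else:
                params = [rng.uniform(low, high) for _ in range(dims)]
        last_err = err
    return error_fn(params)

# Hypothetical intrinsic error: squared distance from a fixed reference point.
target = [1.0, -2.0, 0.5]
intrinsic_error = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))

print(reorganize(intrinsic_error, -5.0, 5.0, continuous=True))
print(reorganize(intrinsic_error, -5.0, 5.0, continuous=False))
```

The continuous version converges toward the reference point because a wrong direction costs only one small step before the next reorganization; the jumping version never tunes anything, since each reorganization throws away whatever organization had been achieved.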
I think it's obvious why this is necessary. A large change in parameters
might be necessary in the long run, but paradoxically, if the system is
allowed to change its parameters in large increments, the chances that it
will destroy itself before finding the _right_ large change are overwhelming.
Thus the Principle of Continuity. The changes of organization involved in
reorganization must occur in small increments, so that no parameter can
change very much during a single reorganization. Coupling this with some
systematic way of varying the frequency of reorganization, we have a system
that can detect a change in a wrong direction and make another change
before the deleterious effect of the wrong direction can materially affect
the system. The system can recover from a wrong reorganization.
Now apply these thoughts to natural selection. If mutations were purely
random, we would expect large changes in organization with every mutation.
The individual offspring would most likely either survive or die outright,
with the chances of dying being very much the larger. The organism would get
just one chance to come up with a viable organization, and the less likely
that organization was to arise through a single large change, the less likely
it would be that _any_ mutation would result in survival. The organism does
not have to deal only with changes in its environment; it has to deal with
the effects of changes in its own organization. And if those changes are
too large, there would be no way to select efficiently for a beneficial
change.
If large changes have a low probability of allowing survival, then by the
basic principle of natural selection, organisms in which large changes can
occur would quickly be selected out by their failure to survive. Their
survival rate would be too low. So it stands to reason that species which
do survive over many generations must consist of individuals in which the
effects of mutations on viability occur in small increments. A drosophila
in which the _eyeless_ mutation occurred can't survive at all. But if the
mutation simply altered the placement or size or function of the eye by a
small amount, that drosophila line could propagate for many generations, to
give the effect on fitness a chance to show its magnitude.
What this implies is that the real story of mutation lies in the small
changes, not the large ones. The largest changes, those that produce
effects dramatic enough to be visible to our crude observing techniques,
are not the kind that can lead to systematic tuning of organization. They
are more likely simply to be wiped out by natural selection in a single
generation. They represent the tails of the distribution of changes, their
frequency being what one would expect of a long series of unlucky small
mutations.
The main problem here is that we can currently observe only the largest and
most obvious mutations, especially since we have no good models of
organization that might allow us to see subtler changes. The laws that
apply to large mutations are quite likely to be different from those
involved in small ones.
As an analogy, consider the _crashless_ gene, which we could infer from
watching traffic. A mutation in _crashless_ could be observed by
investigating accidents -- but this would not give us much insight into how
a driver who does not crash manages to stay on the road. The _crashless_
gene idea would give us only the crudest picture of what is going on. For
all we know, the _crashless_ gene is really the _steering loop gain_ gene,
or the _staying awake_ gene.
Suppose that the mutations we can observe are only the extremes, with a
multitude of much smaller mutations occurring continuously and in a way
that is systematically related to internal selection criteria. My proposal
about those criteria is that they are related not to survival, but to
control: to the ability to make effects on the organism be what the
organism wants them to be at some basic cellular level. I can't be any more
specific than that, but I think it's clear that within the broad circle I
have drawn, a basic explanation of evolution is to be found.
Reproductive efficiency above some level is, of course, required for
survival of a species. But if the rate of reproduction becomes _too_ large,
the species can exhaust its resources and fail to survive for that reason.
So reproduction rate can't be the ultimate criterion; there is, under any
specific set of circumstances, an optimum rate of reproduction, with either
much lower or much higher rates being disadvantageous.
I propose that we substitute for the old idea a new definition of fitness:
the capacity to control. It follows that organisms which control better are
more likely to continue reproducing at a sufficient rate for the species to
survive, so we would expect rate of reproduction to be one of the variables
that needs to be controlled, just as it is necessary to control for
breathing and locomotion. But what is critical to maintain is control. One
could say that "survival" is merely a side-effect of good control. If too
high a reproduction rate were interfering with control, it would be the
reproduction rate that changed, not the ability to control. Or the species
could even change itself into a different species, so the original species
would be allowed to die off. What matters is to maintain control, not to
maintain any particular form.
As I said in the World Futures paper, one of the effects of control is to
counteract the effects of disturbances on controlled variables. If some of
the basic controlled variables in all living systems are conditions that
affect accuracy of replication, then organisms will replicate more
accurately as disturbances of those basic variables are better resisted.
This puts "survival" in a different light. Survival can mean only
_continuing to control_. It has nothing to do with survival of a species,
because organisms change their species when disturbances that affect
accuracy of reproduction fail to be resisted. If you focus on a particular
species as that which has to survive, then a change of species is
equivalent to extinction of the former species, but in fact control
continues -- it is control that survives, not species.
Best,
Bill P.