Experience and reality; evolutionary efficiency

[From Bill Powers (940910.1100 MDT)]

Bruce Buchanan (940909.2040 EDT)--

I completely agree with you that the basis of everything is experience;
that is what is real. All experiments, all theories, have to agree with
what we experience or we reject their results, quite rightly. Even in
quantum mechanics, the ultimate test of any theory is whether the theory
(plus all the instruments based on it) yields up to human perception some
phenomenon that is predicted, even if it's only a curved track in a
bubble chamber or a reading of "0.227" on a meter readout. But this
leaves experience itself unaccounted for: it is a phenomenon without an
explanation.

In making sense of experiences like "an apple" we have more to account
for than just the apple. We also have experiences relating to the lens
of the eye and lenses in general, relating to exploration of the retina
with microscope and microelectrode, relating to experiments with light
carried on with timing devices, photometers, and spectrographs, and
relating to observations of people suffering various kinds of lesions in
the brain which we can see by various means, including autopsy. These
are all experiences, just as real as any other kind.

From all these experiences, we gain a very strong impression that for
someone else to perceive an apple, it is necessary for a whole train of
physical and neural processes to occur, each one of which is critical.
If all of them occur except the neural signal in the brain, no perception
of an apple will occur. Clearly, conscious perception requires that the
chain be intact all the way to the occurrence of neural signals in the
brain. The processes prior to the existence of neural signals, no matter
how flawlessly carried out, are not sufficient to give rise to a
perception, an experience. Whatever the conscious perceiver is, the
final process just prior to conscious experience is the occurrence of a
neural signal. Furthermore, from experience with brain stimulation and
reports of experience made by people undergoing it, we know that the
artificial production of neural signals in various parts of the brain is
sufficient to give rise to conscious experiences of hallucinatory
clarity. We therefore know that presence of a neural signal is both
necessary and sufficient to create conscious perception, _in every
sensory modality_.

Having determined that this is true for other people, we must conclude
rationally that it is true of ourselves as well. The only alternative is
to think of ourselves as unique creations, different from all other
people. But having acknowledged this, we must then backtrack the
reasoning that led to this conclusion.

If the experience of an apple requires presence of a neural signal, then
so does experience of a lens in someone's eye, of a microscope image of
a retina, of the reading from a microelectrode telling us that a neural
signal is present, and of the outputs of instruments like photometers
and spectrographs telling us about light. Each of these experiences
depends just as much on presence of the right neural signals as does the
experience we are trying to explain, the apple. In fact, all observable
counterparts or predecessors of experiences are examples of the same
kind of explanatory problem we began with, explaining the experience of
the apple. It does no good to give one sensory modality a special status
(as Gibson did), or to say that something is real if it is "tangible"
(i.e., if we can obtain touch sensations from it). The situation we are
faced with is that every attempt to explain an experience comes up
against one special kind of experience: what we have learned about
nervous systems.

This is not a philosophical problem. It is a simple problem of reasoning
about our experiences, considering all of them instead of one at a time.
We would have the same problem no matter what philosophical stance we
endorsed. A naive realist would be forced to the same conclusions that
Bishop Berkeley would have to accept, if they had both made the same
observations. In fact, about the only use that a philosophical
discussion of these matters might have is to find a way around the basic
problem, so we could continue to believe that experience was something
other than the occurrence of neural signals in spite of the evidence.

In terms of the evidence of direct experience alone, there is only one
honest way around the problem, and that is to accept the most obvious
and simplest explanation: what we experience is a set of neural signals.
This is the only answer that an observer with no stake in the outcome,
making no attempt to look ahead to the implications of this finding,
could reach.

All the arguments against this simple conclusion begin, in effect, by
saying "But if that is true, then [some unacceptable conclusion would
follow]". The unacceptability of the conlusion is taken as sufficient
reason to deny the argument, just as one refuses to believe the bank
statement saying that one is overdrawn because then it would follow that
one is insolvent. This is a psychological response that is quite
understandable, but irrelevant in any search for a viable model of
experience. If we can't trust our most simple and straightforward
processes of reasoning about direct experiences, how can we possibly
trust anything more complex?

Where would you take the argument from here? Is there anyplace else to
go?


-----------------------------------------------------------------------
Martin Taylor (940909.1730)--

This sounds as though you are thinking of an organism improving the
genes that affect its OWN functioning, whereas my understanding is that
the organism's genetic structure is determined at birth.

What matters is the control system. At the level of genetics, control
systems can go on operating right across generations. The structures are
passed along not only in the genes, but in mitochondria and other parts
of the cellular material that is shared during reproduction. Even though
genetic material in higher organisms is sequestered soon after birth, it
is still subject to influence through nurse cells and the general
effects of life support systems, and of course it is highly susceptible
to environmental influences prior to sequestration. In higher sexually-
reproducing organisms, evidently the genetic material is specially
protected against environmental influences, and genetic mixing plays a
much larger role. But genetic mixing is highly susceptible to the events
of a single lifetime, to behavioral interactions with other members of
the same species and with the environment, so protection of the genome
from external influences is more than offset by exposure of the new
method of variation to external disturbances.

Imagine a genetic control system existing in DNA, controlling for some
basic chemical variable that signals the state of the system in some
regard. This control system is repeated in many organisms. After a
generation has gone by, we now have variants on this control system,
controlling for slightly different variables or in slightly different
ways, some of which work better than others for keeping the critical
variable under control in the existing environment. Those that work best
simply continue to propagate down the generations; those that work less
well allow more error, and that error acts to shorten the time to the
next mutation or variation. As a result, the worse variants spend less
time propagating before a new form appears than do the best variants.
Of course some of the second-generation variants will also be worse, so
they will "mutate" again sooner than the others. The better variants
will continue to propagate longer before the next mutation.

Note that this model allows for evolution without survival time playing
any part. It would play a part, of course, but the effect of which I
speak is independent of survival time.

This is how I see the timing of the mutations as being connected to the
detection of critical errors in the genome. All laws of inheritance work
just as before, all probabilities of producing good and bad variants
remain exactly the same. But the relationship of the timing of mutations
to intrinsic errors creates, as we know from our E. coli simulations, an
extremely strong bias in terms of movement per unit time toward a better
form of behavior. A strong directionality is imposed on what otherwise
would be a random walk.
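A toy simulation (my own sketch, not any CSG program; the dwell-time
formula and step size are made-up illustrations) can show the timing
effect in its barest form: let a single "trait" value drift by blind
mutation, but let low-error variants persist longer before the next
mutation than high-error ones. Counted per mutation the walk is
unbiased; counted per unit time, the system is found near the good
values far more of the time.

```python
import random

random.seed(1)

def time_weighted_error(error_timed, n_mutations=20000):
    """Blind random mutation of a trait q whose ideal value is 0.
    error_timed=True: each variant persists for a dwell time that
    shrinks as its error |q| grows (worse variants are replaced
    sooner).  error_timed=False: every variant persists equally long.
    Returns the mean error weighted by time, not by mutation count."""
    q = 5.0
    total_time = 0.0
    error_x_time = 0.0
    for _ in range(n_mutations):
        e = abs(q)
        dwell = 1.0 / (0.1 + e) if error_timed else 1.0
        error_x_time += e * dwell
        total_time += dwell
        q += random.gauss(0.0, 0.5)   # the mutation itself is undirected
    return error_x_time / total_time

blind = time_weighted_error(False)
timed = time_weighted_error(True)
print(blind, timed)   # per unit time, the timed system sits at much lower error
```

The mutations themselves are statistically identical in the two runs;
only their timing differs, which is the whole point of the argument
above.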

Perhaps the key to understanding this method of controlled evolution is
similar to the key to Zeno's Paradox. In Zeno's paradox, the key is to
consider time, not just events. If you simply count halving the
remaining distance as an event, then an infinite number of events must
take place before the tortoise gets to the wall. But if you consider
events per unit time -- that is, velocity -- the paradox disappears.
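The arithmetic behind that resolution is easy to check (a trivial
sketch): if the distance is covered at constant speed, each halving
takes half as long as the one before, so an unbounded number of
"events" fits into a finite total time.

```python
# Zeno's halvings at constant (unit) speed: infinitely many events,
# but a finite total time, because the event durations form the
# geometric series 1/2 + 1/4 + 1/8 + ...
remaining = 1.0          # one unit of distance left to the wall
total_time = 0.0
for _ in range(50):      # 50 halvings already exhausts float precision
    step = remaining / 2.0
    total_time += step   # at unit speed, a step's duration equals its length
    remaining -= step
print(round(total_time, 12))   # → 1.0: the total time converges
```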

In controlled evolution, if you just look at each mutation without
considering when it occurs, you end up with a certain distribution of
probabilities that the result will be in a good direction. If all the
probabilities are equal you get a random walk with no systematic
progression in any direction. But if you consider progress per unit
time, you see that the organism, or species, spends less time
progressing in wrong directions than in right directions, which is where
the bias comes from. We know that this bias can be at least half as
effective as simply steering in the right direction.

Mind you, I think you are worrying about a non-problem when you say
that simple random variation and selection couldn't work fast enough
to account for the emergence of the complexity you see in only 4
billion years. That's a loooong time.

It doesn't seem like a long time to anyone with a computer. My computer
does 4 billion machine cycles in two minutes. For a very simple
microorganism or protolife molecule, of course, it is a very long time,
hundreds or thousands of billions of generations. Even at that scale,
however, you have to weigh the generations against the probabilities of
chance variations resulting in a new organization which is more viable
in a given environment, and then the chances of a succession of changes
that increase viability among the already-more-viable. We may in fact be
talking about chances like 1 in 10^9 or far less. In simulations, of
course, we pick probabilities that will yield some results in the length
of time we care to wait for them, which means probabilities like one in
a few hundred. We pick probabilities that give results on a time-scale
comparable to what we seem to find in natural evolution, in terms of
number of generations.

But this may be entirely unrealistic; this is the point I'm trying to
get across. You can _say_ that the result of a mutation has one chance
in two hundred of creating a five percent increase in survival time, but
arranging for actual mutations in real organisms, in real environments,
to have that kind of success rate is an entirely different matter. The
real success rate, considering only undirected changes, might be one in
billions. Yet you still might find that the organisms evolve as if the
success rate were one in thousands, because _some other mechanism of
much greater efficiency is in operation_.

The difference is easy to see in the E. coli simulation. All you have to
do is set up the system so the timing of changes in direction is
independent of the consequences. Excuse me a moment and I'll go do that
very thing. We start with the organism about 10 cm from a 0.5 cm square,
and run the simulation disabling the dependence of "tumble" frequency on
error. What happens? Back in a flash.

Well, I got tired of waiting. In the systematic control mode, the
organism reaches the small square in around 50 to 100 time units. In the
nonsystematic mode, there were about 20000 time units without the
organism ever passing across the target (before I gave up). Obviously,
since the area of the target is only about 1/3000 the area of the screen
(the organism reflects back off the edges of the screen), the chances of
hitting it at random are pretty small. Without wrapping around at the
screen limits, I think the chances of getting to the target would be
infinitesimal. We are talking about a MONSTROUSLY BIG difference in
efficiency here.
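For readers who want to try the comparison themselves, here is a
minimal reconstruction (not the original program; the step size,
tumble rate, and target radius are my own stand-ins, and there are no
reflecting screen edges, which if anything handicaps the blind case
further, as noted above). In the controlled mode the organism tumbles
the moment its distance to the target starts growing; in the blind
mode tumbles come at a fixed rate regardless of consequences.

```python
import math
import random

def steps_to_target(controlled, seed=0, max_steps=20000):
    """E. coli-style locomotion: straight runs broken by 'tumbles'
    (random new headings).  controlled=True: tumble whenever the error
    (distance to target) has just increased.  controlled=False: tumble
    at a fixed rate, independent of the error.  Returns the number of
    steps taken to reach the target, or None on failure."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    tx, ty = 10.0, 0.0               # target ~10 units away, as in the text
    heading = rng.uniform(0.0, 2.0 * math.pi)
    last_d = math.hypot(tx - x, ty - y)
    for step in range(1, max_steps + 1):
        x += 0.1 * math.cos(heading)
        y += 0.1 * math.sin(heading)
        d = math.hypot(tx - x, ty - y)
        if d < 0.25:                 # close enough: inside the target
            return step
        tumble = (d > last_d) if controlled else (rng.random() < 0.05)
        if tumble:
            heading = rng.uniform(0.0, 2.0 * math.pi)
        last_d = d
    return None

print(steps_to_target(True), steps_to_target(False))
```

On typical seeds the controlled run arrives in on the order of a few
hundred steps, while the blind run usually exhausts its 20000 steps
without ever crossing the target, echoing the 50-100 versus 20000
figures above.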

You probably know about the moths (butterflies?) in England that often
rest on tree trunks, and that changed from mostly white to mostly black
after the industrial revolution made the trees sooty, and have now
changed pretty well back again since the anti-smoke laws of the 1960s
were enacted. The selection there is by birds who can see moths that
contrast with the tree trunks better than moths that blend in with
their background.

This is a natural-selectionist's favorite example (along with the
Samurai crabs), but it is based on assuming that natural selection is
responsible. It would be very interesting to see what would happen if
the moths were protected against the birds. The assumption is, of
course, that the moths would stay the same color. But suppose there were
some long-term chameleon-like control process going on, so the moths
simply adapted, for quite other reasons, to have the same color as their
backgrounds. These moths, after all, have the same amount of evolution
behind them as we have behind us, and that simple change would be no
great trick even at the genetic level. Maybe the moths have learned to
avoid mating with other moths which contrast too much with their
backgrounds. Of course birds would eat more of the moths that contrasted
with their backgrounds, so that population would decrease, but that
could have nothing whatsoever to do with the change of color. The moths
might change color anyway.

The natural-selection explanation might well be right, but in fact
people leapt on this example because it seemed to prove their point, and
they never did the experiments that would be needed to admit this
conclusion to the status of a scientific fact. I have always been under
the impression that the first thing a scientist is supposed to do with
his own ideas is look for ways to challenge them, if only to save later
embarrassment. Mere plausibility of an explanation isn't enough. But I
don't see much of this in evolutionary circles.

...if you like to call this control, I won't argue. It's the same kind
of control as in the reorganization in an individual. But the same
effect occurs without changes in the mutation rate, by selection.

My whole point is the difference in efficiency. The "same"
(qualitatively) effect might happen without changes in the mutation
rate, but it will happen thousands or millions of times slower in many
cases (obviously not in all cases). I am saying that natural-selection
buffs assume a high enough efficiency to produce the evolution rates
that are seen, and take this as a proof that blind natural selection is
sufficient. In simulating natural selection, they use probabilities that
will give reasonable rates of evolution. But what is missing is any
demonstration that those probabilities are anywhere near reasonable. So
the demonstrations are empty. And this does not even consider the
unwitting use of the E. coli method in simulations where the authors
think they are using blind natural selection.

This seems a simple point to me. What makes it so hard to understand?
----------------------------

If I want my genes to have an enhanced chance of surviving to another
generation, I can make that true for 98.4% of them by saving a
chimpanzee who then goes on to reproduce.

This is an excellent point that I haven't seen before. It touches again
on a quantitative question. Selection effects have to be pretty
discriminatory to have much influence on survival. If altruism works
just as well as selfishness, then that whole dimension becomes suspect.

The whole problem with qualitative arguments is that everything usually
rests on the numbers. Yes, you can preserve 98.4% of your genes by
taking a starving chimp to dinner, but to what _degree_ does this help
preserve your genes? I would say we're talking numbers on the order of
astrological effects. Sure, the position of a planet against a certain
stellar background has an effect on you. But HOW MUCH effect?
---------------------------------------------------------------------
Best,

Bill P.

[Martin Taylor 940912 12:00]

Bill Powers (940910.1100 MDT)

I can't disagree, nor do I necessarily agree, with most of your posting
on evolutionary efficiency, but I must most strongly DISagree with one
comment, and I think that disagreement is at the core of why you are
interested in the notion of controlled evolution.

All laws of inheritance work
just as before, all probabilities of producing good and bad variants
remain exactly the same. But the relationship of the timing of mutations
to intrinsic errors creates, as we know from our E. coli simulations, an
extremely strong bias in terms of movement per unit time toward a better
form of behavior. A strong directionality is imposed on what otherwise
would be a random walk.
...
In controlled evolution, if you just look at each mutation without
considering when it occurs, you end up with a certain distribution of
probabilities that the result will be in a good direction. If all the
probabilities are equal you get a random walk with no systematic
progression in any direction. But if you consider progress per unit
time, you see that the organism, or species, spends less time
progressing in wrong directions than in right directions, which is where
the bias comes from. We know that this bias can be at least half as
effective as simply steering in the right direction.
...
Even at that scale [billions of years; MMT],
however, you have to weigh the generations against the probabilities of
chance variations resulting in a new organization which is more viable
in a given environment, and then the chances of a succession of changes
that increase viability among the already-more-viable. We may in fact be
talking about chances like 1 in 10^9 or far less.

It is simply not true that you need an equal probability of good mutations
for the density of the good form of the gene to increase in the population.
If you have a probability of 1/10^9 that any individual will experience
a good mutation and a population of 10^9 individuals, the expected number
of good mutations is one per generation. In a stable population, that one
individual will have an expected number of surviving carrier offspring in the next
generation greater than one. Of course, there may be none, but a good
mutation may happen in the next generation, and on average good mutations
show an exponential increase in their representation in the population,
whereas bad mutations show an exponential decrease.

The statistics of small numbers means that this average expectation will
not actually be observed. Lots of good mutations will die out forever, and
bad ones flourish for a short time. But some good ones get lucky initially
and establish a large enough population that the actual increase comes
closer and closer to the expected exponential. This never happens with the
bad mutations, since the larger the population of carriers, the greater
the probability it will decrease in the next generation.
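This lucky-start effect can be sketched as a small branching-process
simulation (my own toy model, using the two-children-per-parent picture
Martin uses below): each carrier has two children, each of whom
inherits the gene independently with probability q. Here q = 0.5 is the
neutral case, q = 0.525 a "good" gene (5% more expected carrier
offspring), and q = 0.475 a "bad" one.

```python
import random

def gene_survives(q, rng, generations=100, established=5000):
    """Branching process: one initial carrier; each carrier has two
    children, each inheriting the gene independently with probability
    q.  q = 0.5 is the neutral case (p(0) = 0.25, one carrier child on
    average).  Returns True if copies remain after `generations`, or
    if the gene first becomes firmly established."""
    n = 1
    for _ in range(generations):
        if n == 0:
            return False
        if n >= established:   # so many copies that total loss is
            return True        # effectively impossible: the ratchet
        n = sum(1 for _ in range(2 * n) if rng.random() < q)
    return n > 0

rng = random.Random(3)
trials = 1000
good = sum(gene_survives(0.525, rng) for _ in range(trials)) / trials
bad = sum(gene_survives(0.475, rng) for _ in range(trials)) / trials
print(good, bad)   # most new mutations die out either way, but the
                   # 'good' gene persists far more often than the 'bad'
```

Most runs end in early extinction in both cases, exactly as the
paragraph above says; the difference only tells once a lineage survives
the vulnerable first generations.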

Think numbers. If there are, at some generation g, gNx carriers of a gene x,
then at generation g+1 there will be some other number of carriers (g+1)Nx.
One cannot determine how many, but one can estimate a probability distribution
based on the distribution of the number of surviving carrier offspring for
each individual. If p(0) is the probability that a given individual has
no carrier offspring, the probability that the gene dies out (none of the
carriers have carrier offspring) will be p(0)^(gNx). If gNx = 1 (the case
for the individual who experienced the initial mutation), then the
probability that the mutation dies out is precisely p(0).

If the mutation does not die out, it is carried by one or more offspring,
with probabilities p(1)....p(n) where n is the largest number of surviving
offspring an individual can possibly have (perhaps 30 for a woman, 10,000
for a man, 10^6 for a fish...). In a stable population, the average
number (the first moment of the distribution p) is unity; for a good
mutation it is greater than unity (that is what "good" means here).
How this increase in the first moment is achieved is not an issue. It may
be that the probability of non-survival goes up, as does the probability
of multiple survivals (have triplets more often, but be less likely to
conceive, for example). One can't tell. All one knows is that
the sum over i from 0 to n of i*p(i) is increased.

Now we go back to the probability of the "good" gene dying out. It doesn't
depend on the whole distribution, but on gNx and p(0). The probability
is, as mentioned above, p(0)^gNx. Let's consider an example appropriate
to humans, where for a stable population each person born has two children
on average, each shared with another person, so that a person's unique genes
have a probability 0.25 of not appearing in the next generation. If
a mutation happened some generations previously and has survived in C copies,
the probability of it not appearing in the following generation is 0.25^C.
If there are 5 carriers, that probability is less than .001. And a
mutation that changed each copy's per-generation non-survival
probability by 5% would shift that die-out probability by about 25%
(.000976 to .001246 for a 5% increase).
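That arithmetic can be checked in a few lines (Python just for
illustration; note that the .001246 figure corresponds to a 5% increase
in the per-copy non-survival probability of 0.25):

```python
# Die-out probability with C carriers: each carrier's unique genes miss
# the next generation with probability 0.25 (two children, a coin flip
# each), and the gene disappears only if this happens to every carrier.
p0 = 0.5 ** 2                        # = 0.25
for carriers in (1, 2, 5):
    print(carriers, round(p0 ** carriers, 6))
# → 1 0.25 / 2 0.0625 / 5 0.000977 ("less than .001", as stated)
# A 5% change in the per-copy probability moves the 5-carrier figure ~25%:
print(round((p0 * 1.05) ** 5, 6))    # → 0.001246
```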

No matter what the gene, if it isn't strongly lethal, and makes only a small
change in its own survival probability, its representation over the generations
will fluctuate strongly, at least while the numbers of its copies are small.
But as the numbers increase (by random fluctuation), the probability
distributions of membership in the next generation are convolved, and
the numbers become more and more predictable.

The overall result is a ratchet effect, in which "bad" genes are almost
always eliminated, but "good" genes that once occur and develop more than
a few (say 10-20) copies will be unlikely ever to disappear. It is in that
first stage of acquiring a few copies that a mutated gene is most vulnerable.
It matters not a whit whether the probability of a good mutation is 1/2 or
1/(2*10^9). The "good" genes will overwhelm the bad numerically after a few
generations.

You have a tendency to confound what is good for the organism with what is
good for the gene. The two often go together, but they don't have to.
The soldier who dies in a successful defence of his tribe probably has done
more to ensure that his genes propagate than he would have by impregnating
every woman in the village.

The whole problem with qualitative arguments is that everything usually
rests on the numbers. Yes, you can preserve 98.4% of your genes by
taking a starving chimp to dinner, but to what _degree_ does this help
preserve your genes?

98.4% precisely. (That number seems to change depending on what authority
you listen to, but nobody seems to give a number less than 96%. Some go
as high as 99.5%.)

You say:

I would say we're talking numbers on the order of
astrological effects. Sure, the position of a planet against a certain
stellar background has an effect on you. But HOW MUCH effect?

Which suggests that you missed the point utterly. The point is that the
chimp HAS 98.4% of MY genes, and those genes survive if the chimp has
carrier offspring. My genes survive not only through my offspring, but
through the offspring of every living thing that carries the SAME genes.

I personally get zero intrinsic benefit from having my genes survive. All
the benefit I get is from what would be called side-effects if we were
talking about a control system.

If altruism works
just as well as selfishness, then that whole dimension becomes suspect.

I'm not at all clear what "dimension" you are thinking of. The "selfishness"
of a "selfish gene" is not expressed (necessarily) through the selfishness
of the carrier of the gene. It can be expressed equally well by altruism,
isolation, aggressiveness, .... depending on the circumstances. As with
reorganization of a control system, what works, works, and there's an end
to it. Genes that lead to organisms that behave in a way that enhances
the gene's propagation probability are those we see most of. That's the
only "selfishness" that matters.

Martin