Genetic algorithm in behavioral modeling

[From Bill Powers (920312)]

Hello, Randy --

Thanks for the paper on the Genetic Algorithm and your use of it with
dynamical neural networks. As I understand it, you map selected
characteristics of a neural network (like connection weights) into a
"genome" in which each "gene" is a small number of bits. Then by using a
model involving random mutations and cross-exchanges of genes between
reproducing pairs, you get variations in the behavioral characteristics
from generation to generation. There has to be some selection pressure to
weed out incompetent individuals -- as I understand it, in your model this
criterion would be whether a given individual can get to a food patch by
some sort of chemotaxis (I'll limit my remarks to the first model -- my
comments would apply as well to the locomotion model).

The model of chemotaxis you start with (I'm repeating all this both to
check my understanding and to summarize for others on CSGnet) has a body
with two chemosensors on it and two effectors, one of which moves the body
while the other turns it, with velocity proportional to force. There's a
six-node neural net inside, receiving the two chemical signals and emitting
the two motor signals. The connectivity of the network allows 24 parameters
to be adjusted, with 4-bit accuracy; a 96-bit "genome".
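Since the mapping is easier to see in code than in prose, here is a small Python sketch of how a 96-bit genome of 4-bit genes might be decoded into 24 network parameters. The function names and the parameter range of [-1, 1] are my own illustrative assumptions, not Beer's actual code.

```python
# Hypothetical sketch: 24 parameters, each a 4-bit gene, 96 bits total.
# The [-1, 1] scaling range is an assumption for illustration.
import random

N_GENES = 24
BITS_PER_GENE = 4

def random_genome():
    """A genome is a list of 96 bits (0 or 1)."""
    return [random.randint(0, 1) for _ in range(N_GENES * BITS_PER_GENE)]

def decode(genome, lo=-1.0, hi=1.0):
    """Map each 4-bit gene to a parameter value in [lo, hi]."""
    params = []
    for g in range(N_GENES):
        bits = genome[g * BITS_PER_GENE:(g + 1) * BITS_PER_GENE]
        value = int("".join(map(str, bits)), 2)          # 0..15
        params.append(lo + (hi - lo) * value / (2 ** BITS_PER_GENE - 1))
    return params

genome = random_genome()
weights = decode(genome)
```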

To evaluate fitness, you do many runs with varying starting conditions, and
record the squared distance from the food patch at the end of a set number
of runs (puberty?). While you don't describe what happens next, I presume
that if a fitness threshold is passed, "reproduction" takes place among the
survivors, with mutation and gene-swapping to produce individuals for the
next generation of trials. The ones that don't reach the survival threshold
are deleted.
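To make my reading of that loop concrete, here is a guessed-at sketch of the generation cycle in Python. Beer's actual procedure isn't described in the paper as I have it, so the fitness stand-in (`simulate`), the population sizes, and the mutation rate are all illustrative inventions of mine.

```python
# Guessed-at generation loop: score genomes, keep the best, refill the
# population with mutated crossover offspring of survivors.
# `simulate` is a hypothetical stand-in for running the neural-net bug.
import random

random.seed(0)
GENOME_BITS = 96

def simulate(genome):
    """Stand-in fitness: negative squared 'distance' from food.
    Arbitrarily, a genome with more 1-bits counts as 'closer'."""
    return -(GENOME_BITS - sum(genome)) ** 2

def crossover(a, b):
    p = random.randrange(GENOME_BITS)
    return a[:p] + b[p:]

def mutate(g, rate=0.01):
    return [bit ^ (random.random() < rate) for bit in g]

pop = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
       for _ in range(30)]
for generation in range(100):
    pop.sort(key=simulate, reverse=True)
    survivors = pop[:10]                      # selection threshold
    pop = survivors + [mutate(crossover(random.choice(survivors),
                                        random.choice(survivors)))
                       for _ in range(20)]

best = max(pop, key=simulate)
```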

I salute the cleverness of this approach; clearly, given enough trials and
a small enough genome, it will autonomously produce individuals capable of
meeting the criterion. I also want to point out that this model is probably
related to evolution more in the manner of an allegory than of a
description, which you probably realize. It really doesn't make literal
sense to select for anything but successful arrival at the food patch --
merely being "closest" can't have any effect on natural selection. It's the
experimenter, not simulated nature, who decides to reward those who got
closest to the food by letting them reproduce. I suspect that the reason
for these rather lenient definitions of fitness is that using a realistic
definition (get to the food or die) resulted in immediate extinction of all
populations. Without some external intelligence watching progress toward
the required behavior and meting out reward and punishment according to the
"best tries," the Genetic Algorithm would never arrive at the correct
result save through chance mutation (at least in this application). And
even then, arrival may have been accidental and unrepeatable even using
that same organization.

This raises a real problem for evolutionary theory and its application to
complex organizations, which I'm sure I'm not the first to point out.
Whatever the adaptation, it must actually go all the way to success if the
real fitness criterion -- living to reproductive age -- is to be met. The
bug that succeeds in traversing only 99 millimeters of the 100 millimeter
distance between it and food will fail just as surely as the one that
starts immediately by going the wrong way. The fitness criterion of natural
selection is so crude that it can't distinguish degrees of success short of
complete success. If your chemotactic system is taken as a test of natural
selection as an evolutionary theory of behavior, I think you have shown
that this concept of the origins of complex behavioral organization is
weak. It can't work without an external observer who can see which
directions of change would be beneficial if given a chance to go further in
that direction.

This does not mean that the basic approach of GA is impractical -- I think
it is just mis-named. Everything that the GA approach does by applying
fitness criteria to successive generations of organisms can be done by
using a slightly different approach within a single organism in one
lifetime, an approach that I call "reorganization." I don't mean that
reorganization can substitute for natural selection. But it can select for
things like complex behavioral organization that strict natural selection
hasn't a hope of achieving. What is achievable through natural selection, I
think, is something more basic: production of organisms that can
reorganize. Until that level is reached, evolution will be extremely slow
and organized behavior will be very simple, much simpler than your
chemotactic organism.

Reorganization, I now see, works a great deal like the genetic algorithm,
but without the genes and without a binary pass/fail criterion. The
"fitness criterion" is a set of "intrinsic variables" being compared with a
set of "intrinsic reference signals." These variables can be defined much
as you would define fitness criteria, except that they are continuous
variables and not just binary events. The "intrinsic reference levels" are
the states of those variables specified as the "best" state. In fact, these
variables and their reference levels can be considered to be products of
natural selection in the usual way.

The result of comparing intrinsic variables against reference states is a
composite signal I call the "intrinsic error signal." This error signal
drives the process of reorganization. I speak here as if there is a single
reorganizing system, but the same principles would apply if there were
multiple reorganizing systems associated with smaller subdivisions of the
organism -- that's a matter best left open for research and creative
speculation.

The reorganizing output is a strictly random change in whatever part of the
system is affected by it (changes, for example, in parameters like the ones
you link to the bit-string "genome"). But it is not random in one and only
one respect: the rate at which the random changes are caused. That rate is
proportional to the intrinsic error signal's absolute value.

The overall effect is that when intrinsic error is large, random changes of
organization are caused frequently. If one of those changes results by any
means (or even by accident) in a reduction of the magnitude of intrinsic
error, the interval between reorganizations lengthens. Thus the
organization that was produced last persists a little longer before the
next reorganization. If there are systematic relationships between the
parameters being randomly varied and the various intrinsic variables, then
the result will be a biased random walk that will end with a state of very
small intrinsic error and an organization that is only infrequently changed
(assuming the organism doesn't die first).
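This biased random walk is easy to sketch in one dimension. In the sketch below the "organization" is a single parameter, the intrinsic reference level of 0.7 is an arbitrary assumption, and instead of a rate strictly proportional to error magnitude, this variant reorganizes (blindly picks a new direction of change) whenever the error fails to shrink, which captures the "interval lengthens when error drops" behavior described above.

```python
# One-dimensional sketch of reorganization as a biased random walk.
# All constants are illustrative assumptions.
import random

random.seed(1)
target = 0.7                  # intrinsic reference level (assumed)
param = 0.0                   # the organization being reorganized
direction = 1.0

prev_error = abs(target - param)
for _ in range(10000):
    param += 0.001 * direction
    error = abs(target - param)
    # If error is not shrinking, reorganize with high probability.
    if error >= prev_error and random.random() < 0.5:
        direction = random.choice([-1.0, 1.0])   # blind variation
    prev_error = error

# param has drifted close to the reference level and now hovers there
```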

This principle of reorganization was described in my 1973 book, Behavior:
The Control of Perception. It was an elaboration on W. Ross Ashby's
principle of "superstability" and Don Campbell's prior notion of "blind
variation and selective retention." But it wasn't until the 80s that I
found evidence that this kind of system would actually work efficiently
enough to be considered as a realistic possibility. Oddly enough, what
convinced me was a book on chemotaxis by Daniel Koshland.

Koshland studied E. coli, which exhibits efficient chemotaxis without being
able to detect the spatial direction of chemical gradients or steer. All E.
coli can do is swim at a constant speed in a straight line, or tumble
randomly in 3-space by reversing some of its flagella. E. coli is
sensitive to the time-rate of change of certain chemical concentrations
(over 20 kinds) in its vicinity. For an attractant, the rule is that when
the time rate of change is positive, the random tumbles are spaced far
apart, while for a negative rate, the tumbles occur close together in time
(with proportionality between rate of change and delay period). In a
gradient, the sensed time rate of change of concentration is determined by
the direction of swimming. This is all that is required: the random walk of
swimming segments brings E. coli up the gradient with about 70 percent of
the efficiency it would have if it could turn up the gradient and swim that
way (I did some modeling to verify this).
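For readers who want to try the same kind of check, here is a rough Python sketch of the run-and-tumble rule as I've described it: swim straight at constant speed, tumble (pick a random new heading) rarely while concentration is rising and often while it is falling. The gradient shape, tumble probabilities, and starting position are all illustrative assumptions, not Koshland's numbers.

```python
# Rough sketch of Koshland-style E. coli chemotaxis.
# All constants are illustrative assumptions.
import math
import random

random.seed(2)

def concentration(x, y):
    """Attractant concentration: peaks at the origin (assumed gradient)."""
    return -math.hypot(x, y)

x, y = 50.0, 0.0
heading = random.uniform(0, 2 * math.pi)
prev_c = concentration(x, y)

for _ in range(20000):
    x += math.cos(heading)
    y += math.sin(heading)
    c = concentration(x, y)
    # Tumble rarely while things improve, often while they worsen.
    p_tumble = 0.02 if c > prev_c else 0.3
    if random.random() < p_tumble:
        heading = random.uniform(0, 2 * math.pi)
    prev_c = c

# The bug ends up hovering near the attractant peak at the origin.
```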

E. coli's behavior is an example of a "reorganizing" control system,
although it isn't changing its organization. It can be modeled as an
ordinary control system sensing rate of change of concentration, comparing
the sensed rate with a reference specification, and producing an error
signal based on the difference. All that is unusual is that the output is a
random effect varying only in the frequency of its application. This kind
of control system, which is essentially what I had visualized in 1973,
turns out to be several orders of magnitude more effective than I had
imagined when I wrote that book. It is a simple and powerful method of
adaptation that works with far more efficiency and refinement than the
underlying process of natural selection -- from which, presumably, this
kind of system arose.

In my model, there is an acquired hierarchy of control systems, basically a
neural net with many levels, organized to control the relationships between
an organism and its environment. This hierarchy arises partly from
inherited (evolved) prior organization, but mostly through the action of an
inherited reorganizing system that is operating from the beginning of each
organism's life. The reorganizing system senses and controls selected
variables that indicate the status of the organism, independently of what
the acquired neural organization senses and controls. The outcome of this
reorganizing process is, in fact, the adult organization of the nervous
system and its behavior.

The great advantage of the reorganizing process over natural selection is
that the "fitness criterion" is a continuous variable, and can be
multidimensional, instead of being a single crude determination of survival
or failure. It does for the organism, in fact, what you have found
necessary to do for your evolving model organisms: it introduces a
continuous scale that tends to preserve improvements and eliminate worse
behavior without requiring organisms to go all the way to absolute success.

I notice that you mention the use of Gray codes as a way of achieving small
changes from bit-mutations. I have also arrived at this conclusion: that
for reorganization to work, small changes must have small effects. I'm
dubious, however, about the Gray code solution. True, in the Gray code, all
changes of value by one least-significant step are brought about by single-
bit changes in the binary code. But the converse doesn't follow: that if
you randomly change a bit, you will get only a small step-change in the
value. For that minimum change to occur, you have to change the RIGHT bit,
the next one that will cause the smallest transition of value. If you
change any other bit, as a random change is most likely to do, you no
longer get a small effect. You can actually get a change of value of any
size.

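The point is easy to demonstrate in a few lines of Python using the standard binary-reflected Gray code: adjacent values always differ in exactly one bit, yet flipping an arbitrary single bit can still jump the decoded value a long way.

```python
# Gray code: adjacent values differ by one bit, but a random single-bit
# flip need not produce a small change in value.

def to_gray(n):
    return n ^ (n >> 1)

def from_gray(g):
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Adjacent values: exactly one bit differs.
assert all(bin(to_gray(i) ^ to_gray(i + 1)).count("1") == 1
           for i in range(15))

# But flipping one middle bit of Gray(0) jumps the value from 0 to 7:
g = to_gray(0)                 # 4-bit Gray code 0000
flipped = g ^ 0b0100           # flip the third bit
print(from_gray(g), from_gray(flipped))   # → 0 7
```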
Most of us on CSGnet doing modeling use analog systems. To model
reorganization in an analog model, it won't do to just change parameters
using a random-number generator. What we have done instead is to associate
a small delta with each parameter, and let the reorganizing system's output
select a value of delta at random between small positive and negative
limits. Then that delta is added to the value of the parameter, assuring
that any one change will be small. This means that to get a large change in
a parameter, it's necessary for many successive reorganizations to produce
the same sign of the associated delta. Tom Bourbon, who is on this net, has
used this method to produce a self-adapting control system, and also to
make a model of a control system reorganizing itself to behave like a real
subject. The end-point of the reorganizing process is determined by what
you define as intrinsic variables and their reference levels. For the self-
adapting system, the intrinsic variable was simply the average error signal
in a control system, with an intrinsic reference level of zero. In the case
of matching a model to a subject's behavior, the criterion was the average
difference between the model and the person on some behavioral variable,
again with an intrinsic reference level of zero. Maybe Tom will (at last)
favor us with a brief report on his results -- I hope I have represented
his method accurately.
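A bare-bones Python sketch of the delta method may make it clearer. The quadratic error function below is an illustrative stand-in for a control system's average error (the "intrinsic variable"), and the target values, delta limits, and step counts are my own assumptions, not Tom's actual numbers.

```python
# Sketch of the delta method: each reorganization picks a fresh small
# random delta per parameter; deltas keep being added between
# reorganizations, so large changes need many same-signed deltas.
import random

random.seed(3)
params = [0.0, 0.0]
targets = [1.0, -0.5]          # unknown to the reorganizing system
deltas = [0.0, 0.0]
LIMIT = 0.02                   # small positive/negative delta limits

def intrinsic_error(p):
    """Stand-in for the control system's average error."""
    return sum((a - b) ** 2 for a, b in zip(p, targets))

prev = intrinsic_error(params)
for _ in range(20000):
    # Reorganize (new random deltas) only if error failed to decrease.
    if intrinsic_error(params) >= prev:
        deltas = [random.uniform(-LIMIT, LIMIT) for _ in params]
    prev = intrinsic_error(params)
    params = [p + d for p, d in zip(params, deltas)]

# params ends up close to the (unknown) targets
```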

I think that the principle of reorganization fits between behavior and
natural selection. In fact there may be several layers of reorganizing
processes. This concept, I think, bridges the gap between the genes and the
organization of behavior in a way that's far more believable than trying to
do it in one huge jump. What do you think?


As usual, I'm sending you a copy of this direct and also posting it on
CSGnet. You can reply just by sending to CSG-L as before, and I'll get it.
I have a new address -- my very own logon at last -- for direct
communications: see header.

Bill Powers

[Martin Taylor 920313 11:00]
(Bill Powers 920312)

(One of our system administrators found a way to (manually) redistribute my
mail to this machine while the one I usually use is down over the weekend,
so I am not as cut off as I thought I would be. But delivery is not guaranteed,
especially on Saturday and Sunday.)

Bill, I think you sell Genetic Algorithms (GAs) short. They have, over the
last 20 years or so, proved remarkably robust and able to find solutions
to difficult problems of many degrees of freedom with considerable efficiency.
John Holland has been in much the same situation as you--an early insight
that very few people noted, dogged work for a long time, and a late flowering
of interest and further development in a wider circle of researchers.

The essential core of GAs is usually not mutation, but crossover. Consider
an "organism" as representing an attempted solution to the problem at hand.
This solution consists of N degrees of freedom (genes) that may take on k
values (k may differ across genes, but we usually don't consider that; it
is often set to two). In a simple crossover, two organisms mate to produce
a child that has the first P genes of one parent and the final N-P genes of
the other. It is important that the genes be considered as an ordered set,
because minor disordering during crossover is one of the reasons why GA is
a powerful solution method; genes that form a successful "team" will tend to
be located near each other in successful organisms, and their nearness must
both be allowed to occur by chance (reorganization) and to maintain itself
across generations. I recommend reading Dawkins (e.g. Blind Watchmaker) on
the importance of this in natural evolution.

More complex crossover rules are often more efficient than the simple-minded
one I just described. For example, taking two cuts in the chromosome (gene
string) and giving the offspring the first P and the last Q genes from one
parent and the middle N-(P+Q) genes from the other is often beneficial.
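Both recombination rules can be written down in a couple of lines of Python; the helper names and the 8-gene parents are just for illustration.

```python
# One-point crossover: first P genes from one parent, the rest from the
# other. Two-point crossover: first P and last Q from one parent, the
# middle N-(P+Q) from the other.

def one_point(mom, dad, p):
    return mom[:p] + dad[p:]

def two_point(mom, dad, p, q):
    n = len(mom)
    return mom[:p] + dad[p:n - q] + mom[n - q:]

mom = [0] * 8
dad = [1] * 8
print(one_point(mom, dad, 3))      # → [0, 0, 0, 1, 1, 1, 1, 1]
print(two_point(mom, dad, 2, 2))   # → [0, 0, 1, 1, 1, 1, 0, 0]
```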

Mutation likewise should be very rare in a GA. One in a thousand is not an
uncommon rate to find in GA experiments.

You unfairly criticize Beer's fitness criterion, the square of the distance
to food, on the grounds that the bug will die if it does not find food. This
assertion depends on an assumption that the bug gets exactly one opportunity
to feed in its lifetime, and if it misses, it dies without offspring. No
natural organism lives under such severe restrictions. If you miss lunch,
you go hungry until dinner, but you don't die of the hunger. So, if a bug
can get within a radius R of food, it is somewhere in a circle of area
proportional to R^2, and has a chance proportional to 1/R^2 of hitting the
food. Beer's
fitness measure seems to me to be a measure of how often the bug actually
gets a chance to eat, and to be very suitable as a fitness measure for the GA.

Also, you ignore the fact that any COMPLEX organism is the result of a long
sequence of evolutionary changes, almost all of the recent ones involving little
or no alteration in the low-level functions. Our basic chemistry is essentially
the same as that of our cousins, the bacteria and plants. At a higher level,
almost all mobile creatures are more or less bilaterally symmetric with a
single central food tract and paired appendages that serve for grabbing and
moving. You side with the creationists when you talk about "trying to
do it in one huge jump." Evolution doesn't work that way in nature or in a GA.

All the same, GA's have successfully solved problems in which the good solution
is a sharp spike in a wide landscape of non-solutions, so "doing it in one
huge jump" is not always impossible.

There is indeed a lot in common between reorganization and GAs, but I think it
lies in the GA's mutation operator rather than in the crossover and
selection mechanisms. But maybe they are closer than I see.