# Analyzing experiment; Genetic Algorithm

[From Bill Powers (930114.1640)]

Martin Taylor (930114.1530) --

Good: a specific experiment. Let's see how it goes.

During the last 1500 msec of the cycle, the subject is supposed
to push one of the two buttons, according to which interval
seemed most likely to have had the tone in it.

There are four independent logical variables: t1, t2, b1, and b2,
where t = tone and b = button. This means there are 65,536
possible states of a logical perceptual function of all four
variables, of which only one is to be perceived as true.
The condition that the subject is to perceive as TRUE is most
briefly stated (I think) as:

(t1 -> b1) OR (t2 -> b2)
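A quick way to check the 65,536 figure: a logical function of four binary variables assigns TRUE or FALSE to each of the 2^4 = 16 joint states, so there are 2^16 distinct such functions. A minimal sketch (the enumeration is mine, not part of the experiment):

```python
from itertools import product

# The four binary variables t1, t2, b1, b2 have 2**4 = 16 joint states.
states = list(product([False, True], repeat=4))
assert len(states) == 16

# A logical perceptual function assigns TRUE/FALSE to each of those 16
# states, so there are 2**16 distinct possible functions.
print(2 ** len(states))  # → 65536
```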

If either t1 or t2 is present but neither b1 nor b2 is present,
there is an error calling for pressing a button, because the
value of the above perceptual function is FALSE while the
reference state is TRUE. This might be better characterized as
two control systems, one controlling for t1 -> b1 = 1 and the
other for t2 -> b2 = 1. So each one controls for "it is not the
case that my tone is present and my button is not pressed".

To test this hypothesis for the controlled variable(s) it would
be necessary to disturb the perception. Since t1 and t2 are not
controlled components of the perception (they are disturbances of
it), the experimental disturbance should be applied to b1 and b2.
If t1 is pressed and b1 is also pressed (independently of the
subject), the subject should make no move to press b1 or b2,
because a perception of (t1 AND b1) is not an error (the
implication t1 -> b1 is TRUE). If b1 is (independently) pressed
when there is no tone t1, the subject should make no effort to
correct the error by lifting b1, because that still leaves the
implication TRUE. The only case in which the subject should press
b1 is when b1 is not already pressed and t1 occurs.

IF, that is, the hypothesis about the controlled variable is
correct: that the subject is controlling for the truth of an
implication.
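The disturbance-test predictions above can be spelled out as a small truth-table check. This is a sketch of my own; the function names are mine, not from the experiment:

```python
def implication(t, b):
    """Material implication t -> b: FALSE only when the tone occurred
    and the corresponding button is not pressed."""
    return (not t) or b

def needs_action(t, b):
    """Error for a system controlling t -> b at a reference of TRUE:
    act only when the implication is FALSE."""
    return not implication(t, b)

# Predictions from the disturbance test described above:
assert not needs_action(t=True,  b=True)   # button pressed for us: no move
assert not needs_action(t=False, b=True)   # button pressed, no tone: no lift
assert     needs_action(t=True,  b=False)  # tone, button up: press it
assert not needs_action(t=False, b=False)  # nothing happening: no action
print("all disturbance-test predictions hold")
```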

As the subject had difficulty perceiving the tone, the apparatus
provided some "biofeedback" by telling the subject whether the
value of the function was TRUE or FALSE. This might be modeled as
a higher-level system that detects the relationship between the
noisy signal out of the implication-detector and the information
from the apparatus. I don't know how this higher system would
adjust the lower one to take advantage of this information --
change the time constant of the input function? From the numbers,
there wasn't much the subject could actually do to change the
probability of being correct -- just a couple of percent
improvement, meaning that whatever adjustments were made they
didn't help much. I trust that the choices of interval were made
mechanically random, so the experimenter couldn't influence the
next choice after hearing the subject get a right answer.

I don't suppose you happened to put in disturbances of the button
positions, did you?

···

----------------------------------------------------------------
Gary Cziko (930114.2200) --

RE: Genetic Algorithm (GA)

Does the algorithm need a way of perceiving its distance from the
goal state?

I think the answer is "yes" to the last three questions.
Genetic programming works by randomly creating (say 500)
computer programs using the functions and terminals believed
appropriate to the problem. The programs that come closest to
the criterion of fit are allowed to "mate" and their offspring
are similarly evaluated, etc. until a perfect or close
enough fit is achieved.
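That selection cycle can be sketched in stripped-down form with bit strings standing in for programs. The target, population size, and genetic operators here are illustrative choices of mine, not from any actual GP system:

```python
import random

random.seed(0)

TARGET = [1] * 16          # illustrative goal state (my choice)
POP, GENS = 50, 200

def fitness(genome):
    # Experimenter-supplied criterion: closeness to the goal state
    return sum(g == t for g, t in zip(genome, TARGET))

def mate(a, b):
    cut = random.randrange(len(a))        # one-point crossover
    child = a[:cut] + b[cut:]
    i = random.randrange(len(child))      # one-point mutation
    child[i] ^= 1
    return child

pop = [[random.randint(0, 1) for _ in range(len(TARGET))]
       for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):    # close-enough fit reached
        break
    parents = pop[:POP // 2]              # the fittest are allowed to mate
    pop = parents + [mate(random.choice(parents), random.choice(parents))
                     for _ in range(POP - len(parents))]

print(fitness(max(pop, key=fitness)))     # best score after evolution
```

The point to notice is that `fitness` is evaluated from outside the evolving population; nothing in a genome senses its own distance from `TARGET`.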

So there is something other than surviving to the age of
reproduction that determines whether mating takes place -- or
rather, something other than the final criterion of fitness
determines that survival. This means that an external criterion
of fitness is being used: the experimenter's brain. It is the
judgement by the external party as to whether a given change
constitutes an "improvement" that determines survival, not
natural selection by failure or success at the task.

If now the experimenter could write into the program a way of
perceiving distance from the desired outcome and making the rate
of mutation depend on whether that distance was increasing or
decreasing, the criteria would be internal -- and you'd have my
model of the reorganizing system.
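That internal-criterion scheme can be sketched as the "E. coli" rule: keep the current change while the sensed error shrinks, and tumble to a new random change when it grows. This is a toy illustration of my own, not code from the reorganization model:

```python
import random

random.seed(1)

goal = 100.0                 # the desired outcome (illustrative)
x = 0.0
step = random.uniform(-1, 1)
prev_error = abs(goal - x)

for _ in range(2000):
    x += step
    err = abs(goal - x)      # internally sensed distance from the goal
    if err > prev_error:     # error increasing: reorganize
        step = random.uniform(-1, 1)   # new random direction and size
    prev_error = err

print(round(abs(goal - x), 1))  # residual error after reorganizing
```

No external judge is needed: the system keeps changing until its own sensed error stops growing, which is the sense in which the criterion is internal.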

But I don't see why an "external" criterion is necessarily a
problem. In one of the demonstrations, programs are evolved to
allow an ant to make its way along a path on which food can be
found. The further you get down the path, the better your
program, the more you eat, the more likely your program will
breed and have descendants in the next generation.

But an ant that gets only partway to the food will starve and
fail to reproduce whether it gets 10% of the way or 99% of the
way. You don't get more to eat by getting "farther down the path"
unless the food is distributed in sufficient amounts to sustain
life all along the path. And if it is, the criterion is internal,
not external. You just keep reorganizing until you stay on the
path and keep getting enough to eat to stay alive.

The external criterion uses information that is not available to
the behaving or evolving system: that if it just keeps going in a
certain direction, it will find what it needs to stay alive.
--------------------------------------------------------------
Best to all,

Bill P.

[Martin Taylor 930115 14:30]
(Bill Powers 930114.1640)

Good: a specific experiment. Let's see how it goes.

During the last 1500 msec of the cycle, the subject is supposed
to push one of the two buttons, according to which interval
seemed most likely to have had the tone in it.

There are four independent logical variables: t1, t2, b1, and b2,
where t = tone and b = button. This means there are 65,536
possible states of a logical perceptual function of all four
variables, of which only one is to be perceived as true.
The condition that the subject is to perceive as TRUE is most
briefly stated (I think) as:

(t1 -> b1) OR (t2 -> b2)

I'm not sure of the implication arrow. If this is actually an implication,
then the experimenter's condition, as stated to the subject, is
(t1 -> b1) AND (t2 -> b2)

But maybe I misunderstand your symbols. What the subject should perceive
is "correct", which is achieved when (t1 & b1) OR (t2 & b2). So that
logical combination can serve as a surrogate for "correct" in a sense,
except that the truth value of t1 and t2 is uncertain for the subject.
The experimenter wants to know how much information the subject can gain
about the discrimination (t1 & NOT t2) as opposed to (t2 & NOT t1). So
the control of b1 vs b2 is not an issue in this experiment. As I understand
your posting, you are addressing the control of b1 vs b2.
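The distinction can be checked by enumerating the experimentally possible states (exactly one interval has the tone, exactly one button is pressed): the OR form of the implication comes out TRUE in every such state, so it cannot distinguish a right answer from a wrong one, while the AND form coincides with "correct". A sketch of my own construction:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

rows = []
for t1, b1 in product([False, True], repeat=2):
    t2, b2 = (not t1), (not b1)   # exactly one tone, exactly one button
    or_form  = implies(t1, b1) or  implies(t2, b2)
    and_form = implies(t1, b1) and implies(t2, b2)
    correct  = (t1 and b1) or (t2 and b2)
    rows.append((or_form, and_form, correct))

print(all(o for o, _, _ in rows))       # → True  (OR form is always TRUE)
print(all(a == c for _, a, c in rows))  # → True  (AND form tracks "correct")
```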

As the subject had difficulty perceiving the tone, the apparatus
provided some "biofeedback" by telling the subject whether the
value of the function was TRUE or FALSE. This might be modeled as
a higher-level system that detects the relationship between the
noisy signal out of the implication-detector and the information
from the apparatus. I don't know how this higher system would
adjust the lower one to take advantage of this information --
change the time constant of the input function?

My assumption is that what changes is a Perceptual Input Function. Either,
as I suppose, there are many ECSs with slightly different PIFs that are
reasonably well tuned to the signal, or there is an ECS in which the tuning
drifts so that it detects the tone well or badly. But the way I see it,
this ECS cannot at the moment be controlling, at least it can't be
controlling the level of its perceptual signal. But its perceptual
signal must be used as part of the sensory input to an ECS that IS
controlling, or else its actions are so set up (by some higher-level
system influenced by the experimenter's instructions) that its error
signal results in a button push. This latter form makes the situation
temporarily S-R as far as the tuned ECS is concerned. I don't like it
very much.

From the numbers,
there wasn't much the subject could actually do to change the
probability of being correct -- just a couple of percent
improvement, meaning that whatever adjustments were made they
didn't help much.

But how could the changes occur? By the most conservative estimate,
as I pointed out, the subjects were completely mistuned at least 5%
of the time. More probably they were partly mistuned for a longer
time. The adaptive method that should have led to 80% correct on the
fixed-level runs gave the subject fresh chances to listen to the tone
at a higher intensity if they were getting too many wrong answers. In
that situation, the subjects still didn't have control of the signal
level, except that by deliberately trying to get the answer wrong, they
can make the signal louder. But that conflicts with control of the
perception of their quality as a subject to match a reference level of
"good subject." So all they can do is to try to get as many correct as
they possibly can.
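The adaptive method described can be sketched as a staircase rule: the tone gets louder after a wrong answer and softer after a run of correct ones; a 3-down/1-up rule converges near 79% correct, close to the 80% figure mentioned. The psychometric function and step sizes below are illustrative assumptions of mine:

```python
import random

random.seed(2)

def p_correct(level):
    # Toy psychometric function (my assumption): chance performance (0.5)
    # at low levels, near-perfect when the tone is loud.
    return 0.5 + 0.5 / (1.0 + 2.7 ** (-(level - 10)))

level, streak = 20.0, 0
levels = []
for _ in range(1000):
    levels.append(level)
    if random.random() < p_correct(level):   # answered correctly
        streak += 1
        if streak == 3:                      # three right in a row: softer
            level -= 1.0
            streak = 0
    else:                                    # wrong: a fresh, louder tone
        level += 1.0
        streak = 0

avg = sum(levels[200:]) / len(levels[200:])  # level after convergence
print(round(avg, 1))
```

Note that the subject's only lever on the signal level is to answer wrongly on purpose, which is exactly the conflict with "good subject" described above.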

I trust that the choices of interval were made
mechanically random, so the experimenter couldn't influence the
next choice after hearing the subject get a right answer.

Yep. In this kind of experiment, there are all sorts of subtleties you
have to consider. There must be no nearly inaudible clicks of the relays
(nowadays one doesn't worry about such things) or flickering of the lights
or anything else that might indicate the correct interval. The random
number generator has to be a good one (in the old days people sometimes
used radiation detected by things like Geiger counters, but pseudo-random
generators are quite good enough). The experimenter should not know which
interval has the signal, and probably should not know how well the subject
is doing until after the run has finished.

I don't suppose you happened to put in disturbances of the button
positions, did you?

We did nothing to make the linkage between perception of tone-in-interval
and selection of button difficult. Subjectively, in such an experiment,
the whole thing can slip from consciousness into an automatized state.
You don't want to make it hard for the subject to enter this state, if you
want to measure how well a subject CAN perform, which is usually the
objective.

Usually, when the subject reports having gone automatic, performance will
be a little better and more stable than when the subject reports being
alert and hearing well. As a frequent subject myself, I can remember many
occasions when the experimenter opened the door and I would ask "Was I
actually responding?" Usually, those were among my best runs. I have
no data to support this claim, but I think most people who have done
psychoacoustic experiments will support it. Whatever is happening, in a
trained subject it has nothing to do with conscious decisions to push
button 1 or button 2.

The reason for doing these kinds of experiments is to put limits on the
possibilities for real-world activities. If a subject in a psychophysical
experiment cannot resolve lines on a screen more closely spaced than X,
you don't make computer displays that require such resolution but present
the lines closer than X. I would presume that in a control situation,
control could not be more precise than is allowed by the psychophysical
results. Experiments that determine maximum rates of information
acquisition seem to me to be even closer to direct application in a PCT
context, inasmuch as the control information rate could not be greater
than the perceptual acquisition rate. That rate limits the dynamical
possibilities for control.

Rick Marken (930114.1800), in a posting that I don't understand well
enough to answer properly, asks:

Could you elaborate a bit on how the results of this experiment tie
in with "a PCT interpretation of the observed muscle tremor during
fine control".

I thought that the elaboration was done in my posting of the paper by Gibbs.
But here it is again, perhaps said a little differently.

The results of the experiment provide an explanation for a commonly observed
phenomenon--a perceptual "dead zone" around zero perceptual magnitude. The
reason is that some fraction of the time (in the experiment) people are
detuned from the perceptual input function that would really be appropriate
to whatever they are controlling. In real life, this detuning should be
even worse. In any case, the dead zone seems to occur in a wide variety
of circumstances, but it does not seem to occur if the observer has an
opportunity to detect the "same" sensory input at a level a trifle higher.
The tremor, I presume, provides the opportunity for an error signal to
be observed on both sides of zero, which would allow a much more precise
setting of the CEV to the "true" level giving an error signal of zero.

What the experiment contributes is a demonstration that some such detuning
must be occurring, unless the subject's periods of poor performance are
on the output side (forgetting which button goes with which signal, for
example) or are generated at a higher level (once upon a time known as
"being Bolshie"). Both of those explanation seem far less likely than
that the problem is a drift of the effective PIF, which agrees with the
subjects' expressed subjective observations (including my own, unfortunately).

Rick also says:

I think experiments like this are supposed to reveal something about
the "resolution" of sensory systems. I doubt that they give us anything
but the grossest kind of information about this. Besides, sensory resolution
information (even if it were available from such experiments) is of
very little use in control modelling.

I don't see the background or the reason for either of these statements.
I accept that Rick has the right to doubt, but it would be more helpful
to say where the doubt comes from. As for the second statement, it seems
to me that you can't expect to model control systems properly if you don't
know where the intrinsic limits are to their ability to control. You can
find good prediction in the central regions (like Newton), but not near
the limits, where it becomes possible to distinguish between specific
proposals for control organizations.

Martin