Put your model where your mouth is.

[From Bill Powers (921015.0700)]

Greg Williams (921014) --

"Override" means that the error relative to an acquired reference
signal, necessary for reorganization to start, is less than the
errors relative to inherited reference signals, necessary for
reorganization to start in the absence of the acquired reference

Still can't figure out what you mean. It would help if you described
the specific situation you have in mind instead of the generalization
you got from it. A diagram would help even more.

RE: new information as rubber-banding.

If you are controlling for phoning someone and I tell you that the
phone number was recently changed, you control by using your fingers
to dial the NEW number -- different actions, same controlled
variable.

The actions are different at the level of moving your fingers, but the
same at the level of phoning someone. The plan "Call Joe" remains the
same, and the perception matches it, at the higher level. What has to
change is the way the error at that level is translated into a
specific sequence of digits to dial to correct the error. By asking
for Joe's phone number, I obtain a new number in my memory. This does
not make it a reference signal yet. This new number must be selected
when I want to achieve "Call Joe." This means that the higher system
must, given the same error signal as before, select a new reference
sequence at the lower level. This requires reorganization. If you have
been calling Joe from a memorized phone number for years, chances are
that the first few times you try to call after the number has changed,
you will "forget" that it's been changed and call the old number. It
takes a few errors to make the new number "stick" -- i.e., to get the
connection to the new reference signal changed.
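As a concrete (and purely illustrative) toy of the process just
described -- same higher-level goal, a new reference sequence that only
"sticks" after a few errors -- here is a sketch in Python. The phone
numbers, function name, and the retrain probability are assumptions made
up for illustration, not part of Powers's model.

```python
import random

# Toy sketch of the "Joe's number changed" story above.
# OLD is the long-memorized reference sequence; NEW is Joe's new number.
OLD, NEW = "555-0100", "555-0199"

def call_joe_until_it_sticks(retrain_p=0.5, seed=1):
    """Dial from memory; on a wrong-number error, sometimes rewire the
    memorized reference sequence. Returns the number of trials until
    two consecutive correct calls (the new connection is 'automatic')."""
    rng = random.Random(seed)
    memory = OLD       # reference sequence currently selected by "Call Joe"
    streak, trials = 0, 0
    while streak < 2:
        trials += 1
        if memory == NEW:          # dialed digits reach Joe: no error
            streak += 1
        else:                      # error signal: wrong number
            streak = 0
            if rng.random() < retrain_p:
                memory = NEW       # reorganization: new reference selected
    return trials

print(call_joe_until_it_sticks())
```

The point of the sketch is only that the higher-level goal ("Call Joe")
never changes; what changes, after repeated errors, is which lower-level
reference sequence that goal selects.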

This involves reorganization, but perhaps not carried out in the way I
visualize for "E. coli" learning. This sort of phenomenon might be a
clue about a more systematic way of reorganizing. But it still takes
some time to make the old connection go away and for the new one to
become as automatic as the old one was. You may have to stop and
consciously "remind yourself" that Joe has a new number. And as
someone remarked on the net once, this is likely to result in a
repetitive sequence of starting to dial the number, reminding yourself
that it's changed, and dialing the new number; it may take a long time
to get to dialing the new number without going through that sequence.
This is probably a clue, too.

To avoid going through this sort of reorganization, people write down
phone numbers in a book; erase the old number and put the new one in.
Then nothing has to reorganize. They just look up Joe's number and
call it.

If this is the crux of our dispute, then it seems to come down to a
quantitative question: loop gain.

I think the loop gain can range from very low to very high for a
swimming teacher (as one instance), just as the loop gain can range
widely for a subject controlling (in Rick's famous experiment) for
keeping a dot near a certain point on a computer screen when the dot
is subject to a random alteration in the direction of its movement
each time the subject presses a key. I think the situations are ...

I'm not sure what the loop gain of E. coli would be if E. coli's
random actions had to control another E. coli's swimming behavior by
disturbing the other's time rate of change of concentration in order
to control the first E. coli's sensed time rate of concentration. The
thought of a teacher randomly trying different teaching methods as a
way of helping a student randomly reorganize toward a specific
behavior does not impress me as fraught with possibilities.

Your proposal calls at least for some experimental or working-model
evidence.

"Looseness" and "uncertainty" need to be evaluated by looking at
whether this kind of control works virtually all the time or only
part of the time or never. As I look at the "wild," it works quite
efficiently virtually all the time if the controller has an >accurate

model of something the other wants.

I think it's time for evidence in the form of examples, and some
backing up of the generalities by showing a model that would work as
you suggest.

The controller's effects are small, statistical, unreliable, and
exceedingly slow. The loop gain must be close to zero.

Just as in Rick's experiment, the effects needn't be small,
unreliable, exceedingly slow, OR statistical ...

But Rick's experiment had to do with an organism reorganizing to
control one of its OWN critical variables, not one organism trying to
use another one doing the same thing to achieve the first organism's
goals. What we need is an experiment with Rick's model in which a
human teacher tries to "facilitate" E. coli's progress up the
gradient. Put your model where your mouth is.
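The proposed experiment presupposes the basic "E. coli" reorganization
scheme, so here is a minimal sketch of that scheme by itself: keep
moving in the current direction while the sensed error is shrinking,
and tumble to a random new heading when it grows. The step size,
starting point, and step count are illustrative assumptions, not Rick's
actual demo.

```python
import math
import random

def climb_gradient(steps=2000, seed=0):
    """E. coli-style reorganization: a point senses only its distance
    to the 'food' at the origin, keeps its heading while that error
    shrinks, and tumbles (random new heading) whenever it grows."""
    rng = random.Random(seed)
    x, y = 10.0, 10.0                       # start far from the target
    angle = rng.uniform(0.0, 2.0 * math.pi)
    prev = math.hypot(x, y)                 # sensed error
    for _ in range(steps):
        x += 0.05 * math.cos(angle)
        y += 0.05 * math.sin(angle)
        now = math.hypot(x, y)
        if now >= prev:                     # error increasing: tumble
            angle = rng.uniform(0.0, 2.0 * math.pi)
        prev = now
    return prev

print(f"error: {math.hypot(10.0, 10.0):.2f} -> {climb_gradient():.2f}")
```

A "teacher" in the experiment proposed above would enter this loop only
as an external disturbance; the sketch shows exactly where such a
disturbance would have to act -- on the sensed error that triggers the
tumbles.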

Aside: now that I think about it, there is no need for the class of
possible actions to be finite; it is a CONCEPTUAL class, i.e., "the
infinite class of all possible trajectories of the dot which remain
within an inch of the dot," or "the infinite class of all possible
ways of coming to stay afloat without external support in deep
water."

This illustrates a problem with arguing at high levels of
abstraction: general statements end up saying much less than they seem
to say. "The infinite class of all possible ways of coming to stay
afloat without external support in deep water" can be stated much more
succinctly: you're trying to describe "swimming." That is the
consequence of the actions that you're trying to get the students to
perform. You are simply describing the teacher's reference level for
what is to be learned: the outcome that is to result. In effect,
you're saying "I can't teach them the actions that will result in
swimming, but by George I'll know swimming when I see it." To predict
that they will then be doing one or more of the things that result in
staying afloat without external support will not be of much use while
they're trying to find some action in that class, and perform it so it
has the desired effect.

You're trying to weasel out of the difficulties in exactly the way
Skinner did. A response is that class of actions that has a particular
consequence. All Skinner did was formalize (vaguely) the same habit
that all behavioral scientists follow: naming behaviors by their
controlled consequences, so as not to have to explain how the organism
could select behaviors that, combined with disturbances, end up
producing the same consequences again and again.


This, too, is merely explaining the outcome by describing the outcome.
If both parties do end up controlling successfully, then the actions
each takes to counteract the disturbances from the other (or from any
source) cause no important errors in either of them. If that's not
true, there will be conflict. It's not necessary for either control
system to refer to this abstract constraint as they learn to interact
with each other. Each system will either meet with no resistance, or
have its actions resisted. If its actions are seriously resisted, and
there's no alternative already known, the organism will begin to
reorganize. If its actions aren't resisted, the organism will simply
control. It doesn't have to know that it's using the actions of the
other as part of its control loop. That doesn't require any planning;
it simply happens, if it happens. If it doesn't happen, that's OK,
too: control occurs either way. The main thing is to control your
perceptions; the actions by which you accomplish that will come to be
whatever is required. In making your way to the other side of a
crowded room, your actions will probably involve the actions of many
other people; in an empty room the same goal will be achieved without
anyone else. You don't have to predict how each person in the crowded
room will react to your disturbances. You just make your way,
muttering "Pardon me, sorry, oops, pardon me" -- or you just put your
head down and push. If someone won't get out of your way you find a
different way through.

There's too much abstract conjecture going on here. Let's try to tie
this argument to specific examples. Better yet, let's stop fooling
around in the stratosphere, and start proposing some experiments to
test all these deductions and pseudo-deductions. We sound like a
couple of psychologists or philosophers. Let's get back to science.



Bill P.

[From Dick Robertson 921019.1140]
New experiments! Yay, I'm for that! Also I have long been curious about why it
what would be a good experiment to test that out?
Best, Dick R.