Imagining

[From Bruce Abbott (971103.2000)]

Rick Marken (971103.1300) --

Bruce Abbott's reinforcement model of E. coli navigation is still
another example. Bruce imagined that his model was controlled
by contingencies. In fact, the contingencies _in the model_
allowed E. coli to control input perception of nutrient
concentration relative to a fixed reference.

Not at all. What I imagined was that my model's functional organization
(which could be altered by changing certain parameters) was influenced by
contingencies that affected those parameters. Those contingencies produced
a functional organization that can properly be described as a perceptual
control system and that behaves as such.

Bruce

[From Bill Powers (971104.0539 MST)]

[From Bruce Abbott (971103.2000)]

Rick Marken (971103.1300) --

Bruce Abbott's reinforcement model of E. coli navigation is still
another example. Bruce imagined that his model was controlled
by contingencies. In fact, the contingencies _in the model_
allowed E. coli to control input perception of nutrient
concentration relative to a fixed reference.

Not at all. What I imagined was that my model's functional organization
(which could be altered by changing certain parameters) was influenced by
contingencies that affected those parameters. Those contingencies produced
a functional organization that can properly be described as a perceptual
control system and that behaves as such.

The point you're missing in your model is that the _contingencies_ did not
produce the functional organization -- your program did. Your program
amounts to a physical model of _how_ the mere existence of the external
environmental contingencies could come to have a physical influence on
behavior. In part, you proposed that the following mechanism exists inside
the organism:

···

=============================================================
procedure StepEColi;
var
  NewNut: real;

  procedure ReinforceOrPunish;
  var
    DeltaNutRate: real;
  begin
    DeltaNutRate := dNut - NutSave; { Change in the rate of change in }
                                    { nutrient following a tumble. }
                                    { + = improvement = reinforcement }
                                    { - = deterioration = punishment }
    If DeltaNutRate > 0 then { Nutrient rate increased by tumble: reinforce }
      begin { tumbling }
        If NutSave > 0 then { If S+ present during tumble then }
          begin { increase probability of tumble given S+ }
            pTumbleGivenSplus := pTumbleGivenSplus + LearnRate;
            if pTumbleGivenSplus > pMax then pTumbleGivenSplus := pMax;
          end
        else { S- present when last tumbled then }
          begin { increase probability of tumble given S- }
            pTumbleGivenSminus := pTumbleGivenSminus + LearnRate;
            if pTumbleGivenSminus > pMax then pTumbleGivenSminus := pMax;
          end
      end
    else
      if DeltaNutRate <= 0 then { Nutrient rate did not increase after tumble: punish }
        If NutSave > 0 then { If S+ present when last tumbled then }
          begin { decrease probability of tumble given S+ }
            pTumbleGivenSplus := pTumbleGivenSplus - LearnRate;
            if pTumbleGivenSplus < pMin then pTumbleGivenSplus := pMin;
          end
        else { If S- present when last tumbled then }
          begin { decrease probability of tumble given S- }
            pTumbleGivenSminus := pTumbleGivenSminus - LearnRate;
            if pTumbleGivenSminus < pMin then pTumbleGivenSminus := pMin;
          end;
  end;

When you use a variable like "DeltaNutRate" you're proposing that this
system actually senses the rate of change of nutrients and represents it as
a variable inside the system. When you write "If NutSave > 0 then ..."
you're proposing that there is something that can perceive this logical
condition and, when the outcome is true, perform the indicated set of
operations that follows. Each operation is a proposal concerning
what the system itself is capable of doing.

In proposing this model you have shown only one thing: there is a way to
achieve what EABers imagine to be happening when they think they see
reinforcement going on. We have to consider, however, the plausibility of
this model, not just whether it achieves the desired result. This model is
a guess about mechanisms inside the organism, so we can apply criteria
other than just that of functional sufficiency. Rube Goldberg showed us a
functionally sufficient model of a mousetrap -- but actual mousetraps are
not designed that way.

You have actually done a service to PCT in laying out the nature of the
mechanism required by the concept of reinforcement: you have shown that it
is highly improbable as an explanation of behavior, unless you're
describing how a human being might execute complex programs. I have always
felt that if behaviorists had to produce working models of the mechanisms
they assume, they would quickly find that the models are more complex than
the phenomena they're supposed to explain. Your model is in direct support
of this view.

Yes, your model works. I've accepted that since I first saw it running. But
it's not a model I could accept as an explanation of the observed
phenomenon. It's a Rube Goldberg contraption. In my view, the elements of a
model have to be simpler than the phenomenon the model is supposed to
explain. Your model doesn't meet this requirement.
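
For contrast, a minimal sketch of the control-system alternative might look
something like this. (This is only an illustration, not code from your
model: the names StepEColiControl, RefRate, and Tumble are placeholders,
and it assumes the controlled perception is the sensed rate of change of
nutrient, dNut, compared against a fixed reference.)

=============================================================
procedure StepEColiControl;
var
  Error: real;
begin
  { Perception: dNut, the sensed rate of change of nutrient concentration }
  { Reference:  RefRate, a fixed value the system acts to keep dNut near  }
  Error := RefRate - dNut;   { error = reference - perception }
  If Error > 0 then          { nutrient not improving fast enough }
    Tumble;                  { output: pick a new random direction }
  { otherwise no correction is needed: keep swimming straight }
end;
=============================================================

The whole comparator fits in a few lines, with no stored reinforcement
history and no tumble probabilities to adjust.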

Best,

Bill P.