[From Bill Powers (951130.0600 MST)]
Kent McClelland (951129.1445 CST) --
I like your analysis of the two-level system. You've really answered
Jeff Vancouver's question [11-29-95, 17.00] which arrived on my machine
right after your post. The higher the gain of the control systems
sharing the common environment, the more exactly their perceptual
weightings and reference signals must match to avoid conflict. This is
your case where the two lines on the graph are almost parallel -- where
the two higher systems are trying to "align their goals and
perceptions." When the perceptions are more orthogonal functions, each
system can set a reference signal independently of the other and control
successfully, even though there are interactions.
When the "lines of satisfaction" are nearly parallel, the output
interactions come nearer to direct opposition. With relatively low loop
gains,
... then they get stuck in a state of low-level conflict, with the
environmental X-Y position hovering between the two lines and
drifting ever so slowly toward the mutual accommodation point,
which if the two lines are almost parallel is likely to be a long
way off.
This can also be viewed as taking a relaxed attitude toward the
remaining error. If you were to raise the output gain factor, the
environmental X-Y position would move much faster toward the mutual
accommodation point, and extreme values of the outputs would be involved
(that point is a "long way off").
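For anyone who wants to play with this, here is a minimal sketch of the
kind of two-system setup Kent is describing. It is not his program; the
leaky-integrator output functions, the particular perceptual weights, and
the gain and slowing constants are just assumptions for illustration.

import numpy as np

GAIN = 5.0       # loop gain of each control system (assumed equal)
SLOWING = 0.05   # leaky-integrator slowing factor for the outputs
STEPS = 5000

# Perceptual weighting vectors: nearly parallel. Substitute orthogonal
# vectors (e.g., [1, 0] and [0, 1]) to see independent control instead.
w1 = np.array([1.00, 0.00])
w2 = np.array([0.98, 0.20]) / np.linalg.norm([0.98, 0.20])

r1, r2 = 2.0, 3.0   # slightly different reference signals
o1 = o2 = 0.0       # integrated outputs

for _ in range(STEPS):
    q = o1 * w1 + o2 * w2              # environmental X-Y position
    e1 = r1 - w1 @ q                   # error in system 1
    e2 = r2 - w2 @ q                   # error in system 2
    o1 += SLOWING * (GAIN * e1 - o1)   # leaky integration of error
    o2 += SLOWING * (GAIN * e2 - o2)

q = o1 * w1 + o2 * w2
print("final X-Y position:", q)
print("remaining errors:  ", r1 - w1 @ q, r2 - w2 @ q)
print("outputs:           ", o1, o2)

With these nearly parallel weights the X-Y point creeps toward a
compromise while both errors persist and the two outputs push against
each other; raising GAIN (and reducing SLOWING to keep the discrete loop
stable) moves the point faster and farther, at the cost of much larger
outputs.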
Unresolvable conflict results when the two systems share nearly the same
perception, have slightly different reference levels, and react strongly
to any slight error. This is what happens in groups that come together
to pursue a common goal which they think is very important. "Importance"
(to the active control system) can be measured in terms of the amount of
corrective action that will appear in response to some small standard
amount of error. The greater the "importance" of the goal, the less
error it will take to cause maximum output -- and the more difficult it
becomes for that system to cooperate with another that has, nominally,
the same goal.
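To put a rough number on that: if the output saturates at some maximum,
the error needed to reach saturation is just that maximum divided by the
gain. A trivial sketch, with arbitrary figures:

MAX_OUTPUT = 100.0   # arbitrary output ceiling
for gain in (1.0, 10.0, 100.0):
    # error needed to drive a proportional output to its ceiling
    print(f"gain {gain:6.1f}: error for maximum output = {MAX_OUTPUT / gain:6.2f}")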
I can think of the outlines of a nifty demo of this phenomenon. You get
a group of people together and get them to agree to some
multidimensional goal to achieve by cooperating. If the terms in which
the goal is defined are fuzzy enough, it will be hard for the members to
agree on just what mix of the dimensions constitutes satisfying the goal
(like "an aesthetically pleasing pattern of blue and green color
chips"). If each person tries to control all dimensions, reaching
agreement will be hard, but if through negotiation different people are
assigned independent dimensions, the final result should be achieved
quickly. Specialization works better than cooperation!
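To make the contrast concrete, here is a rough simulation of the demo --
my own construction, not a real experiment; the number of participants,
dimensions, gain, and spread of references are all arbitrary.

import numpy as np

rng = np.random.default_rng(0)
N = D = 4                          # participants and goal dimensions
GAIN, SLOWING, STEPS = 2.0, 0.05, 3000
refs = 5.0 + 0.5 * rng.standard_normal((N, D))   # similar but unequal goals

def run(mask):
    # mask[i, d] = 1 if participant i acts on dimension d
    out = np.zeros((N, D))
    for _ in range(STEPS):
        pattern = (mask * out).sum(axis=0)   # joint effect on the pattern
        err = refs - pattern                 # each person's error per dimension
        out += SLOWING * GAIN * mask * err   # integrate error into output
    return np.abs(mask * err).sum(), np.abs(out).max()

e_all, o_all = run(np.ones((N, D)))   # everyone controls every dimension
e_one, o_one = run(np.eye(N))         # one dimension per person
print(f"all-control-all: remaining error {e_all:7.2f}, largest output {o_all:8.1f}")
print(f"one-dim-each:    remaining error {e_one:7.2f}, largest output {o_one:8.1f}")

In the all-control-all condition the pattern settles at a compromise that
satisfies nobody while the individual outputs keep climbing against one
another; with one dimension per person the errors go essentially to zero
with modest outputs.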
---------------------
I imagine that an analogous state of chronic low-level conflict
over nearly-but-not-exactly-parallel perceptual functions occurs
pretty commonly in human interactions, and perhaps this simulation
could serve as a model for the "spurious agreement" that Bill was
talking about in his post.
Happens all the time on CSG-L, doesn't it?
-----------------------------------------------------------------------
Bruce Abbott (951129.1930 EST) --
Your argument about the explanation of autoshaping is persuasive, but I'm
not yet convinced.
But you seem to be assuming that reinforcement theory can only
account for data in this post hoc fashion. On what evidence?
Only negative evidence; I haven't yet seen any experiment in which a new
result was predicted beforehand according to basic principles of
reinforcement theory. For all I know, thousands of examples exist; I
just haven't seen any. In the case you mention, classical conditioning
theory and reinforcement theory both existed before the initial
experiments with autoshaping, yet apparently nobody was able to predict
what would happen in this experiment until after it had been done.
The explanation emerged as a result of experimental testing to
determine what was actually going on in this situation. It had
nothing to do with guesses about what the illuminated key looked
like to the pigeon.
Well, here is how your words went:
The explanation that was eventually developed for autoshaping goes
like this. Hungry pigeons normally exhibit an innate (unlearned)
response toward things that look like they might be edible: they
peck at them. ...
I think you brought up the way keys look to the pigeon; am I misreading
this?
The point is that autoshaping can be understood within the
framework of what we (think) we already know about conditioning,
without adding any new rules, whereas it is not clear that the same
can be said with respect to the framework provided by PCT.
PCTers have ventured some understandings of classical conditioning. But
this is an area where PCT research is lacking: change of organization.
No real hypotheses have yet been tested to destruction or otherwise.
---------------------------
The problem is that you never know WHICH assumptions to make
before the experiment is done. You have to wait until you see the
result, and THEN choose the assumptions that make the explanation fit.
That is truly cargo-cult science: the appearance without the
substance.
True, but then I don't know anyone in EAB who would disagree. It's
not the way they work.
Now that it has been established that classical conditioning can be
involved, have all new experiments been analyzed with that in mind? Or
is this explanation invoked only when standard reinforcement theory
can't explain the results? Does the classical conditioning stop working
when you're not paying attention to it?
From your post yesterday:
We're going to have to start with the kinds of experiments for
which fairly simple PCT models can be developed and tested, as you
did for the Verhave avoidance data, and work up from there.
I agree 100%.
-----------------------------------
I suggest, by the way, that our first rat experiment should be focused
on answering a very simple question: can rats vary their rate of bar-
pressing in a systematic relationship with food intake? From our
previous modeling efforts, there is a strong suggestion that they can't,
or that if they can, the conditions under which they can do this are
ill-defined. I should think it would be very significant to EAB if it
turned out that rats either press at some fixed rate (perhaps related to
deprivation level) or don't press at all. For PCT, the original conception
of the behavioral control system demands that, in at least some small
region near the free-feeding situation, a decrease in food intake rate
should lead to an increase in pressing rate. If that's not true, we
have to look for a very different model.
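Just to make that prediction explicit, here is a toy control loop -- my
own numbers, not a fitted model -- in which the simulated rat controls its
perceived food-intake rate by varying its pressing rate. Making each press
pay off less should then raise the steady-state pressing rate.

REF_INTAKE = 2.0        # reference food-intake rate (arbitrary units)
GAIN, SLOWING = 10.0, 0.1
STEPS = 2000

def steady_press_rate(food_per_press):
    press = 0.0
    for _ in range(STEPS):
        intake = press * food_per_press            # environment: intake rate
        error = REF_INTAKE - intake                # intake short of reference?
        press += SLOWING * (GAIN * error - press)  # leaky integration of error
        press = max(press, 0.0)                    # pressing rate can't go negative
    return press

for food_per_press in (0.20, 0.10, 0.05):
    print(f"food per press {food_per_press:4.2f} -> "
          f"steady pressing rate {steady_press_rate(food_per_press):6.2f}")

With finite gain the intake rate falls somewhat short of the reference;
the leaner the payoff per press, the higher the pressing rate climbs,
which is the signature the data should show if the original model is
right.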
Since we will be recording every bar-press and every reinforcement, and
will know the food intake with high time resolution, answering this
question should be quite straightforward.
--------------------------------------------------------------------
Best to all,
Bill P.