[From Rick Marken (950629.0910)]
Bill Powers (950628.0945 MDT) --
This is a pretty feeble comment on an observation that goes directly
against the fundamental assumptions behind reinforcement theory itself.
Bruce Abbott (950628.2020 EST) --
If you found some rather odd things happening in a complex tracking
task that seemed to contradict PCT, you might note that the results
don't appear to agree with the PCT prediction but I doubt you'd be
ready to chuck the whole theory.
No, we would not be ready to chuck the whole theory. But we would be on the
apparently contradictory data like white on rice. Examples of such data have
been presented a number of times on the net -- at least one time by yours
truly. For example, while studying conflict, I found that subjects control
better when the disturbance to the controlled variable is the active output
of another (low gain) control system rather than the passive output of a
waveform generator. The basic control model controls the same in both cases.
So here was a result that seemed to contradict a prediction of PCT. And it
caused considerable concern and interest until Bill Powers realized (and
showed) that the result is predicted by a control model with a transport lag
(all real control systems have some transport lag).
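The transport-lag point can be illustrated with a minimal simulation (a sketch of the general idea, not Powers's actual model; the gain, lag, and disturbance frequencies are illustrative assumptions). A control loop whose perception of the controlled variable arrives a few steps late resists a slowly varying disturbance far better than a fast one:

```python
import math
from collections import deque

def simulate(disturbance, gain=0.2, lag=3, ref=0.0):
    """RMS error of an integrating control system whose perception
    of the controlled variable arrives `lag` steps late."""
    output = 0.0
    in_transit = deque([ref] * lag)        # perceptions still "in the wire"
    sq_err = 0.0
    for d in disturbance:
        cv = output + d                    # controlled variable
        in_transit.append(cv)
        perception = in_transit.popleft()  # the value from `lag` steps ago
        output += gain * (ref - perception)
        sq_err += (ref - cv) ** 2
    return math.sqrt(sq_err / len(disturbance))

# The same system resists a slow disturbance much better than a fast
# one -- the kind of asymmetry that a transport lag predicts.
slow = [math.sin(2 * math.pi * t / 400) for t in range(2000)]
fast = [math.sin(2 * math.pi * t / 40) for t in range(2000)]
print(simulate(slow) < simulate(fast))  # → True
```

The gain of 0.2 with a lag of 3 steps keeps the loop stable; raise either much further and the simulated system begins to oscillate, just as real control systems do.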
The results of my little experiment did not threaten the basic tenets of PCT
(as ratio data threaten the basic tenets of reinforcement theory) but they
certainly were not what was expected, so they demanded explanation. That's
how you feel about deviations from prediction when you are used to a model
of living systems that predicts correctly every time.
The attitude displayed by the authors toward this finding is
interesting from the standpoint of a study in steadfast faith. "No,
madame, there is no danger. A little bit of floating ice could never
damage the Titanic."
No, it's more like "there's something odd going on here, but the
details are rather complex so I'm not exactly sure how to interpret
these results. I'll leave that to someone else."
So where has that someone else been? It's been nearly 20 years since these
results were reported. And these results are not an isolated case; as Bill
noted, Skinner and others have collected tons of "scheduling" data that show
rather conclusively that consequences don't strengthen responses. Shouldn't
the theorists have shown by now that these results are consistent with the
idea of selection by consequences -- if they are? It looks pretty fishy to me.
To say they were assessing the _effect_ of milk concentration is to
assume that milk concentration is _a priori_ a causal variable.
Oh, come ON! If I can't say that a disturbance to a control system
under given conditions _causes_ it to respond with an opposing
action, then I don't know of _any_ circumstances in which the term
would be appropriate.
Oh, come ON, Bruce! The term "disturbance" is not a PCT replacement for the
word "cause". Disturbance refers to a variable that causes another variable
to be moved from (or to) a _preferred state_; a disturbance variable is only
a disturbance to a _controlled variable_; otherwise, a disturbance is just a
variable that causes a change in another variable.
You said "the researchers were assessing the effect of milk concentration on
the length of the postreinforcement pause". This clearly implies that the
researchers thought that milk concentration might have a direct effect, via
the organism, on an aspect of responding (postreinforcement pause). If you
want to play the PCT translation game you have to play fair; if you want to
call milk concentration a disturbance, then you must show that this variable
has an effect on postreinforcement pause via the joint effect of both
variables on a controlled variable: you must describe the controlled variable
and explain how milk concentration and postreinforcement pause affect it.
A more likely explanation in my view is that there are individual
differences between rats. While an individual rat might behave in a
way perfectly consistent with a model, the appropriate parameters
might be quite different from one rat to another.
If parameter adjustments can allow a single model to account for the
conflicting changes seen across ratios in some of these animals, then
I'm worried about the model
You missed Bill's point. The modeller only changes parameters across subjects;
any parameter changes across ratios have to be carried out by the model
itself, because they are being carried out by the organism itself in the
experiment.
Generating a given level of error in the nutrient-level control system
simply establishes a given reference level for lower-level systems
such as the one governing rate of eating.
This is not even true in theory; in PCT the outputs of many higher level
control systems can contribute to the reference level for any particular
lower level system (see my spreadsheet model in "Mind Readings"); the
reference for rate of eating (food input) might be set by the output of the
nutrient-level control system as well as the output of the sweetness control
system, the chewing effort control system, etc.
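The point about multiple higher-level contributions can be sketched in a few lines (the weights and system names here are illustrative assumptions, not the values from the "Mind Readings" spreadsheet model): the reference for a lower-level system is a weighted sum of the outputs of several higher-level systems, so no single higher-level error determines it.

```python
def lower_level_reference(higher_outputs, weights):
    """Combine several higher-level outputs into one lower-level reference."""
    return sum(w * o for w, o in zip(weights, higher_outputs))

# Outputs of three hypothetical higher-level systems contributing to
# the reference for rate of eating (all numbers are made up):
nutrient_out, sweetness_out, chewing_out = 4.0, 1.5, -0.5
ref_eating_rate = lower_level_reference(
    [nutrient_out, sweetness_out, chewing_out],
    [1.0, 0.5, 1.0],
)
print(ref_eating_rate)  # → 4.25
```

Changing the nutrient-level error alone moves the eating-rate reference only in proportion to its weight; the other contributions are still there.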
And this is all happening in a closed loop anyway. So even if you could
control the situation so that you produced a given level of error in the
nutrient-level control system, there is no reason to expect any particular
response to this error since, in a control loop, the actual output that
results from a particular error differs depending on the prevailing state of
the feedback function connecting the system's output back to its input. To
keep this feedback function constant you would have to control all
characteristics of the environment between the organism's output and input,
as well as properties of the organism's output function itself (e.g., muscle
fatigue).
Why not just let the organism control and see how it does it?
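The dependence on the feedback function is easy to demonstrate (a sketch under assumed values, not a model of any particular experiment): the same controller, holding the same variable at the same reference against the same disturbance, settles on a different output when the environmental link between output and input changes.

```python
def run(k_env, d=2.0, gain=0.2, ref=0.0, steps=500):
    """Steady-state (cv, output) of an integrating controller whose
    action reaches the controlled variable through environment gain k_env."""
    output = 0.0
    for _ in range(steps):
        cv = k_env * output + d   # feedback function: cv = k_env * output + d
        output += gain * (ref - cv)
    return cv, output

print(run(1.0))   # cv ≈ 0, output ≈ -2
print(run(0.5))   # cv ≈ 0, output ≈ -4
```

Control succeeds in both cases (cv stays near the reference), but the output "caused" by the error is twice as large when the environmental gain is halved, so error alone predicts nothing about output.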
For the purpose of studying the relationship between lever-press
rate and eating rate as the ratio requirement is varied this may be
perfectly acceptable procedure. This is the system I'm measuring the
characteristics of, and I'm letting it control.
You cannot stop the system from trying to keep a variable under control; but
you sure can prevent the system from achieving control if you are able to
control its error signal.
Bruce, this post of yours (950628.2020 EST) reads like a long apologia for
reinforcement theory and the current approach to doing behavioral research.
I know that you are just doing this to help us anticipate how reinforcement
theorists would react to our criticisms; but I think we get the idea already.
I think it would be much more helpful if you could tell us the best way to
present reinforcement theorists with the evidence that organisms are control
systems; that there is no such thing as reinforcement; that behavior is
control; that actions select consequences and not vice versa. How do we get
reinforcement theorists to see what is happening right before their eyes:
that organisms are controlling (and are NOT controlled by) the consequences
of their actions?
We already know the kinds of things psychologists will say to defend the
faith; how about helping us figure out ways to encourage them to start seeing
the behavioral Necker cube the right way: as control OF (not by) perceptual
variables?
Gary Cziko (950629) to John Anderson (950629.0630 EDT) --
It seems to me that much of the S-R vs. PCT debate could be cleared away by
simply keeping in mind that a given action cannot reliably produce a desired
result.
So learning is not strengthening connections between stimuli (perceptions)
and responses (actions), but rather strengthening connections between
different types of perceptions. For Calvin to learn to ride his bike, he
needs to figure out what controlled lower-level perceptions will result in
the higher-level perception of success at bike riding. S-R learning will
not do this. P-P learning will.
Tom Bourbon (950629.0213) --
Thank god you're back!!