[From Bill Powers (950606.1440 MDT)]
Martin Taylor (950606.1100) --
No, I do NOT think that there is one single input to a PIF.
I didn't think you did. But you drew a distinction between model-based
control on the one hand, and control based on real-time inputs on the
other hand, while the suggestion I was making lay entirely within the
real-time input side. I was simply asking whether there might not be a
way to account for the fact that human beings do not lose control
entirely when some inputs are lost, by supposing that perceptions can be
composed of non-obvious as well as obvious types of input. One
explanation for why they don't lose control altogether is that they're
using an internal world-model. I was suggesting an alternate explanation
for at least some cases.
I may have been "preaching to the choir," but you appeared not to see
that I was offering an alternate explanation.
As to the 9 points, they are 9 points of agreement.
If the ECU is made less "E"lementary, and provided with a second
degree of freedom by way of a "perceptual validity" signal, it
could shut itself off.
That's too much intelligence to put into an elementary control system,
for my taste. I would much rather try to think of a structurally simple
higher-order system that would do the equivalent. You are not just
introducing a second degree of freedom of perception (which requires a
second signal to carry it). You are introducing machinery for
interpreting it (a new perceptual function), assessing the
interpretation relative to a new goal, and acting on the system itself
in a new way. What had been a simple system now begins to bulge with ad
hoc complexity.
Bill's comment was that _neither_ choice presented to the rats was
one it would select by itself. All right. I'll give an example
that faces me this Thursday. ...So I can't effectively vote for
party A, and must choose between two alternatives, neither of which
would be the one I would select by myself. This seems to me to be
an exact analogy with the rat.
Well, your idea of an "exact" analogy is not mine. Nobody is forcing you
to vote for anybody, and voting is not itself painful for you. In the
rat's case, the rat is forced to have an experience that under ordinary
circumstances it would never seek out and would always avoid. Under no
circumstances is shock a beneficial experience.
-----------------------------------------------------------------------
Bruce Abbott (950606.1155 EST) --
When shock occurred in the escapable-shock condition, it continued
until terminated by the rat by depressing the "escape" lever. In
this condition the rats became very efficient: they remained poised
over the escape lever and pressed it rapidly enough to produce
shock durations averaging from about 100 to 300 milliseconds for
different rats. It is very clear that the rats had control over
shock duration in this condition, and that they used that control
to minimize shock duration.
The problem that keeps me from relaxing and enjoying it is that this
doesn't seem like a very significant degree of control over the shock. A
lot of unanswered questions remain. Is a shock that lasts 100-300 msec
essentially zero shock from the rat's point of view, or is it so
excruciating that the rat's error signal is approaching saturation for
any duration longer than 100 msec? I would much rather have seen an
avoidance schedule where the beginning of a trial is signaled and the
rat has the ability to prevent the next shock altogether. In that
situation, we know that the rat experiences only a few percent of the
shock rate that it would experience if it did not press the bar in time.
That shows that control definitely exists and is effective.
Then we could play back a recording of the actual shocks, while the rat
goes through the same experiment but without the bar presses having any
effect on the apparatus. So we have the same behavior and the same shock
rate, the only difference being that there is no control at all. I guess
what bothers me about your experiment is that whatever control may have
existed was pretty weak, and couldn't reduce the shock below some rather
long duration. I should think that the rat's error signal was still
pretty large even when control was supposedly occurring.
From this you conclude that the outcome of the experiment was
unsurprising. But you have not seen the research suggesting that a
different outcome was possible, and your expectations are based on
nothing more than calculations based on the objective situation.
I don't know if the outcome was unsurprising; all I know is that I'm not
satisfied with the distinction between control and no control. If the
rats were hovering over the bar in the "control" condition, they must
still have been experiencing a pretty large error. When I test for
control, I want it to be successful. If big errors still remain, we're
not seeing very successful control. So the rats must be trying to
distinguish between hardly any control and none at all, which, it seems
to me, makes telling the difference unnecessarily hard.
Again, "no effect" is how YOU perceive the situation, not
necessarily how the rat perceives it. WITHIN the two conditions of
the experiment, shock WAS controllable in one of them. It is only
by comparing the long term, overall shock outcomes in the two
conditions that you can conclude that there was no overall ability
to control shock duration. Could the rats perceive this? The only
way to know for sure was to run the study.
Same question: why not make the distinction easy to perceive by letting
the rats reduce the incidence of shock close to zero in the controlled
case?
---------------------------------
RE: illusion of control
Poor example, because both you and your teenage son actually have
control of the car when driving.
News to me. When I'm driving, it seems to me that _I'm_ the one
whose foot is on the accelerator and whose hands are on the wheel,
not my son.
I meant, whichever one of you is driving, that person actually does have
control of the car, while the other doesn't.
It's not having or lacking control _per se_ that determines the
experienced level of stress, it's whether having or lacking control
results in the least experienced error. Didn't I make that clear?
Not really, because "having control" is ambiguous the way you're using
the phrase. For example, "Furthermore, HAVING control can be QUITE
stressful if you're not good at correcting error."
You seem to use "having control" as meaning "being put in a position
where you could control if you knew how", rather than the way I use it,
meaning "being able to keep perceptions acceptably close to their
desired states by means of actions." The means of control is necessary
for having control, but so is the skill, the ability to take advantage
of the available means.
Try the compensatory tracking task using a high-frequency
disturbance and see how unrelaxed it makes you feel.
This is an example of lack of control, not of control. You don't have
the ability to keep error acceptably small in that situation, so you
don't have control even though you're operating the handle. We're just
using "having control" in different ways. If a flying instructor turns
to the student on his first trip up and says, "Ok, you have control now,"
he's lying. The student may have his hand on the control stick, but he
doesn't have control.
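The effect of a high-frequency disturbance can be sketched with a toy simulation of a single compensatory tracking loop, using a leaky-integrator output function in the style of the PCT tracking models. All parameter values below (gain, slowing time constant, frequencies) are illustrative assumptions, not fitted values:

```python
import math

def rms_error(dist_freq_hz, duration=20.0, dt=0.01, gain=10.0, slowing=0.5):
    """Simulate one compensatory tracking loop and return RMS error.

    perception = handle output + disturbance (the cursor position),
    reference  = 0 (keep the cursor centered),
    output     = leaky-integrator function of error, so the system
                 cannot change its output arbitrarily fast.
    """
    output = 0.0
    sq_err_sum = 0.0
    steps = int(duration / dt)
    for step in range(steps):
        t = step * dt
        disturbance = math.sin(2.0 * math.pi * dist_freq_hz * t)
        perception = output + disturbance
        error = 0.0 - perception
        # Powers-style slowed output: o += (dt/tau) * (gain*error - o)
        output += (dt / slowing) * (gain * error - output)
        sq_err_sum += error * error
    return math.sqrt(sq_err_sum / steps)

slow = rms_error(0.05)  # disturbance well inside the loop's bandwidth
fast = rms_error(5.0)   # disturbance well above it
print(f"RMS error, 0.05 Hz disturbance: {slow:.3f}")
print(f"RMS error, 5.00 Hz disturbance: {fast:.3f}")
```

With these assumed parameters the slow disturbance is almost entirely canceled, while at 5 Hz the error is nearly as large as the disturbance itself: the handle is being operated, but there is no control.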
You really said this yourself:
I think this raises a definitional problem. What do you mean by
"give"? Certainly I can give you the means by which some perception
can be controlled, and I can describe how to use those means to
achieve control. I can even make it in your best interest to
assert control (e.g., I could let go of the steering wheel while we
are traveling down the road at 70 mph). What I can't do is make you
do it.
... and what you can't do is call this a "controllable" situation. The
only way to demonstrate that something is controllable is to demonstrate
control of it.
Illusory control may reduce stress (so long as the illusion
persists) if what you are worried about is the POSSIBILITY of
uncontrollable error.
OK, so you tell someone "You're going to get a rather severe shock when
that timer counts down to zero. However, if you press this big red
button any time after 10 seconds before zero, the shock will be
disabled." There's no question of not being able to control the shock;
all that's required is a simple act the person knows how to perform. So
the level of stress as the countdown proceeds may climb somewhat, but as
soon as the big red button is pressed there is no reason for further
stress.
Until the person discovers that this control was illusory, because the
big red button wasn't connected to anything and the shock occurred when
the timer ran out. This would instantly invalidate the illusion of
control and probably everything else the experimenter said from then on.
But suppose that neither the button nor the timer is connected to
anything. Now the person presses the button, and when the timer runs
out, no shock occurs. It's the same principle as nuclear deterrence or a
bear-scarer or taking a prophylactic dose of aspirin or giving up eating
eggs. You do something to prevent an uncontrollable error, and sure
enough, there is no uncontrollable error. One reasonable hypothesis is
that what you did prevented the unwanted error. Belief that you actually
controlled the outcome may reduce stress. However, if you use your
brain, you will realize that you haven't proven that you have control,
and that realization could lead to a great deal of stress. The critical
question is not whether you appear to be in control, but whether you ARE
in control.
Saying that the illusion of control reduces stress is therefore an iffy
statement, because it depends on the gullibility of your subject.
Dumbo's feather is a nice example: he was unwilling to try to fly
so long as he believed that the attempt would result in a painful
impact with the ground.
If Dumbo had been real, I would have said he was smarter than the guy
who gave him the feather and the story about being able to fly. If you
actually can fly, the feather is a bear-scarer. You can't prove it
didn't enable you to fly. If you can't fly, however, the illusion of
control will be quickly dispelled. That's how the test for the
controlled variable works: you can't prove beyond doubt that a
particular variable is under control, but you can very quickly show that
it's not under control.
Thus, your methodology
ensured that the rats experienced the same amount of stress in the
control of shock and no control of shock conditions.
On the assumption that the only cause of stress is the _objective_
error. The study was designed to provide evidence for or against
this assumption. When it comes to a choice between assumptions and
data, I prefer data, especially when different researchers are
making opposing assumptions.
I don't see any problem with that assumption, for rats in that
experiment. After all, they had direct evidence of lack of control, in
that no matter what happened or what they did they still experienced at
least 100-300 msec of shock. Even if they had believed they had control
over the shock, the next shock would prove they did not.
RE: Rick's suggestion:
No doubt the rats would prefer no shock to the levels they actually
received, and would have adopted other, more effective forms of
control if they could discover them (as did those exceptional few
rats that learned to roll over onto their backs). But I have a
real problem with the notion that the rat's control systems are
continually undergoing massive reorganization here.
Why? If a control system is consistently permitting large amounts of
error, reorganization is very likely to start up. Seems reasonable to
me; it's called "trial and error" behavior. When one behavior isn't
working, pretty soon you start trying something else.
You didn't read my description of the study very carefully. The
rats didn't "select the control of shock condition 1/2 of the
time," (although they certainly could have!). What they DID do was
stay in whatever condition they were placed into. When the
condition was switched, the rats made no attempt to switch back.
I missed something. What was it the rats could do to switch from the
controllable situation to the uncontrollable one? Was this the
houselight situation, too, where another lever would switch the conditions?
From what you say, I deduce that the rats could take some action that
would switch conditions. So you were testing to see if they would choose
one condition over the other, and they didn't. This says either that
they had no preference for being able to control the shocks, or that the
difference in degree of control they could obtain in one condition
relative to the other was so slight that they couldn't tell the
difference.
Nice try, but no cigar.
Right.
-----------------------------------------------------------------------
Bruce Abbott (950606.1400 EST) --
I believe that you have drawn an incorrect conclusion from one
graph showing what happens to asymptotically maintained response
rates as the ratio requirement of a ratio schedule is varied.
Oh? In what way was it incorrect? There are many more such graphs,
including some obtained by Timberlake showing the negative relationship
between reinforcement rate and behavior rate over a range of ratios of
5000:1, with NO region in which the positive relationship was observed.
In these experiments the animals obtained all food or water from the
experimental apparatus; the experiments ran continuously. In obesity
experiments, mentioned in BCP, it was found that adding reinforcers
arbitrarily caused an immediate reduction, even a cessation, in
behavior, and ceasing to add them immediately restored the former rate
of behavior. There is even a principle I've heard of: "noncontingent
reward decreases behavior."
Perhaps a clearer picture might emerge from examining data on
runway performance. Rats are food-deprived to some criterion and
then trained to run from a start box, down a straight-alley, and
into a goal box which contains a certain amount of standard
laboratory rat chow. What is measured is the speed of running.
The picture isn't "clearer"; it's just more like what you would expect
from reinforcement theory. In this sort of experiment, speed of running
has very little effect on amount of reinforcement received, and since as
you say only one or a few trials per day were run, there is no
possibility of measuring the slope of the behavior-reinforcement curve.
Most of the time is spent in another cage under another schedule, which
is not mentioned. The experimental run is just a blip in the background
conditions.
I wonder why only a few runs per day were used. Could the reason be that
if the rat spent a few hours running the runway, the expected
relationship would no longer be seen? Presumably, if you put a lot of
food in the goal-box on each run, the rat would slow down, and come to
an asymptotic speed that would be slower as the amount of food increased
-- at least above some critical amount of food. With enough food in the
goal-box, there would be trials on which the rat didn't bother to run at
all. This is what is interpreted as "satiation," which is to be avoided
as it produces results not consistent with the theoretical prediction.
And if the amount of food was decreased, the rat would speed up, trying
to get more food. However, if the amount of food was made small enough,
we would start to see the other relationship: increasing the amount of
food would result in faster running, decreasing it in slower running.
That's my prediction, based on the other experiments. Any indication
that it is wrong?
... larger reward would be expected to produce faster running if
you properly analyze the situation from a control-system
perspective (remember, each visit to the goal box is having little
or no effect on the error in the upper-level nutritional control
system even with relatively large reward; the experimental
procedure essentially opens the loop on this system).
The only problem is that in order to make the results come out right,
you have to postulate something unobservable: changes in the
"attractiveness" of the goal with a sign chosen to be just right to
explain the results. All we actually observe is that the rat, fresh from
the 23 hours of training in the other cage, will strive harder to get
food when it sees more food available, and when it has been more
deprived, up to a point.
If a larger reward is expected to produce faster running, what happens
when the reward exceeds the amount of food that the rat normally eats in
a day? Will the running speed go off the upper end of the scale? Or will
it slow to a stroll, a saunter, a casual exploratory run with much
sniffing here and there and eventual leisure arrival for a nibble at the
food in the goal-box?
I believe that the conditions in this experiment have been carefully
adjusted until the relationship predicted by reinforcement theory was
observed.
Staddon certainly didn't see anything problematic.
That's right. He didn't seem to notice anything funny about the curves.
He didn't comment on their significance relative to basic reinforcement
theory. Maybe he just missed seeing the problem, or maybe he decided he
wasn't going to touch this one with a ten-foot pole.
The main thing I get from your post is that if my interpretation of the
data is right, there is definitely a problem for reinforcement theory
and you would feel a considerable urge to make it go away.
----------------------------------------------------------------------
Best to all,
Bill P.