Test

[From Bill Powers (980729.1151 MDT)]

Bruce Abbott (980729.1800 EST) --

The objective at the time was to identify what was reinforcing the choice of
the signaled over the unsignaled condition, once it had been established
that the rats do indeed so choose. That does not prevent me from
reinterpreting the data in control-theoretic terms today, does it? Insights
_can_ be retroactive (applied to old data).

Yes, but it can't be claimed that you had the insights at the time the data
were taken, as you appear to be doing in claiming that you were applying
the Test _then_. I doubt that it was especially significant to you then
that the rat's behavior was having an effect on the state of the apparatus
equal and opposite to the effect of whatever caused the switch to the
non-signaling mode. Your very choice of a binary variable (signaling vs
non-signaling) made it impossible to see the equal-and-opposite
relationship, suggesting much more strongly a sequence of events. And of
course the sequentiality of the binary variable prevented you from seeing
the _lack_ of effect of the disturbance.

If control theory applies to the behavior of organisms now, it applied
then, too, and all the relationships we expect to see under the control
model should have been observed if you were observing accurately. Therefore
your data are bound to contain information that can be explained under
control theory. That does not provide ground for claims that you were
anticipating PCT, especially if you were interpreting the data under a
different theory.

You object to my suggestions that some more alternative hypotheses be
tested, but isn't doing just that the reason for which you tried all those
different variations, such as varying the timing of the signal prior to the
shock, and testing for preferring the signal with no shocks? ...

I'm sorry, but I don't recall making any such objections.

You complained that I was putting you through the hoops and demanding a
detailed discussion of the experiment which you considered unnecessary. As
it transpired, I was just asking questions that it turned out you had also
asked, presumably without complaining. I hadn't retained the details from
the previous discussions, so I asked about them again.

What alternative
hypotheses did you suggest be tested, to which I objected? In my
recollection, my response was to inform you that the requested tests had
been performed. If that recollection is correct, then I have done exactly
as you suggest would follow good scientific practice.

That is true, but the point is that you failed to mention the crucial
tests, which is why I had to ask about them. You're blaming me for not
remembering the previous discussions in enough detail. Others who may not
have seen that previous discussion can't be expected to simply take your
word that everything necessary was done. If that were how science is done,
we'd just publish our conclusions.

Perhaps it is because I know that these alternatives were Tested quite early
on and ruled out that your assertion of a logical equivalence between
(signal and shock) and signaled shock is incorrect, just as I know from
experience that (steak and salt) is not necessarily the same culinary
experience as salted steak.

Is there yet another set of experiments that you have failed to mention? I
see nothing in what you have talked about that addresses this question. In
fact, what we see the rat controlling in your experiment is the signalling;
the shocks, since the rat can't affect them, are not controlled. The rat
exerts itself to keep the signalling turned on. That is what your data prove.

The distinction needs to be made in both cases,
not on my opinion, but on the evidence, which does argue for the distinction
made.

Evidence doesn't argue. You do. So please present that argument.

I'm not so sure that you're being "reasonable." When you have in the past
illustrated Testing for the Controlled Variable in a tracking-task setting,
you have talked about how in a single experiment the Test (applying
disturbances to the putative CV) reveals whether the CV (or a close
correlate of the CV) is being controlled by the subject. What follows after
this are further Tests to refine that definition. You have never to my
knowledge asserted before that the tracking experiment by itself does not
constitute a Test for the Controlled Variable.

In a full-blown tracking task, disturbances are applied both to the target
position and the cursor position. Since these disturbances are independent
of each other, we sample all possible combinations of magnitudes and
directions of disturbances, exhausting the degrees of freedom of the task.
Then, in fitting the model to the data, we demonstrate that the behavior we
observe quite exactly exemplifies the behavior of a control system with
specific and rather narrowly-defined parameters. This version of the Test
is much more advanced than the one I describe in terms of rules of thumb.

It is certainly still possible that the definition of the controlled
variable in a tracking task could be improved, but any such improvement
could only increase the predictivity of the model in the second decimal
place. One has to decide whether the slight improvement that remains to be
made is worth the effort that would be required. Your experiments with the
signalled shocks, of course, are by no means comparable in precision with
our tracking tasks, largely because of the binary conception of the
controlled variable.
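
A minimal sketch of that kind of model fitting, for readers who want to see the shape of it: independent, slowly varying disturbances act on cursor and target, a simple integrating control system is fit to the handle record, and the best-fitting gain is recovered. Every name and parameter value below is an illustrative assumption, not taken from any actual tracking study.

import numpy as np

DT = 1.0 / 60.0                 # sampling interval, 60 samples/s (assumed)
N = 3600                        # one minute of data

def smoothed_noise(n, seed):
    """Slowly varying disturbance: heavily low-pass-filtered white noise."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    for _ in range(400):
        x = np.convolve(x, [0.25, 0.5, 0.25], mode="same")
    return x / np.abs(x).max()

d_cursor = smoothed_noise(N, seed=1)    # disturbance added to the cursor
d_target = smoothed_noise(N, seed=2)    # disturbance moving the target

def run_model(gain, ref=0.0):
    """Integrating control of the perceived cursor-target separation."""
    handle = np.zeros(N)
    output = 0.0
    for t in range(1, N):
        cursor = handle[t - 1] + d_cursor[t]
        target = d_target[t]
        perception = cursor - target        # hypothesized controlled variable
        error = ref - perception
        output += gain * error * DT
        handle[t] = output
    return handle

# Stand-in for a real subject's handle record: a control system with gain 8.
observed = run_model(gain=8.0)

# Fit the model by searching for the gain that minimizes RMS error.
gains = np.linspace(1.0, 20.0, 39)
rms = [np.sqrt(np.mean((run_model(g) - observed) ** 2)) for g in gains]
print("best-fitting gain:", gains[int(np.argmin(rms))])

With a real subject the fit would of course not be perfect, and the residual error is what the later refinements of the Test try to shrink.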

Yet in the context of my
experiment you now wish to redefine the Test as having been performed _only_
if an exhaustive series of whatever-they-are-that-aren't-Tests has been
conducted to specifically identify exactly what aspect of the putative CV is
what the subject is "really" controlling for.

Yes. You have to rule out all reasonable alternatives. We have done so in
the tracking tasks. You have made an attempt to do so under circumstances
where many more possibilities exist. You chose not to investigate all the
possibilities, and I believe that is reasonable. But it limits what you can
claim to have found.

You seem to forget that the Test is _specifically_ a test for control. It
is not just a test to see if one variable affects another. If you show that
varying the interval between a signal and a shock affects the behavior of
the rat, you have not tested for control, but simply for an effect. This
may give you interesting information, but it's not a test for control. It's
not the Test.

I'll say it again -- I AGREE with you that such a series is necessary to
refine one's definition of the CV; that is not the issue between us here.
(And in fact, as previously noted, just such an extensive series was
conducted in an attempt to identify what factor or factors are responsible
for the observed preference for signaled over unsignaled shock schedules.)
My assertion is that each experiment using the procedure I described does
constitute as much of a Test as a single tracking experiment does. Either
both perform the Test or neither does. Your choice.

Qualitatively, perhaps there is a basis for comparison -- provided both
parties are control-theory-aware. Quantitatively, there is no comparison.

We should be able to decide this
matter without hearing specifics about any real application of the method.

Oh, is that so? Then the "we" of whom you speak does not include me.

What a strange thing to say. Well, let's try this approach on a different
method. I ask a person to keep a certain variable at a certain value. I now
apply disturbances which tend to push the variable away from the stated
value, and observe that the person responds in such a way that the effects
of the disturbances are all but negated by these actions, so that the
variable remains close to the given value, much closer than would be the
case if the person had not so acted. Is this a Test for the Controlled
Variable?

Part of it. Part of the issue here is the degree to which you actually
followed this procedure in your experiment. You must also verify that it is
the action of the system that is opposing the disturbance, and that control
is lost when the system is prevented from perceiving the proposed
controlled variable. But yes, it is the Test. This Test must be repeated
until it converges to a single result, as nearly as possible.

Can you determine that without having a specific application
(e.g., cursor tracking) of the method? If you can, why can't you do the
same when evaluating the changeover method in the abstract?

You can. But that was not the only issue: the issue was whether the Test
was applied in a knowing and competent manner at the time you did the
experiments. There is more to doing the Test than just a single iteration
of a rote procedure. Anyone can appear to do it once, by accident. What
reveals its knowing use is demonstration of an appropriate strategy of
forming hypotheses, applying disturbances, observing the results,
evaluating them as an indicator of control, and then refining the choices
of disturbances to rule out alternative definitions of the controlled
variable.

The one aspect of this test that you would never have done if you were
looking for reinforcers is to apply various disturbances to the reinforcer
itself, in all its degrees of freedom, independently of the effects of
behavior. I doubt that it would have occurred to you to have the apparatus
revert to the unsignalled mode, and after a while revert back to the
signalled mode whether the rat pressed the lever or not. So simply mentally
substituting "reinforcer" for "controlled variable" does not lead to proper
use of the Test.

What "obvious alternatives" were left untested? You suggested several
alternatives and if memory serves, my reply to you was that these had been
Tested.

Yeah, but what good did it do me or anyone else for YOU to know that they
had been tested?

Your assertion that I "applied a disturbance to the first variable
that struck [my] fancy" does not appear to arise from the facts as I
described them; I don't know where you're getting that.

They certainly did give that appearance; you're forgetting that initially
all you said was that the condition reverted to the nonsignalled mode and
the rat pressed the lever to take it back to the signaled mode. You didn't
mention any other hypotheses about the controlled variable, or how you
applied disturbances to rule them out. In fact, you still haven't really
described the strategy you followed, and how the Test was actually applied
-- the disturbance, the countering action, the comparison of the predicted
controlled case with the observed behavior, the formulation of new
hypotheses, and so forth. I'm more or less assuming that you did all these
things, since you say you did the Test, but a description would make
assumptions unnecessary.

Now that I can agree with. My idea of applying [successive refinements of]
the Test is the same as yours.

Ah, good. How was that done in your experiment?

There are technical problems with giving the rat direct control over the
interval by which the signal preceded the shock, having to do with the lack
of immediate feedback.

So what? Since when have technical difficulties made it all right to assume
things without proof?

Assume things without proof? What things? I'm sorry, but I don't recall
making any such assumptions.

You assumed that the rat was not controlling for the interval between the
signal and the shock. When I suggested this variable might be Tested for,
you objected that it is technically difficult. So what do you then assume
about the rat's attempt to control the interval between signal and shock?
Whatever you do, you're assuming _something_ unless you actually do the test.

If your experiment created such technical
difficulties, it should have been revised to eliminate them

Ah, but Bill, the experiment as conducted _was_ the revision that overcame
the technical problem noted.

I guess I missed that. How did it rule out the possibility that the rat was
controlling for a specific interval between signal and shock?

You object to this as an unproven assumption:

Another delayed revelation. This implies that you did not keep a
press-by-press and shock-by-shock record of the experimental results, or
that if you did, your conclusions were based on whole-session averages.

Do you understand the meaning of "or"? In fact, you immediately verified my
assumption -- the part following "or."

Actually I did keep a press-by-press and shock-by-shock record of the
experimental data. The data presented were indeed whole-session averages,
but in this case those averages did not differ in any important way from
data collected at any point within a session.

That's impossible. At any point within the session, there is no "percentage
of time in the signaled condition." The system is either in the signaled
condition or in the other condition. The rat is either pressing the lever
or doing something else.

In tracking studies, you
report "whole session averages" like the various correlations, mean square
error, fitted gain value, and so on. The percentage of time spent in the
signaled shock condition is no different from these measures, Bill. It's an
index of overall performance.

Yes, but it's not a measure of what the rat or the person was doing during
the session.

I agree that time spent in the signaled condition is definitely of
interest, but whether it is the sort of thing a rat might perceive and
control is another matter. I agree that the rat might _appear_ to control
that variable, but as has been pointed out numerous times on CSGnet, one of
the problems with the Test is that you have to be careful about misleading
appearances.

The same goes for tracking studies, doesn't it?

Absolutely.

All that is fine, except for the assumption that the first variable to pass
this test is the controlled variable. I would not be satisfied by this.

I have never asserted or implied that the first variable to pass this test
is the controlled variable. I would not be satisfied by this either, and I
think you know that.

So tell us what alternatives you tested, and how you ruled them out.

skipping more of the same ...

As for
perceiving the abstract concept of "signaled condition," I am not asserting
that rats do perceive such a concept. I am asserting that they control for
having their shocks signaled.

They control for signals occurring prior to shocks. Whether they understand
that the signal somehow indicates that a shock is about to occur (or
whether they even grasp the concept of "about to occur") remains unknown.
Your experiment shows only that a certain physical state of affairs is
under control -- not what it means to the rats.

P. S. Reading your subsequent response to Rick, I realize that the proper
way to describe your proposed controlled variable is this: When shock is
inevitable, rats control for receiving signals that occur 2 to 5 seconds
before each shock. Rats appear to be controlling not just for the "signaled
condition," but for signals that occur in a particular temporal relation to
the shocks. And this is true only when shocks are qualitatively inevitable.

It's also true when shocks are avoidable or escapable (Badia & Culbertson,
1972).

Great. I thought that when the rats could avoid the shocks, the signalling
became irrelevant. But it's not going to work, Bruce. No matter how many
vital facts you hold in reserve, I am not going to learn to quit
questioning your results when you spring new evidence on me.

So far we have no indication of whether this control process is a means to
reduce the quantitative experience of the shocks (energy delivered).

The energy delivered is slightly higher when shocks are signaled, because
the rats "freeze" during the signal and therefore are slower off the dime
when the shock begins. It is still possible that the signals somehow reduce
the perceptual experience. It's a possibility that is tough to test, but if
it happens, then it must be a potent effect, since rats are willing to
change to the signaled shock condition even when shocks are up to three
times as intense (in mA) as those delivered in the unsignaled condition.
(All this was discussed on CSGnet some time ago.)

I presume you have data, held back until now, showing that the rats
experience three times the current as more aversive than 1 times the
current, and that the maximum aversiveness has not already been reached
with the smaller current.

Perhaps we need to find a somewhat simpler analogy from which to reason
about the changeover procedure. Bill, imagine that you are sitting in your
favorite easy chair, reading a book. The lamp on the stand next to your
chair is on. Mary hypothesizes that you are controlling for having the
light on, so she performs the Test (or whatever it is you now wish to call
it) by switching the lamp off. You immediately reach over and turn it back
on. She turns it off again. You turn it back on. She turns it off.
"Mary," you say with some irritation as you flip the light on again, will
you _please_ leave the light alone?" Mary concludes that you are indeed
controlling for the light being on.

Questions: Has Mary performed a Test for the Controlled Variable?

Yes. And she is about to be deceived.

If so,
is she correct that you are controlling for having the light on?

That might be my controlled variable, or it might not. It's more likely
that I am controlling for having enough light on the page, with the state
of the light itself being varied but not controlled (if the sun shines in
the window, I don't need the light to be on).

What if
you wanted the light on so that you could read? Does the fact that you are
"really" controlling for having enough light on your book to read it mean
that you are NOT simultaneously controlling for having this particular light
on?

Very possible, in fact likely. I don't care about the state of the light
bulb itself. My concern is the illumination of what I'm reading. If there's
already enough illumination, I won't turn the light on. You can prove that
I'm not controlling for the light being on by supplying light from another
source. I will then not resist a disturbance of the state of the light
bulb; in fact I might turn it off myself if the total light is too bright.

Would it be fair to say that you are controlling for having this light
on SO THAT you can see the book well enough to read? That is, does the fact
that you are using this light as the MEANS by which you control for the
perception of reading the book negate the fact that you are controlling for
having the light on?

No. This is not correct. The MEANS is the output, the action, which is
varied as required to keep the result at the reference level. Varied, not
controlled. The reference state of the light is varied as required to keep
the page illuminated, which shows that it is not itself a controlled
quantity.

You're supposed to know all this, Bruce.

Best,

Bill P.

[From Rick Marken (980730.0830)]

Bill Powers (980729.2315 MDT)--

You are objecting that the rats had no control over either the
timing of the warning or the occurance of shocks, but those are
not claimed to be controlled variables. The only claimed controlled
variable is the one that the rat's actions could affect, the mode
of the apparatus.

I started this thread by saying that Bruce Abbott had not conducted
The Test in his preference experiments because we still have no
idea what perception the rats were controlling in those experiments.
Bruce then said that his experiments had revealed that the controlled
perception was "signal 5 sec before shock". You and I explained why
we have no reason to believe that this is the controlled perception.
I agree that the experiments show that the rat was controlling the
mode of the apparatus, but, as you note

one can question whether [these] results [showing that rats control
apparatus mode] have isolated the perceived variables that were
actually under control.

which was my original point; the results of Bruce's preference
experiments clearly have _not_ isolated the perceived variables
that were actually under control. So, as I noted when I started
this thread, we don't know what variables the rats were controlling
in Bruce's shock preference experiments. We don't know this because
there was no Test for controlled variables. One reason we don't know
what perception was controlled is because the rats were not allowed
to control perceptual variables related to the mode the apparatus
was in (such as signal-shock interval). I think the rats were not
allowed to control these possibly controlled perceptions because
the experimenters had no idea that what the rats were doing was
controlling their perceptions. As you say:

I join you in questioning whether the Test can really be performed
accidentally, without understanding the principles of control.

I'm just trying to explain (to those who might be interested) why
I think this is true. Much of what Bruce did provides evidence
that the rats were controlling _something_; as you note, they were
obviously controlling the mode of the apparatus because they
immediately restored the apparatus to "signaling" mode after it
had changed to "non-signaling" mode. Many conventional experiments
can be seen as involving an aspect of the Test; a disturbance (IV)
is varied and there is a reaction to it (DV). I am trying to
explain why these "accidental" applications of aspects of the Test
are not the Test; I am trying to explain why psychologists have
not been able to do the Test without an understanding of the
principles of control.

We can clearly see evidence of control in conventional research;
it would be rather surprising if we couldn't since organisms are
control systems. What we can't see, however, -- even in research
that seems very _close_ to being the Test, like the McBeath et al
research I described in my "Dancer..." paper and Bruce's preference
studies -- is evidence of the perceptual variable that is actually
under control. I think the reason for this is that doing the Test
properly is not just a matter of carrying out a proper sequence
of steps. The Test is based on understanding that organisms are
controlling perceptual variables. There are many ways to try to
Test to determine what perceptual variables are under control; but,
ultimately, all these approaches fall out of an understanding of
the principles of control and the fact that behavior is the control
of perception.

Best

Rick


--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

[From Bruce Abbott (980730.1500 EST)]

Bill Powers (980729.1151 MDT) --

Bruce Abbott (980729.1800 EST)

The objective at the time was to identify what was reinforcing the choice of
the signaled over the unsignaled condition, once it had been established
that the rats do indeed so choose. That does not prevent me from
reinterpreting the data in control-theoretic terms today, does it? Insights
_can_ be retroactive (applied to old data).

Yes, but it can't be claimed that you had the insights at the time the data
were taken, as you appear to be doing in claiming that you were applying
the Test _then_.

I assure you, I am making no claim that control theory guided the design of
the study or the interpretation of the results at the time.

If control theory applies to the behavior of organisms now, it applied
then, too, and all the relationships we expect to see under the control
model should have been observed if you were observing accurately. Therefore
your data are bound to contain information that can be explained under
control theory. That does not provide ground for claims that you were
anticipating PCT, especially if you were interpreting the data under a
different theory.

I never made a claim of anticipating PCT, Bill. My claim is that the method
employed can now be seen (in the light of control theory) as having
constituted a Test for the Controlled Variable. At the time, the method was
interpreted as assessing the rat's preference. But it happens that actively
reinstating the signaled shock condition each time it is taken away by the
apparatus (repeatedly demonstrating that the rat will choose the signaled
over the unsignaled condition) corresponds to controlling for being under a
condition in which shocks are signaled rather than unsignaled. Thus there
is more here than just the truism that control theory applies to behavior
under all circumstances.

I'm not going to get into a discussion of who said what; anyone who wants to
know can reread the previous posts on this topic.

. . . these alternatives were Tested quite early
on and ruled out that your assertion of a logical equivalence between
(signal and shock) and signaled shock is incorrect, just as I know from
experience that (steak and salt) is not necessarily the same culinary
experience as salted steak.

Is there yet another set of experiments that you have failed to mention? I
see nothing in what you have talked about that addresses this question. In
fact, what we see the rat controlling in your experiment is the signalling;
the shocks, since the rat can't affect them, are not controlled. The rat
exerts itself to keep the signalling turned on. That is what your data prove.

There are plenty of experiments I have "failed" to mention. It was a long
series. In my last post I explained how my intention was simply to present
the general method for evaluation. I didn't plan on offering a detailed
description of all experiments, their results, and the conclusions that were
reached on the basis of those results.

Evidence doesn't argue. You do.

Figure of speech. Sorry, I forgot how annoyingly literal-minded you can be,
when you want to find things to criticize.

So please present that argument.

Rats will control for being in the signaled shock (as opposed to the
unsignaled shock) condition only if the signals precede shocks, and only if
they precede them by a certain minimum time. They will not control for
being in the signaled condition if the signals and shocks are randomly
related, or if the signals occur too close in time prior to shock. The rat
exerts itself to keep the signaling turned on only if the signals are
predictive of the time when shock will occur.

I'm not so sure that you're being "reasonable." When you have in the past
illustrated Testing for the Controlled Variable in a tracking-task setting,
you have talked about how in a single experiment the Test (applying
disturbances to the putative CV) reveals whether the CV (or a close
correlate of the CV) is being controlled by the subject. What follows after
this are further Tests to refine that definition. You have never to my
knowledge asserted before that the tracking experiment by itself does not
constitute a Test for the Controlled Variable.

In a full-blown tracking task, disturbances are applied both to the target
position and the cursor position. Since these disturbances are independent
of each other, we sample all possible combinations of magnitudes and
directions of disturbances, exhausting the degrees of freedom of the task.
Then, in fitting the model to the data, we demonstrate that the behavior we
observe quite exactly exemplifies the behavior of a control system with
specific and rather narrowly-defined parameters. This version of the Test
is much more advanced than the one I describe in terms of rules of thumb.

That's all well and good, but it doesn't answer my question. Does the
typical tracking experiment exemplify a Test for the Controlled Variable?
Yes or no? (I do not wish you to infer from this that I am asking whether
this one Test would be adequate by itself to nail down the CV precisely. It
wouldn't be.)

Your experiments with the
signalled shocks, of course, are by no means comparable in precision with
our tracking tasks, largely because of the binary conception of the
controlled variable.

I would say entirely (not largely) because of the binary NATURE (not
conception) of the controlled variable. Some variables, Bill, are
inherently binary. Like whether shocks are signaled or not. Try doing a
study on control of pregnancy, for example, by allowing participants to vary
the degree of pregnancy. Raises some interesting technical difficulties,
doesn't it?

Yet in the context of my
experiment you now wish to redefine the Test as having been performed _only_
if an exhaustive series of whatever-they-are-that-aren't-Tests has been
conducted to specifically identify exactly what aspect of the putative CV is
what the subject is "really" controlling for.

Yes. You have to rule out all reasonable alternatives. We have done so in
the tracking tasks. You have made an attempt to do so under circumstances
where many more possibilities exist. You chose not to investigate all the
possibilities, and I believe that is reasonable. But it limits what you can
claim to have found.

O.K., if that is your preference. A single experiment does not constitute a
Test for the Controlled variable. When you or I or anyone else applies a
disturbance to a putative controlled variable and observes whether or not
that disturbance is resisted by the subject's actions, we henceforth will
NOT say that we administered a Test for the Controlled Variable.

Uh, what shall we say we have done?

You seem to forget that the Test is _specifically_ a test for control.

On the contrary, I have kept that firmly in mind.

It
is not just a test to see if one variable affects another.

Didn't say that it was.

If you show that
varying the interval between a signal and a shock affects the behavior of
the rat, you have not tested for control, but simply for an effect. This
may give you interesting information, but it's not a test for control. It's
not the Test.

I think you should reconsider your position on this. I do not show that
varying the interval between a signal and a shock affects the "behavior" of
the rat (a rather vague and general term); I show that the interval between
a signal and a shock is a crucial parameter in the rat's willingness to
_control_ for being in the signaled as opposed to the unsignaled condition.
That certainly does provide information concerning the true nature of the
CV. Signal-and-shock (in any order) is not sufficient; before the rat
controls for the signaled shock condition, that condition must offer signal
and _then_ shock. You can see how this manipulation narrows the
possibilities in the definition of the CV.

What a strange thing to say. Well, let's try this approach on a different
method. I ask a person to keep a certain variable at a certain value. I now
apply disturbances which tend to push the variable away from the stated
value, and observe that the person responds in such a way that the effects
of the disturbances are all but negated by these actions, so that the
variable remains close to the given value, much closer than would be the
case if the person had not so acted. Is this a Test for the Controlled
Variable?

Part of it. Part of the issue here is the degree to which you actually
followed this procedure in your experiment. You must also verify that it is
the action of the system that is opposing the disturbance,

Done -- a press of the lever restores the signaled shock condition.

and that control
is lost when the system is prevented from perceiving the proposed
controlled variable.

Such as by moving the signals so that they don't precede the shocks? Done.

But yes, it is the Test. This Test must be repeated
until it converges to a single result, as nearly as possible.

More work needs to be done, but Testing was indeed repeated in an attempt to
converge on one result. There are some technical difficulties in ruling out
some of the remaining alternatives.

Can you determine that without having a specific application
(e.g., cursor tracking) of the method? If you can, why can't you do the
same when evaluating the changeover method in the abstract?

You can. But that was not the only issue: the issue was whether the Test
was applied in a knowing and competent manner at the time you did the
experiments. There is more to doing the Test than just a single iteration
of a rote procedure. Anyone can appear to do it once, by accident. What
reveals its knowing use is demonstration of an appropriate strategy of
forming hypotheses, applying disturbances, observing the results,
evaluating them as an indicator of control, and then refining the choices
of disturbances to rule out alternative definitions of the controlled
variable.

What we were doing was forming hypotheses, doing the experiments, observing
the results, evaluating them as an indicator of preference, and then
refining the choices of manipulated variables to rule out alternative
definitions of the reinforcer. Because choice in the context of this method
was assessed by giving the rats control over being in the signaled or
unsignaled shock condition, the results of the series can now be
interpreted, in retrospect, as involving a series of Tests that ruled out
alternative hypotheses as to the nature of the controlled variable.

The one aspect of this test that you would never have done if you were
looking for reinforcers is to apply various disturbances to the reinforcer
itself, in all its degrees of freedom, independently of the effects of
behavior. I doubt that it would have occurred to you to have the apparatus
revert to the unsignalled mode, and after a while revert back to the
signalled mode whether the rat pressed the lever or not.

As it happens, I designed, conducted, and published such an experiment.
This was done to answer some unwarranted criticisms of the changeover
method. A clock switched conditions back and forth between signaled and
unsignaled independently of the rat's behavior. At any time, the rat could
press a lever to switch conditions. The rat pressed and switched as soon as
the unsignaled condition appeared, but did not while the signaled condition
was present.
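
For concreteness, here is a minimal sketch of that arrangement, with all timing values assumed purely for illustration: a clock flips the apparatus between modes regardless of what the rat does, and a simplified "rat" presses (after a short latency) only when the unsignaled mode is in force.

SESSION_SEC = 3600        # one-hour session (assumed)
CLOCK_PERIOD = 120        # apparatus flips the mode every 2 min (assumed)
PRESS_LATENCY = 3         # seconds from mode change to lever press (assumed)

mode = "signaled"
time_signaled = 0
next_flip = CLOCK_PERIOD
press_at = None

for t in range(SESSION_SEC):
    if t == next_flip:
        # Disturbance: the clock changes the mode independently of behavior.
        mode = "unsignaled" if mode == "signaled" else "signaled"
        next_flip += CLOCK_PERIOD
        press_at = t + PRESS_LATENCY if mode == "unsignaled" else None
    if press_at is not None and t >= press_at:
        mode = "signaled"         # the rat's press restores the signaled mode
        press_at = None
    if mode == "signaled":
        time_signaled += 1

print("fraction of session in signaled mode:",
      round(time_signaled / SESSION_SEC, 3))

The point of the sketch is only that the rat's action, not the clock, determines which mode the session is mostly spent in -- the signature of a disturbance being opposed.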

Now that I can agree with. My idea of applying [successive refinements of]
the Test is the same as yours.

Ah, good. How was that done in your experiment?

I really don't feel like expending the enormous time it would require to
provide an appropriately detailed description, especially as my own interest
is in establishing the validity of the general method as a Test of the
Controlled Variable and not in defending work done 25 years ago.

You assumed that the rat was not controlling for the interval between the
signal and the shock. When I suggested this variable might be Tested for,
you objected that it is technically difficult. So what do you then assume
about the rat's attempt to control the interval between signal and shock?
Whatever you do, you're assuming _something_ unless you actually do the test.

The rat was not controlling for the interval between signal and shock
because there was no available means to do so. It might prefer a certain
interval over others if given the choice. Providing direct control over the
interval is not technically difficult (a mere matter of programming the
apparatus); what I said posed a technical difficulty was the delay in
feedback between actions that would alter the size of the interval and the
rat's perceptions of the resulting changes. The rat will not make the
connection between its actions and interval size at such a delay. The
technical difficulty lies in the nature of the rat's mental apparatus. If
you have a technical solution to that problem that will work, I'd be happy
to try it out.

However, there is an alternative approach, and that is to allow the rat to
control for being in the signaled (as opposed to unsignaled) shock condition
while parametrically varying the interval between signal and shock. The rat
will control for being in the signaled condition more strongly for a
signaled condition that offers optimal intervals than for those that offer
less optimal intervals, as perceived by the rat.

Actually I did keep a press-by-press and shock-by-shock record of the
experimental data. The data presented were indeed whole-session averages,
but in this case those averages did not differ in any important way from
data collected at any point within a session.

That's impossible. At any point within the session, there is no "percentage
of time in the signaled condition." The system is either in the signaled
condition or in the other condition. The rat is either pressing the lever
or doing something else.

Please, Bill, I'm not stupid. You can determine a rate by taking a sample
of finite length, at any "point" within the session.

In tracking studies, you
report "whole session averages" like the various correlations, mean square
error, fitted gain value, and so on. The percentage of time spent in the
signaled shock condition is no different from these measures, Bill. It's an
index of overall performance.

Yes, but it's not a measure of what the rat or the person was doing during
the session.

Every time the rat is forced back into the unsignaled shock condition, it
immediately runs over to the lever and presses it, reinstating the signaled
shock condition. It does not press the lever while in the signaled shock
condition. That's what happens the whole session long. Rick described one
of his demos, in which the participant pushes a button or something when she
detects (based on a series of numbers) that a program has changed; the press
restores the old program. I imagine that the data from Rick's experiment
would look very similar to changeover data.

Great. I thought that when the rats could avoid the shocks, the signalling
became irrelevant. But it's not going to work, Bruce. No matter how many
vital facts you hold in reserve, I am not going to learn to quit
questioning your results when you spring new evidence on me.

Then we seem to be working at cross purposes. I am trying to convince you
that the general method I described constitutes a valid Test for the
Controlled Variable. You seem to be after something else. I have provided
examples to illustrate the method. They are not intended to provide an
exhaustive review of the study as carried out or of the hypotheses evaluated.

Perhaps we need to find a somewhat simpler analogy from which to reason
about the changeover procedure. Bill, imagine that you are sitting in your
favorite easy chair, reading a book. The lamp on the stand next to your
chair is on. Mary hypothesizes that you are controlling for having the
light on, so she performs the Test (or whatever it is you now wish to call
it) by switching the lamp off. You immediately reach over and turn it back
on. She turns it off again. You turn it back on. She turns it off.
"Mary," you say with some irritation as you flip the light on again, will
you _please_ leave the light alone?" Mary concludes that you are indeed
controlling for the light being on.

Questions: Has Mary performed a Test for the Controlled Variable?

Yes. And she is about to be deceived.

You are not controlling the state of the lamp?

If so,
is she correct that you are controlling for having the light on?

That might be my controlled variable, or it might not. It's more likely
that I am controlling for having enough light on the page, with the state
of the light itself being varied but not controlled (if the sun shines in
the window, I don't need the light to be on).

So you vary the state of the lamp as necessary to produce the desired amount
of light on your book, but you are not controlling the state of the lamp.
Is this your position?

What if
you wanted the light on so that you could read? Does the fact that you are
"really" controlling for having enough light on your book to read it mean
that you are NOT simultaneously controlling for having this particular light
on?

Very possible, in fact likely. I don't care about the state of the light
bulb itself. My concern is the illumination of what I'm reading. If there's
already enough illumination, I won't turn the light on. You can prove that
I'm not controlling for the light being on by supplying light from another
source. I will then not resist a disturbance of the state of the light
bulb; in fact I might turn it off myself if the total light is too bright.

If you don't care about the state of the lamp, why do you keep turning it
back on after Mary turns it off?

Would it be fair to say that you are controlling for having this light
on SO THAT you can see the book well enough to read? That is, does the fact
that you are using this light as the MEANS by which you control for the
perception of reading the book negate the fact that you are controlling for
having the light on?

No. This is not correct. The MEANS is the output, the action, which is
varied as required to keep the result at the reference level. Varied, not
controlled. The reference state of the light is varied as required to keep
the page illuminated, which shows that it is not itself a controlled
quantity.

I think you need to think this through. Having the light on is a means by
which you are enabled to read your book. Turning the switch is the means by
which you control the state of the lamp. You are controlling the state of
the lamp as a means of producing sufficient illumination of your book for
reading, and you might find an alternate means if the lamp became a problem
(what with Mary fiddling with it and all), but so long as it is your
selected means to this other end, dammit Bill, YOU ARE CONTROLLING THE STATE
OF THE LAMP.

You're supposed to know all this, Bruce.

And so are you.

Regards,

Bruce

[From Chris Cherpas (980730.1712 PT)]

Rick Marken (980730.0830)--

...I think the reason for this is that doing the Test
properly is not just a matter of carrying out a proper sequence
of steps. There are many ways to try to Test to determine what
perceptual variables are under control;...

Variable means to common ends; OK, so let's go up a level
(or is it two?) and see what else is involved in the Test...

but, ultimately, all these approaches fall out of an
understanding of the principles of control...

The "ultimately" suggests a system concept. Not only are
perceptions of the principles controlled (to remain principles),
but "all these approaches fall out of" (all these program-level
perceptions are organized by) "an understanding of the principles"
(an organization of principles), and "the fact that behavior is
the control of perception" (the canonical system concept, so to
speak).

Or, restated:

The Test is based on understanding that organisms are
controlling perceptual variables.

Can one understand the Test without understanding the
Tester in control system terms?

Best regards,
cc

[From Bruce Abbott (980730.2100 EST)]

Bill,

I think that the main sticking point between us concerns what can be learned
from the sorts of manipulations that were carried out in the preference
experiments we've been discussing. Perhaps an assessment is in order.

1. At this point, I believe you agree that the basic procedure does show
    that rats control for being in the signaled condition as opposed to
    the unsignaled condition, when the signal-shock interval is optimal.

2. A main concern is with the binary nature of control, i.e., there
    were only two states the apparatus could be in (signaled shock condition
    or unsignaled shock condition), so one can not observe continuous change
    in output, continuous change in magnitude of disturbance, and the
    relationship between these values and the value of the CV.

3. You are not convinced that the parametric manipulation of such values
    as the interval between signal and shock, while examining the rat's
    willingness to control for being in the signaled condition, will provide
    information about the nature of the CV.

With respect to this last item, what I would like you to do is to
temporarily forget about the issue of whether the procedure provides a Test
for the Controlled Variable and instead, beginning with a fresh sheet of
paper, as it were, think about what is going on in this procedure as the rat
is repeatedly tested for preference for signaled over unsignaled shock
schedules at different signal-shock intervals. At some intervals between
signal and shock, the rats press the lever immediately as soon as the
unsignaled condition is imposed by the apparatus; consequently they spend
nearly the entire session in the signaled condition (high-gain control). At
other intervals the rats seem less motivated, taking their time getting to
the lever and thus spending quite a bit less time in the signaled shock
condition (low-gain control). At yet other intervals, they do not even
bother to reinstate the signaled condition once it is replaced by the
unsignaled condition (they no longer control for being in the signaled shock
condition, nor do they control for being in the unsignaled shock condition).

In PCT terms, how might one explain these systematic changes in the rat's
interest in controlling for being in the signaled shock condition, as the
interval between signal and shock is experimentally manipulated across sessions?

Regards,

Bruce

[From Bill Powers (980730.2011 MDT)]

Bruce Abbott (980730.1500 EST)--

Most of this discussion has become repetitive and I don't think we need to
go around again.

O.K., if that is your preference. A single experiment does not constitute a
Test for the Controlled variable. When you or I or anyone else applies a
disturbance to a putative controlled variable and observes whether or not
that disturbance is resisted by the subject's actions, we henceforth will
NOT say that we administered a Test for the Controlled Variable.

Uh, what shall we say we have done?

You have disturbed something you think might be a controlled variable to
see if the subject pushes back. If you really wanted to know what the
controlled variable is, you'd continue from there, and do the Test.
....

I think you should reconsider your position on this. I do not show that
varying the interval between a signal and a shock affects the "behavior" of
the rat (a rather vague and general term), I show that the interval between
a signal and a shock is a crucial parameter in the rat's willingness to
_control_ for being in the signaled as opposed to the unsignaled condition.
That certainly does provide information concerning the true nature of the
CV.

OK, you aren't going to give up. Let it go.

Questions: Has Mary performed a Test for the Controlled Variable?

Yes. And she is about to be deceived.

You are not controlling the state of the lamp?

No. I am varying it. If you disturb the illumination of the page by shining
a light on it, I will immediately alter the state of the lamp; I have no
desire for the lamp to be on or off. You can turn the lamp on or off just
by adding or subtracting illumination of the page I'm reading.

If so,
is she correct that you are controlling for having the light on?

That might be my controlled variable, or it might not. It's more likely
that I am controlling for having enough light on the page, with the state
of the light itself being varied but not controlled (if the sun shines in
the window, I don't need the light to be on).

So you vary the state of the lamp as necessary to produce the desired amount
of light on your book, but you are not controlling the state of the lamp.
Is this your position?

Yes, of course. Varying an output to oppose the effects of a disturbance on
an input is not "controlling" the output. I have no reference level for the
state of the lamp itself. I will turn the lamp on or off as required to
keep the illumination of the page the same. Control systems control their
inputs, not their outputs. I sacrifice independent control of the lamp so I
can control the illumination of the page.

What if
you wanted the light on so that you could read? Does the fact that you
are "really" controlling for having enough light on your book to read it
mean that you are NOT simultaneously controlling for having this
particular light on?

I want enough light on my book so I can read it. If there isn't already
enough light on it, I may turn on the lamp. If the lamp is on a dimmer, I
will vary the dimmer setting until the illumination is satisfactory. I have
no preference for any particular dimmer setting. If I did, I couldn't
control the illumination of the page. If I have only on-off control of the
lamp, I may have to use other actions, such as moving the lamp, to achieve
the exact level of illumination I want.

Not everything we affect by our actions is controlled by our actions.

If you don't care about the state of the lamp, why do you keep turning it
back on after Mary turns it off?

Because I care about the illumination of the page, and varying the state of
the lamp is one way to achieve the right illumination. The lamp is part of
my environmental feedback function.

I think you need to think this through. Having the light on is a means by
which you are enabled to read your book.

Yes. That is why we call it an action in PCT. An action is an output of a
control system. Control systems _vary_ their actions as a means of
controlling their perceptions. They do not control their actions.

Of course the action of a system, its output signal, enters its environment
where it has effects that come back into the same control system and alter
its perceptions. If we examine its environment in detail, we will see that
some of the outputs become reference signals for lower-level control
systems, and it is really the action of these systems that alters the
environment. This may be what is confusing to you about my answers. I am
staying at one level of control; you are moving up and down between two
levels.

Turning the switch is the means by
which you control the state of the lamp. You are controlling the state of
the lamp as a means of producing sufficient illumination of your book for
reading, and you might find an alternate means if the lamp became a problem
(what with Mary fiddling with it and all), but so long as it is your
selected means to this other end, dammit Bill, YOU ARE CONTROLLING THE STATE
OF THE LAMP.

No, I am not. I am varying the state of the lamp, but I am not controlling
it. I do not have a preference for the state of the lamp at the same level
where I have a preference for the illumination of the page. Instead, I have
a page illumination control system which acts by _varying_, not
controlling, the reference level for the lower control systems that control
the state of the lamp. The preference for the state of the lamp is
subordinate to the page illumination control system, and varies as required
to get the page illuminated properly.

Reflect on this: I cannot simultaneously choose independent reference
levels for the illumination of the page and the state of the lamp.
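
A minimal two-level sketch of that arrangement, with all numbers invented for illustration: the page-illumination system computes whatever lamp reference it currently needs, and the lamp system simply realizes that reference. The lamp's "preference" is therefore varied, not independently chosen.

def lamp_reference(page_ref, ambient):
    """Higher level: how much lamp light is needed to bring the page to page_ref."""
    return max(0.0, page_ref - ambient)

def lamp_state(lamp_ref, lamp_max=60.0):
    """Lower level: drive the lamp toward its (varying) reference, within limits."""
    return min(lamp_ref, lamp_max)

PAGE_REF = 50.0                       # desired page illumination (arbitrary units)

for ambient in (0.0, 20.0, 60.0):     # night, dusk, sun on the page
    ref = lamp_reference(PAGE_REF, ambient)
    lamp = lamp_state(ref)
    page = ambient + lamp
    print(f"ambient={ambient:5.1f}  lamp ref={ref:5.1f}  "
          f"lamp={lamp:5.1f}  page={page:5.1f}")

Note that the lamp ends up off when the sun supplies enough light: the lamp state follows from the page-illumination reference and the current disturbance, and cannot be given an independent reference of its own without breaking control of the page.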

Best,

Bill P.

[From Bill Powers (980731.0449 MDT)]

Bruce Abbott (980730.2100 EST)--

I think that the main sticking point between us concerns what can be learned
from the sorts of manipulations that were carried out in the preference
experiments we've been discussing. Perhaps an assessment is in order.

I think this is a good idea. Too much just slides by without being discussed.

1. At this point, I believe you agree that the basic procedure does show
   that rats control for being in the signaled condition as opposed to
   the unsignaled condition, when the signal-shock interval is optimal.

I agree. The observations show that the rats will act to keep the apparatus
in the signaling state, against disturbances that tend to take it out of
this state. What it is about this state that is of importance to the rats
is still unknown.

2. A main concern is with the binary nature of control, i.e., there
   were only two states the apparatus could be in (signaled shock condition
   or unsignaled shock condition), so one can not observe continuous change
   in output, continuous change in magnitude of disturbance, and the
   relationship between these values and the value of the CV.

Yes. With only two possible values of a variable, it is hard to measure
such things as loop gain or reference level. It's the same problem I would
have in understanding an electronic circuit if my voltmeter's readout were
an LED indicating only "voltage" or "no voltage."

3. You are not convinced that the parametric manipulation of such values
   as the interval between signal and shock, while examining the rat's
   willingness to control for being in the signaled condition, will provide
   information about the nature of the CV.

It's not so much that I doubt the value of parametric manipulations; I
simply don't know how to interpret the results. They don't make sense in
control theory terms. I can observe the rat's controlling, but I can't
observe its willingness to control. Consider your next paragraph:

... think about what is going on in this procedure as the rat
is repeatedly tested for preference for signaled over unsignaled shock
schedules at different signal-shock intervals. At some intervals between
signal and shock, the rats press the lever immediately as soon as the
unsignaled condition is imposed by the apparatus; consequently they spend
nearly the entire session in the signaled condition (high-gain control). At
other intervals the rats seem less motivated, taking their time getting to
the lever and thus spending quite a bit less time in the signaled shock
condition (low-gain control). At yet other intervals, they do not even
bother to reinstate the signaled condition once it is replaced by the
unsignaled condition (they no longer control for being in the signaled shock
condition, nor do they control for being in the unsignaled shock condition).

This does not sound like the way a control system works. When you optimize
the signal, the rat behaves quickly and effectively to control the state of
the apparatus. But when you make the signal less effective, or what we are
thinking of as less effective, you get less rapid behavior from the rat. It
looks as though when the behavior produces a less effective signal, the rat
behaves less. Control theory would lead us to expect it to behave more, not
less, in the effort to produce a more effective signal.

Notice that your way of describing the effect of the manipulation implies
that the rat becomes lackadaisical, cares less about the shocks or about
defending against them. I assume that this is what you mean by the rat's
becoming "less motivated." But this is not the only possible interpretation
of the observations. You could also say that as the interval departs from
the optimum, the rat has more and more difficulty in perceiving whether a
shock is being signaled, so it is less and less certain about whether a
shock is about to occur. The uncertainty increases the latency of action.

Whatever the details, we might surmise that the loop gain is declining as
the signal departs from the optimum timing. This would lead to less
behavior. But we need to look at what happens to a control system as the
gain in the environmental feedback factor or perceptual input function
declines. Perhaps what we would see is just what you observe. A little
modeling should educate us, if you care to try it.
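
A minimal sketch of that modeling exercise, with all parameters assumed rather than fitted to any rat data: a simple integrating control system receives a step disturbance, and we time how long it takes to bring the controlled quantity back near its reference as the loop gain is lowered.

DT = 0.1                 # seconds per step
MAX_TIME = 60.0          # give up after one minute

def correction_time(loop_gain, disturbance=1.0, threshold=0.1):
    """Seconds for the controlled quantity to return within `threshold`
    of its reference (zero) after a step disturbance."""
    output = 0.0
    t = 0.0
    while t < MAX_TIME:
        cv = output + disturbance          # controlled quantity
        if abs(cv) < threshold:
            return t
        error = 0.0 - cv
        output += loop_gain * error * DT   # integrating output function
        t += DT
    return float("inf")                    # never recovers: control is lost

for gain in (5.0, 2.0, 0.5, 0.1, 0.0):
    print(f"loop gain {gain:4.1f}: correction time = {correction_time(gain):6.1f} s")

Lower loop gain produces slower, weaker correction of the same disturbance, which is at least qualitatively what longer latencies at the lever would look like from outside. Whether the rat's gain actually falls this way is exactly the question that fitting a model to the session records could answer.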

In PCT terms, how might one explain these systematic changes in the rat's
interest in controlling for being in the signaled shock condition, as the
interval between signal and shock is experimentally manipulated across sessions?

When we understand what's going on, we may be able to afford to speak in
metaphors, but when the results are baffling we must be careful to avoid
slipping assumptions through without examination. I don't think we can
attribute the effect to a change in "the rat's interest in controlling."
See above: the rat is probably just as "motivated" to avoid the effects of
shock as ever. What we observe is that as the parameter is changed, at
some value there is a maximum in the speed with which the rat goes to the
bar and presses it, restoring the signaled condition. Our hypothesis, I
think, would be that the rat is best able to use the signal to defend
against the shock when it precedes the shock by some specific time. But
even that requires translation into simpler terms: what does "using" the
signal mean? What does "defending against" the shock mean?

We have only one bit of evidence about how the rat defends against the
shock; you mentioned it yesterday. You said that when the signal occurs,
the rat "freezes." You said that this meant that the rat would experience
even longer shocks in the signaled condition. The word "freezes" suggests
an involuntary reaction to the signal, but in PCT terms, we have to
consider first that this "reaction" may be a purposive action. How could
"freezing" reduce the effects of shocks on the animal?

To answer this question, we have to consider what little we know about the
effects of shock on an animal. We know from our own experience that running
an electrical current through ourselves has a number of effects. There is a
host of sensory effects, including pain and other sensations, and there is
also a direct effect on muscles, causing them to become tense and go into
conflict. If a person grabs a wire with a voltage on it, it may be
impossible to let go (depending on just where the current is running),
either because the muscles are directly affected or because the nerves
driving the muscles are affected, or both.

We also know (reports from people who do lightning research, long ago in
_Science_) that it is possible for a person to prepare for a shock (if
warned by sensations from pre-stroke electrical fields), to the point where
a hit by a lightning-bolt fails to injure the person. This is
well attested by several people who have learned to resist lightning
strikes and by scientific observers who have seen them do it and have
learned to do it themselves.

So we have at least an existence theorem saying that given a warning, it is
possible for an organism to "brace" itself against the effects of shocks,
and greatly (drastically!) reduce their physiological effects. It would not
be unreasonable to guess that the "freezing" we see in the rats is part of
this preparation.

Of course if animals can indeed control the effects of shocks on
themselves, then we can no longer determine the relationship between the
effects of shocks and the behavior of the animal. With the animal adjusting
its own sensitivity to shocks, we can no longer say how much aversive
effect a known amount of current is actually having on the animal. What we
interpret as a behavioral effect of the current would actually be a
purposive action aimed at reducing the current's effectiveness.

It would help, I think, if you would give a more complete description of
how the rats in this experiment behave. How long do they spend each day in
the experimental cage? Do they resist being transferred to that cage, and
does the resistance increase with time? What are the reactions to the
shocks, and do they change with time? There's obviously a conflict between
avoiding injurious levels of shock and making the shocks so mild that the
rats ignore them. What are the criteria for adjusting the level of shock?

Best,

Bill P.

[From Bruce Gregory (980731.1000 EDT)]

Bill Powers (980730.2011 MDT)

No, I am not. I am varying the state of the lamp, but I am not controlling
it. I do not have a preference for the state of the lamp at the same level
where I have a preference for the illumination of the page. Instead, I have
a page illumination control system which acts by _varying_, not controlling,
the reference level for the lower control systems that control the state of
the lamp. The preference for the state of the lamp is subordinate to the
page illumination control system, and varies as required to get the page
illuminated properly.

Of course it is unlikely that you _really_ care that much about the page
illumination. If I volunteered to become part of your environmental feedback
function by reading the book to you, you might reveal that you are
indifferent to the illumination. Of course, only the Test would tell us
that.

Bruce Gregory

[From Bruce Abbott (980731.0950 EST)]

Bruce Gregory (980731.1000 EDT) --

Bill Powers (980730.2011 MDT)

No, I am not. I am varying the state of the lamp, but I am not controlling
it. I do not have a preference for the state of the lamp at the same level
where I have a preference for the illumination of the page. Instead, I have
a page illumination control system which acts by _varying_, not controlling,
the reference level for the lower control systems that control the state of
the lamp. The preference for the state of the lamp is subordinate to the
page illumination control system, and varies as required to get the page
illuminated properly.

Of course it is unlikely that you _really_ care that much about the page
illumination. If I volunteered to become part of your environmental feedback
function by reading the book to you, you might reveal that you are
indifferent to the illumination. Of course, only the Test would tell us
that.

Or it may be that Bill doesn't _really_ care that much about the book;
perhaps he is reading the book as a MEANS of passing the time. In which
case he's not _really_ controlling for the state of the lamp or for reading
the book, or for experiencing the content of the book. As he doesn't seem
to be controlling for much of anything here, and yet he is doing things
(switching on the lamp, reading), he must be an S-R system! Who wouldda
thunk it? (;->

Regards,

Bruce A.

[From Bill Powers (980731.0852 MDT)]

Bruce Gregory (980731.1000 EDT)--

Of course it is unlikely that you _really_ care that much about the page
illumination. If I volunteered to become part of your environmental feedback
function by reading the book to you, you might reveal that you are
indifferent to the illumination. Of course, only the Test would tell us
that.

That's the idea. At the level of controlling page illumination, I don't
care about the state of the lamp. At the level of experiencing the contents
of the book, I don't care about the level of illumination. And as you say,
even if it's "obvious" what I'm doing, it always pays to check the
controlled variable, even if you don't give it the full treatment. It's the
most embarrassing to assume the wrong controlled variable when you could
have checked it in a couple of seconds.

Best,

Bill P.

[From Bill Powers (980731.0903 MDT)]

Bruce Abbott (980731.0950 EST) --

Or it may be that Bill doesn't _really_ care that much about the book;
perhaps he is reading the book as a MEANS of passing the time. In which
case he's not _really_ controlling for the state of the lamp or for reading
the book, or for experiencing the content of the book. As he doesn't seem
to be controlling for much of anything here, and yet he is doing things
(switching on the lamp, reading), he must be an S-R system! Who wouldda
thunk it? (;->

Yes, that too. My point is and was that it's always more than remotely
possible that the means of control is not itself under control, but is
varied open-loop, or is not the means you thought was being used. In fact
it's always possible that the system we're observing is an S-R system.
That's why we use disturbances in every PCT experiment. If the system is an
S-R system, the disturbances will have their full effect. If the effects of
the disturbances are counteracted by changes in the system's action, we
have ruled out the S-R hypothesis and can proceed to try to identify a
controlled variable. Every PCT experiment is an application of the Test.
You can apply the Test at each level involved in a given high-level control
process.

In the case of the lamp, the illumination, and the reading, at the top two
levels it is always possible that some other means might be used instead of
the one first observed. I could give up reading if someone else would do it
for me; I could give up altering the state of the lamp if I had another way
to vary the illumination (such as moving myself and the book closer to a
window). Sometimes alternate means are easy to find, sometimes not. When
alternate means are easily available, disturbances of the original means
will not be resisted; the higher system will immediately switch to the
alternate means, or the alternate means will automatically come into play.

So I might move closer to the window while reading at dusk, relax and sit
back when Alice turns the lamp on, turn the lamp off when Mary turns on the
overhead light, and turn off both lights and move closer to the window
again when I decide I want to save electricity until the sun goes down.

Best,

Bill P.

An interesting sidelight. S-R theory and PCT share two of the requirements
of the Test. In both cases, you have to verify that the supposed controlled
quantity affected by the disturbance (PCT) or the stimulus variable (SR)
actually affects the inputs of the organism, and you have to verify that
the change in the behavioral measure is actually a measure of what the
organism is doing -- some organism rather than none, and the organism in
question rather than some other organism. These requirements are often
overlooked in S-R psychology, but they exist nevertheless.

[From Rick Marken (980731.0850)]

Chris Cherpas (980730.1712 PT)--

Can one understand the Test without understanding the
Tester in control system terms?

What a lovely question! Even better would be the question "Can one
understand PCT without understanding oneself in PCT terms?". I
think it's a matter of degree of understanding. I think one can
understand PCT and the Test to a certain degree without understanding
oneself in PCT terms. But I think one reaches the highest degree
of PCT (and Test) understanding when one is able to see oneself
(as Tester, theorist, etc.) from a PCT perspective.

I want to thank Bruce Abbott (980730.1500 EST), (980730.2100 EST)
for encouraging Bill Powers (980730.2011 MDT), Bill Powers
(980731.0449 MDT) to give some _excellent_ tutorials on PCT.

I'll just take the liberty of giving a diagrammatic answer to
Bruce Abbott's (980730.2100 EST) question:

In PCT terms, how might one explain these systematic changes in
the rat's interest in controlling for being in the signaled shock
condition, as the interval between signal and shock is experimentally
manipulated across sessions?

Here's what I think _might_ be going on:

                r = 0
                  |
                  v
        p ----->|C|-----> e
        |                 |
       |s|               |o|
        ^                 |
  -------------------------------
        |                 v
       qi <--- s+t <---- qo
                ns

The diagram shows that the rat is controlling some perception (p)
that is a representation of the controlled variable qi which
is a function of the interval (t) between signal (s) and shock.
The output variable (qo) is the bar press which switches between
the signal condition (s+t, where t is the temporal offset, positive
or negative, between signal onset and shock onset) and the no
signal condition (ns). The diagram shows that the signalling condition
(s+t vs ns) is in the feedback path between the rat's outputs
and the (unknown) controlled input (qi).

The Test should answer the question "what is qi?" Bruce's experiments
don't let us answer that question. What we do learn from Bruce's
experiments is that rats will keep the apparatus in the state s+t
as long as t>2 sec (the signal occurs at least 2 seconds before the
shock). But we don't know what _perceptible_ variable the rat is
controlling by doing this. It could be that the rat is controlling
for a certain delay (t); so qi = t. But we don't know whether or not
this is the controlled variable (qi) because the rat was not given
direct control over t.

Another, more likely, possibility for the controlled variable is
a variable like "physiological effects of shock" (see Bill Powers
(980731.0449 MDT) discussion of the possibility that animals can
learn to "brace for" and hence control the physiological effects
of shock). The value of this variable (the magnitude of the
physiological effect of shock) would be the same in both the
no signal (ns) and signal (s+t) condition when t<2 sec in the
signal condition (the signal precedes the shock by less than 2
seconds or follows it). So when t<2 the value of qi is the same
whether the rat is in condition ns or s+t. This would explain why
the rat doesn't appear to control for being in condition s+t when
t<2. When t>2 the rat appears to control for being in condition
s+t because acting (qo) to be in that condition brings the
controlled perception (qi, the physiological effects of the
shock) closer to the rat's reference for that variable.
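To make that concrete, here is a toy rendering of the hypothesis in Python. The numbers are arbitrary: the "braced" and "unbraced" shock effects, the 2-second cutoff, and the zero reference are just the assumptions stated above, not measurements.

BRACED = 0.2      # assumed residual shock effect when the rat can brace
UNBRACED = 1.0    # assumed shock effect when it cannot

def shock_effect(condition, t):
    """qi: physiological effect of a shock, given the apparatus condition
    and the signal-shock offset t (seconds)."""
    if condition == "s+t" and t >= 2.0:
        return BRACED          # signal arrives early enough to brace
    return UNBRACED            # no signal, or signal too late (or after the shock)

reference = 0.0                # the rat presumably wants zero shock effect
for t in (-1.0, 1.0, 3.0, 10.0):
    err_ns = abs(reference - shock_effect("ns", t))
    err_sig = abs(reference - shock_effect("s+t", t))
    print(f"t = {t:5.1f} s   |error| in ns = {err_ns:.1f}   "
          f"|error| in s+t = {err_sig:.1f}   gain from keeping s+t = {err_ns - err_sig:.1f}")

When t < 2 the error is the same in both conditions, so there is nothing to be gained by acting to stay in s+t; when t >= 2 keeping s+t reduces the error, and control of the apparatus state appears.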

To test the hypothesis that qi is "physiological effects of shock"
the experimenter would have to devise a way to measure and monitor
the physiological effects (or correlates thereof) that might be
under control. Bruce's experiments are a good first step toward
determining a controlled variable (they are a good first step
towards doing a true Test for the controlled variable); all they
need, really, is an understanding of what a controlled variable _is_.

Best

Rick
--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

[From Bruce Abbott (980801.0835 EST)]

Bill Powers (980731.0449 MDT) --

3. You are not convinced that the parametric manipulation of such values
   as the interval between signal and shock, while examining the rat's
   willingness to control for being in the signaled condition, will provide
   information about the nature of the CV.

It's not so much that I doubt the value of parametric manipulations; I
simply don't know how to interpret the results. They don't make sense in
control theory terms. I can observe the rat's controlling, but I can't
observe its willingness to control.

I would define "willingness to control" operationally as the degree to which
the rat keeps the alternative (in this case, signaled shock) condition
present, when we know that the rat can easily keep that condition present
(by means of its actions) over 90% of session time.

This definition would not work if we were systematically changing the
response requirement so that staying in the signaled shock condition became
more difficult for different experiments. However, in this procedure the
response requirement is always the same; the only variable is the nature of
the schedules of signals and shocks.

As to whether or not the results "make sense in control theory terms," it
seems to me that you have two reasonable options: (1) find a way to apply
control theory to the procedure such that the results do make sense in
control theory terms, or (2) adopt the view that control theory does not
handle behavior of this sort. The latter, of course, would deny the
generality of the assertion that "behavior is the control of perception." I
am of the opinion that the results of the study can be explained in a way
that makes sense in control theory terms. We just need to analyze the
situation carefully to see how.

What the procedure offers the rat is a choice between staying in the imposed
condition (in this case, the unsignaled shock schedule) or changing over to
some alternate condition (in this case, the signaled shock schedule). The
two schedules are identical except for some feature (e.g., the presence of
the signal in the signaled shock schedule). Whether the rat will exercise
the control given to it and switch to the alternate condition depends on
whether the alternate condition is perceived by the rat to be "better" than
the imposed condition, and sufficiently "better" to be worth the effort
required to keep reinstating the alternate condition every time the
apparatus reverts back to the imposed condition. "Better" is a relative
measure -- you always have to ask, "better than _what_?"

Now this is true even when the comparison is between something and nothing.
"Would you like a piece of cake?", your host asks. "No, thank you," you
reply, having decided that at present you would prefer no cake over a piece
of this particular cake. The judgment always involves a comparison of the
relative attractiveness of the two options.

The more attractive the alternative is, the more strongly one will control
for obtaining and maintaining that alternative, everything else being equal.
One way in which control theory could accommodate this observation is to
assume that the organism will control for the more favorable alternative
(and, necessarily, against the less favorable), with the gain of the system
being proportional to the degree of relative attraction. In this way, at a
given level of error, the system with the higher gain will more vigorously
control for the presence of the alternative condition over the imposed one.
Higher error in the changeover procedure translates into more rapid approach
to and pressing of the response lever whenever the apparatus restores the
imposed condition.
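A minimal sketch of how this proposal might be coded (my own illustrative numbers, assuming an integrating output function so that "latency" is well defined, and simply equating "relative attraction" with loop gain):

DT = 0.01              # integration time step (s)
THRESHOLD = 1.0        # assumed output level at which the lever gets pressed

def latency_to_press(gain):
    """Time for an integrating output, driven by a constant unit error,
    to reach the press threshold."""
    qo, t = 0.0, 0.0
    while qo < THRESHOLD:
        error = 1.0            # imposed condition present; reference is "signaled"
        qo += gain * error * DT
        t += DT
    return t

for attraction in (0.5, 1.0, 2.0, 4.0):   # "relative attraction" taken as the gain
    print(f"relative attraction {attraction:3.1f}: "
          f"latency to restore signaled condition = {latency_to_press(attraction):5.2f} s")

In this toy version, doubling the assumed attraction halves the latency, which is the qualitative pattern described in the paragraph above.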

I'll stop here and await your comments on this proposal.

Regards,

Bruce

[From Bruce Gregory (980801.1021 EDT)]

Bruce Abbott (980801.0835 EST)

The more attractive the alternative is, the more strongly one will control
for obtaining and maintaining that alternative, everything else being equal.
One way in which control theory could accommodate this observation is to
assume that the organism will control for the more favorable alternative
(and, necessarily, against the less favorable), with the gain of the system
being proportional to the degree of relative attraction. In this way, at a
given level of error, the system with the higher gain will more vigorously
control for the presence of the alternative condition over the imposed one.

I find the gain argument confusing. If I find chocolate more attractive than
vanilla, I don't think it is necessary to imagine that I am also trying to
control for vanilla, but with less loop gain. This seems to imply a constant
low level conflict. When I choose chocolate, I control for chocolate with
what is essentially infinite loop gain and stop controlling for vanilla
(reference level zero). My enthusiasm for chocolate seems to determine how
long I will continue to control for it in the face of disturbances. Once I
give up, the reference level is essentially zero and I execute some other
Plan.

Bruce Gregory

[From Bruce Abbott (980801.1035 EST)]

Bruce Gregory (980801.1021 EDT) --

I find the gain argument confusing. If I find chocolate more attractive than
vanilla, I don't think it is necessary to imagine that I am also trying to
control for vanilla, but with less loop gain. This seems to imply a constant
low level conflict. When I choose chocolate, I control for chocolate with
what is essentially infinite loop gain and stop controlling for vanilla
(reference level zero). My enthusiasm for chocolate seems to determine how
long I will continue to control for it in the face of disturbances. Once I
give up, the reference level is essentially zero and I execute some other
Plan.

Please read my proposal again; it did not include any notion of a control
system being set up for both alternatives, one with more gain than the other.

With respect to gain per se, in a control system the loop gain determines
how strong an action the output produces for a given amount of error between
input and reference. If you were controlling for chocolate with
"essentially infinite gain," you'd be exerting yourself to the maximum at
essentially any error at all between the level of chocolate-experience you
want and the level you have.

Your observation that the duration over which you will continue to control
for chocolate seems to be determined by the level of "enthusiasm" you have
for chocolate is worth exploring. A possible explanation for such a
relation would appeal to the cost of the effort relative to its payoff. The
alternatives become no cost (in effort and in inability to control other
important variables owing to the fact that your resources are finite and are
being expended to obtain chocolate) and no chocolate versus cost and
chocolate. After you have experienced the cost + chocolate condition for a
while, your evaluation of it relative to no cost, no chocolate condition may
change so that you cease being willing to control for chocolate under those
conditions. Your options are then either to find some less costly way to
get the chocolate or to give up on getting chocolate altogether. The more
enthusiasm you have for chocolate, the greater the perceived benefit
relative to cost, so the longer you would continue to attempt to control for it.

Of course, these are only proposals, and would need to be evaluated
experimentally.

Regards,

Bruce

[From Bill Powers (980801.0758 MDT)]

Bruce Abbott (980801.0835 EST)--

I would define "willingness to control" operationally as the degree to which
the rat keeps the alternative (in this case, signaled shock) condition
present, when we know that the rat can easily keep that condition present
(by means of its actions) over 90% of session time.

Since the alternative condition can only be present or not present, the
"degree to which it is present" is either 0% or 100%. Another measure would
be the fraction of time spent in the alternative mode, which can be defined
over intervals up to the whole session, or even over multiple sessions. I
don't see anything here that could indicate "willingness." You could
equally well imagine that you're measuring "reluctance", by using the
measure of time in the unsignaled condition. Or you could treat the
fraction of time spent in the unsignaled condition as a measure of "task
difficulty" or "perceptual uncertainty." Those interpretations of the
actual measurements are no less gratuitous than "willingness."

I would prefer to stick to the actual measurements as much as possible. We
can see that the rat's actions conform to what we would expect if the rat
were controlling the state of the apparatus with a reference condition of
"signaled condition." We don't know what the significance of doing this is
to the rat.

This definition would not work if we were systematically changing the
response requirement so that staying in the signaled shock condition became
more difficult for different experiments. However, in this procedure the
response requirement is always the same; the only variable is the nature of
the schedules of signals and shocks.

You call the presses of the lever "responses." To what events are these
actions "responses?" We can easily verify that they are actions: we can see
the rat producing them. But how would we verify that each action is a
response to something?

As to whether or not the results "make sense in control theory terms," it
seems to me that you have two reasonable options: (1) find a way to apply
control theory to the procedure such that the results do make sense in
control theory terms, or (2) adopt the view that control theory does not
handle behavior of this sort. The latter, of course, would deny the
generality of the assertion that "behavior is the control of perception." I
am of the opinion that the results of the study can be explained in a way
that makes sense in control theory terms. We just need to analyze the
situation carefully to see how.

I don't think you understand in what sense I see a problem here. If you
make it easier for a control system to control something (you give its
actions more effect on the controlled variable), the results should be
_less_ action by the control system, not _more_. If changing the delay
toward the optimum value makes it easier for the rat to control the effect
of shocks, the rat's actions should relax somewhat -- it should not have to
correct errors so rapidly. But I'm saying this without having experimented
with a model to see how changing its parameters might alter the observable
behavior -- perhaps there's some other relationship I haven't seen yet. The
only way to understand what's happening is to propose a model and try it
out. I'm deferring to you on that; if you want to be the one to check it
out, be my guest. Otherwise, just tell me you don't want to do that and
I'll give it a try.

Whether the rat will exercise
the control given to it and switch to the alternate condition depends on
whether the alternate condition is perceived by the rat to be "better" than
the imposed condition, and sufficiently "better" to be worth the effort
required to keep reinstating the alternate condition every time the
apparatus reverts back to the imposed condition. "Better" is a relative
measure -- you always have to ask, "better than _what_?"

I think it's very premature to be guessing what the rat is thinking about.
This may seem like a choice situation to the rat, or merely a control
problem. The fact that there are different possible behaviors (as you see
it) doesn't mean the rat is considering alternatives and choosing between
them, or judging whether one course of action is better than another. It
could be as simple as "press the lever to reduce the effect of shocks." But
since we don't even know what effect that is, it's too soon to start
guessing. How would you go about determining whether the rat is controlling
for "better"?

Now this is true even when the comparison is between something and nothing.
"Would you like a piece of cake?", your host asks. "No, thank you," you
reply, having decided that at present you would prefer no cake over a piece
of this particular cake. The judgment always involves a comparison of the
relative attractiveness of the two options.

You can make anything sound like a choice situation just by picking the
right words. That doesn't mean that there's actually any choice being made.
In driving a car, the driver (you could say) has to choose between turning
the wheel to the left or to the right. Each direction of wheel movement has
its own effect on how the car moves. So the driver has to decide which
effect on the car is better, and choose the direction of wheel movement
that is required.

But all of that is nonsense, because the driver is just a continuous control
system that converts errors into actions. There aren't any choice processes
going on in steering the car. If you see a piece of cake, and have a zero
reference level for some cake, you will keep the error at zero by refusing
any cake. Why muck up such a simple process by making a complicated
decision process out of it? Why pick the most complex hypothesis first?
Remember you can't just SAY that there's a choice being made: you have to
_prove_ it. We can prove that control is going on. Can you prove that a
decision is being made? I'll answer for you: you can't.

The more attractive the alternative is, the more strongly one will control
for obtaining and maintaining that alternative, everything else being equal.

That's gobbledegook. What is "controlling more strongly?" If you mean loop
gain, say so. If you mean higher reference level, say that. If you mean
higher gain in the input function, say that. If you mean more powerful
output function, say that. Controlling "more strongly" means nothing.

And "attractive" not only means nothing, it offers a positively incorrect
model of how behavior works in a physical universe.

One way in which control theory could accommodate this observation is to
assume that the organism will control for the more favorable alternative
(and, necessarily, against the less favorable), with the gain of the system
being proportional to the degree of relative attraction.

That implies a control system that is sensing the degree of relative
attraction and controlling for -- what? Is there any "attraction" to sense
out there in the first place? How does the organism perceive that one
alternative is more favorable than the other? Favorable in terms of what
perceptual variables and reference levels? And controlling for them how? By
varying the gain of lower systems? Would that really work?

In this way, at a
given level of error, the system with the higher gain will more vigorously
control for the presence of the alternative condition over the imposed one.
Higher error in the changeover procedure translates into more rapid approach
to and pressing of the response lever whenever the apparatus restores the
imposed condition.

I'll stop here and await your comments on this proposal.

My comment is that you've fallen into an old worn-out psychologist mode,
blathering off the top of your head about things you assume would work
without the slightest notion whether modeling would bear you out or any
concept of how you could test your ideas to see if they were right. I could
practically see the switch to the alternate personality. It's as though
someone removed the restraints saying you had to think carefully in terms
of PCT, and could go back to the more relaxed world of conjecture and
metaphor that is characteristic of traditional psychology.

Sorry, Bruce. I never got into that world and have no desire to get into it
now.

Best,

Bill P.

[From Bruce Abbott (980801.1200 EST)]

Bill Powers (980801.0758 MDT) --

Bruce Abbott (980801.0835 EST)

I would define "willingness to control" operationally as the degree to which
the rat keeps the alternative (in this case, signaled shock) condition
present, when we know that the rat can easily keep that condition present
(by means of its actions) over 90% of session time.

Since the alternative condition can only be present or not present, the
"degree to which it is present" is either 0% or 100%.

True, but that's not what I said. I said "degree to which the rat _keeps_
the alternative present," which is not either 0% or 100%.

Another measure would
be the fraction of time spent in the alternative mode, which can be defined
over intervals up to the whole session, or even over multiple sessions.

That's what I had in mind. Another, correlated measure would be the
average error, which would decrease with "willingness." Latency to press
the lever and reinstate the alternative condition is another measure; the
latency determines how much time is spent in the imposed condition before it
is replaced by the alternative.

I don't see anything here that could indicate "willingness." You could
equally well imagine that you're measuring "reluctance", by using the
measure of time in the unsignaled condition.

Which is just the other side of the coin. I don't care what you call it.

Or you could treat the
fraction of time spent in the unsignaled condition as a measure of "task
difficulty" or "perceptual uncertainty." Those interpretations of the
actual measurements are no less gratuitous than "willingness."

Task difficulty does not vary across the manipulations, as I pointed out in
my post. What is "perceptual uncertainty"?

I would prefer to stick to the actual measurements as much as possible.

All I did was to offer an operational definition. The actual measurements
are primary.

We can see that the rat's actions conform to what we would expect if the
rat were controlling the state of the apparatus with a reference condition
of "signaled condition." We don't know what the significance of doing this
is to the rat.

Well, we have a better idea now than we did after that first experiment, but
go on . . .

This definition would not work if we were systematically changing the
response requirement so that staying in the signaled shock condition became
more difficult for different experiments. However, in this procedure the
response requirement is always the same; the only variable is the nature of
the schedules of signals and shocks.

You call the presses of the lever "responses." To what events are these
actions "responses?" We can easily verify that they are actions: we can see
the rat producing them. But how would we verify that each action is a
response to something?

Sorry, I intended no such implication. How about "discrete actions"?

As to whether or not the results "make sense in control theory terms," it
seems to me that you have two reasonable options: (1) find a way to apply
control theory to the procedure such that the results do make sense in
control theory terms, or (2) adopt the view that control theory does not
handle behavior of this sort. The latter, of course, would deny the
generality of the assertion that "behavior is the control of perception." I
am of the opinion that the results of the study can be explained in a way
that makes sense in control theory terms. We just need to analyze the
situation carefully to see how.

I don't think you understand in what sense I see a problem here. If you
make it easier for a control system to control something (you give its
actions more effect on the controlled variable), the results should be
_less_ action by the control system, not _more_. If changing the delay
toward the optimum value makes it easier for the rat to control the effect
of shocks, the rat's actions should relax somewhat -- it should not have to
correct errors so rapidly.

Yes, I understand that point. That's why I think that the situation is more
complex -- increasing the time spent in the signaled condition when the
circumstances there are made less desirable will not make the experience
there any better. A bad cup of coffee cannot be improved by increasing how
much of it you drink. Organisms whose systems were built to behave in this
way would soon perish.

But I'm saying this without having experimented
with a model to see how changing its parameters might alter the observable
behavior -- perhaps there's some other relationship I haven't seen yet. The
only way to understand what's happening is to propose a model and try it
out. I'm deferring to you on that; if you want to be the one to check it
out, be my guest. Otherwise, just tell me you don't want to do that and
I'll give it a try.

I'd like to try it, but things here are starting to get a bit hectic again
(deadlines and all) so it might be a while before I can get started.

Whether the rat will exercise
the control given to it and switch to the alternate condition depends on
whether the alternate condition is perceived by the rat to be "better" than
the imposed condition, and sufficiently "better" to be worth the effort
required to keep reinstating the alternate condition every time the
apparatus reverts back to the imposed condition. "Better" is a relative
measure -- you always have to ask, "better than _what_?"

I think it's very premature to be guessing what the rat is thinking about.
This may seem like a choice situation to the rat, or merely a control
problem. The fact that there are different possible behaviors (as you see
it) doesn't mean the rat is considering alternatives and choosing between
them, or judging whether one course of action is better than another. It
could be as simple as "press the lever to reduce the effect of shocks." But
since we don't even know what effect that is, it's too soon to start
guessing. How would you go about determining whether the rat is controlling
for "better"?

I don't believe that the rat is deliberating its options; the actual process
as I envision it is considerably simpler, but I wanted to state the case in
a way that we human beings could relate to. What's relevant are the
proposed underlying processes -- the various perceptions associated with the
signaled and unsignaled conditions yield evaluations of "good" and "bad" as
you suggested in a recent post and in B:CP. The organism is organized so as
to prefer "better" over "worse," and will spawn appropriate control systems
so as to obtain the former over the latter.

Now this is true even when the comparison is between something and nothing.
"Would you like a piece of cake?", your host asks. "No, thank you," you
reply, having decided that at present you would prefer no cake over a piece
of this particular cake. The judgment always involves a comparison of the
relative attractiveness of the two options.

You can make anything sound like a choice situation just by picking the
right words. That doesn't mean that there's actually any choice being made.

No. Everything is a choice situation.

In driving a car, the driver (you could say) has to choose between turning
the wheel to the left or to the right. Each direction of wheel movement has
its own effect on how the car moves. So the driver has to decide which
effect on the car is better, and choose the direction of wheel movement
that is required.

I would say yes, during training. After that, the relevant control system
is in place and these evaluations become unnecessary, until disturbances
threaten loss of control.

But all of that is nonsense, because the driver is just a continuous control
system that converts errors into actions. There aren't any choice processes
going on in steering the car. If you see a piece of cake, and have a zero
reference level for some cake, you will keep the error at zero by refusing
any cake. Why muck up such a simple process by making a complicated
decision process out of it? Why pick the most complex hypothesis first?
Remember you can't just SAY that there's a choice being made: you have to
_prove_ it. We can prove that control is going on. Can you prove that a
decision is being made? I'll answer for you: you can't.

The decision has been made at some point. Once it has been made, it need
not be made again so long as the results are satisfactory.

The more attractive the alternative is, the more strongly one will control
for obtaining and maintaining that alternative, everything else being equal.

That's gobbledegook. What is "controlling more strongly?" If you mean loop
gain, say so. If you mean higher reference level, say that. If you mean
higher gain in the input function, say that. If you mean more powerful
output function, say that. Controlling "more strongly" means nothing.

Loop gain.

And "attractive" not only means nothing, it offers a positively incorrect
model of how behavior works in a physical universe.

That an organism finds something "attractive" does not mean that the object
has the property of attractiveness. I am asserting no such model as you
suggest.

One way in which control theory could accommodate this observation is to
assume that the organism will control for the more favorable alternative
(and, necessarily, against the less favorable), with the gain of the system
being proportional to the degree of relative attraction.

That implies a control system that is sensing the degree of relative
attraction and controlling for -- what? Is there any "attraction" to sense
out there in the first place? How does the organism perceive that one
alternative is more favorable than the other? Favorable in terms of what
perceptual variables and reference levels?

See your own discussions of "pleasure/pain" or "good/bad."

And controlling for them how? By
varying the gain of lower systems? Would that really work?

No, by controlling for those options that yield the best outcome in terms of
the organism's perceptions of what feels best. The gain proposal is only a
first suggestion, based on the fact that higher gain yields more vigorous
action at a given level of error. An alternative is that the degree to
which one option feels better than the other contributes to the magnitude of error.

In this way, at a
given level of error, the system with the higher gain will more vigorously
control for the presence of the alternative condition over the imposed one.
Higher error in the changeover procedure translates into more rapid approach
to and pressing of the response lever whenever the apparatus restores the
imposed condition.

I'll stop here and await your comments on this proposal.

My comment is that you've fallen into an old worn-out psychologist mode,
blathering off the top of your head about things you assume would work
without the slightest notion whether modeling would bear you out or any
concept of how you could test your ideas to see if they were right.

Perhaps you will want to reconsider in the light of the answers I have
provided above. Your comment immediately above is just classic Powers
boilerplate which you affix to any proposal you have a strong desire to
reject. How do you know where these ideas have come from, how well I can
assess the probable validity of the proposal, or whether I know how to test
these ideas? Have you taken to mind reading lately, or perhaps, have you
called the Psychic Hotline to determine this?

I could
practically see the switch to the alternate personality. It's as though
someone removed the restraints saying you had to think carefully in terms
of PCT, and could go back to the more relaxed world of conjecture and
metaphor that is characteristic of traditional psychology.

Oh, brother. So you don't think that I can put this into hard computer code
and make it work. I think I can.

Sorry, Bruce. I never got into that world and have no desire to get into it
now.

I've heard your rejection (based purely on visceral reaction, in my
opinion). I haven't heard you propose any convincing alternative model that
would account for the data.

Regards,

Bruce

[From Bruce Gregory (980801.1218 EDT)]

Bruce Abbott (980801.1035 EST)

With respect to gain per se, in a control system the loop gain determines
how strong an action the output produces for a given amount of error between
input and reference. If you were controlling for chocolate with
"essentially infinite gain," you'd be exerting yourself to the maximum at
essentially any error at all between the level of chocolate-experience you
want and the level you have.

Suppose I decide that I want a chocolate bar and get on my bicycle to head
to the store. Am I controlling with low gain for getting chocolate, or am I
controlling with very high gain? Am I not exerting myself to the maximum to
continue pedaling until I reach the store? (Exerting myself to the maximum
does not imply that I am pedaling as fast as I can, but only that I keep
pedaling since this keeps the error associated with my Plan to a minimum.)

Bruce Gregory

[From Bruce Gregory (980801.1357 EDT)]

Rick Marken (980801.1050) --

I had to check reference.com to make sure my posts were actually
getting through to CSGNet. They are, so I guess I'm either saying
nothing that is a disturbance to the variables being controlled by
Abbott and Powers, or I'm just being ignored.

Don't feel like the lone ranger. I'm still waiting for a response from you
re:

[Bruce Gregory (980728.0635 EDT)]

Bruce Gregory

[From Bill Powers (980801.111p MDT)]

Bruce Abbott (980801.1200 EST)--

Since the alternative condition can only be present or not present, the
"degree to which it is present" is either 0% or 100%.

True, but that's not what I said. I said "degree to which the rat _keeps_
the alternative present," which is not either 0% or 100%.

"Keeps", then, means "on the average?" OK. Forget it.

Another measure would
be the fraction of time spent in the alternative mode, which can be defined
over intervals up to the whole session, or even over multiple sessions.

That's what I had in mind. Another, correlated measure would be the
average error, which would decrease with "willingness." Latency to press
the lever and reinstate the alternative condition is another measure; the
latency determines how much time is spent in the imposed condition before it
is replaced by the alternative.

Yes, and this could be modeled as a control system with an integrating
output function.
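For example, a bare-bones version of such a model might look like the sketch below. The reversion schedule, the session length, the press threshold, and the gain values are all assumptions chosen for illustration, not taken from the experiment.

import random

DT = 0.1                # time step (s)
SESSION = 3600.0        # assumed session length (s)
REVERT_MEAN = 60.0      # assumed mean time between reversions to "ns" (s)
THRESHOLD = 1.0         # assumed output level at which the lever is pressed

def fraction_signaled(gain, seed=1):
    """One session of a control system with an integrating output function."""
    random.seed(seed)
    condition, qo, t, signaled = "s+t", 0.0, 0.0, 0.0
    next_revert = random.expovariate(1.0 / REVERT_MEAN)
    while t < SESSION:
        if t >= next_revert:                       # apparatus imposes "ns"
            condition = "ns"
            next_revert = t + random.expovariate(1.0 / REVERT_MEAN)
        error = 1.0 if condition == "ns" else 0.0  # reference: signaled condition
        qo += gain * error * DT                    # integrating output function
        if qo >= THRESHOLD:                        # lever press restores "s+t"
            condition, qo = "s+t", 0.0
        if condition == "s+t":
            signaled += DT
        t += DT
    return signaled / SESSION

for gain in (0.05, 0.2, 1.0):
    print(f"gain {gain:4.2f}: fraction of session in signaled condition "
          f"= {fraction_signaled(gain):.2f}")

The same run yields both measures mentioned above: the latency after each reversion (inversely related to the gain) and the fraction of session time spent in the signaled condition.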

I don't see anything here that could indicate "willingness." You could
equally well imagine that you're measuring "reluctance", by using the
measure of time in the unsignaled condition.

Which is just the other side of the coin. I don't care what you call it.

Why call it anything that suggests knowledge you don't have?

Or you could treat the
fraction of time spent in the unsignaled condition as a measure of "task
difficulty" or "perceptual uncertainty." Those interpretations of the
actual measurements are no less gratuitous than "willingness."

Task difficulty does not vary across the manipulations, as I pointed out in
my post. What is "perceptual uncertainty"?

How do you know that the task doesn't get harder for the rat when the
signal's relationship to the shocks becomes more ambiguous? The increase in
delay before switching to the signaled condition may well reflect the
difficulty the rat is having in perceiving that the signaled condition
exists or doesn't exist. Why do you just dismiss this possibility?

Perceptual uncertainty is the inability to perceive clearly whether a given
signal is a sign that a shock is about to occur; in other words, the
perceptual signal is small compared with the noise because the relationship
of signal to shock is unreliable.

I would prefer to stick to the actual measurements as much as possible.

All I did was to offer an operational definition. The actual measurements
are primary.

Operational definitions used in that way are hooey. You can make any claim
and disguise it as an operational definition. Watch me do it. I say that
the rat's desire to cooperate with the experimenter (a claim that such
desires exist) is operationally defined as the relative length of time
spent in the signaled condition. Now that measure is an indicator of that
desire of the rat's. But this doesn't mean that any such desire exists; it
just gives me some measurement to make that I can use to legitimize my
unfounded assumption.

You have no reason to suppose that the rat is "unwilling" to switch
conditions. Your "operational definition" doesn't supply any reason, either.

We can see that the rat's actions conform to what we would expect if the
rat were controlling the state of the apparatus with a reference condition
of "signaled condition." We don't know what the significance of doing this
is to the rat.

Well, we have a better idea now than we did after that first experiment, but
go on . . .

No, you don't. You have some added measures of the state of the apparatus,
namely the delay between the signal and the shock. What the significance of
this physical measurement is to the rat you do not know.

You also have some data about the relationship between the shock current
and the reference level for the signaled condition. It shows either that
tripling the shock current has no significant effect on the rat's
experience of the shock, or that the advantages of having a signal precede
the shock by the optimum amount outweigh the disadvantages of tripling the
shock current and, presumably, the unpleasant experience it causes.

But you still can't say how the rat is experiencing all this. It is
unlikely that the experiences could be cast in terms meaningful to a human
being.

You call the presses of the lever "responses." To what events are these
actions "responses?" We can easily verify that they are actions: we can see
the rat producing them. But how would we verify that each action is a
response to something?

Sorry, I intended no such implication. How about "discrete actions"?

Is this a report of a drastic change of policy on your part, or are you
just humoring me?

As to whether or not the results "make sense in control theory terms," it
seems to me that you have two reasonable options: (1) find a way to apply
control theory to the procedure such that the results do make sense in
control theory terms, or (2) adopt the view that control theory does not
handle behavior of this sort. The latter, of course, would deny the
generality of the assertion that "behavior is the control of perception." I
am of the opinion that the results of the study can be explained in a way
that makes sense in control theory terms. We just need to analyze the
situation carefully to see how.

I don't think you understand in what sense I see a problem here. If you
make it easier for a control system to control something (you give its
actions more effect on the controlled variable), the results should be
_less_ action by the control system, not _more_. If changing the delay
toward the optimum value makes it easier for the rat to control the effect
of shocks, the rat's actions should relax somewhat -- it should not have to
correct errors so rapidly.

Yes, I understand that point. That's why I think that the situation is more
complex -- increasing the time spent in the signaled condition when the
circumstances there are made less desirable will not make the experience
there any better.

A bad cup of coffee cannot be improved by increasing how
much of it you drink. Organisms whose systems were built to behave in this
way would soon perish.

You declared that there would be no good effect of increasing the time
spent in the signaled condition, and then went on to deliver an analogy
based on that assumption. But if the assumption is wrong, the analogy is
irrelevant.

Suppose that being in the signaled condition enables the rat to defend
itself against most of the effects of shocks. Then the more time that is
spent in the non-signaled condition, the more shocks will occur that the
rat can't defend against. So increasing the time in the signaled condition
will definitely have an improving effect on what the rat experiences.

The
only way to understand what's happening is to propose a model and try it
out. I'm deferring to you on that; if you want to be the one to check it
out, be my guest. Otherwise, just tell me you don't want to do that and
I'll give it a try.

I'd like to try it, but things here are starting to get a bit hectic again
(deadlines and all) so it might be a while before I can get started.

You don't want to do it. OK, I'll put it on my list.

I think it's very premature to be guessing what the rat is thinking about.
This may seem like a choice situation to the rat, or merely a control
problem. The fact that there are different possible behaviors (as you see
it) doesn't mean the rat is considering alternatives and choosing between
them, or judging whether one course of action is better than another. It
could be as simple as "press the lever to reduce the effect of shocks." But
since we don't even know what effect that is, it's too soon to start
guessing. How would you go about determining whether the rat is controlling
for "better"?

I don't believe that the rat is deliberating its options; the actual process
as I envision it is considerably simpler, but I wanted to state the case in
a way that we human beings could relate to.

Why, if human beings don't do it this way either? What you're doing is
called anthropomorphizing. And in this case, you're doing so in terms of an
illusion many people have about how often they actually make any decisions.

What's relevant are the
proposed underlying processes -- the various perceptions associated with the
signaled and unsignaled conditions yield evaluations of "good" and "bad" as
you suggested in a recent post and in B:CP. The organism is organized so as
to prefer "better" over "worse," and will spawn appropriate control systems
so as to obtain the former over the latter.

No. New control systems are "spawned" to correct intrinsic error. Are you
using "spawned" in the Unix sense, or the salmon sense? Or the reorganizing
sense? They're all different.

You can make anything sound like a choice situation just by picking the
right words. That doesn't mean that there's actually any choice being made.

No. Everything is a choice situation.

I can't interpret that comment. Do you mean "No, it doesn't mean that
there's actually any choice being made"? Or do you mean, "No, I disagree,
and I assert that everything is a choice situation"? Or did you leave out a
"not"?

In driving a car, the driver (you could say) has to choose between turning
the wheel to the left or to the right. Each direction of wheel movement has
its own effect on how the car moves. So the driver has to decide which
effect on the car is better, and choose the direction of wheel movement
that is required.

I would say yes, during training. After that, the relevant control system
is in place and these evaluations become unnecessary, until disturbances
threaten loss of control.

So during training, the driver is presented with the choice of turning the
wheel left or right to turn the car left or right, and learns to choose one
of them? That, of course, is not how reorganization works, so you're
proposing a new model of learning, perhaps by some system that already
knows how to carry out the operation you call "choosing." Perhaps you had
better describe your model of how a system "chooses."

But all of that is nonsense, because the driver is just a continuous control
system that converts errors into actions. There aren't any choice processes
going on in steering the car. If you see a piece of cake, and have a zero
reference level for some cake, you will keep the error at zero by refusing
any cake. Why muck up such a simple process by making a complicated
decision process out of it? Why pick the most complex hypothesis first?
Remember you can't just SAY that there's a choice being made: you have to
_prove_ it. We can prove that control is going on. Can you prove that a
decision is being made? I'll answer for you: you can't.

The decision has been made at some point. Once it has been made, it need
not be made again so long as the results are satisfactory.

Why must a decision have been made at some point? Why couldn't the system
have picked one course of action at random, ignoring all the ones not picked?

The more attractive the alternative is, the more strongly one will control
for obtaining and maintaining that alternative, everything else being equal.

That's gobbledegook. What is "controlling more strongly?" If you mean loop
gain, say so. If you mean higher reference level, say that. If you mean
higher gain in the input function, say that. If you mean more powerful
output function, say that. Controlling "more strongly" means nothing.

Loop gain.

So are you saying that the higher system senses the relative loop gains of
the two or more lower systems involved, and selects the lower system with
the higher loop gain? How does it do that? Is this an example of "and then
a miracle occurs"? How can the system distinguish between a lower system
with a high loop gain and another lower system with the same loop gain but
a higher reference signal? And since the higher system _contributes to_ the
reference signal in the lower system, what keeps it from increasing or
decreasing the "strength" of control in the lower systems? And if the
higher system can adjust the loop gain of the lower systems, what "decides"
which lower system will be given the higher gain?

And "attractive" not only means nothing, it offers a positively incorrect
model of how behavior works in a physical universe.

That an organism finds something "attractive" does not mean that the object
has the property of attractiveness. I am asserting no such model as you
suggest.

Then why use language that _does_ have that meaning to most people? It
makes a great deal of practical difference whether we assume that the
environment does the attracting, or whether desirability is in the brain of
the beholder. Those who put attractiveness in the environment include
rapists who say "She was asking for it." So this is not a trivial issue.

That implies a control system that is sensing the degree of relative
attraction and controlling for -- what? Is there any "attraction" to sense
out there in the first place? How does the organism perceive that one
alternative is more favorable than the other? Favorable in terms of what
perceptual variables and reference levels?

See your own discussions of "pleasure/pain" or "good/bad."

All right. There is nothing about "choosing" in those discussions, or
comparison of "better" and "worse." You're trying to force outmoded,
old-fashioned concepts onto PCT. And I am resisting.

And controlling for them how? By
varying the gain of lower systems? Would that really work?

No, by controlling for those options that yield the best outcome in terms of
the organism's perceptions of what feels best.

So you think that the organism is always comparing alternatives, evaluating
their relative advantages and disadvantages, and choosing which action to
take by imagining which outcome would feel best? I don't think that. You'll
have to prove it to me.

My proposal is only a
first suggestion, based on the fact that higher gain yields more vigorous
action at a given level of error.

That is the wrong way to analyze it. Higher gain yields lower error and
essentially the same degree of action, at a given setting of the reference
signal, with a given disturbance magnitude, and with a given environmental
feedback function. Do the algebra. Prove it to yourself.
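In case the algebra is easier to see worked out numerically, here is a minimal check. The values of r, d, and G are arbitrary illustrative numbers, and the perception is taken as identical to qi.

# With  qi = qo + d,  e = r - qi,  qo = G * e,
# the steady state is  e = (r - d)/(1 + G)  and  qo = G*(r - d)/(1 + G).

r, d = 1.0, 0.2
for G in (5.0, 50.0, 500.0):
    e = (r - d) / (1.0 + G)
    qo = G * e
    print(f"G = {G:6.1f}:   error = {e:7.4f}   action = {qo:6.4f}")

Raising G from 5 to 500 cuts the error by roughly a factor of a hundred, while the action only creeps from about 0.67 toward 0.8.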

You know this, Bruce, when you're thinking with your PCT-aware persona. In
this whole argument, you're acting as if you've never heard anything of PCT
but the words.

An alternative is that the degree to which one option feels better than the
other contributes to the magnitude of error.

Do you realize what an elaborate system you're proposing here?

In this way, at a
given level of error, the system with the higher gain will more vigorously
control for the presence of the alternative condition over the imposed one.
Higher error in the changeover procedure translates into more rapid approach
to and pressing of the response lever whenever the apparatus restores the
imposed condition.

If you raise the gain, the error will get smaller roughly in proportion,
while the action becomes only somewhat more vigorous, not in proportion to
the gain.

I'll stop here and await your comments on this proposal.

My comment is that you've fallen into an old worn-out psychologist mode,
blathering off the top of your head about things you assume would work
without the slightest notion whether modeling would bear you out or any
concept of how you could test your ideas to see if they were right.

Perhaps you will want to reconsider in the light of the answers I have
provided above. Your comment immediately above is just classic Powers
boilerplate which you affix to any proposal you have a strong desire to
reject.

You have that backward, too. I have a strong desire to reject what you're
saying because I can see so much wrong with it. You're just falling back on
all the tired old psychological concepts, the very concepts that drove me
away from wanting to be a psychologist, probably before you were born. On
the one hand you express a desire to be known as a PCT-savvy scientist, but
on the other you keep saying things that show a failure to have
internalized the concepts of negative feedback control. Apparently you
don't see the conflicts between PCT and the older ideas -- you jump back
and forth as if the only difference is in the words you use.

I guess I'm having trouble letting go of the idea that one day you would
become a spokesman for PCT among your behaviorist colleagues. If I didn't
care about that, I'd probably just reply politely and noncommittally to you
and change the subject, as I do with people I've really given up on. But it
seems now that all you're going to tell your colleagues is how to translate
from standard psychological concepts into PCT and vice versa. If that's
your intention, I'd call it a terrible mistake, or even a hostile act. It's
like a flat-earther saying to a round-earther, "There's really no
difference in our theories except the radius of curvature that we assume."

Best,

Bill P.