[From Bill Powers (951126.1200 MST)]
Bruce Abbott (951126.1325 EST)
By your own definition (i.e., the commonly accepted one),
independent variables cannot be "manipulated for experimental
purposes" by allowing them to change naturally: such variables are
not independent variables at all. I understand your point that
independent variables imply a control system working in the
background to set their values; what I don't understand is this
Right, manipulation means causing changes on purpose (that is not,
however, the normal definition of independent variable in systems
analysis). However, if you record the state of the IV carefully, there's
no way to tell from the record whether the variable changed that way for
natural reasons, or was made to change that way by an experimenter. And
it makes no difference to the results. (Your "common-cause" problem,
however, does require deliberate manipulation to resolve, unless
continued observation shows naturally-occurring exceptions to the
apparent causal connection).
There is no essential difference from the point of view of the
system under study, but there is an absolutely HUGE difference from
the point of view of the experimenter, who does not have direct
knowledge of the system's structure. When variables change
naturally, there is always the possibility that other variables
change with them.
That's certainly true, and your point is valid.
Instead, you must manipulate _one_ of these variables while holding
the others constant. If you change one variable while holding the
rest constant, any reproducible change in the dependent variable
must demonstrate an influence of IV on DV.
To do this right, you have to hold all the other variables constant _at
every combination of values they would normally take on_. Just holding
them constant at one value can give a false picture of the relationship
between the IV and the DV. Suppose, for example, that the real
relationship is Z = X*Y. If you hold Y "constant" and vary X, you can
get any kind of IV-DV relationship at all, including none (if you happen
to hold Y constant at zero). You mention the mother hen's "size" as a
variable that might be held constant. Size, however, involves an
assumption of distance when there is no binocular vision; it is,
roughly, subtended angle times distance. To make sure it is size you're
holding constant, you would have to vary angle and distance together so
that a constant product is what is being maintained.
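The Z = X*Y point can be seen in a few lines of code. This is my own toy illustration of the multiplicative law mentioned above, not anything from the experiments under discussion: the apparent X-to-Z relationship depends entirely on where Y happens to be clamped, and vanishes altogether when Y is held at zero.

```python
# Toy system obeying Z = X * Y. Clamping Y at different values while
# varying X yields completely different apparent X->Z relationships.

def z(x, y):
    return x * y

xs = [0.0, 1.0, 2.0, 3.0]

for y_clamped in (0.0, 1.0, 2.0):
    zs = [z(x, y_clamped) for x in xs]
    print(f"Y held at {y_clamped}: Z = {zs}")
# With Y clamped at 0, the "experiment" shows no X->Z effect at all,
# even though X is causally connected to Z.
```

Holding Y constant at a single value thus gives one slope out of an infinite family, which is why the other variables must be held constant at every combination of values they would normally take on.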
Also, I have noticed that IVs and DVs are commonly defined in terms of
_changes_ rather than actual values. Any nonlinearities in the system
would mean that the relationships between changes would depend on the
actual values around which those changes occur.
The point I'm trying to argue with this example is that there are
variables out there that do not participate in mutual interaction.
If the sun's rays pass through my bedroom window onto my eyelids
and thereby awaken me, it does not follow that if I awaken in the
night, the sun's rays will then fall upon my eyes.
This is true. There are such one-way causal links. In this case the link
exists because of the power amplification in the senses: a few nanowatts
of energy getting through the eyelids produces neural signals in the
microwatt to milliwatt range, and muscle efforts involving watts.
The rat's running or my awakening and turning my head away from the
window both affect the perception of those variables, it is true.
I am certainly not arguing against that position! We seem to be
talking about different things. I am discussing what the
experimenter can do to learn about the system under study and
therefore am adopting the experimenter's point of view, whereas you
wish to discuss the variables involved from the system's point of view.
But what the experimenter "learns about the system" depends on the model
the experimenter brings to the observations. It may be that what awakens
you is not the sunlight on your eyelids, but the efforts you make in
your sleep while trying to bring the light intensity back down to zero.
In other words, you can misidentify the cause of waking up by treating
the apparent causal chain too literally.
Experiments are not usually run in the manner you describe,
although there may be some of that during pilot work to establish a
set of effective parameters for the experiment.
That's what I was talking about. Establishing a set of effective
parameters means adjusting the parameters until the kind of behavior the
experimenter wants to see occurs.
The feedback path in which the rat's running affects the
experimenter's setting of the shock intensity would not be present
during the actual experiment and therefore need not be taken into
account when the data are analyzed and interpreted.
They _are_ not taken into account, perhaps, but they _should be_ taken
into account. Suppose, for example, that what the experimenter wants to
see is a situation where increasing the frequency of reward goes with an
increase in behavior rate. The reward sizes and the level of deprivation
can be set during pilot studies, inadvertently, so that this
relationship is not seen -- for example, the experimenter may see that
"satiation" exists over all the experimental conditions, and prevents
the expected relationship from appearing. In that case, the level of
deprivation would be increased, or the reward size decreased, or the
schedule made more demanding, until the preliminary tests show an
increase in behavior rate with an increase in reinforcement rate. From
the PCT standpoint, we would say that the experimenter has made sure
that the conditions are outside the normal range of control behavior.
From the experimenter's standpoint, spurious side-effects have been
eliminated. The preliminary manipulations make a great deal of
difference in how the results are interpreted.
When we control the value of, say, i, then during the experiment it
has the status of an independent variable and we can see how
varying i changes the other variables around the loop.
The variable "i", I take it, is the input variable affected by o and d.
When you say that r and d are "true" independent variables, I take
you to mean that the influence between r or d and the other
variables is one-way, whereas normally i would both influence and
be influenced if we did not "clamp" its value for the experiment.
That's fine; I just want to clarify this as, in the broader
definition, ANY variable manipulated by the experimenter is a
"true" independent variable.
The values of r and d are "true" independent variables in the system as
it normally operates. However, if you make i into an independent
variable by placing it under your own control and arbitrarily altering
it (and it is then indeed an independent variable), you change the
organization of the system by doing this: part of your control actions
must be used to nullify the effects of the system's output on i, so i
takes on the value that you, rather than the system, determine. You have
also caused normal control actions by higher-order systems to fail at
least to some degree, because now changes in the reference signal no
longer cause corresponding changes in the input variable.
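The reorganization Bill describes can be sketched numerically. This is a minimal toy model of my own (an integrating output function with i = o + d; none of these numbers come from the post): when the experimenter clamps i, part of the clamp force must cancel the system's output, and changes in the reference no longer move the input variable at all.

```python
# Toy control loop: input i = o + d, error e = r - i, integrating output.
# clamp=None runs the system normally; clamp=value has an "experimenter"
# apply whatever force f is needed to hold i at that value.

def run(r, d, clamp=None, steps=200, gain=0.2):
    o = 0.0
    for _ in range(steps):
        i = o + d                  # normal environment link
        if clamp is not None:
            f = clamp - i          # force the experimenter must supply
            i = i + f              # i is now held where the experimenter wants
        e = r - i
        o += gain * e              # integrating output function
    return i, o

i1, o1 = run(r=10.0, d=3.0)             # normal operation: i tracks r
i2, o2 = run(r=10.0, d=3.0, clamp=5.0)  # i clamped by the experimenter
i3, _  = run(r=20.0, d=3.0, clamp=5.0)  # raising r no longer moves i
print(round(i1, 3), i2, i3)             # prints 10.0 5.0 5.0
print(o2 > o1)                          # True: output winds up fighting the clamp
```

Note that the experimenter's force f has to track the system's growing output moment by moment, which is exactly the sense in which the means of affecting i has become part of the system under study.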
The relationship among d, o, and i is not normally one-way. These are
physical interactions and normally there are reciprocal effects in
equilibrium. The disturber is reciprocally disturbed by the opposing
effects of the system's output. However, if we _control_ the disturbing
variable d, these reciprocal effects are not allowed to alter d. Since d
is not under control by the system, this does not disrupt the operation
of the control system, so we still get a correct picture of its
operation.
When we do these experiments with computers, of course, the physical
interactions are eliminated. However, when we use a model of a physical
environment, as in Little Man v. 2, the interactions are preserved and
we find that the control system still works as expected. In the rubber
band demos, the interactions are present; when the experimenter pulls
back, an answering pull from the controller increases the force on the
experimenter's hand and stretches the experimenter's end of the rubber
bands. A model of this experiment takes the two-way balance of forces
into account.
You're right in saying that any variable manipulated by an experimenter
is a "true" dependent variable. I was trying to say, not very clearly,
that r and d are the _natural_ independent variables, the ones that are
independent relative to the system when the system is operating
normally. We don't really have to have an experimenter manipulating r
and d; all that is required is that they vary for some reason.
If independent variables are only those having a one-way influence
during normal system operation, then an independent variable is a
cause and a dependent variable (if it varies with the independent
variable while all other variables are held constant) is an effect,
as we normally use these terms. But there is nothing in
experimental logic that prevents one from manipulating variables
that normally participate in a set of mutual influences. For
example, in a control system I can hold the reference level and
disturbance constant while varying the perceptual input, and watch
what happens to the error level and output.
Not if you want to be observing the same system you started with. When
you "vary the perceptual input", you have to supply a force opposing the
output o, so you are now observing the system with a new force added to
it. As you change i, o will change, so you will have to vary the force
you're applying to i in order to keep i at the value you want. You're
now seeing the behavior of two systems organized to control the same
variable, i. To analyze the result, you must know both your own
reference level and that of the system, both your own loop gain and that
of the system, both your own dynamics and the dynamics of the system.
Very likely, you will drive the error or the output function of the
system into some limit. It is quite possible that when you attempt to do
this, both you and the system under study will break into oscillations,
although either system controlling i by itself would be stable.
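The conflict between experimenter and system can be sketched with two pure integrating controllers acting on the same variable (my own toy illustration, with made-up gains and references; with lags in the loops the same conflict can produce the oscillations mentioned above, but with pure integrators it shows up as escalation toward a limit):

```python
# Two integrating control systems both acting on i = o1 + o2 + d.
# Either one alone would bring i exactly to its own reference; together,
# i settles at a compromise satisfying neither, while the two outputs
# escalate in opposite directions without bound.

def conflict(r1, r2, g1=0.1, g2=0.1, d=0.0, steps=500):
    o1 = o2 = 0.0
    for _ in range(steps):
        i = o1 + o2 + d
        o1 += g1 * (r1 - i)   # system 1 pushes i toward r1
        o2 += g2 * (r2 - i)   # system 2 pushes i toward r2
    return i, o1, o2

i, o1, o2 = conflict(r1=10.0, r2=4.0)
print(round(i, 3))           # prints 7.0: midway between the references
print(o1 > 100, o2 < -100)   # prints True True: outputs escalate
```

This is the sense in which one or both systems get driven into a limit: the compromise value of i leaves each system with a standing error that its integrator keeps trying to remove.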
The point is that mathematically what you say is true but only if you
ignore the realities of experimentation. You can't just "let" i change;
you have to perform some physical operation to make it change, and in
doing that you make the means of affecting i into part of the system
under study.
The simplest way to see this is to compare the environment model with
the system in normal operation and the same system when i is being
manipulated by applying a force f to it.
Normal: i = o + d
Manipulated: i = o + d + f
Note that this makes the manipulator's applied force into just another
disturbance that adds to d. If you now have the manipulator _insisting_
on a specific value of i, you have to include the feedback effects
through the manipulator, and you have made the equations that have to be
solved much more complicated.
Of course if you just manipulate i and look at the effects on e and o
_without_ doing all this modeling and analysis, you will get a picture
of cause and effect that contains, unbeknownst to you, some of your own
properties. You will get a false picture of the system's operation.
If you hold the reference constant and vary d, you get predictable
output. If you hold the disturbance constant and manipulate r, you
get predictable output. There is nothing in the method that
prevents one from determining that both variables affect the output
(or any of the other within-loop variables), or from assessing
their interaction (joint influence). Nor is there anything that
prevents one from determining that this influence is on the entire
loop and not just one variable.
Aren't you forgetting something here? We can't manipulate the reference
signal in an organism. We have to deduce it by using a model.
The problem is not that IV-DV implies lineal cause-effect, but that
it has often been interpreted that way by those who employ it.
Whoa. You've been telling me how you could use IV-DV to analyse a
closed-loop system which you already know exists. But that means using
many individual IV-DV analyses, each one of which shows only an apparent
cause-effect or input-output relationship, and then interpreting the
results within the structure of a closed-loop model, which is a step
extraneous to all IV-DV analyses I have ever seen. What any IV-DV
analysis gives you is y = f(x1, x2, ..., xn). That is a pure cause-effect
relationship. As far as I know, there is no method associated with IV-DV
analysis that can come up with y = f(x1, ..., xn, y). Yet this is the form
found in all closed-loop analyses; you find at some point that the
dependent variable is a function of itself. You then have to find a way
to separate the variables, which often can be done only by simulation.
Look at an elementary algebraic control-system model:
e = r - p
o = G*e
p = o + d.
Solve by substitution for the output, o, starting with the second
equation:
o = G*(r - (o + d)).
Immediately you see the variable you're solving for, o, on both sides of
the equal sign. In this linear model it's easy to solve for o, but if
there's a nonlinear function somewhere in the system the analytical
solution may not exist.
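In this linear case the separation of variables works out to o = G*(r - d)/(1 + G), and hence p = o + d = (G*r + d)/(1 + G). A few lines of code (my own check, using the same three loop equations) confirm that this closed form satisfies all of them at once, and that p approaches r as the gain grows:

```python
# The loop equations are e = r - p, o = G*e, p = o + d.
# Substitution gives o = G*(r - (o + d)); solving for o yields:

def closed_form_o(G, r, d):
    return G * (r - d) / (1 + G)

G, r, d = 100.0, 5.0, 2.0
o = closed_form_o(G, r, d)
p = o + d
e = r - p
print(abs(o - G * e) < 1e-9)   # prints True: o = G*e holds exactly
print(p)                       # close to r = 5 because G is large
```

The point is that this algebraic step, eliminating o from both sides of the equation, has no counterpart in IV-DV procedure; and when G appears inside a nonlinear function, even this step may be impossible except by simulation.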
I have never seen anything in an IV-DV analysis that even approaches
dealing with this problem, which is unique to closed-loop systems.
Sorry, I'm talking about a system whose structure remains to be
determined, not one for which the system equations are known.
I was not talking about a control system whose structure is known; I was
talking about a system model that has been hypothesized.
If a third (unknown) variable influences both of two variables
under observation, variations in the third variable will make it
appear as though the two observed variables are directly linked.
Mere observation of the two variables during normal system
operation cannot determine whether this direct link is present or
merely an illusion, a byproduct of their common link to this third,
unobserved variable. This is an analytic problem that requires an
IV-DV approach to resolve.
I agree that the hypothesis of a common cause can be proven in this way.
You can show that an apparent cause-effect relation is probably an
illusion, by demonstrating that independently varying the cause does not
cause the supposed effect to vary. You assume that the common cause
doesn't just happen to vary equally and oppositely to your
manipulations.
But by this means you can also prove that there is no causal connection
between a disturbance and a controlled variable, or between the output
and a controlled variable. In a high-gain control system, the controlled
variable will vary as the reference signal varies, and will show very
low correlations with both the disturbance and the output. We know,
however, that the state of the controlled variable is completely
determined by the disturbance and the output, and that there are direct
physical connections operating.
What the usual IV-DV analysis is likely to show is that the output
depends on the disturbance, and the controlled variable (if it is even
noticed) does not depend on anything observable. I'm talking about the
normal case where we have no access to the insides of the organism and
can't measure p, r, or e directly.
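This pattern is easy to reproduce in a toy simulation (my own illustration, with invented gain, references, and noise levels): with a high-gain loop and a little observation noise, the output correlates almost perfectly with the disturbance while the controlled variable correlates with nothing, even though p = o + d holds exactly.

```python
# High-gain loop with r fixed and d varying: o = G*(r - d)/(1 + G),
# p = o + d. A small amount of observation noise on p swamps the tiny
# residual d-dependence, so an IV-DV analysis sees o strongly related
# to d and p related to nothing observable.

import random
import statistics

def corr(a, b):
    ma, mb = statistics.mean(a), statistics.mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a)
           * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

random.seed(1)
G, r = 200.0, 10.0
ds = [random.uniform(-5, 5) for _ in range(500)]
outs = [G * (r - d) / (1 + G) for d in ds]                    # output
ps = [o + d + random.gauss(0, 0.2) for o, d in zip(outs, ds)] # observed p

print(round(corr(outs, ds), 3))   # essentially -1: output mirrors d
print(round(corr(ps, ds), 3))     # small: controlled variable looks inert
```

So the IV-DV verdict would be that d causes o directly and that p is irrelevant, which is precisely backwards: o varies as it does only in order to keep p constant.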
What does an IV-DV analysis tell you? It tells you (in a linear case)
the slope and intercept of a function that apparently relates the
dependent variable to the independent variable, and also the scatter in
this relationship. Nothing else. It doesn't tell you _why_ this
relationship is observed. In ordinary causal systems, if you observe
such a relationship you can be confident that there is a series of
internal relationships which, put end-to-end, have the form that is
observed. What you are seeing is equivalent to the overall system
function.
But in a control system, this is not true. What your IV-DV analysis is
showing you may be nothing more than the inverse of the environmental
feedback function and reflect almost nothing of the true forward
organism function. If you don't know what I mean by that I'll explain
further. This is the "behavioral illusion" we talk about. It has nothing
to do with mistaking a common-cause situation for a cause-effect
relationship.
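What "inverse of the environmental feedback function" means can be shown with a toy model (my own sketch, with an invented square-root feedback function): the organism here is a plain linear integrator, but because good control holds p near r, the observed disturbance-output relation is o = (r - d)^2, the inverse of the feedback function, and reveals essentially nothing about the organism's own (linear) law.

```python
# Behavioral illusion sketch: the environmental feedback function is
# nonlinear, p = sqrt(o) + d, while the organism is a simple linear
# integrator. Control keeps p near r, so sqrt(o) tracks r - d and the
# measured d->o relationship is o = (r - d)**2: the inverse of the
# feedback function, not the organism function.

import math

def settle(r, d, steps=2000, gain=0.2):
    o = 1.0
    for _ in range(steps):
        p = math.sqrt(o) + d                  # environmental feedback
        o = max(o + gain * (r - p), 0.0)      # linear integrating organism
    return o

r = 10.0
for d in (0.0, 2.0, 4.0):
    print(d, round(settle(r, d), 2), (r - d) ** 2)  # o matches (r-d)^2
```

An IV-DV analysis of d and o in this system would report a quadratic "law of behavior" that is purely a property of the environment, which is the behavioral illusion in miniature.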