cause-effect, IV-DV, and PCT

[From Bill Powers (951126.1200 MST)]

Bruce Abbott (951126.1325 EST)

     By your own definition (i.e., the commonly accepted one),
     independent variables cannot be "manipulated for experimental
     purposes" by allowing them to change naturally: such variables are
     not independent variables at all. I understand your point that
     independent variables imply a control system working in the
     background to set their values; what I don't understand is this
     contradiction.

Right, manipulation means causing changes on purpose (that is not,
however, the normal definition of independent variable in systems
analysis). However, if you record the state of the IV carefully, there's
no way to tell from the record whether the variable changed that way for
natural reasons, or was made to change that way by an experimenter. And
it makes no difference to the results. (Your "common-cause" problem,
however, does require deliberate manipulation to resolve, unless
continued observation shows naturally-occurring exceptions to the
apparent causal connection).

     There is no essential difference from the point of view of the
     system under study, but there is an absolutely HUGE difference from
     the point of view of the experimenter, who does not have direct
     knowledge of the system's structure. When variables change
     naturally, there is always the possibility that other variables
     change with them.

That's certainly true, and your point is valid.

     Instead, you must manipulate _one_ of these variables while holding
     the others constant. If you change one variable while holding the
     rest constant, any reproducible change in the dependent variable
     must demonstrate an influence of IV on DV.

To do this right, you have to hold all the other variables constant _at
every combination of values they would normally take on_. Just holding
them constant at one value can give a false picture of the relationship
between the IV and the DV. Suppose, for example, that the real
relationship is Z = X*Y. If you hold Y "constant" and vary X, you can
get any kind of IV-DV relationship at all, including none (if you happen
to hold Y constant at zero). You mention the mother hen's "size" as a
variable that might be held constant. Size, however, involves an
assumption of distance when there is no binocular vision; it is,
roughly, subtended angle times distance. To make sure it is size you're
holding constant, you would have to vary angle and distance to make sure
that a constant product is what has to be maintained.
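
To see the point numerically, here is a throwaway sketch (the function
and values are hypothetical):

# If the true law is Z = X*Y, the apparent X-Z relationship depends
# entirely on where Y happens to be held "constant."
def Z(x, y):
    return x * y

xs = [0, 1, 2, 3, 4]
for y_held in (0, 1, -2):
    print(y_held, [Z(x, y_held) for x in xs])
# Y held at  0: Z is flat      -> no apparent X-Z relationship
# Y held at  1: Z rises with X -> apparent slope +1
# Y held at -2: Z falls with X -> apparent slope -2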

Also, I have noticed that IVs and DVs are commonly defined in terms of
_changes_ rather than actual values. Any nonlinearities in the system
would mean that the relationships between changes would depend on the
actual values.

     The point I'm trying to argue with this example is that there are
     variables out there that do not participate in mutual interaction.
     If the sun's rays pass through my bedroom window onto my eyelids
     and thereby awaken me, it does not follow that if I awaken in the
     night, the sun's rays will then fall upon my eyes.

This is true. There are such one-way causal links. In this case the link
exists because of the power amplification in the senses: a few nanowatts
of energy getting through the eyelids produces neural signals in the
microwatt to milliwatt range, and muscle efforts involving watts.

     The rat's running or my awakening and turning my head away from the
     window both affect the perception of those variables, it is true.
     I am certainly not arguing against that position! We seem to be
     talking about different things. I am discussing what the
     experimenter can do to learn about the system under study and
     therefore am adopting the experimenter's point of view, whereas you
     wish to discuss the variables involved from the system's point of
     view.

But what the experimenter "learns about the system" depends on the model
the experimenter brings to the observations. It may be that what awakens
you is not the sunlight on your eyelids, but the efforts you make in
your sleep while trying to bring the light intensity back down to zero.
In other words, you can misidentify the cause of waking up by treating
the apparent causal chain too literally.

     Experiments are not usually run in the manner you describe,
     although there may be some of that during pilot work to establish a
     set of effective parameters for the experiment.

That's what I was talking about. Establishing a set of effective
parameters means adjusting the parameters until the kind of behavior the
experimenter wants to see occurs.

     The feedback path in which the rat's running affects the
     experimenter's setting of the shock intensity would not be present
     during the actual experiment and therefore need not be taken into
     account when the data are analyzed and interpreted.

They _are_ not taken into account, perhaps, but they _should be_ taken
into account. Suppose, for example, that what the experimenter wants to
see is a situation where increasing the frequency of reward goes with an
increase in behavior rate. The reward sizes and the level of deprivation
can be set during pilot studies, inadvertently, so that this
relationship is not seen -- for example, the experimenter may see that
"satiation" exists over all the experimental conditions, and prevents
the expected relationship from appearing. In that case, the level of
deprivation would be increased, or the reward size decreased, or the
schedule made more demanding, until the preliminary tests show an
increase in behavior rate with an increase in reinforcement rate. From
the PCT standpoint, we would say that the experimenter has made sure
that the conditions are outside the normal range of control behavior.

From the experimenter's standpoint, spurious side-effects have been
eliminated. The preliminary manipulations make a great deal of
difference in how the results are interpreted.

     When we control the value of, say, i, then during the experiment it
     has the status of an independent variable and we can see how
     varying i changes the other variables around the loop.

The variable "i", I take it, is the input variable affected by o and d.

     When you say that r and d are "true" independent variables, I take
     you to mean that the influence between r or d and the other
     variables is one-way, whereas normally i would both influence and
     be influenced if we did not "clamp" its value for the experiment.
     That's fine; I just want to clarify this as, in the broader
     definition, ANY variable manipulated by the experimenter is a
     "true" independent variable.

The values of r and d are "true" independent variables in the system as
it normally operates. However, if you make i into an independent
variable by placing it under your own control and arbitrarily altering
it (and it is then indeed an independent variable), you change the
organization of the system by doing this: part of your control actions
must be used to nullify the effects of the system's output on i, so i
takes on the value that you, rather than the system, determine. You have
also caused normal control actions by higher-order systems to fail at
least to some degree, because now changes in the reference signal no
longer cause corresponding changes in the input variable.

The relationship among d, o, and i is not normally one-way. These are
physical interactions and normally there are reciprocal effects in
equilibrium. The disturber is reciprocally disturbed by the opposing
effects of the system's output. However, if we _control_ the disturbing
variable d, these reciprocal effects are not allowed to alter d. Since d
is not under control by the system, this does not disrupt the operation
of the control system, so we still get a correct picture of its
behavior.

When we do these experiments with computers, of course, the physical
interactions are eliminated. However, when we use a model of a physical
environment, as in Little Man v. 2, the interactions are preserved and
we find that the control system still works as expected. In the rubber
band demos, the interactions are present; when the experimenter pulls
back, an answering pull from the controller increases the force on the
experimenter's hand and stretches the experimenter's end of the rubber
bands. A model of this experiment takes the two-way balance of forces
into account.

You're right in saying that any variable manipulated by an experimenter
is a "true" dependent variable. I was trying to say, not very clearly,
that r and d are the _natural_ independent variables, the ones that are
independent relative to the system when the system is operating
normally. We don't really have to have an experimenter manipulating r
and d; all that is required is that they vary for some reason.

     If independent variables are only those having a one-way influence
     during normal system operation, then an independent variable is a
     cause and a dependent variable (if it varies with the independent
     variable while all other variables are held constant) is an effect,
     as we normally use these terms. But there is nothing in
     experimental logic that prevents one from manipulating variables
     that normally participate in a set of mutual influences. For
     example, in a control system I can hold the reference level and
     disturbance constant while varying the perceptual input, and watch
     what happens to the error level and output.

Not if you want to be observing the same system you started with. When
you "vary the perceptual input", you have to supply a force opposing the
output o, so you are now observing the system with a new force added to
it. As you change i, o will change, so you will have to vary the force
you're applying to i in order to keep i at the value you want. You're
now seeing the behavior of two systems organized to control the same
variable, i. To analyze the result, you must know both your own
reference level and that of the system, both your own loop gain and that
of the system, both your own dynamics and the dynamics of the system.
Very likely, you will drive the error or the output function of the
system into some limit. It is quite possible that when you attempt to do
this, both you and the system under study will break into oscillations,
although either system controlling i by itself would be stable.

The point is that mathematically what you say is true but only if you
ignore the realities of experimentation. You can't just "let" i change;
you have to perform some physical operation to make it change, and in
doing that you make the means of affecting i into part of the system
being studied.

The simplest way to see this is to compare the environment model with
the system in normal operation and the same system when i is being
manipulated by applying a force f to it.

Normal: i = o + d

Manipulated: i = o + d + f

Note that this makes the manipulator's applied force into just another
disturbance that adds to d. If you now have the manipulator _insisting_
on a specific value of i, you have to include the feedback effects
through the manipulator, and you have made the equations that have to be
solved much more complicated.

Of course if you just manipulate i and look at the effects on e and o
_without_ doing all this modeling and analysis, you will get a picture
of cause and effect that contains, unbeknownst to you, some of your own
properties. You will get a false picture of the system's operation.
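
Here is a minimal simulation sketch of that situation, with both the
organism and the manipulator modeled as simple integrating controllers
(all parameter values are illustrative, not measurements):

# Environment while i is being manipulated: i = o + d + f.
# The organism integrates its error; the manipulator, itself a control
# system, integrates ITS error to force i toward its own reference.
dt = 0.01
g_sys, g_man = 50.0, 50.0    # loop gains of organism and manipulator
r_sys, r_man = 0.0, 1.0      # their (different) references for i
o, f, d = 0.0, 0.0, 0.3      # system output, applied force, disturbance

for step in range(2000):
    i = o + d + f                    # manipulated environment equation
    o += dt * g_sys * (r_sys - i)    # organism's integrating output
    f += dt * g_man * (r_man - i)    # manipulator's integrating force

print(i, o, f)
# i settles between the two references, while o and f escalate without
# bound in opposite directions -- a real organism would soon hit an
# output limit, as described above.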

-----------------------
     If you hold the reference constant and vary d, you get predictable
     output. If you hold the disturbance constant and manipulate r, you
     get predictable output. There is nothing in the method that
     prevents one from determining that both variables affect the output
     (or any of the other within-loop variables), or from assessing
     their interaction (joint influence). Nor is there anything that
     prevents one from determining that this influence is on the entire
     loop and not just one variable.

Aren't you forgetting something here? We can't manipulate the reference
signal in an organism. We have to deduce it by using a model.

     The problem is not that IV-DV implies lineal cause-effect, but that
     it has often been interpreted that way by those who employ it.

Whoa. You've been telling me how you could use IV-DV to analyse a
closed-loop system which you already know exists. But that means using
many individual IV-DV analyses, each one of which shows only an apparent
cause-effect or input-output relationship, and then interpreting the
results within the structure of a closed-loop model, which is a step
extraneous to all IV-DV analyses I have ever seen. What any IV-DV
analysis gives you is y = f(x1,x2,...,xn). That is a pure cause-effect
relationship. As far as I know, there is no method associated with IV-DV
analysis that can come up with y = f(x1,...,xn,y). Yet this is the form
found in all closed-loop analyses; you find at some point that the
dependent variable is a function of itself. You then have to find a way
to separate the variables, which often can be done only by simulation.

Look at an elementary algebraic control-system model:

e = r - p
o = G*e
p = o + d.

Solve by substitution for the output, o, starting with the second
equation.

o = G*(r - (o + d)).

Immediately you see the variable you're solving for, o, on both sides of
the equal sign. In this linear model it's easy to solve for o, but if
there's a nonlinear function somewhere in the system the analytical
solution may not exist.
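
For completeness, the linear case solves out like this:

o = G*(r - o - d)
o + G*o = G*(r - d)
o = [G/(1 + G)]*(r - d)

so that p = o + d = (G*r + d)/(1 + G), which approaches r as G becomes
large.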

I have never seen anything in an IV-DV analysis that even approaches
dealing with this problem, which is unique to closed-loop systems.

     Sorry, I'm talking about a system whose structure remains to be
     determined, not one for which the system equations are known.

I was not talking about a control system whose structure is known; I was
talking about a system model that has been hypothesized.

     If a third (unknown) variable influences both of two variables
     under observation, variations in the third variable will make it
     appear as though the two observed variables are directly linked.
     Mere observation of the two variables during normal system
     operation cannot determine whether this direct link is present or
     merely an illusion, a byproduct of their common link to this third,
     unobserved variable. This is an analytic problem that requires an
     IV-DV approach to resolve.

I agree that the hypothesis of a common cause can be proven in this way.
You can show that an apparent cause-effect relation is probably an
illusion, by demonstrating that independently varying the cause does not
cause the supposed effect to vary. You assume that the common cause
doesn't just happen to vary equally and oppositely to your
manipulations.

But by this means you can also prove that there is no causal connection
between a disturbance and a controlled variable, or between the output
and a controlled variable. In a high-gain control system, the controlled
variable will vary as the reference signal varies, and will show very
low correlations with both the disturbance and the output. We know,
however, that the state of the controlled variable is completely
determined by the disturbance and the output, and that there are direct
physical connections operating.

What the usual IV-DV analysis is likely to show is that the output
depends on the disturbance, and the controlled variable (if it is even
noticed) does not depend on anything observable. I'm talking about the
normal case where we have no access to the insides of the organism and
can't measure p, r, or e directly.

What does an IV-DV analysis tell you? It tells you (in a linear case)
the slope and intercept of a function that apparently relates the
dependent variable to the independent variable, and also the scatter in
this relationship. Nothing else. It doesn't tell you _why_ this
relationship is observed. In ordinary causal systems, if you observe
such a relationship you can be confident that there is a series of
internal relationships which, put end-to-end, have the form that is
observed. What you are seeing is equivalent to the overall system
function.

But in a control system, this is not true. What your IV-DV analysis is
showing you may be nothing more than the inverse of the environmental
feedback function and reflect almost nothing of the true forward
organism function. If you don't know what I mean by that I'll explain
further. This is the "behavioral illusion" we talk about. It has nothing
to do with mistaking a common-cause situation for a cause-effect
relationship.
-----------------------------------------------------------------------
Best,

Bill P.

[From Bruce Abbott (951127.0005 EST)]

Bill Powers (951126.1200 MST) --

    Bruce Abbott (951126.1325 EST)

    By your own definition (i.e., the commonly accepted one),
    independent variables cannot be "manipulated for experimental
    purposes" by allowing them to change naturally: such variables are
    not independent variables at all. I understand your point that
    independent variables imply a control system working in the
    background to set their values; what I don't understand is this
    contradiction.

Right, manipulation means causing changes on purpose (that is not,
however, the normal definition of independent variable in systems
analysis).

If this is the meaning of "independent variable" in systems analysis, then
it is equivalent to what in path analysis would be called an "exogenous
variable," if I am not mistaken.

However, if you record the state of the IV carefully, there's
no way to tell from the record whether the variable changed that way for
natural reasons, or was made to change that way by an experimenter. And
it makes no difference to the results.

Right, from the system's point of view, the variable varied, and that's all.

(Your "common-cause" problem,
however, does require deliberate manipulation to resolve, unless
continued observation shows naturally-occurring exceptions to the
apparent causal connection).

Right. In astronomy, for example, there are many "experiments" in which
nature has provided the required "manipulation" while "keeping" other
potential confounding variables constant. One only has to examine enough
examples to find ones in which certain variables are essentially constant
across the comparison while the variable of interest differs.

    Instead, you must manipulate _one_ of these variables while holding
    the others constant. If you change one variable while holding the
    rest constant, any reproducible change in the dependent variable
    must demonstrate an influence of IV on DV.

To do this right, you have to hold all the other variables constant _at
every combination of values they would normally take on_. Just holding
them constant at one value can give a false picture of the relationship
between the IV and the DV. Suppose, for example, that the real
relationship is Z = X*Y. If you hold Y "constant" and vary X, you can
get any kind of IV-DV relationship at all, including none (if you happen
to hold Y constant at zero).

Yes, and this is potentially an enormous set of combinations. However, I
was using the simplest type of manipulation simply as an example; I do not
mean to suggest that all experiments must manipulate only one variable at a
time while examining one dependent variable at a time, nor do I intend to
restrict the method to the use of a few predetermined values of the
independent variables, although experiments in which this is done are common
enough. In the tracking studies we varied the disturbance essentially
continuously, for example. A large number of combinations of values could
be "swept out" in fairly short order in this way.

You mention the mother hen's "size" as a
variable that might be held constant. Size, however, involves an
assumption of distance when there is no binocular vision; it is,
roughly, subtended angle times distance. To make sure it is size you're
holding constant, you would have to vary angle and distance to make sure
that a constant product is what has to be maintained.

By "holding size constant" I was thinking in terms of presenting model hens
having the same size as the mother (height, width, length); this definition
of "size" is the only one that makes evolutionary sense, otherwise chicks
would go dashing after the wrong hens simply because their distance/size
relationship allowed their retinal images to match the actual mother's
image better than her own did at that moment. But this gets us off the track.

Also, I have noticed that IVs and DVs are commonly defined in terms of
_changes_ rather than actual values. Any nonlinearities in the system
would mean that the relationships between changes would depend on the
actual values.

Changes? I don't know what you have in mind. Most with which I am familiar
are defined in terms of actual values, whether qualitative or quantitative.
But yes, any variables so defined would have that problem.

But what the experimenter "learns about the system" depends on the model
the experimenter brings to the observations. It may be that what awakens
you is not the sunlight on your eyelids, but the efforts you make in
your sleep while trying to bring the light intensity back down to zero.
In other words, you can misidentify the cause of waking up by treating
the apparent causal chain too literally.

This is always true of any set of observations: there are no "theory-free"
observations. Yet, even if your hypothetical control system were at work,
it would still be correct to say that you were awakened by the "sun in your
eyes." This does not imply that the relationship was direct, only that the
stated relationship existed. Further research could establish what mediates
between the two.

That's what I was talking about. Establishing a set of effective
parameters means adjusting the parameters until the kind of behavior the
experimenter wants to see occurs.

This implies more than it should. You are suggesting that experimenters
vary the parameters until they get the results they want or expect to see.
But this "parameter setting" is characteristic of nearly every experiment.
If I want to use the curvature of track in a bubble chamber to determine the
charge of a particle, I must apply a strong enough field to get measurable
curvature in the tracks of charged particles moving at the speeds they
typically do. In a sense the experimenter is setting the field strength to
get "the kind of behavior the experimenter wants to see." But this is very
different from manipulating conditions so as to produce the data the
experimenter hopes to obtain, which seems to be implied by your statement.

    The feedback path in which the rat's running affects the
    experimenter's setting of the shock intensity would not be present
    during the actual experiment and therefore need not be taken into
    account when the data are analyzed and interpreted.

They _are_ not taken into account, perhaps, but they _should be_ taken
into account. Suppose, for example, that what the experimenter wants to
see is a situation where increasing the frequency of reward goes with an
increase in behavior rate. The reward sizes and the level of deprivation
can be set during pilot studies, inadvertently, so that this
relationship is not seen -- for example, the experimenter may see that
"satiation" exists over all the experimental conditions, and prevents
the expected relationship from appearing. In that case, the level of
deprivation would be increased, or the reward size decreased, or the
schedule made more demanding, until the preliminary tests show an
increase in behavior rate with an increase in reinforcement rate.

If findings depend so strongly on parameter settings, sooner or later
someone will vary them in order to determine the "generality" of the
finding, and these limitations will be uncovered. You can't expect one
experiment to accomplish everything. As you know, deprivation level, pellet
size, level of "satiation" (prefeeding), and many other such variables have
been investigated for various schedules of reinforcement.

From the PCT standpoint, we would say that the experimenter has made
sure that the conditions are outside the normal range of control
behavior.

So we're back to THAT explanation again? I thought we had pretty much ruled
that out in the Collier et al. ratio data. An alternative is that
conditions are within the normal range of control behavior, but that the
tradeoffs offered are such that the rat does not attempt to control the
variable we dumb experimenters think it ought, on logical grounds, to want
to control.

From the experimenter's standpoint, spurious side-effects have been
eliminated. The preliminary manipulations make a great deal of
difference in how the results are interpreted.

I do agree with your main point, which is that the use of a narrow set of
selected parameter values can yield results different from those that might
have been produced given another selection of parameter values, thus skewing
the interpretation in a given direction.

    When we control the value of, say, i, then during the experiment it
    has the status of an independent variable and we can see how
    varying i changes the other variables around the loop.

The variable "i", I take it, is the input variable affected by o and d.

The variable "i" is the input variable _normally_ affected by o and d. But
for the purpose of the investigation, I might prevent this normal feedback
from occurring, perhaps for very brief periods (in order to prevent
reorganization from setting in), and instead control the value of i myself.
For example, in a tracking task, I might cause the cursor to begin drifting
left of target despite the participant's effort to counteract the drift and
record the mouse movement that results.

The values of r and d are "true" independent variables in the system as
it normally operates. However, if you make i into an independent
variable by placing it under your own control and arbitrarily altering
it (and it is then indeed an independent variable), you change the
organization of the system by doing this: part of your control actions
must be used to nullify the effects of the system's output on i, so i
takes on the value that you, rather than the system, determine. You have
also caused normal control actions by higher-order systems to fail at
least to some degree, because now changes in the reference signal no
longer cause corresponding changes in the input variable.

Yes, true. So I have to be careful.

The relationship among d, o, and i is not normally one-way. These are
physical interactions and normally there are reciprocal effects in
equilibrium. The disturber is reciprocally disturbed by the opposing
effects of the system's output. However, if we _control_ the disturbing
variable d, these reciprocal effects are not allowed to alter d. Since d
is not under control by the system, this does not disrupt the operation
of the control system, so we still get a correct picture of its
behavior.

In other words, if I push, the system pushes back, not only on the
controlled variable, but on the source of the disturbance.

When we do these experiments with computers, of course, the physical
interactions are eliminated. However, when we use a model of a physical
environment, as in Little Man v. 2, the interactions are preserved and
we find that the control system still works as expected. In the rubber
band demos, the interactions are present; when the experimenter pulls
back, an answering pull from the controller increases the force on the
experimenter's hand and stretches the experimenter's end of the rubber
bands. A model of this experiment takes the two-way balance of forces
into account.

Yes, and the real systems involved automatically and reciprocally adjust to
these forces.

You're right in saying that any variable manipulated by an experimenter
is a "true" [in]dependent variable. I was trying to say, not very clearly,
that r and d are the _natural_ independent variables, the ones that are
independent relative to the system when the system is operating
normally. We don't really have to have an experimenter manipulating r
and d; all that is required is that they vary for some reason.

Yes. But if we want to know whether we have identified the correct variable
as "d," we will probably want to manipulate it ourselves rather than letting
simply nature take its course. Otherwise, what we think is "d" may be some
mere correlate of "d."

    If independent variables are only those having a one-way influence
    during normal system operation, then an independent variable is a
    cause and a dependent variable (if it varies with the independent
    variable while all other variables are held constant) is an effect,
    as we normally use these terms. But there is nothing in
    experimental logic that prevents one from manipulating variables
    that normally participate in a set of mutual influences. For
    example, in a control system I can hold the reference level and
    disturbance constant while varying the perceptual input, and watch
    what happens to the error level and output.

Not if you want to be observing the same system you started with. When
you "vary the perceptual input", you have to supply a force opposing the
output o, so you are now observing the system with a new force added to
it.

There is another way -- I can break the link between o and i. The mouse no
longer controls the cursor position; instead the experimenter controls it.
This would have to be done under conditions in which it would be difficult
for the participant to tell that control had actually been lost, otherwise
the system's organization would quickly change.

You're
now seeing the behavior of two systems organized to control the same
variable, i. To analyze the result, you must know both your own
reference level and that of the system, both your own loop gain and that
of the system, both your own dynamics and the dynamics of the system.
Very likely, you will drive the error or the output function of the
system into some limit. It is quite possible that when you attempt to do
this, both you and the system under study will break into oscillations,
although either system controlling i by itself would be stable.

The way I propose to do it, I don't think this would be a problem.

    If you hold the reference constant and vary d, you get predictable
    output. If you hold the disturbance constant and manipulate r, you
    get predictable output. There is nothing in the method that
    prevents one from determining that both variables affect the output
    (or any of the other within-loop variables), or from assessing
    their interaction (joint influence). Nor is there anything that
    prevents one from determining that this influence is on the entire
    loop and not just one variable.

Aren't you forgetting something here? We can't manipulate the reference
signal in an organism. We have to deduce it by using a model.

I had tracking studies in mind. One can tell the participant to keep the
cursor on the target; the target position is then the reference, if we
assume that our participant is doing as requested. I can then move the
target around (varying the nominal reference position) or disturb the cursor
position (varying the disturbance). This doesn't appear to pose any great
difficulty. Besides, who says I have to do without a model?

    The problem is not that IV-DV implies lineal cause-effect, but that
    it has often been interpreted that way by those who employ it.

Whoa. You've been telling me how you could use IV-DV to analyse a
closed-loop system which you already know exists. But that means using
many individual IV-DV analyses, each one of which shows only an apparent
cause-effect or input-output relationship, and then interpreting the
results within the structure of a closed-loop model, which is a step
extraneous to all IV-DV analyses I have ever seen. What any IV-DV
analysis gives you is y = f(x1,x2,...,xn). That is a pure cause-effect
relationship. As far as I know, there is no method associated with IV-DV
analysis that can come up with y = f(x1,...,xn,y). Yet this is the form
found in all closed-loop analyses; you find at some point that the
dependent variable is a function of itself. You then have to find a way
to separate the variables, which often can be done only by simulation.

Whoa yourself. If x1, x2, ..., xn are changing and y changes
simultaneously with them, is this cause-effect or just a case of one-way
relationship?
Input-output seems closer to the spirit here. And what better way is there
to determine the input-output relationships of the pif, comparator, output
function, or environment function? Remember, I'm suggesting that a good
deal can be learned about the system from such manipulations; I'm not
suggesting that the functions obtained can replace the simultaneous
differential equations of the loop. There's a difference between examining
the functions within individual components of the loop and the functioning
of the loop itself. Furthermore, if I have deduced the individual functions
within the system, I can construct a computer model that incorporates them
and empirically determine whether the resulting model behaves as the real
system does. Can't I?

    Sorry, I'm talking about a system whose structure remains to be
    determined, not one for which the system equations are known.

I was not talking about a control system whose structure is known; I was
talking about a system model that has been hypothesized.

O.K., I was not talking about either a system for which the system equations
are known _or_ an hypothesized system model.

What does an IV-DV analysis tell you? It tells you (in a linear case)
the slope and intercept of a function that apparently relates the
dependent variable to the independent variable, and also the scatter in
this relationship. Nothing else. It doesn't tell you _why_ this
relationship is observed. In ordinary causal systems, if you observe
such a relationship you can be confident that there is a series of
internal relationships which, put end-to-end, have the form that is
observed. What you are seeing is equivalent to the overall system
function.

In a closed-loop system, I believe the procedure called "opening the loop"
can give you this kind of information, if you can do it. Correct? I've
read studies in which this was done in insects, which in the cases
investigated showed no signs of reorganization during the procedure.

But in a control system, this is not true. What your IV-DV analysis is
showing you may be nothing more than the inverse of the environmental
feedback function and reflect almost nothing of the true forward
organism function. If you don't know what I mean by that I'll explain
further.

Ah, but you are speaking only of the crudest analysis, in which the
closed-loop nature of the system is either not recognized or its proper
analysis is not understood. It seems to me that manipulating variables is
the only way you are ever going to uncover the true relationships among all
those system variables, including those relating input to perception, etc.

A review of the forward organism function versus environmental feedback
function and its inverse would be helpful, although I believe I have a
general idea what these are.

This is the "behavioral illusion" we talk about. It has nothing
to do with mistaking a common-cause situation for a cause-effect
relationship.

Of course not! Did anyone say it did?

Cheers,

Bruce

[Bill Leach 951127.00:28 U.S. Eastern Time Zone]

[Bruce Abbott (951127.0005 EST)]

Right. In astronomy, for example, there are many "experiments" in which
nature has provided the required "manipulation" while "keeping" other
potential confounding variables constant. One only has to examine
enough examples to find ones in which certain variables are essentially
constant across the comparison while the variable of interest differs.

I think that you know this but you are not dealing with control systems
in such experiments.

Changes? I don't know what you have in mind. Most with which I am
familiar are defined in terms of actual values, whether qualitative or
quantitative. But yes, any variables so defined would have that problem.

The term "qualitative" seems to automatically preclude "actual values" in
my mind. Even quantitative variables need to be quantified (as much as
is possible) in terms that can be directly related to perception by the
subject (as opposed to perception by the experimenter).

This implies more than it should. You are suggesting that experimenters
vary the parameters until they get the results they want or expect to
see. But this "parameter setting" is characteristic of nearly every
experiment. If I want to use the curvature of track in a bubble chamber
to determine the charge of a particle, I must apply a strong enough
field to get measurable curvature in the tracks of charged particles
moving at the speeds they typically do. In a sense the experimenter is
setting the field strength to get "the kind of behavior the experimenter
wants to see." But this is very different from manipulating conditions
so as to produce the data the experimenter hopes to obtain, which seems
to be implied by your statement.

Maybe it implies more and maybe it doesn't... In your cited example, once
again, arbitrary fields are NOT applied but rather a known field strength
that theory predicts will produce the desired curvature. There is
no guessing there.

The variable "i" is the input variable _normally_ affected by o and d.
But for the purpose of the investigation, I might prevent this normal
feedback from occurring, perhaps for very brief periods (in order to
prevent reorganization from setting in), and instead control the value
of i myself. For example, in a tracking task, I might cause the cursor
to begin drifting left of target despite the participant's effort to
counteract the drift and record the mouse movement that results.

Your suggestion here would likely induce reorganization, not prevent it.

There is another way -- I can break the link between o and i. The mouse
no longer controls the cursor position; instead the experimenter
controls it. This would have to be done under conditions in which it
would be difficult for the participant to tell that control had actually
been lost, otherwise the system's organization would quickly change.

I think that you might want to look at some of Tom Bourbon's work in this
area. The conditions are that it IS very difficult to break the link and
not have the control system detect that this has happened.

Whoa yourself. If x1, x2, ..., xn are changing and y changes
simultaneously with them, is this cause-effect or just a case of one-way
relationship?
Input-output seems closer to the spirit here. And what better way is
there to determine the input-output relationships of the pif,
comparator, output function, or environment function? Remember, I'm
suggesting that a good deal can be learned about the system from such
manipulations; I'm not suggesting that the functions obtained can
replace the simultaneous differential equations of the loop. There's a
difference between examining the functions within individual components
of the loop and the functioning of the loop itself. Furthermore, if I
have deduced the individual functions within the system, I can construct
a computer model that incorporates them and empirically determine
whether the resulting model behaves as the real system does. Can't I?

Maybe you can and it might be interesting to see. I think a real serious
problem here is that you can not really extrapolate the tracking task
experiments to "generalized behavioural studies" without doing the same
level of experimental process analysis that went into the tracking tasks.
Further, in the tracking tasks (as you have yourself mentioned), you DO
know the reference and that piece of knowledge is crucial. This
relatively certain knowledge of the reference will rarely be the case for
other work.

In a closed-loop system, I believe the procedure called "opening the
loop" can give you this kind of information, if you can do it. Correct?
I've read studies in which this was done in insects, which in the cases
investigated showed no signs of reorganization during the procedure.

No, this is not correct as I understand you to mean. You can determine
specific parameters of loop components if you both "open the loop" and
substitute both input signals and loads for each component. An
additional problem however is that if the characteristics of the
individual parts of the components of the loop are not well defined, you
can still end up with lots of useless data. For example in open loop
conditions it is quite easy to saturate "circuit" components that are
never saturated in closed loop operation (even under conditions of
complete loss of control). Data taken under such conditions could easily
produce a set of transforms that are fundamentally flawed.

Ah, but you are speaking only of the crudest analysis, in which the
closed-loop nature of the system is either not recognized or its proper
analysis is not understood. It seems to me that manipulating variables
is the only way you are ever going to uncover the true relationships
among all those system variables, including those relating input to
perception, etc.

This is certainly true but it is the failure to recognize that the
phenomenon being studied is control that is the problem! The IV-DV
method alters an environmental parameter and then looks for a
corresponding change in some other parameter. The problem with this is
still and always has been that the "observed parameter", if it changes,
is incidental to control. In general, it isn't what is changing but what
should be changing and is not that is significant in behavioural studies.

-bill

[From Bill Powers (951127.0750 MST)]

Bruce Abbott (951127.0005 EST) --

No serious divergences in most of your post.

Aren't you forgetting something here? We can't manipulate the
reference signal in an organism. We have to deduce it by using a
model.

     I had tracking studies in mind. One can tell the participant to
     keep the cursor on the target; the target position is then the
     reference, if we assume that our participant is doing as requested.

The reference condition is "cursor on target." This reference condition
can be varied; for example, "Cursor one inch to right of target," and so
forth, or "cursor moving slowly left and right of target." It's possible
to let the handle position affect both the cursor and target (by
different amounts); then it's clear that what is controlled is a
_relationship_, not just the position of the cursor. Sometimes our
actions can affect more than one element of a relationship, as when you
bring your hands together so their forefinger tips just touch. Sometimes
we can affect only one element, as when a dog varies its own position in
order to control the relationship with the position of a cat. In all
cases, the reference relationship is specified inside the brain, not in
the environment.

Perceptions report only the current actual state of the world. They do
not define what _should_ be perceived. If the cursor is 1/4 inch left of
the target, the resulting action could bring the cursor closer to the
target or farther from it, depending on what the reference relationship
is. The reference relationship can be discovered by disturbing either the
cursor or the target. In either case, the handle movements will resist
departures of the relationship from some particular reference relationship.

     Whoa yourself. If x1,x2,xn are changing and y changes
     simultaneously with them, is this cause-effect or just a case of
     one-way relationship? Input-output seems closer to the spirit here.
     And what better way is there to determine the input-output
     relationships of the pif, comparator, output function, or
     environment function?

Yes, this is a one-way input-output relationship [y = f(x1,x2,... xn)].
An IV-DV approach can be a good one for determining the best-fit
representation of each function. Each function in a control loop is
treated as an input-output, or cause-effect, or IV-DV relationship with
one or more input variables and one output variable. That is, the inputs
cause the output, or the inputs are independent variables (when each is
varied while the others are held constant) and the output is a dependent
variable. These classifications are superfluous, however, when you
express the elements of a control loop directly as functions: p =
Fi(i1,i2 ...in), e = r - p, o = Fo(e), etc.
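
In program form, that treatment looks like this (a sketch; the
particular function forms are placeholders that an experimenter would
have to fit from data):

# Each loop element expressed as a plain input-output function.
def Fi(*inputs):             # perceptual input function
    return sum(inputs)       # placeholder form: unweighted sum

def comparator(r, p):        # comparator
    return r - p

def Fo(e, gain=10.0):        # output function; gain is illustrative
    return gain * e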

     Remember, I'm suggesting that a good deal can be learned about the
     system from such manipulations; I'm not suggesting that the
     functions obtained can replace the simultaneous differential
     equations of the loop. There's a difference between examining the
     functions within individual components of the loop and the
     functioning of the loop itself.

Yes, exactly. When you are able to measure each function independently,
what you have is a set of functions which _are_ the simultaneous
differential equations of the whole system. If you found, for example,
that the output function was a leaky integrator, you would express that
function as a (single) differential equation: do/dt = k1*e - k2*o. In
English, the rate of change of output with respect to time (do/dt)
equals a gain constant times the error signal (k1*e), minus a leakage
proportional to the magnitude of the output (k2*o). This equation would
be combined with the equations for the comparator, the input function,
and the environmental feedback function and solved for the behavior of
the variables through time.
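
As a concrete illustration (constants invented for the example, not
fitted to anything), the combined equations can be stepped through time
by simple Euler integration:

# Loop: p = i, e = r - p, do/dt = k1*e - k2*o, i = o + d.
dt, k1, k2 = 0.001, 100.0, 1.0
r, d = 1.0, 0.2
o = 0.0
for _ in range(10000):               # 10 seconds of simulated time
    i = o + d                        # environmental feedback function
    p = i                            # input function
    e = r - p                        # comparator
    o += dt * (k1*e - k2*o)          # leaky-integrator output function
# o settles near (k1/(k1 + k2))*(r - d), bringing p close to r.
print(p, e, o)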

     Furthermore, if I have deduced the individual functions within the
     system, I can construct a computer model that incorporates them and
     empirically determine whether the resulting model behaves as the
     real system does. Can't I?

Unfortunately, you can't _deduce_ the individual functions within a
living system. You have to _postulate_ the form of the perceptual input
function, the comparator, and the output function, because they are
hidden inside the organism. The only functions you can directly
determine are the environmental feedback function connecting the output
to the controlled variable and the disturbance function connecting the
disturbing variable to the controlled variable.

Once you have proposed a model for the interior of the organism, then as
you say you can empirically determine whether the model behaves as the
real system does. The model provides the missing equations that complete
the set that must be solved, analytically or by simulation.

======================================================================
     A review of the forward organism function versus environmental
     feedback function and its inverse would be helpful, although I
     believe I have a general idea what these are.

For a single isolated control system, there is in principle a general
way to deduce the _overall_ equation of the living system. Consider this
diagram:

             i --->Fs---> o             Control Sys
- - - - - - -|- - - - - - |- - - - - - - - - - - - -
d ----Fd---> i <---Fe---- o             Environment

Fs is the system function we want to determine. In the environment, we
can hold d constant, vary o while measuring i, and determine Fe; we can
hold o constant, vary d while measuring i, and determine Fd -- good
old IV-DV, but with dynamical equations when necessary. This much is
just physics and inspection of the environment.

Now we have the equation involving only observable and known variables
and functions:

(1) i = Fe(o) + Fd(d)

The control system equation is

(2) o = Fs(i), an input-output function, nothing more, where Fs is an
unknown function.

Substituting (2) into (1), we have

i = Fe(Fs(i)) + Fd(d)

Since Fd and Fe are known, we can vary d, observe the resulting behavior
of i, and find the form of Fs. That is, of course, considerably easier
to say than to do, particularly if Fe and Fd are nonlinear, dynamical,
or both, and even more so if they are not single-valued. However, in
principle this is the strategy for deducing the overall system equation
from the behavior of externally-observable variables. A great deal of
mathematical control theory is devoted to methods for deducing the form
of Fs that would be required for stable control of i (although control-
engineering notation is different -- i is the "output", for example).

An even more general approach would start with i = Fe(o,d), not even
assuming addition of the effects of disturbances and outputs on i. But
the above should be enough for the likes of us.
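
A numerical sketch of that strategy, assuming for illustration that Fe
and Fd have already been measured as simple gains (the logged data
below are invented):

# With Fe(o) = ke*o and Fd(d) = kd*d known from environmental IV-DV
# work, i = ke*o + kd*d can be inverted to recover the unobserved
# output o at each sample; pairing recovered o with observed i maps
# out the overall system function o = Fs(i).
ke, kd = 1.0, 1.0                    # measured environmental gains

def recover_o(i, d):
    return (i - kd*d) / ke           # invert i = ke*o + kd*d

samples = [(0.0, 0.010), (0.5, 0.012), (1.0, 0.015)]   # logged (d, i)
graph_of_Fs = [(i, recover_o(i, d)) for d, i in samples]
# 'graph_of_Fs' is the empirical (i, o) graph to which a form for Fs
# would then be fitted.
print(graph_of_Fs)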

Knowing the overall system function Fs does not tell us the details of
how this function is accomplished in wetware. A good part of PCT
consists of trying to account for Fs at the same time we account for
other known facts about the system: for example, the fact that sensors
exist, that sensory information is available to higher systems as well
as to the local control loop, and the fact that outputs involve muscle
control subsystems. Furthermore, we have to show how the same lower-
level components are used as parts of many different higher-level system
functions, with, presumably, the same components being involved rather
than new components being fashioned for every new control task. All
these extra factors influence how we will represent the details of the
internal mechanisms that create the overall system function Fs.

In many cases the simplest way to derive Fs is to guess and test, until
an overall function is found that fits the observations as well as
possible. This is where our canonical model comes in. The simplest model
that makes a simulation behave nearly like a real system involves an
algebraic input function and comparator, and an output function that
makes do/dt = k*e, equivalent to o = integral(k*e), without even any
leakage term. This model works very well when the environmental feedback
function and disturbing function are simple constants of
proportionality.

With this model, we have

p = i
e = r - p
o = o + g*e (g = output gain factor)
i = k1*o + k2*d

In our tracking experiments we arrange the values so that k1 = k2 = 1.
This leaves us only g to evaluate from the data. Note that the first
three equations can be collapsed into a single overall system function
(applied once per time step),

o = o + g*(r - i)

If g is large enough, the input variable i will be maintained very close
to the reference constant r as long as d varies only slowly (so
transient behavioral dynamics become unimportant). As a result, for all
values of d, we will have

i = r = k1*o + k2*d

Notice that we now have a relationship between o and d that depends only
on the observable variables and functions:

o = (r - k2*d)/k1

If we vary d and measure o, we will observe that o depends on d, and
that the form of the relationship is dictated completely by the
disturbance function and the environmental feedback function. The system
characteristics do not appear in this expression.
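
A sketch that steps the canonical model through time and checks this
prediction (the gain g is an arbitrary illustrative value):

import math

# Canonical model: i = k1*o + k2*d, with o = o + g*(r - i) applied once
# per time step. k1 and k2 are the environmental constants.
g, r = 0.2, 1.0
k1, k2 = 1.0, 1.0
o = 0.0
for step in range(5000):
    d = math.sin(step / 1000.0)      # slowly varying disturbance
    i = k1*o + k2*d
    o = o + g*(r - i)                # collapsed system function
    if step % 1000 == 999:
        print(o, (r - k2*d) / k1)    # the two columns agree
# The observed o-d relation is (r - k2*d)/k1: it is fixed entirely by
# the environmental constants and the reference, and looks the same
# for any sufficiently large g.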

This is a basic feature of negative feedback control loops. It was the
original reason for developing the analysis of closed-loop systems in
electronics. In August, 1927, on the Lackawanna Ferry on the way to work
at Bell Labs, H. S. Black suddenly grasped this principle and jotted it
down in a blank space in his New York Times. He saw that by feeding back
some of the output of an amplifier to its input in the negative feedback
sense, the overall input-output gain of the amplifier could be made
almost completely insensitive to changes in the characteristics of the
vacuum tubes in the forward part of the circuit. Since he was designing
telephone amplifiers that would be buried in undersea cables for decades
at a time, stability of amplification was very important because all
vacuum tubes would age and lose gain. Even more important, Black found
that by starting with far more amplification than needed and "throwing
away" the excess gain by using negative feedback ("degeneration"), the
system also would show a great increase in bandwidth over an un-fed-back
circuit of equal gain. So the negative feedback loop was not only much
more stable than the open-loop system, but was very much faster -- 10
times as fast, or more.

This same feature of control systems also creates what we call the
"behavioral illusion." If an experimenter varies some environmental
variable near an organism, that organism may show a typical reaction to
the change. Without a detailed physical analysis of the situation, the
natural assumption would be that the stimulus-event causes a chain of
effects which pass into the organism via its senses, cause activities in
the nervous system, and eventually produce muscle tensions which create
the observable response. By treating the stimulus variable as an
independent variable and the response as the dependent variable, the
experimenter could determine what seems to be the overall response
function of the organism.

However, if the organism in question happens to behave as a competent
control system, this apparent causal chain is illusory and the obtained
system response function does not actually describe the organism. As we
can see from the above mathematical expressions, varying the
environmental variable is equivalent to varying d; measuring the
response is equivalent to measuring o. But between d and o lies a
controlled variable, i, which is held by the feedback action at a value
r which is determined inside the behaving system. The result is (for the
system assumed above) that

o = (r - k2*d)/k1

In other words, the observed dependence of o on d is of a form
determined by environmental constants, and does not reflect the actual
input-output function of the behaving system, even when r happens to be
constant.

This is the behavioral illusion; the measured input-output function as
it is commonly conceived is not the organism function, but is only a way
of expressing physical relationships in the environment.

If, of course, it were realized that i is the critical input variable,
the proper input-output system function could be determined. However,
standard methodology makes it unlikely that i would be discovered as an
important variable. A good control system will maintain i at essentially
the same value as r, determined independently inside the behaving
system. It will therefore show a low correlation with both d and o --
with "stimulus" and "response." The standard methodology looks for
dependent variables that have a _high_ correlation with the independent
variable. As we can see, this high correlation is to be found between
disturbances and outputs, not between controlled variables and outputs.
Thus a standard ANOVA would reject i as a significant variable, and come
up with the relationship between d and o as the primary one.
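
The point is easy to check in simulation (a sketch; all numbers are
illustrative): run the canonical loop against a slowly drifting random
disturbance and compute the correlations an experimenter would see.

import math, random, statistics

def corr(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx)*(y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx)**2 for x in xs))
    sy = math.sqrt(sum((y - my)**2 for y in ys))
    return cov / (sx * sy)

g, r, o, d = 0.2, 1.0, 0.0, 0.0
ds, os_, is_ = [], [], []
for _ in range(20000):
    d += 0.02*(random.gauss(0.0, 1.0) - 0.1*d)   # drifting disturbance
    i = o + d                                    # controlled variable
    o = o + g*(r - i)                            # output
    ds.append(d); os_.append(o); is_.append(i)

print(corr(ds, os_))    # near -1: "response" tracks "stimulus"
print(corr(ds, is_))    # low: the controlled variable looks inert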

This, I think, accounts for why control theory was not discovered by
psychologists.
-----------------------------------------------------------------------
Best,

Bill P.