# It is the cause. It is the cause (was Re: Positive Feedback etc)

[From Rick Marken (2011.06.30.2030 PDT)]

Bill Powers (2011.06.30.0630 mdt)–

BP: Behind the idea of causation is a conviction that you can isolate just one significant cause, while the other variables are unimportant and have only minor effects.

RM: I don’t think that’s really true. I think the idea behind causation (as it is used in research) is that you can see whether a variable (or variables – most research involves manipulation of several variables simultaneously) has any effect at all on another variable and, if so, what the nature of that effect (the nature of the functional relationship between variables) is. Importance is dealt with in terms of the proportion of variance in the DV that is accounted for by the IV. Of course, in conventional research this approach to measuring importance assumes a causal model (the General Linear Model of statistics). We know, of course, that if you are dealing with a control system, the appropriate way to evaluate “importance” is in terms of how well a closed-loop model, taking the IV(s) as a disturbance to a CV, accounts for the variance in the DV.

BP: It’s a naive wish for simplicity that creates the concept of “the cause of B.” In general there is no one cause of B. The only way to make it seem that A1 causes B is to keep all the other A’s from showing their natural variations. This is a very handy way to stack the deck to help a weak theory. You simply hold all variables constant but the one your theory says is the important one. It then is guaranteed to be the only important one.

RM: Then how do you go about doing research? Just observe naturally occurring variations in variables and relationships between them? No experiments?

BP: In PCT, or system analysis in general, we don’t have to keep all the system variables but one constant. We have to let qi, p, e, and qo vary in order to have a working control system.

RM: Of course. And qi and qo are the only observable variables in that list; and we don’t just let qi vary naturally; we vary it (or try to vary it) by manipulating an independent variable, the disturbance – the other variable that we can observe; kind of an important one.

BP: If we can measure d and r, we don’t have to keep them constant, either.

RM: We can’t keep d constant and do an experiment; d must vary or there is no experiment. And in an experiment d is manipulated by the experimenter. In control system studies r can’t be held constant or manipulated; it’s the “wild card” in research on control – something people who study physical systems don’t have to worry about. While we can measure r (as you have shown), we can’t manipulate (vary) it.

BP: We simply record the values of all the variables we can find, and show that all the dependent variables can be calculated from the values of d and r, the independent variables. If there are unpredicted variations, they must arise because of other independent variables we have failed to notice or that are too numerous to keep track of.

RM: You seem to be saying that experiments can be done without experimental control. But I don’t think that’s true. When we manipulate d (the IV in our experiments) we are also, at least implicitly, holding other variables constant, which happens so naturally in our tracking tasks that it goes unnoticed. But, for example, you would implicitly control the type of disturbance waveform you use in a study of the effect of disturbance frequency on control performance. You wouldn’t use a sine-wave disturbance for the low frequencies and noise waveforms for the high frequencies, would you? You are such a good, natural researcher that you just automatically design your experiments so that there is as little confounding as possible.

BP: The less unpredicted variation there is in the dependent variables, the more sure we can be that we have accounted for all the main variables of importance.

RM: That’s exactly how the “importance” of a presumed “causal” variable is evaluated in conventional research. They measure predicted rather than unpredicted variation but the idea is the same. The difference between conventional and control research is that in conventional research the variation is predicted using an open loop model while in control research the variation is predicted using a closed loop model. Indeed, this was the subject of my last paper (still being reviewed) and will be the topic of my talk if I give one at the meeting. What I found is that a closed loop model accounts for more of the variance in the dependent variable than does an open loop model in an experimental situation where it appears that there is an open-loop connection between d and qo.
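[A minimal simulation can make this point concrete. This sketch is not from Rick’s paper; the gains, disturbance waveform, and the `control_model` helper are illustrative assumptions. It generates “subject” data from a simple integrating control loop plus a little output noise, then compares how well an open-loop linear fit of output from the disturbance and a closed-loop model run on the same disturbance account for the data.]

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, k = 4000, 0.01, 1.0                    # slow loop, so output lags the disturbance
d = np.convolve(rng.normal(size=n), np.ones(100) / 100, mode="same")  # smoothed disturbance

def control_model(d, k, dt, noise=None):
    """Integrating control loop: p = qo + d (r = 0), qo changes at rate k*(r - p)."""
    qo, out = 0.0, []
    for i, dv in enumerate(d):
        p = qo + dv                           # perception of the controlled quantity
        qo += k * (0.0 - p) * dt              # integrating output function
        if noise is not None:
            qo += noise[i]                    # small "motor noise" for the simulated subject
        out.append(qo)
    return np.array(out)

# simulated subject: control loop plus output noise
subject = control_model(d, k, dt, noise=0.002 * rng.normal(size=n))

# open-loop account: best linear prediction of the DV from the IV
a, b = np.polyfit(d, subject, 1)
r_open = np.corrcoef(subject, a * d + b)[0, 1]

# closed-loop account: noise-free control model driven by the same disturbance
r_closed = np.corrcoef(subject, control_model(d, k, dt))[0, 1]
```

[Because the closed-loop model reproduces the loop’s lag dynamics while the static linear fit cannot, `r_closed` comes out higher than `r_open` – the variance-accounted-for comparison Rick describes.]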

BP: We can use the measured values of the variables to deduce the forms, or at least plausible forms, of the various functions connecting the variables. As long as the equations containing those functions and variables continue to predict the future values of the dependent variables, we can be satisfied that we understand the system well enough for now, without having to designate any one cause or effect.

RM: And that’s exactly what I have done with the “object interception” data that I’ve been analyzing. In those experiments the disturbance is the path in 3-space of the object to be intercepted. There is no obvious control of other variables but there was implicit control, as in the tracking experiments. Most obvious is that all the trials were conducted in the same space using the same object to be intercepted. I used the observed values of the paths of the objects in 3-space in a control model to predict (with startling accuracy sometimes) the position of the person trying to intercept the object.

BP: So I don’t see any use for the term “cause” in a scientific discussion of behavior, or anything else for that matter. It’s an ancient concept which is no longer needed. Informal usages, of course, will remain, and we will have to deal with others who still think the term is meaningful, but when we want precision we can talk about functional relationships among multiple variables. We no longer have to hold all else equal from the Big Bang to the present.

RM: I agree that “cause” is a problematic concept. But it’s at the heart of conventional psychological research and I don’t think I’d be able to communicate with my colleagues too well if I dropped it. I actually find that talking about the “causal model” of behavior as the basis of psychological research provides a nice segue into a discussion of the closed loop model. It seemed to resonate quite well with the students in my Research Methods class.

Best regards

Rick

PS. I’ve been moved to tears by your discussion with Adam. Thanks.

···

Richard S. Marken PhD
rsmarken@gmail.com

[From Bill Powers (2011.07.01.0805 MDT)]

Rick Marken (2011.06.30.2030 PDT)

BP earlier: You simply hold all
variables constant but the one your theory says is the important one. It
then is guaranteed to be the only important one.

RM: Then how do you go about doing research? Just observe naturally
occurring variations in variables and relationships between them? No
experiments?

BP: You vary independent variables and observe dependent variables. You
also examine the system, piece by piece, and measure (or propose) the
input-output relationships of each piece (like input functions and output
functions). Then you write the simultaneous equations that describe the
whole system and determine the coefficients that will make the calculated
behavior match the observed behavior.

Some degree of “holding constant” is still necessary – that’s
how you get some independent variables to vary. But you don’t have to
hold them constant if you can measure their behavior, and you can vary
them in known patterns instead of holding them constant.

BP earlier: In PCT, or system analysis in general, we don’t have to
keep all the system variables but one constant. We have to let qi, p, e,
and qo vary in order to have a working control system.

RM: Of course. And qi and qo are the only observable variables in that
list; and we don’t just let qi vary naturally; we vary it (or try to vary
it) by manipulating an independent variable, the disturbance – the other
variable that we can observe; kind of an important one.

Yes, we try to vary qi. We vary an independent variable, d,
which would cause qi to vary if there were no opposition in the form of
another variable varying oppositely. In fact we don’t expect qi to vary
much, nor do we expect the variations to reflect variations in the
manipulated variable with much fidelity (correlations around 0.1 or
0.2).

BP earlier: If we can measure d and r, we don’t have to keep them
constant, either.

RM: We can’t keep d constant and
do an experiment; d must vary or there is no
experiment.

BP: We can ask the subject (that is, higher levels in the subject) to
keep the reference signal constant while we vary the disturbance, or we
can keep the disturbance constant and ask the subject to vary the
reference signal. The point, however, is not so much the constancy of an
independent variable, but the fact that it is independent of the system
variables in the model we’re testing. When we say we “vary” an
independent variable, we mean we do so in an arbitrary way, not in any
consistent relationship with a system variable. Holding a variable
constant is one way of doing that.

BP earlier: We simply record the values of all the variables we can
find, and show that all the dependent variables can be calculated from
the values of d and r, the independent variables. If there are
unpredicted variations, they must arise because of other independent
variables we have failed to notice or that are too numerous to keep track
of.

RM: You seem to be saying that experiments can be done without
experimental control. But I don’t think that’s true. When we manipulate d
(the IV in our experiments) we are also, at least implicitly, holding
other variables constant, which happens so naturally in our tracking
tasks that it goes unnoticed. But, for example, you would implicitly
control the type of disturbance waveform you use in a study of the effect
of disturbance frequency on control performance. You wouldn’t use a
sine-wave disturbance for the low frequencies and noise waveforms for the
high frequencies, would you? You are such a good, natural
researcher that you just automatically design your experiments so that
there is as little confounding as possible.

BP: Yes, we have to isolate the system we’re trying to model, as best we
can. The predictions we end up making show us how well we succeeded. If
there’s too much random variation in the results, we obviously missed
something and have to track it down.

This is largely a matter of degree of isolation. Using system analysis as
we do in PCT, we can encompass many more variables in an experiment that
we don’t have to hold constant, but only measure or calculate. And some
systems are quite naturally isolated: the boundary between an organism
and its environment is generally quite clear, and we can generally
eliminate important stray stimuli and forces in the environment. As I
say, success in doing this is revealed by the results of the experiment.
If we get correlations much lower than 0.9, we haven’t isolated the
system well enough. If we’re in the upper 0.90s, that’s good enough for
practical work.

BP earlier: The less unpredicted variation there is in the dependent
variables, the more sure we can be that we have accounted for all the
main variables of importance.

RM: That’s exactly how the “importance” of a presumed
“causal” variable is evaluated in conventional research. They
measure predicted rather than unpredicted variation but the idea is the
same. The difference between conventional and control research is that in
conventional research the variation is predicted using an open loop model
while in control research the variation is predicted using a closed loop
model.

BP: A bigger difference is in how many variables have to be held
constant. In analysis using simultaneous equations we can let many
variables change at the same time without confounding the result, and we
don’t have to hold variables constant if we can measure their natural
variations. In the conventional approach, most often only one variable is
supposed to vary, though multivariate analysis is possible – you know
more about that than I do.

But as you say in your articles, the biggest difference of all is in the
model that’s used. The standard linear model is the main source of just
about all the failures of prediction in conventional psychology. It’s so
bad that you have to use statistics to see what the result of an
experiment was. In PCT we just fit a model to the data and measure the
RMS prediction error, like they do in physics.

BP earlier: So I don’t see any use for the term “cause” in
a scientific discussion of behavior, or anything else for that matter.
It’s an ancient concept which is no longer needed. Informal usages, of
course, will remain, and we will have to deal with others who still think
the term is meaningful, but when we want precision we can talk about
functional relationships among multiple variables. We no longer have to
hold all else equal from the Big Bang to the present.

RM: I agree that “cause” is a problematic concept. But it’s at
the heart of conventional psychological research and I don’t think I’d be
able to communicate with my colleagues too well if I dropped it. I
actually find that talking about the “causal model” of behavior
as the basis of psychological research provides a nice segue into a
discussion of the closed loop model. It seemed to resonate quite well
with the students in my Research Methods class.

BP: The term “causal” seems to be used interchangeably with
“deterministic” or “lawful” or
“systematic.” It just means that the variables are related in a
systematic way instead of being randomly variable. In fact, by
distinguishing only between causal and non-causal, one changes the
quantitative question into a much simpler question: not “what is the
relationship between A and B?” but “Is there any relationship
at all between A and B?” This allows a faint hint of a relationship
to be granted the same status as a clearly measurable quantitative
relationship of the kind we normally get in PCT.

But you have to start from where we are, and I guess that means causation
still has to be spoken aloud. That’s not so bad – we all know what is
meant. But some day we have to come out and say it: a correlation of 0.8
deserves all the names you can call it. GIGO.

At one time I had a meter for measuring static interference with
electromagnetic communications, a “noise meter.” Noise
measurements were repeatable within a few percent. Yet the signal was
100% noise. Hmm.

Best,

Bill P.

[From Rick Marken (2011.07.01.0940)]

Bill Powers (2011.07.01.0805 MDT)

Rick Marken (2011.06.30.2030 PDT)

RM: Then how do you go about doing research? Just observe naturally
occurring variations in variables and relationships between them? No
experiments?

BP: You vary independent variables and observe dependent variables…

RM: So then the macroeconomic “experiments” where tax rates are varied as an independent variable and growth is measured as the dependent variable produce pretty good data. Just as I thought.

BP: Some degree of “holding constant” is still necessary – that’s
how you get some independent variables to vary.

RM: I don’t understand this.

BP: We can ask the subject (that is, higher levels in the subject) to
keep the reference signal constant while we vary the disturbance, or we
can keep the disturbance constant and ask the subject to vary the
reference signal.

RM: Not if the subject is a gerbil… or a (nominally human) Republican, for that matter (they don’t understand or want to have anything to do with cooperation, except to say that people are not cooperating when they don’t do exactly what they want them to do).

BP: The point, however, is not so much the constancy of an
independent variable, but the fact that it is independent of the system
variables in the model we’re testing. When we say we “vary” an
independent variable, we mean we do so in an arbitrary way, not in any
consistent relationship with a system variable. Holding a variable
constant is one way of doing that.

RM: Now I understand. And agree.

BP: But you have to start from where we are, and I guess that means causation
still has to be spoken aloud. That’s not so bad – we all know what is
meant. But some day we have to come out and say it: a correlation of 0.8
deserves all the names you can call it. GIGO.

RM: I think that depends on what the correlation is between. I just did a difficult pursuit tracking run and the correlation between independent and dependent variables (target and mouse movements, respectively) was .32. That would rate as GIGO by your criterion. But the correlation between the output of a control model (with the target variations as the disturbance to a controlled variable – the distance between cursor and target) and the actual output (dependent variable) was .996. I think we should evaluate models (if we are going to look at correlations) in terms of the size of the model-data correlations, not the observed IV-DV correlations.

BP: At one time I had a meter for measuring static interference with
electromagnetic communications, a “noise meter.” Noise
measurements were repeatable within a few percent. Yet the signal was
100% noise. Hmm.

RM: I don’t understand this.

Best

Rick


[Martin Taylor 2011.07.01.11.27]

[From Bill Powers (2011.07.01.0805 MDT)]

BP: The term "causal" seems to be used interchangeably with "deterministic" or "lawful" or "systematic." It just means that the variables are related in a systematic way instead of being randomly variable.

We speak rather different languages, apparently.

To me, "causal" implies the existence of a mechanistic influence, the cause preceding and leading directly to the effect, whether the effect be large and definitive or small and barely noticeable as compared with the effects due to other causes of variation. It has nothing whatever to do with how systematically one variable is related to one other variable.

"Deterministic" means something more along the lines of "can be predicted from". All physical systems are deterministic, in that if you know the complete web of causal relations and the precursor states of the relevant variables, you can (apart from quantum uncertainty) determine the future state of something from the past of the other variables. "Deterministic" is related to "causal", but includes also situations in which there is no causal link between the variables that can be predicted from one another.

"Determined by" means that you have discovered enough statistical or mechanical relationships to be able to state the value of some dependent variable within a specified precision, given the values of other variables.

"Systematic" means sufficiently regular that you might often treat it as happening all the time, but you wouldn't bet your life on it.

"Lawful" (in statistical terms) is similar to, but a bit stronger than systematic. "Lawful" in the sense of natural law means that if X happens, then Y must happen, though it doesn't mean that X causes Y.

All the above three deal only with observed statistical relationships, and have nothing whatever to do with causality, in the language I speak.

In circuit terms, "causal" means that the value of an output is affected only by the values of inputs now and at earlier times. The contrast is "acausal", which describes a (non-physical) circuit in which future values of the input affect the current value of the output. All physical systems, including all control systems, are causal.

BP: In fact, by distinguishing only between causal and non-causal, one changes the quantitative question into a much simpler question: not "what is the relationship between A and B?" but "Is there any relationship at all between A and B?"

The contrast between causal and non-causal is quite distinct from the contrast between causal and acausal.

In my language, the distinction between causal and non-causal has nothing to do with whether there is a statistical relationship between A and B. There is a strong relationship over time between whether my trees are leafy and whether my grass is growing strongly, but there is no causal connection between them. I could cover my lawn with salt and prevent the grass from growing without in the least affecting how leafy the trees might be (until the salt leached into their roots). I could carefully strip the tree of leaves without affecting how well the grass grows. There is no causal relation between leafiness and grassiness. However, there is a strong causal relation between the angle of the Earth's axis toward the sun and both grassiness and leafiness, and that is what determines the strong but non-causal statistical relationship between these two variables.

Illustrating the contrast in the opposite sense, there is a perfect causal connection between the value of the disturbance in a control system and the value of the perceptual signal, but a very weak statistical relation between them. If you measured simply the values of A (disturbance) and B (perception) you would have no clue that the value of the perception is precisely and completely determined by the joint history of the values of the disturbance and the reference. And yet, in a noise-free control loop, the disturbance and the reference are the ONLY causes of the perceptual value.
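[Martin’s claim is easy to check in a toy simulation. This is a sketch, not from his post; the gains, the smoothed-noise disturbance, and the `run_loop` helper are illustrative assumptions. In a high-gain loop the perceptual signal is fully and reproducibly determined by the disturbance and reference histories (replaying the same inputs reproduces it exactly), yet it correlates only weakly with the disturbance, while the output correlates strongly and negatively with it.]

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
d = np.convolve(rng.normal(size=n), np.ones(200) / 200, mode="same")  # slow disturbance
r, k, dt = 0.0, 50.0, 0.01            # constant reference, high-gain integrating loop

def run_loop(d):
    qo, ps, qos = 0.0, [], []
    for dv in d:
        p = qo + dv                   # perceptual signal = output effect + disturbance
        qo += k * (r - p) * dt        # integrating output function
        ps.append(p); qos.append(qo)
    return np.array(ps), np.array(qos)

p1, qo1 = run_loop(d)
p2, _ = run_loop(d)                   # same d and r histories -> identical perception

r_dp = np.corrcoef(d, p1)[0, 1]       # weak: good control keeps p near r
r_dqo = np.corrcoef(d, qo1)[0, 1]     # strong and negative: output opposes d
```

[So measuring only A (disturbance) and B (perception) would indeed give "no clue" about the perfect causal link, even though rerunning the loop on the same input histories regenerates the perception exactly.]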

Let's not confuse "causality" with statistical measures. The word "cause" is used to cover enough different degrees of influence, without inserting it into a completely different domain.

Who am I to try to define the Colorado dialect of English? All we really need to define is how possibly problematic terms should be defined when used on CSGnet. I suspect it would be generally useful to consider how widely the terms are used in one way or the other in the English-speaking world, especially as that world is exposed to non-English speakers. In respect of "cause", most dictionaries that I have looked at seem to refer to direct influence more than to statistical relationship, so I propose that we do restrict its use as I describe above.

Martin

[From Rick Marken (2011.07.01.1210)]

Martin Taylor (2011.07.01.11.27)–

Illustrating the contrast in the opposite sense, there is a perfect causal connection between the value of the disturbance in a control system and the value of the perceptual signal, but a very weak statistical relation between them. If you measured simply the values of A (disturbance) and B (perception) you would have no clue that the value of the perception is precisely and completely determined by the joint history of the values of the disturbance and the reference. And yet, in a noise-free control loop, the disturbance and the reference are the ONLY causes of the perceptual value.

Don’t forget the effect of output on perception. The causes of the perceptual value are disturbance(s) and output (the reference specifies but does not really cause the perceptual value, in my view anyway). Since the output is a function of the perceptual signal (o = f(r − p)), the perceptual signal must be counted as one of the causes of the perceptual signal itself, which is why causal analysis of closed loop systems gets one going around in circles, so to speak.

Best

Rick


[From Bill Powers (2011.07.01.1325 MDT)]

Rick Marken (2011.07.01.0940) –

BP earlier: You vary independent variables and observe dependent
variables…

RM: Then the macroeconomic “experiments” where tax rates are
varied as an independent variable and growth is measured as the dependent
variable produce pretty good data. Just as I thought.

BP: My point was that there could be a control system involved which is
varying tax rates as a means of controlling growth. This would mean that
tax rates are not an independent variable, but are the output variable of
a control system to which growth rates are an input. In that case you
would be studying an illusion in which tax rates are apparently causing
growth rates, whereas it is really unobserved disturbances of growth rate
that are resulting in changes in the tax rate. If you could discover what
those disturbances are, you might find a very high negative correlation
with changes in tax rate.

BP earlier: Some degree of “holding constant” is still
necessary – that’s how you get some independent variables to vary.

RM: I don’t understand this.

BP: What we call independent variables are variables that might change due to
other factors, except that we have grabbed hold of them and are making
them stay constant or change in arbitrary patterns.

BP: We can ask the subject (that is, higher levels in the subject) to
keep the reference signal constant while we vary the disturbance, or we
can keep the disturbance constant and ask the subject to vary the
reference signal.

RM: Not if the subject is a gerbil…

BP: Garbage Out omitted. With a gerbil, you’d have to find out what other
variables in the gerbil are affected by changes in the controlled
variable, and disturb whatever is found in a way that can be counteracted
by varying the reference level for the controlled variable. With language
it would be easier!

BP earlier: The point, however, is not so much the constancy of an
independent variable, but the fact that it is independent of the system
variables in the model we’re testing. When we say we “vary” an
independent variable, we mean we do so in an arbitrary way, not in any
consistent relationship with a system variable. Holding a variable
constant is one way of doing that.

RM: Now I understand. And agree.

BP: But you have to start from where we are, and I guess that means
causation still has to be spoken aloud. That’s not so bad – we all know
what is meant. But some day we have to come out and say it: a correlation
of 0.8 deserves all the names you can call it. GIGO.

RM: I think that depends on what the correlation is between. I just did a
difficult pursuit tracking run and the correlation between independent
and dependent variables (target and mouse movements, respectively)
was .32. That would rate as GIGO by your criterion. But the correlation
between the output of a control model (with the target variations as the
disturbance to a controlled variable – the distance between cursor and
target) and the actual output (dependent variable) was .996. I think we
should evaluate models (if we are going to look at correlations) in terms
of the size of the model-data correlations, not the observed IV-DV
correlations.

BP: I agree with you. The only reason I ever started introducing
correlations was to show conventional psychologists how high they were in
PCT experiments – and how unnecessary.

I was a bit confused by your result, but then realized that if the cursor
lagged the target, the correlation of cursor with target could be very
low (like between sine and cosine) but not because of randomness. Yet if
the model also lagged in the same way, its output could correlate very
highly with the real output. Is that what happened? If you plot real
cursor against target, do you get a kind of oval pattern? That would
indicate out-of-phase variables rather than noise as a cause of the low
correlation.
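[Bill’s lag explanation can be verified in a few lines. The 90-degree lag here is just the illustrative worst case, not a claim about Rick’s actual run: two identical waveforms a quarter-cycle apart correlate near zero, and more generally the correlation of two equal sinusoids is the cosine of their phase difference.]

```python
import numpy as np

t = np.linspace(0, 4 * np.pi, 4000, endpoint=False)   # exactly two full cycles
target = np.sin(t)

# 90-degree lag: the correlation collapses to ~0 even though the "cursor"
# is a perfect (but delayed) copy of the target
r_quarter = np.corrcoef(target, np.sin(t - np.pi / 2))[0, 1]

# in general corr = cos(phase lag); e.g. a 60-degree lag gives ~0.5
r_sixty = np.corrcoef(target, np.sin(t - np.pi / 3))[0, 1]

# plotting target against the lagged cursor traces the oval (Lissajous)
# pattern Bill describes, rather than the shapeless cloud noise would give
```

[A model with the same lag dynamics is in phase with the real output, which is how the model-output correlation can be near 1 while the target-cursor correlation is low.]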

BP earlier: At one time I had a meter for measuring static
interference with electromagnetic communications, a “noise
meter.” Noise measurements were repeatable within a few percent. Yet
the signal was 100% noise. Hmm.

RM: I don’t understand this.

BP: The noise meter measured the intensity of background electromagnetic
radiation, the kind that produces the hiss of an FM radio tuned between
stations. The averaged reading on this meter was reproducible with a few
percent of variation. So what does the error of measurement of a random
variable mean?
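[The noise-meter puzzle can be reproduced directly. The sample sizes and the rectified-Gaussian noise source are illustrative assumptions, not Bill’s instrument: average the rectified output of a pure noise source repeatedly, and the averaged “readings” agree within a few percent even though every underlying sample is 100% noise.]

```python
import numpy as np

rng = np.random.default_rng(1)

# twenty independent "meter readings", each the average rectified value
# of 10,000 samples of pure Gaussian noise
readings = [np.mean(np.abs(rng.normal(size=10_000))) for _ in range(20)]

# relative spread of the readings: a few percent, like Bill's meter
spread = (max(readings) - min(readings)) / np.mean(readings)

# the readings cluster tightly around sqrt(2/pi) ~ 0.798, the expected value
# of |N(0,1)| -- the *statistic* is repeatable even though the signal is random
```

[The stable quantity is a statistic of the noise distribution (its mean rectified amplitude), which is why a measurement of pure randomness can itself have a small “error of measurement.”]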

Best,

Bill P.

[Martin Taylor 2011.07.01.16.45]

[From Rick Marken (2011.07.01.1210)]

Martin Taylor (2011.07.01.11.27)–

MT earlier: Illustrating the contrast in the opposite sense, there is a perfect causal connection between the value of the disturbance in a control system and the value of the perceptual signal, but a very weak statistical relation between them. If you measured simply the values of A (disturbance) and B (perception) you would have no clue that the value of the perception is precisely and completely determined by the joint history of the values of the disturbance and the reference. And yet, in a noise-free control loop, the disturbance and the reference are the ONLY causes of the perceptual value.

RM: Don’t forget the effect of output on perception. The causes of the perceptual value are disturbance(s) and output (the reference specifies but does not really cause the perceptual value, in my view anyway).

MT: I’m not forgetting the effect of output on perception at all, nor am I forgetting the effect of error on output.

In a control loop there are only two independently variable causes, the reference and the disturbance. The output certainly is a direct cause of the perception, but it is not a cause of the reference, even though you can deduce the values of an intermediate variable that maps one-to-one onto the reference value if you know the history of the output and disturbance values. That is another illustration of the distinction between “causal” and “determined”. A simpler one is that the error value is causal to the output value, but the reverse is not true, even though each is determined by the other if you know the output function.

Martin

[From Bill Powers (2011.07.02.0820 MDT)]

Martin Taylor 2011.07.01.11.27 –

MT: To me, “causal”
implies the existence of a mechanistic influence, the cause preceding and
leading directly to the effect, whether the effect be large and
definitive or small and barely noticeable as compared with the effects
due to other causes of variation. It has nothing whatever to do with how
systematically one variable is related to one other
variable.

BP: I don’t understand how that can be true if the two variables are to
be called a cause and an effect. How do you find out whether there is a
mechanistic influence except by observing systematic (repeatable,
correlated) effects of proposed causes? You push down hard enough on one
end of a lever and if the lever doesn’t break, the other end goes up,
always. Someone may lift the other end, but it never happens that one end
goes down and the other end doesn’t go up (a logical implication). So we
say when one end goes down, that causes the other end to go up.

MT: “Deterministic”
means something more along the lines of “can be predicted
from”. All physical systems are deterministic, in that if you know
the complete web of causal relations and the precursor states of the
relevant variables, you can (apart from quantum uncertainty) determine
the future state of something from the past of the other variables.
“Deterministic” is related to “causal”, but includes
also situations in which there is no causal link between the variables
that can be predicted from one another.

BP: “Deterministic” is a statement of faith, not an
observation. It says that even though we don’t know what the linkages
are, the effect must be foreordained by the preceding events. The
statement isn’t a deduction, a logical conclusion from anything else.
It’s a statement of faith in an orderly universe where, as Aquinas (and
Aristotle) said, “Nothing moves of itself but the Prime Mover
Unmoved.” If your purpose is to prove the existence of God, that is

MT: “Determined by”
means that you have discovered enough statistical or mechanical
relationships to be able to state the value of some dependent variable
within a specified precision, given the values of other
variables.

BP: It says more than that to me: it says there are no
as-yet-undiscovered influences that can also produce the same effect. It
says that the proposed independent variable is not only sufficient to
explain the behavior of the dependent variable, but necessary. If there
are other influences on B beside A, we can’t say that A determines B,
because that will appear to be the case only if all other influences are
held constant.

As to systematic and lawful, you’re entitled to your own vocabulary, of
course, but it’s not everyone’s. For me, systematic means not random, and
lawful means obeying specific laws. Nothing deeper than that.

MT: All the above three deal only
with observed statistical relationships, and have nothing whatever to do
with causality, in the language I speak.

BP: OK, so recognizing that your listeners may think those terms do have
something to do with causality, you will of course add the required
modifying phrases for their benefit when you try to communicate with them
using those terms.

MT: In circuit terms,
“causal” means that the value of an output is affected only by
the values of inputs now and at earlier times. The contrast is
“acausal”, which describes a (non-physical) circuit in which
future values of the input affect the current value of the output. All
physical systems, including all control systems, are
causal.

BP: In my lexicon, “acausal” means “not related to any
known other variable,” or sometimes “random.” I have also
seen it used to mean “illusory”, as when two variables seem to
be directly related, but are actually affected by some common cause, so
there is no real effect of either on the other. Effects of the future
don’t enter into it, since they simply don’t happen.

BP earlier: In fact, by
distinguishing only between causal and non-causal, one changes the
quantitative question into a much simpler question: not “what is the
relationship between A and B?” but “Is there any relationship
at all between A and B?”

MT: The contrast between causal and non-causal is quite distinct from the
contrast between causal and acausal.

BP: Really? It is? Or is this just a proposal you’re making?

MT: In my language, the
distinction between causal and non-causal has nothing to do with whether
there is a statistical relationship between A and B. There is a strong
relationship over time between whether my trees are leafy and whether my
grass is growing strongly, but there is no causal connection between
them.

BP: Ah. The distinction I make is between a real relationship and one
that is only apparent, an illusion. By real I mean that there is actually
some mechanism involved connecting cause to apparent effect. Statistical
analysis reveals only apparent causes, which may be causal illusions or
actual causes.

MT: Illustrating the contrast in the
opposite sense, there is a perfect causal connection between the value of
the disturbance in a control system and the value of the perceptual
signal, but a very weak statistical relation between them.

BP: Sorry, can’t agree with that. The disturbance is not the only
variable affecting the perceptual signal; the system’s action also
affects it. A “perfect causal connection” means to me a
mechanism that is fully known and that (normally) rules out any other
causes of the same effect. It evidently doesn’t mean that to you. As I
use the words, if A causes B, and B occurs, then you know that A must
have occurred. If other variables can also cause B, then there are no

MT: If you measured simply the
values of A (disturbance) and B (perception) you would have no clue that
the value of the perception is precisely and completely determined by the
joint history of the values of the disturbance and the reference. And
yet, in a noise-free control loop, the disturbance and the reference are
the ONLY causes of the perceptual value.

BP: You’re omitting the parameters of the functions involved in the loop,
which are also variables. If those parameters change, the perceptual
value will change without either d or r changing.
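The disputed claim (a weak statistical relation between disturbance and perception despite a direct causal path) is easy to demonstrate numerically. The following sketch is an editorial addition, not part of the original exchange; the gain, noise level, and drifting-disturbance model are arbitrary assumptions:

```python
# Sketch (assumed parameters): an integrating controller holds p = o + d
# near a fixed reference r. Although d is one of only two independent
# causes of p, good control leaves d and p nearly uncorrelated.
import math
import random

random.seed(1)

def simulate(gain, steps=5000, dt=0.01):
    """p = o + d; o integrates gain * (r - p)."""
    p = o = d = 0.0
    r = 0.0  # fixed reference
    ds, ps = [], []
    for _ in range(steps):
        d = 0.999 * d + random.gauss(0, 0.1)  # slowly drifting disturbance
        p = o + d
        o += gain * (r - p) * dt
        ds.append(d)
        ps.append(p)
    return ds, ps

def corr(xs, ys):
    """Pearson correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

ds, ps = simulate(gain=100.0)
print(corr(ds, ps))  # weak, despite d being a direct cause of p
```

Because the output nearly cancels the disturbance, the perceptual signal carries mostly the disturbance's unpredictable increments, so the d-p correlation collapses even though the causal path from d to p is direct.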

MT: Let’s not confuse
“causality” with statistical measures. The word
“cause” is used to cover enough different degrees of influence,
without inserting it into a completely different
domain.

BP: OK, we can agree that statistics is just a descriptive language and
doesn’t say anything about mechanism or alternate paths of
influence.

MT: Who am I to try to define
the Colorado dialect of English? All we really need to define is how
possibly problematic terms should be defined when used on CSGnet. I
suspect it would be generally useful to consider how widely the terms are
used in one way or the other in the English-speaking world, especially as
that world is exposed to non-English speakers. In respect of
“cause”, most dictionaries that I have looked at seem to refer
to direct influence more than to statistical relationship, so I propose
that we do restrict its use as I describe above.

It’s hard to reform one’s lifelong language habits, so I suggest that any
time there is a disagreement, actual or potential, about usage, we simply
explain how we intend the term to be understood without using the term.
For me, it’s much the easiest just to avoid the use of “cause”
as much as I can. For me, the degrees start with “influence”,
which is a vague weasel-word I use when I don’t want to explain how some
unstated numbers of influences are exerted. “Affect” is a bit
more specific, implying a more direct but still non-exclusive path from A
to B. The most specific I can get is to write B = f(A) which claims that
the ONLY significant effect on B is some specific function of A and no
other variable. I’m sure that to other people, those terms don’t have
such self-evident connotations, so about all we can do is try to clear up
misunderstandings by using more words.

Best,

Bill P.

[From Bill Powers (2011.07.02.0925 MDT)]

Martin Taylor 2011.07.01.16.45 –

MT: I’m not forgetting the
effect of output on perception at all, nor am I forgetting the effect of
error on output.

In a control loop there are only two independently variable causes, the
reference and the disturbance. The output certainly is a direct cause of
the perception, but it is not a cause of the reference, even though you
can deduce the values of an intermediate variable that maps one-to-one
onto the reference value if you know the history of the output and
disturbance values. That is another illustration of the distinction
between “causal” and “determined”. A simpler one is
that the error value is causal to the output value, but the reverse is
not true, even though each is determined by the other if you know the
output function.

BP: What this says to me is that we will be best off just to avoid using
any terms with “cause” in them except, sometimes, informally.
To me “cause” implies “ONLY cause.” A cause
determines its effect. Neither the disturbance nor the output quantity
determines the state of the perceptual signal. Therefore I would not say
that either one causes the perceptual signal unless the other is
artificially held constant. And even then you have to know what that
constant value of d or r is to state what values of the perceptual signal
will result from a given value of the other quantity.

It’s all so much simpler if we just use the equations! These verbal
classifications are obviously not sufficient for unambiguous
communication.

Best,

Bill P.

[Martin Taylor 2011.07.02.12.14]

[From Bill Powers (2011.07.02.0925 MDT)]

Martin Taylor 2011.07.01.16.45 --

MT: I'm not forgetting the effect of output on perception at all, nor am I forgetting the effect of error on output.

In a control loop there are only two independently variable causes, the reference and the disturbance. The output certainly is a direct cause of the perception, but it is not a cause of the reference, even though you can deduce the values of an intermediate variable that maps one-to-one onto the reference value if you know the history of the output and disturbance values. That is another illustration of the distinction between "causal" and "determined". A simpler one is that the error value is causal to the output value, but the reverse is not true, even though each is determined by the other if you know the output function.

BP: What this says to me is that we will be best off just to avoid using any terms with "cause" in them except, sometimes, informally.

I agree. That was the gist of my first posting on the use of "cause". The word (and the concept) has been problematic for philosophers for generations. And for psychologists, too. Child psychologists have done experiments with abstract shapes that "push each other around", to see under what conditions children see "cause and effect", when there actually is no cause and effect other than the decisions of the animator. We adults tend to perceive cause and effect if some event is usually followed by another, and lots of spurious science comes from the imagined mechanisms of such spurious repeated patterns of events.

It's all so much simpler if we just use the equations! These verbal classifications are obviously not sufficient for unambiguous communication.

Yes, if you can determine the equations. In modelling, you can use the equations, even if you may have to solve them numerically. If the model fits the data, so much the better. But in the "real" world you may not be able to see any mechanistic connection where the model inserts one. Is the model right because it accounts for the data, or wrong because it inserts a mechanism where there is none?

Paul Dirac predicted the existence of the anti-electron because his model required it, and I believe he was ridiculed. Now we use his predicted particle in routine medicine (Positron Emission Tomography). But if you produced a nice model that accurately predicted my trees would come into leaf after the grass starts greening in the Spring, using a mechanism whereby the greening grass emits some leaf-induction factor, your model might predict perfectly, but would be wrong. How can you tell the difference if you can't do controlled experiments?

I tend to use "cause" when I believe there is a mechanistic influence between cause and effect, no matter how many such influences there may be. You say:

To me "cause" implies "ONLY cause."

but there never is only one. Consider your example of the lever. Was the cause of the effect (the other end going up when you lower this end) that someone placed a log under the middle of the plank when you had this end lifted, or was it that you lowered this end after someone placed the log under the middle? There's always a context that is the "cause" equally with the "only cause" that you perceive. It is a case of "all else held constant".

The considerations are just the same as must be taken into account when you talk about the probability of something. All probabilities are conditional on some context being thus-and-so.

All of which makes me glad that you second my proposal to restrict "cause" to informal usage, and (I hope you agree) to cases in which a mechanistic influence is plausibly believed to exist.

Martin

···

On 2011/07/2 11:33 AM, Bill Powers wrote:

[From Rick Marken (2011.07.02.1200)]

Bill Powers (2011.07.01.1325 MDT)–

Rick Marken (2011.07.01.0940) –

BP earlier: independent variables and observe dependent
variables…

RM: Then the macro economic “experiments” where tax rates are
varied as an independent variable and growth is measured as the dependent
variable produce pretty good data. Just as I thought.

BP: My point was that there could be a control system involved which is
varying tax rates as a means of controlling growth. This would mean that
tax rates are not an independent variable, but are the output variable of
a control system to which growth rates are an input. In that case you
would be studying an illusion in which tax rates are apparently causing
growth rates, whereas it is really unobserved disturbances of growth rate
that are resulting in changes in the tax rate. If you could discover what
those disturbances are, you might find a very high negative correlation
with changes in tax rate.

But that would be a super discovery! And it seems that the information about the relationship between taxes and growth would have helped make it.

My first foray into economic data analysis was to look at the relationship between Fed Discount Rate and CPI (Consumer Price Index, a measure of inflation) over time. I’ve attached the plot. The Fed thinks it can use the Discount Rate (the cost to the banks of borrowing money) to control inflation. If this were true you would expect CPI to be a controlled variable that remains nearly flat over time as the Discount Rate varies to combat whatever the disturbance is that causes variations in CPI (inflation). In fact, the Fed is clearly not controlling CPI by varying Discount Rate; it is doing a pursuit tracking task, controlling the relationship between CPI and Discount Rate by raising and lowering the rate in proportion to increases and decreases, respectively, in CPI.

The correlation between Discount Rate and CPI is .77 (from 1953-2001). If you look carefully at the data it looks like the Fed reduced the gain on this tracking task in about 1990. If we leave the years after 1989 out of the analysis the correlation between Discount Rate and CPI from 1953 to 1989 is .88. So I think we can learn about economic control by looking at macro-economic relationships. At least the data provide a criterion against which to evaluate the performance of any model of the economy.
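The sub-period comparison can be sketched as follows. The series below are entirely made-up stand-ins (Rick's actual Discount Rate and CPI data are not reproduced here), constructed only so that the rate tracks CPI with a gain that drops in 1990:

```python
# Sketch with made-up numbers: Pearson correlation over a full period
# vs a sub-period, as in the Discount Rate / CPI comparison.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

years = list(range(1953, 2002))
# hypothetical CPI: slow rise plus a 1973-81 "inflation" bump
cpi = [2 + 0.1 * (y - 1953) + (1.5 if 1973 <= y <= 1981 else 0) for y in years]
# hypothetical rate: tracks CPI with gain 0.9 before 1990, 0.5 after
rate = [c * (0.9 if y < 1990 else 0.5) + 0.5 for y, c in zip(years, cpi)]

full = pearson(rate, cpi)
early = pearson([r for y, r in zip(years, rate) if y <= 1989],
                [c for y, c in zip(years, cpi) if y <= 1989])
print(full, early)  # the early-period correlation is the higher one
```

On these made-up numbers the pre-1990 correlation is essentially perfect by construction, and mixing in the lower-gain years pulls the full-period figure down: the same qualitative pattern as the .88 vs .77 comparison.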

BP: But you have to start from where we are, and I guess that means
causation still has to be spoken aloud. That’s not so bad – we all know
what is meant. But some day we have to come out and say it: a correlation
of 0.8 deserves all the names you can call it. GIGO.

RM: I think that depends on what the correlation is between. I just did a
difficult pursuit tracking run and the correlation between independent
and dependent variables (target and mouse movements, respectively)
was .32. That would rate as GIGO by your criterion. But the correlation
between the output of a control model (with the target variations as the
disturbance to a controlled variable – the distance between cursor and
target) and the actual output (dependent variable) was .996. I think we
should evaluate the model-data correlations (if we are going to look at
correlations) in terms of their size, not the observed correlations.

BP: I agree with you. The only reason I ever started introducing
correlations was to show conventional psychologists how high they were in
PCT experiments – and how unnecessary.

I was a bit confused by your result, but then realized that if the cursor
lagged the target, the correlation of cursor with target could be very
low (like between sine and cosine) but not because of randomness. Yet if
the model also lagged in the same way, its output could correlate very
highly with the real output. Is that what happened? If you plot real
cursor against target, do you get a kind of oval pattern? That would
indicate out-of-phase variables rather than noise as a cause of the low
correlation.

I did take time delays into account. The data I reported above is not right, though. What I actually found is this: The correlation between model and actual mouse movements for the causal model with optimal lag was .47; the causal model accounts for 22% of the variance in behavior. The correlation between model and actual mouse movements for the best fitting control
model was .84; the control model accounts for 71% of the variance in behavior.

So the fit of the causal model to actual behavior is quite low, but in the range of correlations observed in conventional research, which uses the causal model as the basis for analysis. The fit of the control model to actual behavior is much better (but still not up to your .99 criterion). Since it is not lag that can account for the difference in success of the causal and control models, it must be something about the details of the way the control model acts to keep the cursor/target relationship under control; that’s something I have to look at in future research. But it’s clear that a control model, using an independent or “predictor” variable as input, can produce behavior that has a higher correlation with actual behavior than the observed correlation between predictor and behavioral variable, the latter correlation implicitly being based on the causal model of the behavior. In other words, it’s possible that the appropriate control model could produce variations in growth (based on tax rate) that correlate more highly with growth variations than the observed correlation between taxes and growth (which is what you said above).
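A toy version of this comparison can be set up in simulation. This is an editorial sketch, not Rick's actual experiment: the loop gain, motor-noise level, and white-noise disturbance are all assumptions. The "causal" model here is the best scaled, lagged copy of the disturbance; the control model is a noise-free replica of the simulated tracker.

```python
# Sketch (assumed parameters): a simulated noisy tracker is fitted by
# (a) an open-loop "causal" predictor, the best lagged copy of the
# disturbance, and (b) a noise-free control model. The control model
# reproduces the behavior better than any lagged copy can.
import math
import random

random.seed(3)

def corr(xs, ys):
    """Pearson correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def control_run(d, loop_gain, noise):
    """Integrating controller keeping (o + d) near zero."""
    o, out = 0.0, []
    for dv in d:
        o += loop_gain * (0.0 - (o + dv)) + random.gauss(0, noise)
        out.append(o)
    return out

d = [random.gauss(0, 1) for _ in range(4000)]       # difficult disturbance
actual = control_run(d, loop_gain=0.1, noise=0.05)  # "subject" with motor noise
model = control_run(d, loop_gain=0.1, noise=0.0)    # noise-free control model

# "causal" model: best correlation over scaled, lagged copies of d
causal_r = max(abs(corr(d[:len(d) - k], actual[k:])) for k in range(0, 30))
model_r = abs(corr(model, actual))
print(causal_r, model_r)
```

The loop dynamics smear the disturbance's effect over time in a way no single lag captures, while the control model reproduces that filtering, so its fit is limited only by the subject's noise.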

Best

Rick

···

Richard S. Marken PhD
rsmarken@gmail.com

[From Rick Marken (2011.07.02.1205)]:

Martin Taylor (2011.07.01.16.45)–

Rick Marken (2011.07.01.1210) –

MT: in a noise-free control loop, the disturbance and the reference are the ONLY causes of the perceptual value.

RM: Don't forget the effect of output on perception. The causes of the perceptual value are disturbance(s) and output (the reference specifies but does not really cause the perceptual value, in my view anyway).

MT: I'm not forgetting the effect of output on perception at all, nor am I forgetting the effect of error on output.

That’s great. But then don’t forget to include output as one of the causes of variations in perception. You mention only d and r in the quoted statement above. Remember, this is a closed-loop system we are talking about and one of the equations we use to define the system is: p = f(o+d).

Best

Rick

···
In a control loop there are only two independently variable causes, the reference and the disturbance. The output certainly is a direct cause of the perception, but it is not a cause of the reference, even though you can deduce the values of an intermediate variable that maps one-to-one onto the reference value if you know the history of the output and disturbance values. That is another illustration of the distinction between "causal" and "determined". A simpler one is that the error value is causal to the output value, but the reverse is not true, even though each is determined by the other if you know the output function.

Martin

Richard S. Marken PhD
rsmarken@gmail.com

[Martin Taylor 2011.07.02.16.30]

···

On 2011/07/2 3:06 PM, Richard Marken wrote:

[From Rick Marken (2011.07.02.1205)]:

Martin Taylor (2011.07.01.16.45)–

Rick Marken (2011.07.01.1210) –

MT: in a noise-free control loop, the disturbance and the reference are the ONLY causes of the perceptual value.

RM: Don't forget the effect of output on perception. The causes of the perceptual value are disturbance(s) and output (the reference specifies but does not really cause the perceptual value, in my view anyway).

MT: I'm not forgetting the effect of output on perception at all, nor am I forgetting the effect of error on output.

RM: That's great. But then don't forget to include output as one of the causes of variations in perception. You mention only d and r in the quoted statement above. Remember, this is a closed-loop system we are talking about and one of the equations we use to define the system is: p = f(o+d).

MT: p = f(o+d) = f(g(e) + d) = f(g(r-p) + d)

r and d are the only independently manipulable variables. The rest follow.

Martin

[From Rick Marken (2011.07.02.1455)]

Martin Taylor (2011.07.02.16.30)–

MT: I'm not forgetting the effect of output on perception at all, nor am I forgetting the effect of error on output.

RM: That's great. But then don't forget to include output as one of the causes of variations in perception. You mention only d and r in the quoted statement above. Remember, this is a closed-loop system we are talking about and one of the equations we use to define the system is: p = f(o+d).

MT: p = f(o+d) = f(g(e) + d) = f(g(r-p) + d)

r and d are the only independently manipulable variables. The rest follow.

Yes, r and d are the only two independent variables in a control loop. But one of the causes of p is o, which is conventionally viewed as a dependent variable but it is, nevertheless, a cause of p and a very important one too not least because it is the one that is always ignored in psychological research and it is the one that makes p a controlled variable.

Best

Rick

···
Martin

Richard S. Marken PhD
rsmarken@gmail.com

[From Rick Marken (2011.07.02.1700)]

Bill Powers (2011.07.02.1324 MDT)–

Rick Marken (2011.07.02.1200) –

causal model. I assume you’re using target position to predict mouse
position. It seems to me that there should be a very high lagged
correlation between target position and mouse position, since the mouse
is tracking the target. Perhaps the disturbance you’re using has such a
high difficulty that this correlation is not so high, and because the
model fits the behavior within the tracking error, the model-real
correlation is higher.

Yes, I am using a very high difficulty disturbance. And that model probably fits the behavior better than the lagged target for the reason you give. But I am going to explore this further. I do think this is a way to sell control theory to conventional psychologists: it works better than the general linear (which I call the “causal”) model in accounting for even the typically lousy results (r^2 values ~.34) that you see in your experiments.

However, you have to be sure you’re comparing similar correlations.
Model-real correlation is not comparable to target-cursor correlation.

I’m comparing model-real correlations in both cases.

The dramatic refutation of SR theory is seen only when you add a second
disturbance between mouse and cursor. Now the correlation of target with
mouse will be extremely low in the causal model because the other
disturbance isn’t taken into account. Better check exactly what the
comparison is.

Yes, I know.

I agree that looking for some of these new correlation possibilities in
the market data might produce some pretty important results. Glad you’re
doing it.

Actually, I’m looking for these correlation possibilities in “experimental” data. I have always planned to test a control model against real economic (what you call “market”) data; but that will have to wait, since I’m not a “real” economist (though some would say I’m not a real psychologist either;-)

This discussion has been very helpful. Thanks. I’m sure it will continue in some form or another at the meeting.

Best

Rick

···

Best,

Bill P.

Richard S. Marken PhD
rsmarken@gmail.com

[Martin Taylor 2011.07.02.20.02]

···

On 2011/07/2 5:55 PM, Richard Marken wrote:

[From Rick Marken (2011.07.02.1455)]

Martin Taylor (2011.07.02.16.30)–

MT: I'm not forgetting the effect of output on perception at all, nor am I forgetting the effect of error on output.

RM: That's great. But then don't forget to include output as one of the causes of variations in perception. You mention only d and r in the quoted statement above. Remember, this is a closed-loop system we are talking about and one of the equations we use to define the system is: p = f(o+d).

MT: p = f(o+d) = f(g(e) + d) = f(g(r-p) + d)

r and d are the only independently manipulable variables. The rest follow.

RM: Yes, r and d are the only two independent variables in a control loop. But one of the causes of p is o, which is conventionally viewed as a dependent variable but it is, nevertheless, a cause of p and a very important one too not least because it is the one that is always ignored in psychological research and it is the one that makes p a controlled variable.

Interesting how easy it is to use words in ways that make agreement sound like disagreement, isn't it?

Martin

[From Bill Powers (2011.07.03.0555 MDT)]

Martin Taylor 2011.07.02.16.30 –

MT: p = f(o+d) = f(g(e) + d) =
f(g(r-p) + d)

r and d are the only
independently manipulable variables. The rest follow.

BP: Yes. This can also be done for any other loop variable, which is what
Rick Marken has been concerned about:

o = g(r - f(o + d))

You always end up with the loop variable on both sides of the equation,
which means that to solve it you have to be able to separate the variable
on the right side in order to move it to the left and have only one
instance of it. Whether that can be done analytically depends entirely on
the forms of the functions – in general you have to settle for an
approximation or solve numerically (i.e. simulate).
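Bill's point about numerical solution can be illustrated with a minimal sketch (an editorial addition; the particular choices of f, a tanh input function, and g, a high-gain linear output function, are hypothetical):

```python
# Sketch (assumed f and g): solving the implicit loop equation
# o = g(r - f(o + d)) numerically, by relaxing o toward the
# right-hand side until it stops changing.
import math

def f(x):
    """Input (perceptual) function, assumed nonlinear."""
    return math.tanh(x)

def g(e):
    """Output function, assumed high-gain linear."""
    return 50.0 * e

def solve(r, d, steps=10000, dt=0.01):
    o = 0.0
    for _ in range(steps):
        # leaky-integrator relaxation toward g(r - f(o + d))
        o += dt * (g(r - f(o + d)) - o)
    return o

r, d = 0.5, 0.2
o = solve(r, d)
p = f(o + d)
print(o, p)  # with high loop gain, p sits close to r
```

Because the loop variable appears on both sides of the equation, no closed form exists for general f and g; the relaxation above is one simple stand-in for simulation.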

My problem, and perhaps Rick’s, with your discussion of causality is
that when two variables A and B affect the same variable C, C = A +
B, it’s hard for me to see either A or B as a cause of C. You have to
know the values of both A and B to know what the value of C is.
Observing A alone or B alone is not sufficient to tell you the value of
C. In my concept of causation, if you know the cause you can predict the
effect. If you say A causes C, then if you vary A you expect to be able
to say how C will vary, at least statistically. But if there is a B also
affecting C, C could change in any way at all in relation to A, even
oppositely to A.

Just consider Ashby’s conception of control: a controller controlling C
would detect the state of A, and set B so as to produce the desired value
of C. It wouldn’t matter whether A were varying systematically or
randomly if the controller could keep up with the variations.

In a negative-feedback control system, it’s even more direct. The system
detects the state of C and adjusts B so that C remains close to its
reference level regardless of the magnitude of A (or any number of A’s).
Again, it doesn’t matter whether A is varying systematically and
predictably, or randomly. Of course components of the variations in A
above some frequency limit are not detected or opposed, so an outside
observer might see high-frequency correlations between C and A. But the
low-frequency effects of A on C would be drastically reduced because B
can be adjusted fast enough to cancel them out quantitatively. This would
apply to Ashby’s model, too, if he had considered dynamics.
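The frequency dependence Bill describes can be checked with a small simulation (an editorial sketch; the loop gain and test frequencies are arbitrary). A slow and a fast sinusoid are summed into the disturbance A, and the attenuation of each in C is measured:

```python
# Sketch (assumed parameters): C = A + B, with an integrating controller
# adjusting B to hold C near zero. The slow component of A is nearly
# canceled in C; the fast component leaks through almost unchanged.
import math

def amplitude(sig, freq, dt):
    """Estimate the Fourier amplitude of sig at freq."""
    n = len(sig)
    c = sum(s * math.cos(2 * math.pi * freq * i * dt) for i, s in enumerate(sig))
    q = sum(s * math.sin(2 * math.pi * freq * i * dt) for i, s in enumerate(sig))
    return 2 * math.hypot(c, q) / n

dt, gain = 0.01, 5.0
f_slow, f_fast = 0.05, 5.0   # Hz; well below and well above loop bandwidth
b = 0.0
cs = []
for i in range(100000):      # 1000 seconds of simulated time
    t = i * dt
    a = math.sin(2 * math.pi * f_slow * t) + math.sin(2 * math.pi * f_fast * t)
    c = a + b                # C = A + B
    b += gain * (0.0 - c) * dt   # integral control of C toward 0
    cs.append(c)

print(amplitude(cs, f_slow, dt), amplitude(cs, f_fast, dt))
```

With gain 5, the loop bandwidth is roughly 5/(2π) ≈ 0.8 Hz: the 0.05 Hz component of A is attenuated by more than a factor of ten in C, while the 5 Hz component passes through almost unchanged.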

That being the case, I don’t see how you can call the effect of A on C
“causal.” But that’s just because of the language and the
associations one has with the sound of KAAWZ. I’m sure it makes complete
sense to you. For communicating, however, we do need some way of talking
about these things that has the same meanings for all of us. There’s no
point in talking about the accepted meaning or the right meaning or the
dictionary meaning. All that counts is the meaning we all understand. In
the preceding paragraph, I discussed the various relationships without
using the terms “cause” or “causal.” The result seems
pretty unambiguous to me. As you and I have both been saying, the word
“cause” is just a troublemaker.

Best,

Bill P.

[From Bill Powers (2011.07.03.0645 MDT)]

Martin Taylor 2011.07.02.20.02 –

MT: Interesting how easy it is to
use words in ways that make agreement sound like disagreement, isn’t
it?

Very, and the opposite as well. It’s also easy to respond to a statement
by making another statement and leaving it to the other person to figure
out whether it indicates agreement or disagreement. Both approaches lead
to poor communication.

Best,

Bill P.

[Martin Taylor 2011.07.03.10.34]

[From Bill Powers (2011.07.03.0555 MDT)]

My problem, and perhaps Rick's, with your discussion of causality is that when two variables A and B affect the same variable C, C = A + B, it's hard for me to see either A or B as a cause of C. You have to know the values of both A and B to know what the value of C is. Observing A alone or B alone is not sufficient to tell you the value of C. In my concept of causation, if you know the cause you can predict the effect. If you say A causes C, then if you vary A you expect to be able to say how C will vary, at least statistically. But if there is a B also affecting C, C could change in any way at all in relation to A, even oppositely to A.

Can you EVER say there is one cause for anything? I think not. However, one counter-example would prove me wrong.

Suppose I am holding an object (a balloon, say) in my fingers and let it go. It rises to the ceiling. What is the one cause of its rising? Is it my losing my grip on the object? Is it gravity? Is it the difference in density between the object and the air?

Suppose I lift one end of a plank and the other end drops. What is the one cause of the other end dropping? Is it my lifting? Is it that the middle of the plank rests on a high point? Is it that the plank is stiff?

Suppose I apply a force to an apparently light and mobile object but it doesn't move as much as I expect. What might be the cause of its failure to move much? Might it be stuck on a viscous surface? Might it be the environmental correlate of a controlled perception? Might it be attached to the surface by a hidden spring? Would any of these be "the cause" of its not moving much? Is the application of my force the cause of its moving the small amount it does? Or is it the joint application of my force and the countervailing forces that seem to me to be being applied?

Suppose I apply a force to an apparently light and mobile object but it doesn't move at all. What is the cause of the non-event of its failure to move? Is it my failure to perceive that it really isn't a mobile object, but is an integral part of the thing on which it seems to be lying? Is it the "real" world fact that the object is an integral part of the substrate? Can you talk of a cause of something not happening when you expect it to happen?

In a negative-feedback control system, it's even more direct. The system detects the state of C and adjusts B so that C remains close to its reference level regardless of the magnitude of A (or any number of A's). Again, it doesn't matter whether A is varying systematically and predictably, or randomly. ... the low-frequency effects of A on C would be drastically reduced because B can be adjusted fast enough to cancel them out quantitatively....
... I don't see how you can call the effect of A on C "causal."

To call A "the cause" of variations in C would be formally wrong, but informally quite normal. To call the control system "the cause" of the minimization of the effect of A on C would likewise be informally quite normal.

To call the effect of A on C "causal" would be technically precise and correct. There is a traceable path of direct influence between A and C, and no temporal ordering is violated (changes in C do not precede the corresponding changes in A). Therefore the relation between A and C is causal.

But that's just because of the language and the associations one has with the sound of KAAWZ. I'm sure it makes complete sense to you. For communicating, however, we do need some way of talking about these things that has the same meanings for all of us. There's no point in talking about the accepted meaning or the right meaning or the dictionary meaning. All that counts is the meaning we all understand. In the preceding paragraph, I discussed the various relationships without using the terms "cause" or "causal." The result seems pretty unambiguous to me. As you and I have both been saying, the word "cause" is just a troublemaker.

Yes. "Cause" is problematic because of some people's feeling that there ought to be one "cause" for any effect. A lot of our political problems stem from this misconception, and it would be better if "influence" or some such word were to be substituted more generally. Unfortunately, to do this might deprive the political process of much of its righteous vitriol, and perhaps might lead to genuine solutions of real problems. That would never do, would it? It would deprive a lot of politicians of their livelihoods.

"Causal" is a different kettle of fish. "Causal" has no implication of uniqueness. It implies only that there is a traceable chain of direct influence between an independent and a dependent variation. In the above examples, my grasp on the balloon, its density difference from air, and gravity are all causally connected to its rising; the stiffness of the plank, the high-point on which the middle of the plank rests, and my lifting or depressing my end are all causally connected to the motion of the far end; in a control loop, all variables are causally connected to all the others except that none are connected TO the reference and disturbance -- the causal connections are only FROM the reference and disturbance to all the other variables in the loop. Causality is important in analyzing systems that can exist in the physical world, whereas "cause" probably is not.

We can (and probably should) omit "cause" from our discussions, but I don't think we can omit "causal", "acausal" (future events influence current states), and "non-causal" (two events or states have no influence on each other, however close their statistical relationship).

Martin

[From Bill Powers (2011.07.03.0855 MDT)]

Rick Marken (2011.07.02.1455) –

RM: Yes, r and d are the only
two independent variables in a control loop. But one of the causes of p
is o, which is conventionally viewed as a dependent variable but is,
nevertheless, a cause of p, and a very important one too, not least
because it is the one that is always ignored in psychological research
and it is the one that makes p a controlled variable.

BP: Try this: The output action o is used in such a way that it becomes
the determining effect on the perception p – the only effective cause of
p. This does not mean that p corresponds to o, even statistically. In
fact, to imply that the relationship of p to o is describable statistically is
to ignore the highly systematic, lawful, regular, predictable connection
from o to p. The reason o has to change is that it has to cancel the
effects of d on p, which are also highly systematic. This is done by
monitoring p, not d.

The causes of changes in disturbance d are not normally known, and are
multiple, so the variations in d might have to be described statistically
in order to say anything general about them. But the changes in d are not
some special kind of change; they are just changes in magnitude over time
like any other changes. A control system can counteract the effects of a
constant or changing d by varying the output action as required to
prevent d from having any significant effect on p, while in addition
making p follow the constant or changing setting of the reference signal.
It doesn’t matter to the control system whether the variations in d are
random or follow a systematic pattern, as long as they are not too large
or too fast.
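
The loop described in the paragraph above can be sketched as a one-level simulation. This is a generic illustration, not code from any published PCT demo; the gain, the slowing factor, and the leaky-integrator output equation are my assumptions, chosen only to show a disturbance being cancelled by monitoring p rather than d.

```python
# Minimal sketch of a single control loop countering a disturbance.
# The loop never "sees" d directly: it monitors p and varies o.
import math

def run_loop(disturbance, r=0.0, gain=50.0, slowing=0.01):
    """Leaky-integrator output: o += slowing * (gain * e - o) per step."""
    o = 0.0
    p_trace = []
    for d in disturbance:
        p = o + d          # perception = output effect + disturbance
        e = r - p          # error relative to the reference
        o += slowing * (gain * e - o)
        p_trace.append(p)
    return p_trace

# A slow sinusoidal disturbance; the loop holds p near r = 0 anyway.
d = [math.sin(2 * math.pi * 0.2 * i * 0.01) for i in range(2000)]
p = run_loop(d)
# After the initial transient, |p| is a small fraction of the
# disturbance amplitude (roughly d / (1 + gain) for slow d).
print(max(abs(x) for x in p[500:]))
```

Whether d is random or systematic makes no difference to this loop, exactly as the text says: only the magnitude and speed of d relative to the loop's dynamics matter.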

The custom in psychology has been to treat uncertainty as if it
consisted of fluctuations with some distribution and an average value
of zero. But in
the disturbances we use now in models, there is no underlying regularity;
the sum of 60 cosine waves with random phases and an amplitude that
decreases with rising frequency is all that underlies the final waveform.
Deducing those phases and amplitudes from their combined effects over
many seconds would be difficult even with the aid of a high-speed
computer. It would be impossible for a naked brain.
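
The disturbance construction described above can be sketched as follows. The text specifies only 60 cosines with random phases and amplitudes that decrease with rising frequency; the 1/f amplitude law, the sample count, and the seed here are my assumptions, not the parameters of the actual demo programs.

```python
# Sketch of a smoothed disturbance: the sum of 60 cosine waves with
# random phases, amplitude falling off with frequency (1/k assumed).
import math
import random

def make_disturbance(n_samples=3600, n_waves=60, seed=1):
    rng = random.Random(seed)
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_waves)]
    d = []
    for t in range(n_samples):
        x = t / n_samples  # normalized time, 0..1
        s = sum(math.cos(2.0 * math.pi * k * x + phases[k - 1]) / k
                for k in range(1, n_waves + 1))
        d.append(s)
    return d

d = make_disturbance()
print(len(d), min(d), max(d))
```

The point made in the text holds for this waveform too: recovering the 60 phases and amplitudes from the summed signal alone would be a hard inverse problem, yet the control loop needs none of that information to oppose the disturbance.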

The only way the waveform itself matters is if it contains
regularities that, at some level in a control hierarchy, can be
recognized and thus, to some degree of accuracy, predicted. Where that
is possible, control does improve slightly. Slightly. The potential
improvement is partly negated by the requirement of perceiving the
regularity and computing the prediction, rather than just registering
the momentary magnitudes of the controlled perception, and further
negated by the fact that the prediction has to be done at a higher,
and therefore slower, level, while the speed that control requires is
found only in lower-level systems, which can't do complex
calculations.

I think that with a little more thought we will be able to prove that the
real random noise in behavioral control systems is very, very small.
Disturbances may be largely or completely random, and the observable
actions that oppose them may, as a consequence, show variations that look
similarly random, but in fact what happens inside the control loop is
almost entirely systematic. I can say categorically that the noise
contributed from inside the control loops in a tracking experiment with a
skilled participant is no more than 5% RMS of the range of variation of
the controlled variable. If we compare the RMS tracking error with the
RMS amplitude of the disturbance (roughly 0.6 of the peak range), this
implies that tracking error is at least 12 standard deviations less than
the average magnitude of the system variable changes. In my book, that is
noise-free performance for any practical purposes. The uncertainties are
all in the eye of the beholder.
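
The figures in the paragraph above combine straightforwardly. A minimal check of the arithmetic, treating the disturbance's RMS amplitude as the relevant standard deviation, as the text does:

```python
# If internal noise is at most 5% (RMS) of the range of the controlled
# variable, and the disturbance's RMS amplitude is roughly 0.6 of that
# same peak range, the disturbance RMS is 0.6 / 0.05 = 12 times the
# noise RMS: the "12 standard deviations" figure.
noise_rms_fraction = 0.05       # <= 5% RMS of the peak range
disturbance_rms_fraction = 0.6  # ~0.6 of the peak range
ratio = disturbance_rms_fraction / noise_rms_fraction
print(ratio)
```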

So the picture I see is this. Behavior is a highly regular and repeatable
process being performed in a world that contains a mix of regular and
random variations in its variables and parameters. As a control system
acts to prevent unintended variations in its controlled variables, its
actions mirror both the regular and the random variations in the
environment, creating the illusion that there is something random or
otherwise stochastic going on inside the behaving system. A model without
any such random aspects duplicates real human behavior, in all
well-designed PCT experiments tried so far, well within the range of
random disturbances and apparently random actions. We therefore need not
look for much randomness inside the organism, but only for systematic
relationships among variables with small error bars.

Causation in any statistical sense, I claim, can be abandoned in favor
of systematic functional relationships among variables as described by
Martin Taylor (and me). The apparently random component in the behavior
of organisms is an artifact of using the wrong model.

Best,

Bill P.
