My actions

[from Jeff Vancouver 980232.11:45 EST]

To facilitate Rick's testing for my CVs I have collected into this one post
my actions (I would normally say reactions, but that system has been
reorganized), which presumably arise (I am not supposed to use that either,
but apparently the reorganizing system is still working on it) from the
disturbances of Rick and others on the net.

[From Bill Powers (980227.1701 MST)]

Jeff Vancouver 980227.1655 EST --

Why do you think I want to maintain that 95% condition?

I am just guessing here, but I think you have a principle-level ECU that
controls for "learning something about humans." As it is now organized,
correlational data would be considered valid data only if it were above
.975. Only then could it be used to learn something.

Why do you set it at that level? I do not know. Probably it is due to the
nature of most of the problems you have attempted to tackle.

I've explained my reason several times -- and apparently nobody heard it,
understood it, or agreed with it.

This does ring a bell, and no, I do not agree with it.

Consider how we explain the seasons of winter and summer. This explanation
is dependent on a number of facts that we accept as true, and they _all_
have to be true at the same time for the explanation to work. Without
trying to analyze the line of reasoning rigorously, we have something like
this:

1. The earth orbits the sun in a nearly circular orbit.

2. The earth's axis is tilted relative to the orbital plane.

3. The earth is heated by radiant energy from the sun.

4. The local temperature of the earth is affected by the rate at which
radiant energy is received per square mile of surface area.

5. When the sun is high in the sky, more energy is received per square mile
of surface than when the sun is low in the sky.

6. The sun is higher in the sky in the summer than in the winter.

7. Therefore the earth is warmer during the (local) summer.

So to reach the conclusion, six facts must be simultaneously true. This is
probably a very low estimate for this particular conclusion. Each fact has
some probability of being true under given conditions. The probability that
the conclusion is true is the product of the probabilities of each of the
facts on which it rests, since they all must be true at once for the
conclusion to be true.

Suppose each fact has a probability of 90% of being true. The probability
that the conclusion is true is 0.9 * 0.9 * 0.9 * 0.9 * 0.9 * 0.9, or 0.53.
That means that the conclusion could be arrived at very nearly as
dependably by saying "Heads it's true, tails it's false."
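The arithmetic is easy to check; a minimal sketch (assuming, as in the text, that the six facts are independent):

```python
# Probability that a conclusion is true when it rests on six
# independent facts, each with a 0.9 probability of being true
# (independence is assumed here purely for illustration).
p_fact = 0.9
n_facts = 6
p_conclusion = p_fact ** n_facts
print(round(p_conclusion, 2))  # prints 0.53
```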

Your usually thorough thinking is amazingly incomplete in this description.
First, you have dichotomized statements of fact as true or false and
scaled them on the probability of being true. Yet, in the first statement
the word "nearly" is present, which makes it an ambiguous statement that
may be true or false depending on how someone resolves the ambiguity.
Second, the fourth statement refers to a variable that might be measured
to assess the validity of the model being hypothesized. So if one were to
measure the temperature of the earth (in some location) and the angle of
the sun in the sky (at some time of day, say its zenith), the correlation
that would result after a couple of years would likely not be .98 (I do
not know what it would be), because many local phenomena (e.g., El Niño)
go into determining temperature. That correlation is not the same thing as
the probability that the statement is true. Finally, the measurement of
the thing (i.e., temperature) can add error. Given that latent variables
(i.e., controlled perceptions) are difficult to measure, this problem is
greater in some areas of psychology than in some areas of physics.

So that's my reason for not considering truth-probabilities of less than
0.95 to be of much use in science.

So, correlations are not truth-probabilities.

[From Bruce Gregory (980228.1214 EST)]

In a post last week I made the modest suggestion, strongly disavowed by Jeff,
that he and Rick live in different perceptual worlds.

and

[From Bruce Gregory (980301.0518 EST)]
Two possibilities. You dwell in a world given by the perception that others
are autonomous agents in their own perceptual worlds. Or you don't. You alter
your world, or you try to impress it on others.

[Martin Taylor 980228 17:30]

[Rick Marken (980227.1720)]

Actually, it's rather easy to figure out what you, Jeff, Bruce A.,
Bill and I are controlling for: THE SAME THINGS! Remember,
we control perceptual VARIABLES. If we are pushing hard against
each others' disturbances, it's because we are all controlling
the same of similar perceptual _variables_; we are in conflict
becuase we want those variables at DIFFERENT REFERENCE VALUES.
There would be no conflict between us at all if we were controlling
different perceptual variables.

Astute comment. Perfectly correct. At least in respect of some variables
we are controlling.

I somewhat agree with Martin here, Bruce. It is largely about different
reference levels, not different perceptions. On the other hand, I do not
totally agree with Martin. Clearly the way inputs are interpreted (i.e.,
the functions they pass through) differs between Rick and me, just not
all that much. Which accounts for more of our conflict? I do not
know how to put them on the same scale. In terms of what I disavowed, it
was not that we do not live in different perceptual worlds, because I think
we do.

But the two possibilities statement is also problematic for me. It forces
the argument to an either/or dichotomy that is far too simple a
characterization to reflect the complexity of the conflict.

[From Rick Marken (980227.0840)]

Martin Taylor (980227 03:45) --

On Tuesday Rick accepts that p is a function of d, given that the
loop functions and the reference level don't change, and on Wednesday
he returns to his old position that it isn't.

Bill Powers (980227.0814 MST)

Neither Rick nor I accepts that p is a function of d. P is a
function of d and o, where o is a function of r and p.

Sorry, Bill. But Martin is right. Just because p is a function
of d and o doesn't mean it's not a function of d. p is a
function of d and o is a function of p. It's cause-effect
right around the causal loop.

This is what amazes me most. Causality has many meanings. One is the "is
a function of" meaning. Here Rick seems to admit that p = f(d) and p =
f(d, o) can both be correct. Yet, he cannot acknowledge that d is a cause
(not "the" cause, just "a" cause) of p.

Bill is correct, this is futile.

Sincerely,

Jeff

[From Rick Marken (980302.1200)]

Me:

Sorry, Bill. But Martin is right. Just because p is a function
of d and o doesn't mean it's not a function of d. p is a
function of d and o is a function of p. It's cause-effect
right around the causal loop.

Jeff Vancouver (980232.11:45 EST) --

Here Rick seems to admit that p = f(d) and p = f(d, o) can both be
correct. Yet, he cannot acknowledge that d is a cause (not "the"
cause, just "a" cause) of p.

It was a joke, Jeff. I said this as part of my efforts to show
that Bill and I are controlling for the same thing. Bill knew I was
contriving a response and refused to play along. But just so you
understand, here's what I really meant to say: Martin is wrong.
p is a function of o and d, not d alone. o is _not_ a function
of d; external events do not cause behavior. Lineal cause-effect
does not apply to the behavior of a control loop. Therefore,
conventional approaches to studying behavior, which are based on
a lineal causal model, are inappropriate in the study of living
control systems.

Does that clear things up a bit;-)

Best

Rick


--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

[from Jeff Vancouver 980302.1535 EST]

[From Rick Marken (980302.1200)]

It was a joke, Jeff. I said this as part of my efforts to show
that Bill and I are controlling for the same thing. Bill knew I was
contriving a response and refused to play along. But just so you
understand, here's what I really meant to say: Martin is wrong.
p is a function of o and d, not d alone. o is _not_ a function
of d; external events do not cause behavior. Lineal cause-effect
does not apply to the behavior of a control loop. Therefore,
conventional approaches to studying behavior, which are based on
a lineal causal model, are inappropriate in the study of living
control systems.

Does that clear things up a bit;-)

Well, yes, I was quite surprised. I did not see how _you_ could leave out
o (and the p that it implies). Further, even I would say p = f(d) is a
poor model if in reality p = f(d, o) _and_ d's and o's effects are
interactive as opposed to additive. I think that much of the problem we
all are having (here is another proposal for you to shoot down) is that
in a simultaneous, continuous context, d and o are interactive (o is
compensating for d as d occurs), but in the contexts I tend to study, it
is additive (o gets around to compensating for d eventually, maybe).
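The additive-versus-interactive distinction can be made concrete with a toy discrete-time loop. This is a sketch under stated assumptions: an integrating controller, a random-walk disturbance, and a `lag` parameter standing in for "o gets around to compensating for d eventually"; none of these particulars come from the original posts.

```python
import random

def rms_error(lag, steps=2000, gain=0.05, seed=1):
    """Control p = d + o toward r = 0 with an integrating controller
    that sees the error only after `lag` steps. lag=0 approximates the
    simultaneous, continuous case; a larger lag approximates a
    controller that compensates for d only eventually."""
    random.seed(seed)
    d, o, r = 0.0, 0.0, 0.0
    pending = [0.0] * (lag + 1)        # pipeline of not-yet-seen errors
    sq_err = 0.0
    for _ in range(steps):
        d += random.gauss(0, 0.1)      # slowly drifting disturbance
        p = d + o                      # controlled perception
        pending.append(r - p)          # current error enters the pipe
        o += gain * pending.pop(0)     # controller acts on an old error
        sq_err += (r - p) ** 2
    return (sq_err / steps) ** 0.5

# prompt compensation keeps the perception closer to its reference
print(rms_error(lag=0) < rms_error(lag=10))
```

With the same disturbance path (fixed seed), the lagged controller lets more of the disturbance leak into the perception, which is the sense in which the two contexts behave differently.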

Sincerely,

Jeff

[From Rick Marken (980302.2145)]

Jeff Vancouver (980302.1535 EST) --

I think that much of the problem we all are having (here is another
proposal for you to shoot down) is that in a simultaneous, continuous
context, d and o are interactive (o is compensating for d as d occurs),
but in the contexts I tend to study, it is additive (o gets around to
compensating for d eventually, maybe).

Always happy to oblige a shoot down request;-)

I know that when you are not wearing PCT glasses it _looks like_ you are
dealing with a cause-effect situation in your experiments; you present a
"stimulus" (d) and then a "response" (o) occurs (maybe). It would be
nice (for you) if this were really what was going on but, alas, it is
not.

If your subjects are control systems then they are always doing
_something_ to influence controlled variables. When you introduce
the disturbance it may seem _to you_ like the subject was doing
_nothing_ before then but the subject was really just _not_ doing
what they do (if anything) to compensate for the disturbance that
was applied. For example, the disturbance you apply may be the statement
"You are a paranoid schizophrenic". The response, which could follow
after some lag period, may be "No I'm not". It seems to you like the
output went from _no output_ to the statement "No I'm not". But _no
output_ WAS an output -- of level 0. It was exactly the "correct"
output to keep the controlled perception (of one's mental health, say)
in the reference state. The controlled perception is always o + d,
even if o happens to be zero at the time a disturbance is applied.

The same thing happens with thermostatic temperature control. While the
temperature at the sensor is at the reference level the heater is _off_;
it seems like there is _no output_. But _no output_ IS AN OUTPUT; it's
the zero level of output that keeps the temperature at the reference
when the ambient temperature (disturbance) is warm. When a cold blast of
air (a change in the disturbance variable) hits the sensor the heater
will "respond" (change from zero output to "heater on"); it _looks like_
the disturbance (cold air) causes the heater to go on. It is an
illusion!! It's not what actually happened; the thermostat was just
varying its output (o) to keep its perception (a function of d and o AT
ALL TIMES) at the reference level.
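A bang-bang thermostat sketch makes the point concrete (the reference, `heat_power`, and the temperature series are made-up numbers for illustration):

```python
def heater_outputs(ambient_series, reference=20.0, heat_power=5.0):
    """Bang-bang thermostat: the output is a function of the error at
    every instant -- including the zero output emitted while the
    ambient temperature alone keeps the perception at the reference."""
    outputs = []
    for ambient in ambient_series:
        o = heat_power if ambient < reference else 0.0
        outputs.append(o)
        # the controlled perception is ambient + o at ALL times,
        # even when o == 0
    return outputs

# a warm spell, then a cold blast, then warmth again
print(heater_outputs([22, 22, 21, 10, 12, 22]))
# [0.0, 0.0, 0.0, 5.0, 5.0, 0.0] -- "no output" was output at level 0
```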

I know that the cause-effect illusion is _very_ compelling. Indeed, it
has kept psychology in its grasp for over a century. It still has Bruce
Abbott and Martin Taylor firmly in its grip. So strap yourself to
the mast, there, Jeff. If you can get past the siren song of the
cause-effect illusion you'll finally make it home to Ithaca -- er, PCT.

Best

Rick


--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

[From Bill Powers (980303.0234 MST)]

Jeff Vancouver 980232.11:45 EST--

Why you set it at that level? I do not know. Probably due to the nature
of most of the problems you have attempted to tackle.

Your usually thorough thinking is amazingly incomplete in this description.
First, you have dichotomized statements of fact as true or false and
scaled them on the probability of being true. Yet, in the first statement
the word "nearly" is present, which makes it an ambiguous statement that
may be true or false depending on how someone resolves the ambiguity.

I intended only to indicate the nature of the problem, not to provide a
rigorous example. The word "nearly" in the first statement (The earth is in
a nearly circular orbit around the sun) could be quantified; the
eccentricity of the earth's orbit is 0.0167, which is "nearly circular,"
meaning that the difference between summer and winter can't be explained by
the difference in insolation (aside from the facts that the southern
hemisphere experiences winter while summer is going on here, and perihelion
occurs during northern winter). In fact, the chance that the orbit is
actually eccentric enough to account for the difference between summer and
winter temperatures is vanishingly small. The error bars on the measurement
of orbital eccentricity are probably less than 0.005 of the measured value.
The probability of falsity of the underlying factual statement is probably
at most on the order of 10^-10.

Second, the fourth statement refers to a variable that might be measured
to assess the validity of the model being hypothesized. So if one were to
measure the temperature of the earth (in some location) and the angle of
the sun in the sky (at some time of day, say its zenith), the correlation
that would result after a couple of years would likely not be .98 (I do
not know what it would be), because many local phenomena (e.g., El Niño)
go into determining temperature. That correlation is not the same thing
as the probability that the statement is true.

No, but the correlation is related to the probability that any
generalization about temperature will be true in a given instance or for a
given prediction. In the case you mention, the correlation of midday
temperature with time of year at a northern location would probably be
0.999+ over the past decade or so, and higher over the history of
record-keeping. "The mean temperature in July is higher than the mean
temperature in January at 45 degrees north latitude" would probably be true
with p < 0.000001. That is one way to express truth-probability: the
probability that the apparent relationship occurred by chance rather than
for the stated reason, which is more or less 1 - probability that the stated
reason is correct (that is, not incorrect).

Finally, the measurement of the thing (i.e., temperature) can add error.
Given that latent variables (i.e., controlled perceptions) are difficult
to measure, this problem is greater in some areas of psychology than in
some areas of physics.

Right, and that's one of the things that contributes to the degree of truth
of any supposedly factual statement drawn from empirical data.

So that's my reason for not considering truth-probabilities of less than
0.95 to be of much use in science.

So, correlations are not truth-probabilities.

No, and I didn't say they were. However, when you compute confidence levels
you're essentially computing truth-probability. If you decide that a
published conclusion must have p <= 0.05, you're deciding to accept
truth-probabilities of at least 0.95, meaning that every 20th published
"finding" with p = 0.05 will be due to chance fluctuations and that the
results will not be due to the stated hypothesis.

In our tracking experiments, when we calculate a correlation of predicted
with measured handle positions of 0.996, the RMS errors (approximately, the
standard deviations) between predicted and actual position are around 3
percent of the peak handle excursion. The measurement error is around 1/4
percent (one part in 256). This means that the measurement error is about
1/12 of the prediction error. The statement "the output is equal and
opposite to the disturbance" has, I estimate, a p of less than 10^-8 of
being false, and a prediction that this relationship will be observed with
the next participant tested will have about that same chance of being
incorrect.
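The relation between a 0.996-level correlation and a roughly 3% RMS error can be reproduced on synthetic data. This is only a sketch: the sine-wave "handle track" and the 3% noise level are invented stand-ins for real tracking data.

```python
import math
import random

def corr_and_relative_rms(actual, predicted):
    """Correlation between two series, plus RMS prediction error as a
    fraction of the peak excursion of the actual series."""
    n = len(actual)
    ma, mp = sum(actual) / n, sum(predicted) / n
    cov = sum((a - ma) * (b - mp) for a, b in zip(actual, predicted))
    va = sum((a - ma) ** 2 for a in actual)
    vb = sum((b - mp) ** 2 for b in predicted)
    r = cov / math.sqrt(va * vb)
    rms = math.sqrt(sum((a - b) ** 2 for a, b in zip(actual, predicted)) / n)
    return r, rms / max(abs(a) for a in actual)

random.seed(0)
actual = [math.sin(0.01 * t) for t in range(1000)]        # "handle" track
predicted = [a + random.gauss(0, 0.03) for a in actual]   # model + ~3% error
r, rel_rms = corr_and_relative_rms(actual, predicted)
print(r > 0.99, rel_rms < 0.05)  # prints: True True
```

The point carries over: a prediction error of a few percent of the peak excursion corresponds to a correlation well above the conventional thresholds.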

So it is possible to make statements of very high probability of truth
about some aspects of human behavior.

....
Rick:

Sorry, Bill. But Martin is right. Just because p is a function
of d and o doesn't mean it's not a function of d. p is a
function of d and o is a function of p. It's cause-effect
right around the causal loop.

You:

This is what amazes me most. Causality has many meanings. One is the "is
a function of" meaning. Here Rick seems to admit that p = f(d) and p =
f(d, o) can both be correct. Yet, he cannot acknowledge that d is a cause
(not "the" cause, just "a" cause) of p.

As Rick pointed out, his statement that Martin is right was bait, intended
to provoke a denial from me and thus reveal a variable I was controlling. I
wasn't hungry. And Rick didn't believe the statement he used as a disturbance.

Best,

Bill P.

[from Jeff Vancouver 980303.0835 EST]

[From Bill Powers (980303.0234 MST)]

So, correlations are not truth-probabilities.

No, and I didn't say they were. However, when you compute confidence levels
you're essentially computing truth-probability. If you decide that a
published conclusion must have p <= 0.05, you're deciding to accept
truth-probabilities of at least 0.95, meaning that every 20th published
"finding" with p = 0.05 will be due to chance fluctuations and that the
results will not be due to the stated hypothesis.

No, this is not what confidence levels are about. They refer to the degree
to which the statistic derived from this sample might not represent the
true relationship. Usually, we are interested in making sure that zero is
not inside the confidence interval so that we might talk about the
probability that the finding was not due to chance. One could, and this
is the advantage of confidence intervals over "significance," talk about
the probability that one's finding included a perfect relationship (i.e.,
r = 1). It is the relative lack of measurement error in your example
below that allows for such a tight confidence interval (and high r).

In our tracking experiments, when we calculate a correlation of predicted
with measured handle positions of 0.996, the RMS errors (approximately, the
standard deviations) between predicted and actual position are around 3
percent of the peak handle excursion. The measurement error is around 1/4
percent (one part in 256). This means that the measurement error is about
1/12 of the prediction error. The statement "the output is equal and
opposite to the disturbance" has, I estimate, a p of less than 10^-8 of
being false, and a prediction that this relationship will be observed with
the next participant tested will have about that same chance of being
incorrect.

This last statement I find odd coming from you. I think what you meant to
say was that the next time that participant was tested, all else being
equal (i.e., that they were controlling the same variable), the prediction
would hold. Your statement assumes between-subjects consistency, which I
might assume given reasonable arguments, but you tend not to.

So it is possible to make statements of very high probability of truth
about some aspects of human behavior.

As Rick pointed out, his statement that Martin is right was bait, intended
to provoke a denial from me and thus reveal a variable I was controlling. I
wasn't hungry. And Rick didn't believe the statement he used as a

disturbance.

Yes, he corrected me. But how exactly would it reveal a variable you were
controlling?

Sincerely,

Jeff

[From Bruce Gregory (980303.1122 EST)]

[Rick Marken (980302.2145)]

Nice post.

The same thing happens with thermostatic temperature control. While the
temperature at the sensor is at the reference level the heater is _off_;
it seems like there is _no output_. But _no output_ IS AN OUTPUT; it's
the zero level of output that keeps the temperature at the reference
when the ambient temperature (disturbance) is warm. When a cold blast of
air (a change in the disturbance variable) hits the sensor the heater
will "respond" (change from zero output to "heater on"); it _looks like_
the disturbance (cold air) causes the heater to go on. It is an
illusion!! It's not what actually happened; the thermostat was just
varying its output (o) to keep its perception (a function of d and o AT
ALL TIMES) at the reference level.

I tried to make this point in an earlier post on the logic of a
thermostat. The thermostat is constantly "testing the waters" and
produces a logical output from the result of each test. Much of the time
this logical output is "leave things the way they are." (In New England
this is the logical output for periods of up to four months of the
year.)

Bruce

[from Jeff Vancouver 980303.1400 EST]

[From Rick Marken (980302.2145)]

If your subjects are control systems then they are always doing
_something_ to influence controlled variables. When you introduce
the disturbance it may seem _to you_ like the subject was doing
_nothing_ before then but the subject was really just _not_ doing
what they do (if anything) to compensate for the disturbance that
was applied. For example, the disturbance you apply may be the statement
"You are a paranoid schizophrenic". The response, which could follow
after some lag period, may be "No I'm not". It seems to you like the
output went from _no output_ to the statement "No I'm not". But _no
output_ WAS an output -- of level 0.

It does not seem to me that the output went from _no output_ to output.
The issue is variance. _Nothing_ is missing data. It is of no use.

So does the response "No I'm not" after some lag from the comment "You are
a paranoid schizophrenic" tell you something about the perception the focal
person is controlling?

Sincerely,

Jeff

[From Bill Powers (980303.1451 MST)]

Jeff Vancouver 980303.0835 EST--

No, this is not what confidence levels are about. They refer to the degree
to which the statistic derived from this sample might not represent the
true relationship. Usually, we are interested in making sure that zero is
not inside the confidence interval so that we might talk about the
probability that the finding was not due to chance. One could, and this
is the advantage of confidence intervals over "significance," talk about
the probability that one's finding included a perfect relationship (i.e.,
r = 1). It is the relative lack of measurement error in your example
below that allows for such a tight confidence interval (and high r).

While your words are different from mine, I'm not sure your meanings are
any different. I will defer to experts on statistics, not being one myself.
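For reference, the textbook confidence interval for a correlation uses the Fisher z-transformation; here is a sketch (the sample size `n=1800` is a made-up figure, roughly one minute of tracking data at 30 samples per second):

```python
import math

def r_confidence_interval(r, n, z_crit=1.96):
    """Approximate 95% confidence interval for a correlation
    coefficient via the Fisher z-transformation."""
    z = math.atanh(r)               # transform r to an ~normal scale
    se = 1.0 / math.sqrt(n - 3)     # standard error of the z statistic
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)  # back-transform to the r scale

lo, hi = r_confidence_interval(0.996, n=1800)
# even this tight interval excludes a perfect relationship (r = 1)
print(lo > 0.99, hi < 1.0)  # prints: True True
```

This is the sense in which one can ask, as Jeff suggests, whether the interval includes r = 1 rather than whether it excludes zero.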

The statement "the output is equal and
opposite to the disturbance" has, I estimate, a p of less than 10^-8 of
being false, and a prediction that this relationship will be observed with
the next participant tested will have about that same chance of being
incorrect.

This last statement I find odd coming from you. I think what you meant to
say was that the next time that participant was tested, all else being
equal (i.e., that they were controlling the same variable), the prediction
would hold. Your statement assumes between-subjects consistency, which I
might assume given reasonable arguments, but you tend not to.

Actually I am talking about between-subjects consistency; the statement in
question appears to be true of everyone who can do the tracking experiment,
or any other control experiment we have instrumented, within fairly narrow
definitions of "equal and opposite."

Best,

Bill P.

[Martin Taylor 980305 20:58]

Rick Marken (980302.2145)

If your subjects are control systems then they are always doing
_something_ to influence controlled variables. When you introduce
the disturbance it may seem _to you_ like the subject was doing
_nothing_ before then but the subject was really just _not_ doing
what they do (if anything) to compensate for the disturbance that
was applied. For example, the disturbance you apply may be the statement
"You are a paranoid schizophrenic". The response, which could follow
after some lag period, may be "No I'm not". It seems to you like the
output went from _no output_ to the statement "No I'm not". But _no
output_ WAS an output -- of level 0. It was exactly the "correct"
output to keep the controlled perception (of one's mental health, say)
in the reference state. The controlled perception is always o + d,
even if o happens to be zero at the time a disturbance is applied.
...
I know that the cause-effect illusion is _very_ compelling. Indeed, it
has kept psychology in its grasp for over a century. It still has Bruce
Abbott and Martin Taylor firmly in its grip.

I'm sure you wouldn't be surprised, but I was, by the difficulty I had
in getting the editor of a book in which I have a chapter to allow me to
make exactly this argument. He simply couldn't accept that an output of
zero WAS an effective output in controlling the relevant perception.

So I think that if you want to continue to keep your perception of me as
being in the grip of the cause-effect illusion against which I have fought
for 40 years, you'll have to think of some other criterion.

Martin

[Martin Taylor 980305 20:45]

[Rick Marken (980302.1200)]

Me:

Sorry, Bill. But Martin is right. Just because p is a function
of d and o doesn't mean it's not a function of d. p is a
function of d and o is a function of p. It's cause-effect
right around the causal loop.

Jeff Vancouver (980232.11:45 EST) --

Here Rick seems to admit that p = f(d) and p = f(d, o) can both be
correct. Yet, he cannot acknowledge that d is a cause (not "the"
cause, just "a" cause) of p.

It was a joke, Jeff. I said this as part of my efforts to show
that Bill and I are controlling for the same thing. Bill knew I was
contriving a response and refused to play along. But just so you
understand, here's what I really meant to say: Martin is wrong.
p is a function of o and d, not d alone. o is _not_ a function
of d; external events do not cause behavior. Lineal cause-effect
does not apply to the behavior of a control loop. Therefore,
conventional approaches to studying behavior, which are based on
a lineal causal model, are inappropriate in the study of living
control systems.

I assume that this paragraph is also a joke, since it is only in the
open loop system

    o --->---\
              qi---->--PIF--->p
    d---->---/

that p is a function of o and d. When the control loop is functioning
as a loop, p is a function of d and _r_.

Why are you using a lineal cause-effect system to argue that lineal
cause-effect does not apply?
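Martin's closed-loop point is easy to check numerically. In the minimal sketch below (an integrating controller with made-up gain and noise figures), the perception stays pinned near the reference r while the disturbance d wanders far from it:

```python
import random

def closed_loop(steps=5000, gain=0.5, seed=2):
    """One-step loop: p = o + d, and o integrates the error r - p.
    With the loop closed, p is determined mainly by r, not by d."""
    random.seed(seed)
    o, d, r = 0.0, 0.0, 1.0
    ps, ds = [], []
    for _ in range(steps):
        d += random.gauss(0, 0.05)   # random-walk disturbance
        p = o + d                    # controlled perception
        o += gain * (r - p)          # output opposes the error
        ps.append(p)
        ds.append(d)
    return ps, ds

ps, ds = closed_loop()
worst_tracking = max(abs(p - 1.0) for p in ps[100:])   # skip transient
disturbance_range = max(ds) - min(ds)
print(worst_tracking < disturbance_range)  # prints: True
```

The simulation says nothing about which functional notation is "right"; it just shows the behavior both sides agree on: p hugs r while o mirrors d.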

Martin