[From Rick Marken (950120.0900)]
Martin Taylor (950116 11:00) --
If one believes in PCT, it is only variations in the model that can show why
the results are not like those in equivalent experiments that have been done
before.
I'll go with Bill Powers' (950119.2210 MST) answer to this:
It is ludicrous to think of trying to fit a model to this kind of data.
The PCT model we are using is not magical. It fits control behavior very
well, but not random flailings about, which is what we see in the bulk
of the experimental data. Face it: the experiment was badly designed and
badly done.
I said:
One thing you might do (and perhaps we should always do this when we fit a
model to data) is estimate the proportion of the variance in the data that is
_predictable_ by the model.
Martin replies:
I have a variant on that...
If the model and the subject are quite independent and different control
systems, then the mean square (MS) deviation between the model and the
subject should be the sum of the two tracking MS errors.
WHAT?!?!
But if the model control system is like that of the subject, then the MS
deviation between them should be much less than the sum.
The measure I am using is intended to be 1 minus the ratio of the
MS (model-real deviation) to the sum of the MS (model-disturbance)
and MS (real-disturbance) tracking errors. A value of 0.0 means that the
model is worthless in describing the data, and a value of 1.0 means that the
model describes the data perfectly.
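Just so we're computing the same thing: here is that measure as I read it, in
a little Python sketch. The function and argument names are mine, and I'm
guessing that by "tracking error" you mean the deviation series between each
output and the disturbance; correct me if that's not what you intend.

import numpy as np

def ms(x):
    # Mean square of a deviation series.
    return np.mean(np.asarray(x, dtype=float) ** 2)

def taylor_fit(model_real_dev, model_dist_err, real_dist_err):
    # 1 - MS(model-real) / [MS(model-dist) + MS(real-dist)].
    # By Martin's description, 0.0 means the model is worthless
    # and 1.0 means it describes the data perfectly.
    return 1.0 - ms(model_real_dev) / (ms(model_dist_err) + ms(real_dist_err))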
Martin, you'll have to give me a little more detail on your derivation to
convince me that this MS-based measure of model performance has anything at
all to do with measuring anything sensible, let alone PREDICTABLE variance in
a performance variable.
The nice thing about Vp (besides the fact that it actually measures
predictable variance) is that it measures predictable variance in the same
way that we measure the variance actually predicted by the model (it's really
the variance _accounted for_ by the model, unless we run the model first).
If Vp = .98 then we know that the BEST we can expect a model to do is account
for 98% of the variance in the variable (a maximum correlation between model
and data of sqrt(.98)). Vp would let us know how well we can expect the model
to do regardless of how well the subject is actually controlling. If, for
example, subjects are controlling poorly because they have not yet learned
how to control (as was the case in your sleep experiment), then Vp should
be quite low, meaning there is not much variance for ANY model to predict.
If, however, the subjects can control but are controlling poorly because they
are dealing with a difficult disturbance then Vp should be quite high,
meaning that there is lots of variance for a model to predict.
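In code, the variance-accounted-for side of the comparison is just this (a
Python sketch; the names are mine, and how one estimates the predictable
component of the data to get Vp itself is the part left open here):

import numpy as np

def variance_accounted_for(data, model):
    # Proportion of the variance in the data accounted for by the
    # model: 1 - Var(data - model) / Var(data).
    data = np.asarray(data, dtype=float)
    model = np.asarray(model, dtype=float)
    return 1.0 - np.var(data - model) / np.var(data)

# If Vp = .98, the best any model can do against these data is a
# correlation of sqrt(.98), about .99.
print(np.sqrt(0.98))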
I think I'll set up some experiments to show how this works.
Bruce Abbott (950117.0930 EST) --
When the data set contains only cursor and handle positions, (i.e.,
lacks the disturbance tables), the disturbance values can be calculated
exactly only if the reference values are known.
Me:
This is not true.
Bruce Abbott (950119.1515 EST) --
Perhaps I should have said "target" instead of "reference".
Nope. Still doesn't matter. The disturbance values CAN be calculated exactly
if we know only c and h: in the standard compensatory task c = h + d, so
d = c - h (unless, of course, c = d + h + t, in which case we would have to
know t as well, as you note).
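In other words (a minimal sketch, assuming the standard compensatory
relation c = h + d):

import numpy as np

def recover_disturbance(c, h, t=None):
    # d = c - h when c = h + d; if instead c = d + h + t, the
    # target trace t must be supplied as well.
    d = np.asarray(c, dtype=float) - np.asarray(h, dtype=float)
    return d if t is None else d - np.asarray(t, dtype=float)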
I took a look at step-wise regression on the data comprising every fifth
observation using Minitab's default criterion for excluding variables; the
result was that only C1 and C2 remained.
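(For anyone who wants to try the same kind of analysis without Minitab,
here's a rough Python stand-in. Forward selection on an R^2-gain threshold is
my substitute for Minitab's default F-to-enter criterion, which this does not
reproduce, and I don't know what Bruce's columns C1 and C2 actually contained.)

import numpy as np

def forward_stepwise(y, X, names, min_r2_gain=0.01):
    # Crude forward selection: repeatedly add the predictor that most
    # improves R^2, stopping when no candidate clears min_r2_gain.
    y = np.asarray(y, dtype=float)

    def r2(cols):
        # R^2 of y regressed on an intercept plus the selected columns.
        A = np.column_stack([np.ones(len(y))] + cols)
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        return 1.0 - (y - A @ beta).var() / y.var()

    chosen, cols, best = [], [], 0.0
    while len(chosen) < len(names):
        gains = [(r2(cols + [X[:, j]]) - best, j)
                 for j in range(X.shape[1]) if names[j] not in chosen]
        gain, j = max(gains)
        if gain < min_r2_gain:
            break
        chosen.append(names[j])
        cols.append(X[:, j])
        best += gain
    return chosen, best

# e.g., on every fifth observation, as in Bruce's analysis:
# chosen, r2 = forward_stepwise(y[::5], X[::5], ["C1", "C2", "C3"])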
Well, to quote Tom Bourbon's (950118.1709) friend "Fred":
"You know, of course, what this implies about the experimental and
statistical methods we use in psychology."
I'm really looking forward to hearing what your colleagues have to say about
the results of this simple tracking task. Indeed, I'd like to hear what you have
to say about it. You've done some mighty high-powered analyses of this data;
what do you think the results of these analyses imply about the experimental
and statistical methods used in psychology?
Best
Rick