
[From Bill Powers (980327.1053 MST)]

Martin Taylor (various) --

OK, I changed the program to include a printout of sigma(qi)/sigma(d),
which is available from the correlation program after each run. Here are
the results from another set of three runs with the disturbance set at very
fast, medium, and slow:
                         Disturbance
              Very Fast     Medium       Slow

d vs qo          -0.134     -0.875     -0.985
d vs qi           0.516      0.130      0.012
qi vs qo          0.764      0.363      0.159
sigma (dist)    121.822    121.137    120.006
sigma (qi)       78.767     67.298     22.711
CR                0.647      0.556      0.189

The correlations seem to agree more or less with those from the first set
of runs. As you can see, the "control ratio", sigma(qi)/sigma(dist), has
nothing to do with any of the correlations.

So now if you want to rescue your prediction (assuming that my data and/or
analysis is biased), you're STILL going to have to run the experiment
yourself.

Best,

Bill P.

[Martin Taylor 980327 17:15]

Bill Powers (980327.1053 MST) --

Martin Taylor (various) --

OK, I changed the program to include a printout of sigma(qi)/sigma(d),
which is available from the correlation program after each run. Here are
the results from another set of three runs with the disturbance set at very
fast, medium, and slow:
                         Disturbance
              Very Fast     Medium       Slow

d vs qo          -0.134     -0.875     -0.985
d vs qi           0.516      0.130      0.012
qi vs qo          0.764      0.363      0.159
sigma (dist)    121.822    121.137    120.006
sigma (qi)       78.767     67.298     22.711
CR                0.647      0.556      0.189

The correlations seem to agree more or less with those from the first set
of runs. As you can see, the "control ratio", sigma(qi)/sigma(dist), has
nothing to do with any of the correlations.

Excellent! Thank you for doing this. If these data are typical, it does
look as if noise is the major reason for the low correlation between
d and qi.

So now if you want to rescue your prediction (assuming that my data and/or
analysis is biased), you're STILL going to have to run the experiment
yourself.

I tried, using Rick's Demo 1 (Nature of control). It's very jumpy on my
machine, but I got control ratios (Rick calls it "Stability") averaging
6.6 (range 4.7 to 8.3) over 10 runs, giving 1/CR (the quantity you
plotted) = 0.152. This means I could control under those somewhat
difficult conditions rather better than you could control using your
local program. The d vs qi correlation I got averaged 0.094 (about 62%
of 1/CR), but over the 10 runs it varied from -0.025 to 0.338, so that
estimate isn't very stable.

I have been doing as you suggested. I've taken to my home machine the
source code for the sleep study model fitting, and half the data (71
Mbytes so far--and I can't afford much more disk space), and will test
out those data. If the results look like yours, I think we should be
able to develop estimates of the amount of internal noise in the control
those poor people were doing. It would be interesting if we could, and
if the noise level turned out to be related to their sleepiness or to
the drug.

Given what you said earlier about the human being better fitted by a model
with a leaky integrator, do you think I should modify the model we used
in the sleep fits to change it from a perfect integrator output function
to a leaky one with the leak rate as a fitting parameter? You never
suggested this modification before, when we were looking for ways to
get better fits, but I gather from your messages in this thread that
nowadays you do use the leak as a parameter. Should I? How much improvement
do you find that it gives?

As a side effect of redoing these fits, I ought to be able to determine
the d vs qi correlation both for the model (with a perfect integrator
output function) and for the human. Adding noise into the best fit model
then should be able to reproduce the d vs qi correlation that is found
in the experiments. Since I know of no obvious way to make a prediction
of the d vs qi correlation in a loop with transport lag (such as we always
have when fitting human data), the simulation seems to be the only way
to do it. It may be, however, that the correlation data are not stable
enough to allow good fitting.

I'll be out of town Sunday through Friday, and I haven't yet got past
March 19 in the existing backlog of messages. Also I have a lot of
preparation to do for the meetings I will be in, so don't expect these
results any time soon--I still have to learn how to use CodeWarrior to
write C code on the Mac!

Martin

[From Bill Powers (980327.1721 MST)]

Martin Taylor 980327 17:15 --

Excellent! Thank you for doing this. If these data are typical, it does
look as if noise is the major reason for the low correlation between
d and qi.

I tried, using Rick's Demo 1 (Nature of control). It's very jumpy on my
machine, but I got control ratios (Rick calls it "Stability") averaging
6.6 (range 4.7 to 8.3) over 10 runs, giving 1/CR (the quantity you
plotted) = 0.152.

The stability factor is 1 - variance(qi)/variance(qi*), where var(qi) is
the value with control present and var(qi*) is the value without it
present. So this is not the same as what you call the control ratio. A
stability factor of 5 would correspond to a 1/CR of about 1/sqrt(6) or 0.408.

This means I could control under those somewhat difficult conditions
rather better than you could control using your local program.

The disturbance in Rick's demo is quite slow, but the delay (using my
machine at least) is about 0.25 seconds, enough to be very distracting for me.

When you get your C program running, I recommend that you do a series of
practice runs until your performance levels out. The data mean little until
learning is complete. You can adjust the timing of the iteration loop by
using a dummy "for" statement: for (t = 1; t < 50000; ++t); for example.

I have been doing as you suggested. I've taken to my home machine the
source code for the sleep study model fitting, and half the data (71
Mbytes so far--and I can't afford much more disk space), and will test
out those data. If the results look like yours, I think we should be
able to develop estimates of the amount of internal noise in the control
those poor people were doing. It would be interesting if we could, and
if the noise level turned out to be related to their sleepiness or to
the drug.

As you know, I am critical of the quality of the sleep data. But how you
use it is up to you.

Given what you said earlier about the human being better fitted by a model
with a leaky integrator, do you think I should modify the model we used
in the sleep fits to change it from a perfect integrator output function
to a leaky one with the leak rate as a fitting parameter?

The leak rate will make a difference only for the very slowest
disturbances, slower than any we used in the sleep study. However, there
would be no harm in adding leak rate as a third parameter to be determined.
To get meaningful measures, however, you have to be very careful about the
uniformity of experimental conditions. The viewing distance should always
be the same; the grip on the mouse and the configuration of the arm should
be constant; the body should be supported in the same position on each run.
Adding a third parameter is what Phil Runkel calls "fine slicing." Your
data have to be uniform enough to support adding a new dimension of
variation to the model. If there are variations in the physical setup,
enough noise can be added to smear out the results of changing three
parameters to the point where the "best fit" range of values is so broad
that it's useless.
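The change under discussion amounts to one line in the model's output function. A sketch (the names gain, leak, and dt are illustrative fitting parameters, not identifiers from the actual sleep-fit code):

```c
/* One iteration of the model's output stage.
 * Perfect integrator:  o += gain * e * dt
 * Leaky integrator:    o += (gain * e - leak * o) * dt
 * With leak == 0 the two forms are identical, so adding leak as a
 * third parameter cannot worsen the best fit already obtained. */
double output_step(double o, double e, double gain, double leak, double dt)
{
    return o + (gain * e - leak * o) * dt;
}
```

With a nonzero leak the output settles toward gain * e / leak for a sustained error, which is why the leak only matters for disturbances slow enough to hold the error in one direction for a long time.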

...nowadays you do use the leak as a parameter. Should I? How much
improvement do you find that it gives?

I can't answer that quantitatively, but I can say that I never got any
noticeable improvement in the fit of the model to tracking data by
adjusting the leak rate. I tried it just to be able to say I tried it, for
the sake of completeness. Of course I haven't used very slow disturbances
-- when control is too easy, it's hard to tell the actual performance from
ideal performance, which means that the model isn't going to be very
sensitive to small parameter changes, nor will it reveal differences in
parameters between different participants. You may or may not find that
leak rate is a useful parameter.

...don't expect these
results any time soon--I still have to learn how to use CodeWarrior to
write C code on the Mac!

Side note to Rick -- have you looked into that program? I'd be willing to
convert some of my procedures and functions to C if that would help you
write your program in C for the Mac. We have always short-changed people
with Macs, mainly because I don't have one.

Best,

Bill P.

[From Rick Marken (980327.1800)]

Bill Powers (980327.1721 MST) --

The stability factor is 1 - variance(qi)/variance(qi*), where
var(qi) is the value with control present and var(qi*) is the
value without it present.

In my demos, the stability factor is sqrt([var(d) + var(o)]/var(qi)),
which is (by your definitions) sqrt(variance(qi*)/variance(qi)).
So the bigger the value of my stability measure, the better the
control (measured in standard deviations from 1.0, which is
no control at all).

Side note to Rick -- have you looked into that program? I'd be
willing to convert some of my procedures and functions to C if
that would help you write your program in C for the Mac. We have
always short-changed people with Macs, mainly because I don't
have one.

I get awfully nice results with the Java demos using my 100 MHz
PowerPC Mac; stability factors in the 18+ range. I bet most
Mac users who would care about even trying these demos are at
least at this level of Mac. People who want to stick with Mac
will surely start going to the G3 anyway. I've tried my demos
on the G3 and they're smooth as silk. I don't think it's really
worth the effort to convert stuff to C on the Mac because
I don't think there are enough people on primitive Macs who
would be interested in trying the demos. It just doesn't pencil
out for me. It's a lot of work, there's not much of an audience
on the kind of Mac that would make it worth doing and very few
people learn anything from these demos anyway. No, I think I'll
just continue to develop these futile demos in a medium I like --
the web.

Best

Rick


--

Richard S. Marken Phone or Fax: 310 474-0313

[From Bill Powers (980328.0507 MST)]

Rick Marken (980327.1800) --

In my demos, the stability factor is sqrt([var(d) + var(o)]/var(qi)),
which is (by your definitions) sqrt(variance(qi*)/variance(qi)).
So the bigger the value of my stability measure, the better the
control (measured in standard deviations from 1.0, which is
no control at all).

Right, sorry about that. I got it upside down.

Best,

Bill P.