[Martin Taylor 971203 16:40]
Rick Marken (971124.1100)
I've finally had a chance to play with Rick's spreadsheet, and have some
comments about the spreadsheet itself, as well as about the conclusions
he draws from it.
and in the neighbour column there are cells containing formulae
of the form "self + 0.5*X", where X equals the fixed value of the
left neighbour, so that the formula is really "self + 0.5*0".

These "neighbor column" cells are the cells that compute the
perceptual variable (CV) that is controlled by each subject (control
system). An example of the formula in one of these cells is:

=($J14*$A$24+$K14+$B$24+$L14+$C$24+$M14*$D$24+$N14*E$24)-B4
                 ^          ^
That is, indeed, the formula. I assume Rick "cut and pasted" it out
of the spreadsheet. But there's a problem with it (see next comment).
The environmental variables are weighted by coefficients from
what you describe as the "big matrix of 'perceptual weights'".
The coefficients are in columns J through N of the spreadsheet; each
subject's perceptions represent a weighting of the environmental
variables by coefficients from a different row of this matrix.
In the formula above, the environmental variables are weighted by
the coefficients in row 14 of the perceptual weights matrix
($J14*$A$24+$K14+$B$24...note the row indication associated with
columns J, K ...).
But shouldn't it read $K14*$B$24 and $L14*$C$24, rather than $K14+$B$24
and $L14+$C$24, if $K14 and $L14 are the perceptual weights for two of
the "fixed environmental variables"? I assumed this to be so, and made
these changes; all subsequent comments are based on spreadsheets
altered this way.
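For anyone who wants to check the correction outside Excel, here is a
sketch in Python (nothing from the spreadsheet itself) of what the
corrected cell computes; the weights, environmental values, and the B4
term are made-up stand-ins for row 14 of the weight matrix, cells
A24-E24, and cell B4:

    # Corrected CV: every environmental variable is *multiplied* by its
    # perceptual weight; the distributed formula added two of them instead.
    weights = [0.2, 0.5, 0.1, 0.3, 0.4]       # row 14 of columns J-N (made up)
    env     = [10.0, 20.0, 30.0, 40.0, 50.0]  # cells A24-E24 (made up)
    b4      = 5.0                             # whatever cell B4 holds (made up)

    # CV = ($J14*$A$24 + $K14*$B$24 + ... + $N14*$E$24) - B4
    cv = sum(w * e for w, e in zip(weights, env)) - b4
    print(cv)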
------------------
I think that the most interesting experiment you can do with the
spreadsheet right now (I plan to make improvements and eventually
write a paper about it) is type in new values for the fixed
environmental variables (cells A24 - E24) and see what happens
to the average results.
I did a different experiment first. It is normal practice in such IV-DV
experiments to assign subjects randomly to the experimental conditions.
So I did that, as well as I could, by hand. In the spreadsheet as
originally distributed, the subjects are assigned in a very specific
order, and it is this order of assignment that causes the IV-DV
correlation seen in the results.
I wanted to randomize the assignment of subjects, but I couldn't find a
way in Excel 4 to permute the rows of the perceptual weight matrix
(each row determines the characteristics of one subject, so permuting
the rows at random would reassign subjects to experimental conditions),
so I did several "hand-shuffles". (Note--if you want to do this, you have
to reset the values in the formulae in the shaded "CV" cells back to the
way they were, because Excel cleverly says "I know you don't want to do what
you seem to want to do, so I will change the references to track your moves").
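In a package that can permute rows directly--Excel 4 can't--the whole
hand-shuffle is one line. A sketch in Python, with made-up weight rows
standing in for the matrix:

    import random

    # Each row of the weight matrix defines one subject; shuffling the
    # rows reassigns subjects to experimental conditions at random.
    weight_rows = [
        [0.2, 0.5, 0.1, 0.3, 0.4],   # subject 1 (made-up weights)
        [0.6, 0.1, 0.4, 0.2, 0.3],   # subject 2
        [0.3, 0.3, 0.5, 0.1, 0.2],   # subject 3
    ]
    random.shuffle(weight_rows)      # one random assignment per "experiment"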
When I did that a few times, I found that the relationship between the
"IV" (10, 20, 30) and the DV disappeared. The average DV would sometimes
be highest in one column, sometimes in another. If this happened to be
a simulation of an ordinary behaviourist study, it would result in the
experimenter saying "there's NO effect of IV on DV here." Or, "There
is no significant relationship between the subjects' behaviour and the
experimental variable." Occasionally, of course, the "randomization"
doesn't work; 1, 2, 3, 4, 5 is a perfectly good "random" permutation
of the first 5 integers (except that I did it deliberately), but such
orderings don't happen very often. Randomization is important in group
studies, to avoid precisely the problem that occurred in Rick's
spreadsheet as it was originally distributed.
------------------
A change in the value at which the
environmental variables are held constant can change the average
results COMPLETELY. This shows dramatically that the group results
tell us NOTHING about the nature of the individuals in the group.
I guess that's right, at least. There's no relation between IV and DV,
either, so the "conventional" experimenter would come to the same
conclusion. And we (PCT-mavens) "know" that S-R studies tell us nothing
about the nature of the individuals when the S and the R are parts
of the input and output variables in a perceptual control loop.
Changing the _constant_ environment changes nothing about the
individuals but it gives a completely different picture of the
average behavior of the group.
Whoops! It changes nothing about the _internal processing_ of the
individuals, but it sure changes their output values. What you should
say is that stimulus-response measures tell you nothing about the
internal processing done by the individuals, because the outputs change
so much when the fixed (i.e., un-noticed by the "conventional" experimenter)
variables change. But we've said that in so many different ways on CSGnet
that a demo of this sort isn't going to change many minds.
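Still, the point is easy to check with a toy loop (a sketch under my
own assumptions, nothing taken from Rick's spreadsheet): hold everything
inside the "subject" fixed, change only the constant environment, and
watch the settled output move.

    # Toy proportional control loop. The subject perceives
    #   p = w . env_fixed + k * output
    # and drives p toward the reference r; w, k, r, and the gain
    # never change between runs.
    def settled_output(env_fixed, w=(0.2, 0.5, 0.3), k=1.0, r=10.0,
                       gain=50.0, steps=2000, dt=0.01):
        o = 0.0
        for _ in range(steps):
            p = sum(wi * ei for wi, ei in zip(w, env_fixed)) + k * o
            o += gain * (r - p) * dt      # integrate the error
        return o

    print(settled_output((1.0, 2.0, 3.0)))   # one setting of the "constants"
    print(settled_output((5.0, 0.0, 9.0)))   # another: same subject, new output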
If you take the group results as
an indication of what is true of the individuals, your conclusion
about those individuals will differ substantially depending on the
level at which "extraneous variables" were held constant in the
experiment.
We know that the individuals have not changed their internal performance
when the environment changes, whereas the output results have changed.
But the experimental results also show that the individuals vary
all over the lot. I haven't computed the standard deviations or the
"significance" of the differences across the columns, but by eye it
sure looks like the SD is several times the ordinary differences among
the means--hence, in conventional terms "no effect." And the "conventional"
experimenter would normally compute the SD.
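(The eyeball check is just means versus standard deviations, column by
column; a sketch, with invented numbers in place of the spreadsheet's:

    import statistics

    # One list of DV values per IV condition (10, 20, 30); made-up data.
    columns = {10: [3.1, -8.2, 12.0, 5.5, -1.7],
               20: [4.0, -6.5, 10.2, 7.1, -2.3],
               30: [2.8, -7.9, 11.4, 6.0, -0.9]}

    for iv, dv in columns.items():
        print(iv, statistics.mean(dv), statistics.stdev(dv))

If the SDs come out several times the spread among the means, the
conventional report is "no effect of IV on DV".)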
What it points up is that the stimulus-response relationship can change
considerably while the CEV does not change. This is, to me, the real
arrow in the heart of "conventional" research. But the spreadsheet does
not deal with the issue it purports to address, the validity of using
group data to estimate individual data. What stays constant within the
individual is not what the measurements measure. And any conventional
experimenter would look at the spread of individual data and say that the
standard deviation of the data was huge, making the mean useless as an
estimator of individual values even of what _was_ measured.
I think a spreadsheet along the lines Rick tried might be valuable as
a demo, but this one isn't it, because the control theorist and the
conventional experimenter would come to the same conclusions.
Martin