[Martin Taylor 2013.03.06.11.41]
[From Rick Marken (2013.03.05.1730)]
Martin Taylor (2013.03.05.14.04)
RM: Just because you write it down doesn't mean that it will
be understood by the listener in the way you intended.
True.
RM: I'm really trying to understand what you are intending to say.
I'll take that at face value. I hope I'm right to do so....
RM: So let's keep talking. You've done an interesting little experiment; let's see what it means, from a PCT perspective.
All right, let's make a bargain. You stop saying things like
“information analysis is an open-loop model of systems” and similar
comments that I have already corrected twice, and I'll try to make what
I say simpler to understand. I'm not saying that my corrections will
themselves be correct, but that if you want to keep making those
comments, we should first come to an understanding of why we hold
contradictory views. Simple repetition of “'Tis so”, “'Tis not” does
not advance anything much, and may alienate other readers.
Let's start with a description of what the "interesting little
experiment" was and was not, and what I hope may eventually emerge.
I started by wanting to provide illustrative demos for my long
tutorial
[Martin Taylor 2013.01.01.12.40] and for the projected Part 2 of the
tutorial. I was going to use LiveCode, for which Allan Randall has
been developing an Object-Oriented extension. The reason for
LiveCode was its platform-independence.
One of the LiveCode demos was to be a demonstration of the fact that
the less accurately you can see where something is, the less
accurately you will be able to control the perception of its
location. That seems to me to be self-evident. If it were not true,
why would watchmakers and jewellers use a magnifying loupe when
doing delicate work? But it seemed to be a bone of contention for
some reason I didn’t and don’t understand, so I thought a demo would
be useful.
Since we can't get at the perception to see how well it is
controlled, we use the pseudo-control of the CEV (the Complex
Environmental Variable) as a surrogate, even though we don't know that
what we think of as the CEV is actually the environmental correlate of
the controlled perception.
The first thing I thought of that would be easy to program and would
demonstrate the point was the ellipse pursuit-tracking task. Before
I actually programmed it in LiveCode (which I had done in a
different version a few months ago, but wanted to reprogram using
OOP methods), I wrote about what I hoped to do on CSGnet, or maybe
it was on some other group. Anyway, Adam Matic sent me a small demo
of the effect, written in Processing. His demo used the angle you
call alpha, and the visual was a tilted line. His resolution limit
was not in the screen-to-eye-to-perception pathway, but in the
ability to change alpha. He limited that by changing the number of
decimal places used in setting it. When analyzing the loop as a
whole, the effect is likely to be much the same as with uncertain
perception, but it wasn’t what I really wanted at that point.
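For concreteness, something along these lines would produce that kind of output-resolution limit. This is purely my reconstruction for illustration, not Adam's actual code; the variable names, the quantization function, and the update rule are my own assumptions.

```processing
// Illustrative reconstruction only: the participant's output (alpha) is
// rounded to a fixed number of decimal places before it is used to draw
// the tilted line, so the resolution limit sits in the output path rather
// than in the screen-to-eye-to-perception path.
int decimals = 2;                      // resolution limit on alpha

float quantize(float alpha) {
  float scale = pow(10, decimals);
  return round(alpha * scale) / scale; // e.g. 0.12345 -> 0.12 when decimals = 2
}

float alpha = 0;                       // tilt of the displayed line, in radians

void setup() {
  size(400, 400);
}

void draw() {
  background(255);
  alpha += (mouseY - pmouseY) * 0.001; // the control action changes alpha...
  float a = quantize(alpha);           // ...but only a quantized version is displayed
  translate(width/2, height/2);
  rotate(a);
  line(-150, 0, 150, 0);
}
```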
Adam's little program was very neat and easy to understand, so I
decided to try to learn to program in Processing, which is also
platform-independent and has the advantage of being free and open
source. (LiveCode will be the same soon; the promise is by the end
of March). It seemed reasonable to me to use the ellipse-height
experiment as a context, since I had already made the prediction of
how it would turn out and since I had written something similar in
LiveCode. So that’s what I did, trying out different elements of the
Processing language one at a time. The reason there were 100 or so
different trials in the dataset I posted was that each series of
trials represents some new thing I was learning about programming in
Processing, a lot of them spent figuring out how to get the
results filed under the name I wanted and in the folder I wanted.
So now to what was actually programmed. At the time I sent the
message with the graph, I had a rudimentary tracking task programmed,
with the disturbance generated from Perlin noise. I could modify the
separation of the ellipses, store the frame-by-frame locations of the
target and cursor, compute the error on the fly and store that too, all
in a comma-separated file suitable for reading by a spreadsheet, and
save the data in a system-default folder. That's all.
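A minimal sketch of that kind of setup in Processing might look like the following. This is not the actual program, just an illustration of the structure; the file name, variable names, and parameter values are assumptions.

```processing
// Minimal pursuit-tracking skeleton: the target ellipse is driven by a
// Perlin-noise disturbance, the cursor ellipse follows the mouse, and the
// per-frame positions and error are written to a comma-separated file.
float separation = 100;    // horizontal distance between the two ellipses (illustrative)
float distRate   = 0.01;   // disturbance rate: Perlin-noise step per frame (illustrative)
float t = 0;               // Perlin-noise time parameter
PrintWriter out;

void setup() {
  size(800, 600);
  frameRate(60);
  out = createWriter("track_sep" + int(separation) + ".csv");  // illustrative file name
  out.println("frame,target,cursor,error");
}

void draw() {
  background(255);
  t += distRate;
  float target = map(noise(t), 0, 1, 100, height - 100);  // disturbed target height
  float cursor = mouseY;                                   // participant's control action
  ellipse(width/2 - separation/2, target, 20, 20);
  ellipse(width/2 + separation/2, cursor, 20, 20);
  out.println(frameCount + "," + target + "," + cursor + "," + (cursor - target));
}

void keyPressed() {        // end the run and flush the data file
  out.flush();
  out.close();
  exit();
}
```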
The track had two phases: a 4096-frame pursuit-tracking phase at 60
frames/sec that could be run at different disturbance rates, followed
by a 600-frame compensatory-tracking phase with a very slow, fixed
disturbance rate. I used the RMS variation of the cursor in this
last part of the run as an easy surrogate for a measure of the
perceptibility of the height difference (though as you pointed out,
it was also a surrogate for the perceptibility of variation in angle
alpha). I used that method both because it was convenient and
because, even if it didn’t give a true numerical answer for the jnd,
it would vary proportionately to the jnd (or the difference that
gives d’ = 1).
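To be concrete about the surrogate measure: it is just the root-mean-square deviation of the cursor over the compensatory frames. A minimal sketch, assuming data logged as in the illustration above (the names are again my own):

```processing
// Root-mean-square cursor variation over the compensatory-tracking frames,
// used as a surrogate for the jnd of the controlled perception.
float rmsOverPhase(float[] cursor, int start, int count) {
  float mean = 0;
  for (int i = start; i < start + count; i++) mean += cursor[i];
  mean /= count;
  float sumSq = 0;
  for (int i = start; i < start + count; i++) sumSq += sq(cursor[i] - mean);
  return sqrt(sumSq / count);
}

// e.g., with 4096 pursuit frames followed by 600 compensatory frames:
// float jndSurrogate = rmsOverPhase(cursorSamples, 4096, 600);
```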
Originally, I didn't even bother with this part of the run, because
all I needed was the intuitively obvious fact that the further apart
the target and cursor, the harder it would be to perceive which was
higher. But I’m glad I did program the compensatory part of the run,
because it showed that the size of the jnd surrogate varies linearly
with separation and has an intercept of 1 pixel, which is as fine as
the setup allows.
When I got to this point, the discussion on CSGnet seemed as though
it might benefit from a concrete example, so I ran myself under a
series of conditions that were identical except for the ellipse
separation, with three trials per separation, and posted a graph of
the measures of control performance, which showed the previously
predicted decline with increasing separation.
You pointed out that the subject (me) might have been controlling
the angle alpha, to which I agreed, saying that my subjective
impression was that this was exactly what I seemed to be doing. I
assumed that this would be the end of that aspect of the
conversation, since I could see no way of distinguishing the two
possibilities. But you kept repeating ad nauseam that your model of
controlling the height as opposed to the angle did make that
discrimination, while I kept showing you why it did not. The ratio
of the tracking RMS to pseudo-jnd is exactly the same for height
difference as it is for angle alpha. Everything about the experiment
scales the same way as a function of separation for both possible
controlled perceptions. The only discrimination among the
possibilities that I can see is among the possible parts of the
ellipses that might define the angle alpha (centre to centre,
nearest tips, top of lowest to bottom of highest, and so forth).
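To spell the scaling argument out (my own arithmetic, purely for illustration, writing s for the ellipse separation and Δh for the height difference): for the small angles involved,

\[ \alpha \approx \tan\alpha = \frac{\Delta h}{s}, \]

so every statistic of the height difference (tracking RMS, jnd surrogate) converts to the corresponding statistic of alpha by the same factor 1/s, and the ratio

\[ \frac{\mathrm{RMS}_{\alpha}}{\mathrm{jnd}_{\alpha}} = \frac{\mathrm{RMS}_{\Delta h}/s}{\mathrm{jnd}_{\Delta h}/s} = \frac{\mathrm{RMS}_{\Delta h}}{\mathrm{jnd}_{\Delta h}}, \]

which is identical whichever of the two perceptions is actually controlled.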
Finally I got fed up with that repetition, and with the many other
repetitions that wasted a lot of time rewriting what I had already
written at least twice before in slightly different words, so I
decided simply to refer you back to previous answers rather than
re-explain what had been explained more than once already. I gave
myself a rule that one re-explanation was OK, to reduce the
possibility of misunderstanding, but after that it should be
possible for you to figure out from one or other of the previous
answers how things do hang together.
So now I have given you a long explanation and re-explanation of the
nature of the experiment and the measures involved, and of my
reasons for last night’s rant.
I hope we will be able to conduct a reasoned discussion of this
so-called experiment and other issues in the future. I reattach the
100 tracks for you to save for later use. Remember that even when
the file names suggest that the runs were under the same conditions,
if the file dates were not close, the conditions might have been
different.
Martin
Old Data1.zip (763 KB)
data1.zip (244 KB)