Reading versus imagination (was Uncertainty, Information and control part I: U, I, and M)

[Martin Taylor 2013.03.05.14.04]

I recognize the winky-smiley, but nevertheless, I think it hides a measure of truth, in that you talk about getting your answers from your own imagination. I meant that the answers to your questions have already been given, in many cases repeatedly, so there is no need for you to imagine for yourself what they might be. My answers are out there in public, ready to be read.
For example, in various messages you have repeatedly asserted or
suggested that I was taking the control performance for which I had
originally posted a plot as a measure of the subject’s (my) ability
to perceive the height difference between cursor and target. Equally
often, I told you just how I did get the resolution measure, besides
having given you a careful explanation of exactly what I did when I
sent the data files that you enthusiastically requested and as
enthusiastically deleted. When I explained it the most recent time,
you asked for a definition of absolute variance, which I had
distinguished from the relative variance used to define control
performance, and made a snide comment about “Swedish variance”.

I guess your dismissive comment that seemed to suggest that I don’t
even know what variance means was the last straw. I have long
realized that you don’t read my messages in order to figure out what
I might be thinking, because you already know it, and you know that
if you seriously read what I wrote, it wouldn’t match your reference
values for what I MUST think. A while back, you posted something
along the lines of “If Martin writes it I know it must be wrong”. So
you substitute your imagination and then assert it is what I said.
That I repeatedly tell you that X was not what I said seems to be
water off a duck’s back, since you keep on asserting that X was
indeed what I said.
So in this last message I said I would give up repeating myself, and
would refer you to answers already given. If you have new questions
or novel comments, I’m always happy to engage in scientific
discussion. I am not prepared to argue against a religious
absolutism that rigidly refuses to take note of what is actually
written. Nor am I prepared to re-answer questions already answered
more than once.
Martin

···

[From Rick Marken (2013.03.04.2115)]

        Martin Taylor (2013.03.04.23.07)–

        MT: I'm getting bored with interactions in which you repeatedly ask me to repeat what I have previously written, so I’m not going to do it. I will just ask you to answer your own questions by reading what I have written instead of substituting imaginings of your own and then asking me to explain them.

  RM: OK, fair enough. But you might not like the answers I give myself;-) Of course I don’t like the answers you give me either so it’s probably just as well;-)

[From Rick Marken (2013.03.05.1730)]

Martin Taylor (2013.03.05.14.04)–

MT: I recognize the winky-smiley, but nevertheless, I think it hides a measure of truth, in that you talk about getting your answers from your own imagination. I meant that the answers to your questions have already been given, in many cases repeatedly, so there is no need for you to imagine for yourself what they might be. My answers are out there in public, ready to be read.

RM: Just because you write it down doesn’t mean that it will be understood by the listener in the way you intended. I’m really trying to understand what you are intending to say. I think I can get a better idea of it via interaction – I say what I think you intend and you correct me and so forth – rather than by searching through the archives for an explanation that I apparently didn’t understand correctly in the first place anyway.

MT: For example, in various messages you have repeatedly asserted or suggested that I was taking the control performance for which I had originally posted a plot as a measure of the subject’s (my) ability to perceive the height difference between cursor and target.

RM: Yes, that’s what I thought. I thought you presented the data to show the effect of cursor-target separation on the accuracy of the perception of the vertical distance between cursor and target.

MT: Equally often, I told you just how I did get the resolution measure, besides having given you a careful explanation of exactly what I did when I sent the data files that you enthusiastically requested and as enthusiastically deleted. When I explained it the most recent time, you asked for a definition of absolute variance, which I had distinguished from the relative variance used to define control performance, and made a snide comment about “Swedish variance”.

RM: I probably didn’t understand how you got the resolution measure because I just didn’t see how it was different from a measure of control. You were apparently measuring something like the average vertical deviation of cursor from target. This is how we measure control; I don’t see how your measure becomes a measure of “resolution”. The “Swedish variance” comment was just a result of the fact that I can’t resist a joke; I’ve never heard of “absolute variance” but I have heard of Absolut Vodka, which is Swedish. Funny, eh:-)? OK, never mind.

MT: So in this last message I said I would give up repeating myself, and would refer you to answers already given. If you have new questions or novel comments, I’m always happy to engage in scientific discussion. I am not prepared to argue against a religious absolutism that rigidly refuses to take note of what is actually written. Nor am I prepared to re-answer questions already answered more than once.

RM: Well, then I’ll never know what you are trying to show. What I do know (or think I know) is that you programmed up a nice little tracking task to see how the separation between cursor and target affects how well the tracker is able to keep the cursor on the target. You presented data that I took to show that tracking performance (ability to control) decreased as separation between cursor and target increased. The measure of tracking performance – or what I thought was tracking performance – was measured in bits. So I assumed that this measure of tracking performance was some kind of log transformation of the observed RMS error at each level of separation. That’s what I thought was presented; if that’s not it, then I was off base from the word “go”. Didn’t you present a graph with separation on the x axis and a measure of tracking performance in bits on the y axis? Didn’t tracking performance fall off somewhat as separation increased? If I misinterpreted the data then the little modeling exercise I did was at best premature.

As far as reading versus imagination, I think that when we read the only thing we can do is control in imagination the images (or associated words) that are evoked by what we read. Contrary to what religious (and constitutional) fundamentalists think, the meaning of what we read is not in the words but in the imaginations evoked by those words. Look at how differently people understand the words:

"A well regulated militia being necessary to the security of a free state, the right of the people to keep and bear arms, shall not be infringed."

So let’s keep talking. You’ve done an interesting little experiment; let’s see what it means, from a PCT perspective.

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2013.03.06.11.41]

[From Rick Marken (2013.03.05.1730)]

        Martin Taylor (2013.03.05.14.04)–

      RM: Just because you write it down doesn't mean that it will be understood by the listener in the way you intended.

True.

      I'm really trying to understand what you are intending to say.

I'll take that at face value. I hope I'm right to do so....
  So let's keep talking. You've done an interesting little experiment; let’s see what it means, from a PCT perspective.

All right. We make a bargain. You stop saying things like “information analysis is an open-loop model of systems”, and similar comments that I have corrected twice, and I’ll try to make it simpler to understand. I’m not saying that my corrections will themselves be correct, but that if you want to keep making those comments, we should first come to an understanding of why we hold contradictory views. Simple repetition of “'Tis so”, “'Tis not” does not advance anything much, and may alienate other readers.

Let's start with a description of what the "interesting little experiment" was and was not, and what I hope may eventually emerge.

I started by wanting to provide illustrative demos for my long tutorial [Martin Taylor 2013.01.01.12.40] and for the projected Part 2 of the tutorial. I was going to use LiveCode, for which Allan Randall has been developing an Object-Oriented extension. The reason for LiveCode was its platform-independence.

One of the LiveCode demos was to be a demonstration of the fact that the less accurately you can see where something is, the less accurately you will be able to control the perception of its location. That seems to me to be self-evident. If it were not true, why would watchmakers and jewellers use a magnifying loupe when doing delicate work? But it seemed to be a bone of contention for some reason I didn’t and don’t understand, so I thought a demo would be useful.

Since we can't get at the perception to see how well it is controlled, we use the pseudo-control of the CEV as a surrogate, even though we don’t know that what we think of as the CEV is actually the environmental correlate of the controlled perception.

The first thing I thought of that would be easy to program and would demonstrate the point was the ellipse pursuit tracking task. Before I actually programmed it in LiveCode (which I had done in a different version a few months ago, but wanted to reprogram using OOP methods), I wrote about what I hoped to do on CSGnet, or maybe it was on some other group. Anyway, Adam Matic sent me a small demo of the effect, written in Processing. His demo used the angle you call alpha, and the visual was a tilted line. His resolution limit was not in the screen-to-eye-to-perception pathway, but in the ability to change alpha. He limited that by changing the number of decimal places used in setting it. When analyzing the loop as a whole, the effect is likely to be much the same as with uncertain perception, but it wasn’t what I really wanted at that point.

Adam's little program was very neat and easy to understand, so I decided to try to learn to program in Processing, which is also platform-independent and has the advantage of being free and open source. (LiveCode will be the same soon; the promise is by the end of March.) It seemed reasonable to me to use the ellipse-height experiment as a context, since I had already made the prediction of how it would turn out and since I had written something similar in LiveCode. So that’s what I did, trying out different elements of the Processing language one at a time. The reason there were 100 or so different trials in the dataset I posted was that each series of trials represents some new thing I was learning about programming in Processing – a lot of them in trying to figure out how to get the results filed with the name I wanted and in the folder I wanted.

So now to what was actually programmed. At the time I sent the message with the graph, I had a rudimentary tracking task programmed using Perlin noise. I could modify the separation of the ellipses, store the frame-by-frame locations of the target and cursor, compute the error on the fly and store that, in a comma-separated file suitable for reading by a spreadsheet, and save the data in a system-default folder. That’s all.

The track had two phases: a 4096-frame pursuit tracking phase at 60 frames/sec that could be run at different disturbance rates, followed by a 600-frame compensatory tracking phase with a very slow fixed disturbance rate. I used the RMS variation of the cursor in this last part of the run as an easy surrogate for a measure of the perceptibility of the height difference (though as you pointed out, it was also a surrogate for the perceptibility of variation in the angle alpha). I used that method both because it was convenient and because, even if it didn’t give a true numerical answer for the jnd, it would vary proportionately to the jnd (or the difference that gives d’ = 1).
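The two measures described above (pursuit-phase tracking error and the compensatory-phase "pseudo-jnd") can be sketched roughly as follows. This is a minimal illustration, not Martin's actual Processing code or analysis; the CSV column names `cursor_y` and `target_y` are assumptions about the stored track format:

```python
import csv
import math

def rms(values):
    """Root-mean-square of a sequence of deviations."""
    return math.sqrt(sum(v * v for v in values) / len(values))

def analyze_track(path, pursuit_frames=4096):
    """Split a stored track into its two phases and return
    (tracking_rms, pseudo_jnd).

    tracking_rms -- RMS cursor-target error over the pursuit phase
    pseudo_jnd   -- RMS cursor variation over the compensatory phase,
                    used as a surrogate for the perceptibility of the
                    height difference
    """
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    errors = [float(r["cursor_y"]) - float(r["target_y"]) for r in rows]
    return rms(errors[:pursuit_frames]), rms(errors[pursuit_frames:])
```

The 4096/600 frame split follows the run structure described in the text; everything else is a guess at a convenient file layout.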

Originally, I didn't even bother with this part of the run, because all I needed was the intuitively obvious fact that the further apart the target and cursor, the harder it would be to perceive which was higher. But I’m glad I did program the compensatory part of the run, because it showed that the size of the jnd surrogate varies linearly with separation and has an intercept of 1 pixel, which is as fine as the setup allows.

When I got to this point, the discussion on CSGnet seemed as though it might benefit from a concrete example, so I ran myself under a series of conditions identical except that the ellipse separations differed, with three trials per separation, and posted a graph of the measures of control performance, which showed the previously predicted decline with increasing separation.

You pointed out that the subject (me) might have been controlling the angle alpha, to which I agreed, saying that my subjective impression was that this was exactly what I seemed to be doing. I assumed that this would be the end of that aspect of the conversation, since I could see no way of distinguishing the two possibilities. But you kept repeating ad nauseam that your model of controlling the height as opposed to the angle did make that discrimination, while I kept showing you why it did not. The ratio of the tracking RMS to pseudo-jnd is exactly the same for height difference as it is for angle alpha. Everything about the experiment scales the same way as a function of separation for both possible controlled perceptions. The only possible discrimination among possibilities that I can see is among the possible parts of the ellipses that might define the angle alpha (centre to centre, nearest tips, top of lowest to bottom of highest, and so forth).
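The scaling claim here can be illustrated numerically. Under the small-angle approximation alpha ≈ Δy/separation, converting both the tracking RMS and the pseudo-jnd from pixel units to angle units divides each by the same factor, so their ratio is (almost exactly) unchanged. The pixel values below are invented purely for illustration:

```python
import math

separation = 100.0       # horizontal cursor-target separation, pixels
tracking_rms_px = 12.0   # hypothetical pursuit-phase RMS error, pixels
pseudo_jnd_px = 3.0      # hypothetical compensatory-phase RMS, pixels

# The same two measures expressed as the angle alpha subtended
# across the horizontal separation.
tracking_rms_alpha = math.atan(tracking_rms_px / separation)
pseudo_jnd_alpha = math.atan(pseudo_jnd_px / separation)

ratio_px = tracking_rms_px / pseudo_jnd_px
ratio_alpha = tracking_rms_alpha / pseudo_jnd_alpha
# For small angles the two ratios agree to within a fraction of a
# percent, so the ratio cannot distinguish the two candidate
# controlled perceptions.
```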

Finally I got fed up with that repetition and the many other repetitions that have wasted a lot of time writing what I had written at least twice before in slightly different words, so I decided simply to refer you back to previous answers rather than re-explain what had been explained more than once already. I gave myself a rule that one re-explanation was OK, to reduce the possibility of misunderstanding, but after that it should be possible for you to figure out from one or other of the previous answers how things do hang together.

So now I have given you a long explanation and re-explanation of the nature of the experiment and the measures involved, and of my reasons for last night’s rant.

I hope we will be able to conduct a reasoned discussion of this so-called experiment and other issues in the future. I reattach the 100 tracks for you to save for later use. Remember that even when the file names suggest that the runs were under the same conditions, if the file dates were not close, the conditions might have been different.

Martin

Old Data1.zip (763 KB)

data1.zip (244 KB)

[From Rick Marken (2013.03.06.1430)]

Martin Taylor (2013.03.06.11.41)–

MT: You pointed out that the subject (me) might have been controlling the angle alpha, to which I agreed, saying that my subjective impression was that this was exactly what I seemed to be doing. I assumed that this would be the end of that aspect of the conversation, since I could see no way of distinguishing the two possibilities. But you kept repeating ad nauseam that your model of controlling the height as opposed to the angle did make that discrimination, while I kept showing you why it did not. The ratio of the tracking RMS to pseudo-jnd is exactly the same for height difference as it is for angle alpha. Everything about the experiment scales the same way as a function of separation for both possible controlled perceptions.

RM: I pointed out that a model that controls angle alpha shows the same increase in the average vertical difference between target and cursor (RMS error) as a function of an increase in the horizontal separation between cursor and target as does the subject (you); a model that controls just vertical distance between target and cursor does not show any increase in RMS error as a function of separation. So all I’ve been saying is that a model that controls alpha accounts for your data better than one that controls vertical distance. I still don’t understand what it is about this result that you don’t like. And I don’t understand what the ratio of tracking RMS to pseudo-jnd has to do with it. Maybe if you could explain this again I would understand. What I understand you to be saying is this:

  1. You have found that the ratio RMS/pseudo-jnd is the same whether vertical distance between cursor and target or angle alpha is used in the calculation of RMS and pseudo-jnd.

  2. This is true for all horizontal separations between target and cursor.

  3. Therefore you conclude that the controlled variable in this task could be either the vertical distance between cursor and target or angle alpha.

I don’t see how 3 follows from 1 and 2. Part of this has to do with my not understanding what a pseudo-jnd is. I’m not trying to be difficult here. I really just don’t understand it. Why don’t we try to work this out before we go any further.

MT: I hope we will be able to conduct a reasoned discussion of this so-called experiment and other issues in the future.

RM: Me too. I think we will make a lot of progress on that if we can get this RMS/pseudo-jnd thing worked out.

MT: I reattach the 100 tracks for you to save for later use.

RM: Thank you. It looks like the files in the “new data” folder were all done at 180 separation. Is that true?

Thanks

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2013.03.06.17.43]

[From Rick Marken (2013.03.06.1430)]

        Martin Taylor (2013.03.06.11.41)–

        MT: You pointed out that the subject (me) might have been controlling the angle alpha, to which I agreed, saying that my subjective impression was that this was exactly what I seemed to be doing. I assumed that this would be the end of that aspect of the conversation, since I could see no way of distinguishing the two possibilities. But you kept repeating ad nauseam that your model of controlling the height as opposed to the angle did make that discrimination, while I kept showing you why it did not. The ratio of the tracking RMS to pseudo-jnd is exactly the same for height difference as it is for angle alpha.

      Everything about the experiment scales the same way as a function of separation for both possible controlled perceptions.

      RM: I pointed out that a model that controls angle alpha shows the same increase in the average vertical difference between target and cursor (RMS error) as a function of an increase in the horizontal separation between cursor and target as does the subject (you); a model that controls just vertical distance between target and cursor does not show any increase in RMS error as a function of separation.

I guess I have to ask a question you may have answered, but if you did, I don’t remember it. You said that you got the decrease in control as a function of alpha in your model by decreasing the loop gain. I don’t remember whether you said it, but did you similarly decrease the loop gain when you modelled the height difference? And since you were not fitting the model to human data, as you didn’t at that time have human data to fit, what were you comparing the models to?

Basically, the question is: not having human data to compare, what did you do differently with the “alpha” model as compared with the “height” model that caused the difference in result, and why did you do it? What were you using as a criterion for optimizing the model parameters?

      So all I've been saying is that a model that controls alpha accounts for your data better than one that controls vertical distance. I still don’t understand what it is about this result that you don’t like.

It's not the result that I don't like. It's the means of getting the result when you have no access to the human data that I don’t understand. And it’s not that I don’t “like” the result, it’s that I don’t trust it, given my understanding of the experimental situation.

      And I don't understand what the ratio of tracking RMS to pseudo-jnd has to do with it. Maybe if you could explain this again I would understand. What I understand you to be saying is this:

      1. You have found that the ratio RMS/pseudo-jnd is the same whether vertical distance between cursor and target or angle alpha is used in the calculation of RMS and pseudo-jnd.

      2. This is true for all horizontal separations between target and cursor.

In 1 I'm not sure what ratio you mean, but it's irrelevant anyway. What might turn out to be relevant is whether the control performance can be predicted from the ratio of the jnd-surrogate measure to the range of uncontrolled variation of the presumed CEV. For that, it doesn’t matter whether the controlled perception is “alpha” or “height differential”.

      3. Therefore you conclude that the controlled variable in this task could be either the vertical distance between cursor and target or angle alpha.

I don’t see how 3 follows from 1 and 2.

It doesn't. There's no connection between 1+2 and 3. 3 comes from an entirely different train of thought, which is that when I try the tracking experiment, I can’t tell whether I’m tracking “alpha” or height difference, and I can’t think of anything I could do to influence one that does not influence the other to exactly the same degree, proportionately to its range (or RMS value) of uncontrolled variation.

      Part of this has to do with my not understanding what a pseudo-jnd is. I’m not trying to be difficult here. I really just don’t understand it. Why don’t we try to work this out before we go any further.

In everyday language, a jnd is the least change that a person can detect as being a change. It’s pretty well what Bekesy audiometry measures when finding the auditory threshold as a function of frequency. The subject pushes a button as soon as the sound can be heard and again as soon as it can no longer be heard. The so-called “threshold” is taken to be the mid-point between peaks of that variation. In the experiment, the disturbance would move the cursor up and down, except that when the subject detects that it is too high or too low he moves the mouse to make it look equal again. I used the RMS value of the controlled movements rather than the Bekesy-analogue of the peak-to-peak range because it is statistically more stable. That RMS value is the pseudo-jnd.
Does that explain my neologism?

Martin

[From Rick Marken (2013.03.06.1745)]

Martin Taylor (2013.03.06.17.43)–

MT: I guess I have to ask a question you may have answered, but if you did, I don’t remember it. You said that you got the decrease in control as a function of alpha in your model by decreasing the loop gain. I don’t remember whether you said it, but did you similarly decrease the loop gain when you modelled the height difference? And since you were not fitting the model to human data, as you didn’t at that time have human data to fit, what were you comparing the models to?

RM: You can ask me the same (or similar) questions as many times as you like; it doesn’t bother me at all. Repetition of good questions – or good anything – doesn’t really bother me.

The only thing I said about loop gain is that the reason controlling angle results in an increase in RMS error with increased horizontal separation is that the change in angle per unit change in vertical separation at large separations is very small, and this might affect the loop gain (the product of gains around the loop). The system gain (and slowing) for the model was the same for all separations. So, again, the only difference between the model that controls vertical separation and the model that controls angle alpha is this difference in the controlled variable.

These models were fit to the human data you provided; the data being the increase in RMS error with increase in separation. It was not a precise fit because I didn’t have the actual measures of RMS error; just my memory of your graph. But I do recall that quality of control (measured in bits, which I took to be inversely proportional to RMS error) decreased as separation increased. This trend was seen for the model controlling alpha but not for the model controlling vertical separation.
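The qualitative point here (same parameters, only the controlled variable differing) can be checked with a toy simulation. This is a hedged sketch, not Rick's actual model; the gain, disturbance frequency and amplitude, and frame count are made-up values chosen only to show the effect:

```python
import math

def simulate(control_angle, separation, gain=50.0, frames=2000, dt=1/60):
    """Minimal one-level control loop: the output moves the cursor to
    cancel the perceived error while a slow sinusoid disturbs the
    target.  Returns the RMS cursor-target error."""
    cursor, errors = 0.0, []
    for i in range(frames):
        target = 50.0 * math.sin(2 * math.pi * 0.2 * i * dt)  # disturbance
        dy = cursor - target
        # Perception: either the vertical distance itself, or the angle
        # alpha subtended across the horizontal separation.
        p = math.atan2(dy, separation) if control_angle else dy
        cursor += -gain * p * dt  # output opposes the perceived error
        errors.append(cursor - target)
    return math.sqrt(sum(e * e for e in errors) / frames)
```

With fixed gain, the height-difference model performs identically at every separation (the separation never enters its perception), while the alpha model's effective gain on vertical error shrinks roughly as 1/separation, so its RMS error grows as the ellipses move apart.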

MT: Basically, the question is: not having human data to compare, what did you do differently with the “alpha” model as compared with the “height” model that caused the difference in result, and why did you do it? What were you using as a criterion for optimizing the model parameters?

RM: I used your data as the criterion for fit. And there was not a lot of parameter fitting needed. Once I got the gain and slowing parameters set so that both models controlled well with 10 pixel separation I left those parameters constant and manipulated just the separation, increasing it. When I increased the separation, control fell off for the angle control model but not for the vertical distance control model. Not surprising, perhaps, but illustrative of how much of a difference in the behavior of a control system the nature of the controlled variable can make.

      RM: So all I've been saying is that a model that controls alpha accounts for your data better than one that controls vertical distance. I still don’t understand what it is about this result that you don’t like.

MT: It's not the result that I don't like. It's the means of getting the result when you have no access to the human data that I don’t understand. And it’s not that I don’t “like” the result, it’s that I don’t trust it, given my understanding of the experimental situation.

RM: Do you see now that I was using the human data – yours – as the criterion against which to evaluate the behavior of the models? If you send me the graph again and tell me how your Y axis relates to RMS data, I can plot my results on the graph for both the angle and vertical distance controllers and you can see how the model data compare to the human data.

      RM: And I don't understand what the ratio of tracking RMS to pseudo-jnd has to do with it. Maybe if you could explain this again I would understand.

MT: In 1 I'm not sure what ratio you mean, but it's irrelevant anyway. What might turn out to be relevant is whether the control performance can be predicted from the ratio of the jnd-surrogate measure to the range of uncontrolled variation of the presumed CEV. For that, it doesn’t matter whether the controlled perception is “alpha” or “height differential”.

RM: OK. What is a jnd-surrogate? What is its relevance to control? How does it fit into a control model?

      RM: 3. Therefore you conclude that the controlled variable in this task could be either the vertical distance between cursor and target or angle alpha.

I don’t see how 3 follows from 1 and 2.

MT: It doesn't. There's no connection between 1+2 and 3. 3 comes from an entirely different train of thought, which is that when I try the tracking experiment, I can’t tell whether I’m tracking “alpha” or height difference, and I can’t think of anything I could do to influence one that does not influence the other to exactly the same degree, proportionately to its range (or RMS value) of uncontrolled variation.

RM: I think it’s very difficult (maybe impossible) for us to tell what perceptions we are controlling in various circumstances. That’s why I rely on the TCV as a basis for deciding what perceptions are being controlled, as opposed to asking people what they think they are controlling.

      RM: Part of this has to do with my not understanding what a pseudo-jnd is. I’m not trying to be difficult here. I really just don’t understand it. Why don’t we try to work this out before we go any further.

MT: In everyday language, a jnd is the least change that a person can detect as being a change. It’s pretty well what Bekesy audiometry measures when finding the auditory threshold as a function of frequency. The subject pushes a button as soon as the sound can be heard and again as soon as it can no longer be heard. The so-called “threshold” is taken to be the mid-point between peaks of that variation. In the experiment, the disturbance would move the cursor up and down, except that when the subject detects that it is too high or too low he moves the mouse to make it look equal again. I used the RMS value of the controlled movements rather than the Bekesy-analogue of the peak-to-peak range because it is statistically more stable. That RMS value is the pseudo-jnd.

Does that explain my neologism?

RM: Yes, thanks. But I still don’t understand what that tells you about control. I believe you said in a prior post that the constancy of the ratio of RMS error (which you measured in the pursuit part of the experiment) to pseudo-jnd (which I now understand to be the RMS error in the compensatory part of the experiment) was evidence that either vertical distance or angle alpha could be the controlled variable in your experiment, despite my finding that only a model controlling angle results in a decrease in quality of control with increased horizontal separation between target and cursor. I still don’t understand this conclusion of yours.

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2013.03.07.1430 MST)]

At 02:31 PM 3/5/2013 -0500, Martin Taylor wrote (to Marken):
[Martin Taylor 2013.03.05.14.04] --

MT: ... the answers to your questions have already been given, in many cases repeatedly, so there is no need for you to imagine for yourself what they might be. My answers are out there in public, ready to be read.

BP: They would be if past messages were as well organized on my machine and others as they evidently are on yours. My advice to authors is not to blame misunderstandings on the reader, or expect the reader to have ready access to and an index to past messages, but simply to explain again, paying particular attention to possible interpretations of your words that you did not intend, or that you didn't make as clear as you might have done. If the recipient is misunderstanding with malicious intent, this will make it clear to everyone else that this is happening; if not, you will improve your expositions.

When we read our own words, we all know what we intended to convey, and tend to extract the intended meaning rather than the one a reader with a different mind-set might supply.

You say "I told you just how I did get the resolution measure, besides having given you a careful explanation of exactly what I did when I sent the data files that you enthusiastically requested and as enthusiastically deleted. When I explained it the most recent time, you asked for a definition of absolute variance, which I had distinguished from the relative variance used to define control performance, and made a snide comment about 'Swedish variance'."

If you had used the same number (100+) of words to write better definitions of the two kinds of variance, you would have saved yourself from feeling annoyed, as well as making what you mean clearer to others like me who have short memories. Simply referring to what you have already written (instead of examining it yourself to see what might have justified the misinterpretation) does not lead to improved communication.

MT: So in this last message I said I would give up repeating myself, and would refer you to answers already given. If you have new questions or novel comments, I'm always happy to engage in scientific discussion. I am not prepared to argue against a religious absolutism that rigidly refuses to take note of what is actually written. Nor am I prepared to re-answer questions already answered more than once.

BP: This policy is almost guaranteed to prolong misunderstandings and apparent disagreements. It is much safer to assume that the meaning you wanted to convey is not the one the reader is assigning to the words or constructions you used. Meanings are evoked by words from the memories of the recipient; I do not think they are carried by the words from you to the destination. So simply having the reader read the words again is not likely to evoke a different meaning.

Sometimes your ability to choose language and elaborate on meanings seems inspired. But sometimes your words look like a puzzle set for the reader to solve, and my tendency is to give up immediately because of a deplorable laziness on my part. If you just want to deplore my laziness, this kind of problem provides ample opportunity. But if you actually want me, or someone, to understand you, I recommend patience.

Best,

Bill P.

[Martin Taylor 2013.03.08.00.26]

[From Bill Powers (2013.03.07.1430 MST)]

At 02:31 PM 3/5/2013 -0500, Martin Taylor wrote (to Marken):
[Martin Taylor 2013.03.05.14.04] --

MT: ... the answers to your questions have already been given, in many cases repeatedly, so there is no need for you to imagine for yourself what they might be. My answers are out there in public, ready to be read.

BP: They would be if past messages were as well organized on my machine and others as they evidently are on yours. My advice to authors is not to blame misunderstandings on the reader, or expect the reader to have ready access to and an index to past messages, but simply to explain again, paying particular attention to possible interpretations of your words that you did not intend, or that you didn't make as clear as you might have done. If the recipient is misunderstanding with malicious intent, this will make it clear to everyone else that this is happening; if not, you will improve your expositions.

Bill,

My complaint was not about being misunderstood or needing to reword explanations, but of re-re-answering direct questions that had already been asked and answered _more than once_. I don't mind answering a question a second time, or if it is difficult even three or four times, but I was spending so much of my day answering the same things over and over again that it got to be too much. When I wrote that message, I was frustrated at having been asked the same questions many more times than once, having answered politely each time, in words that I hoped would be understood when the previous answers apparently had not been.

Martin

[From Rick Marken (2013.03.08.1130)]

Martin Taylor (2013.03.08.00.26) –

BP: My advice to authors is not to blame misunderstandings on the reader, or expect the reader to have ready access to and an index to past messages, but simply to explain again, paying particular attention to possible interpretations of your words that you did not intend, or that you didn’t make as clear as you might have done…

MT: My complaint was not about being misunderstood or needing to reword explanations, but of re-re-answering direct questions that had already been asked and answered more than once.

RM: OK, but I thought we could try to continue the discussion now. You haven’t replied yet to my last post in this thread: Rick Marken (2013.03.06.1745). In it I tried to explain my current understanding (and lack thereof) of your experiment and analysis thereof. I would be interested in what you thought of what I said.

Thanks.

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2013.03.08.14.38]

You will have to humour me some more if we are to come to a common understanding. We have resolved one point, the meaning of “pseudo-jnd” or “jnd surrogate”. Let’s deal with one more point at a time. I still don’t truly understand what you actually did in your analysis.

Why should the smallness of the angle make any difference to the model quality of control of the angle? Were you perhaps using integer values to represent the angle instead of floats? I can see why the small size of the uncontrolled angular variation (the disturbance variation) might make a difference to a human who might find it difficult to discriminate angle changes of a millisecond of arc, but a simulation model should find arc microseconds and arc degrees no different. So, if you were using a float representation of the angle, we have to come up with some other reason why your model controlled less well for large separations than for small. Maybe you could send me the code?

I thought the approved PCT method of fitting a model to human performance was to compare the model track with the human track. You could do that with the data you have. I hope to be able to do that shortly in Processing, as part of the running of the experiment.
Martin

···

[From Rick Marken (2013.03.06.1745)]

        Martin Taylor (2013.03.06.17.43)–

        MT: I guess I have to ask a question you may have answered, but if you did, I don’t remember it. You said that you got the decrease in control as a function of alpha in your model by decreasing the loop gain. I don’t remember whether you said it, but did you similarly decrease the loop gain when you modelled the height difference? And since you were not fitting the model to human data, as you didn’t at that time have human data to fit, what were you comparing the models to?

      RM: ...

      The only thing I said about loop gain is that the reason controlling angle results in an increase in RMS error with increased horizontal separation is because the change in angle/unit change in vertical separation at large separations is very small and this might affect the loop gain -- the product of gains around the loop. The system gain (and slowing) for the model was the same for all separations. So, again, the only difference between the model that controls vertical separation and the model that controls angle alpha is this difference in the controlled variable.

      These models were fit to the human data you provided; the data being the increase in RMS error with increase in separation.

[From Rick Marken (2013.03.08.2120)]

MT: You will have to humour me some more if we are to come to a common understanding. We have resolved one point, the meaning of “pseudo-jnd” or “jnd surrogate”. Let’s deal with one more point at a time.

RM: OK. But I wouldn’t say that the pseudo-jnd point is solved for me yet. I understand that you are using cursor variance during the compensatory phase of the tracking task – or some transform thereof – as the measure of pseudo-jnd. But I don’t understand why this variance is considered to be a measure of jnd. I thought jnd was a difference in the values of a stimulus that could be detected 75% of the time. I don’t see how this relates to the variance of the cursor.

MT: I still don't truly understand what you actually did in your analysis. Why should the smallness of the angle make any difference to the model quality of control of the angle? Were you perhaps using integer values to represent the angle instead of floats?

RM: No, I was using the actual floating point values of the arctan of v/s where v is the (varying) vertical distance between cursor and target and s is the horizontal separation between cursor and target.

MT: I can see why the small size of the uncontrolled angular variation (the disturbance variation) might make a difference to a human who might find it difficult to discriminate angle changes of a millisecond of arc, but a simulation model should find arc microseconds and arc degrees no different. So, if you were using a float representation of the angle, we have to come up with some other reason why your model controlled less well for large separations than for small.

RM: It’s not the smallness of the angle that makes the difference; it’s the increasing smallness of the derivative of the angle (arctan(v/s)) with increasing s that makes the difference. I have it on good authority (my math teacher son) that the derivative of arctan(v/s) with respect to v is s/(s^2+v^2). This means that the derivative of the angle (the arctangent) decreases with increasing horizontal separation between target and cursor (s).

The derivative of the angle is equivalent to the input coefficient, k.i, in the equation in B:CP that shows how loop gain affects control:

(5)    p = ((k.i*k.e*k.o)*r + (k.i*k.d)*d)/(1 + k.i*k.e*k.o)

This is equation 5 on p 287 of B:CP. In the tracking task k.e is the connection between mouse and cursor movement and k.d is the connection between d and p. k.o is the system gain, G in your equations. The product k.i*k.e*k.o is loop gain. The greater this product the better the control, in the sense that, as k.i*k.e*k.o approaches infinity, equation 5 approaches p = r (the perception stays equal to the reference; perfect control, no error).

My model assumes that k.o (as well as k.e and k.d) remains constant in the different horizontal separation conditions. So the only thing that varies is k.i and since k.i is equivalent to the derivative of the arctan

k.i = s/(s^2+v^2)

as s increases, k.i decreases and thus the loop gain decreases as well. So simple algebra (with a dab of calculus) leads to the prediction that control will become poorer (all other things equal) as s increases if the subject is controlling arctan(v/s) rather than just v.
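As a quick numeric sanity check of the step above, the analytic input gain s/(s^2+v^2) can be compared against a finite-difference derivative of arctan(v/s). This is only an illustration; the values of v and s below are made up, not taken from the experiment.

```python
import math

# Compare the analytic input gain k.i = s/(s^2 + v^2) with a numerical
# derivative of arctan(v/s) with respect to v, and watch k.i shrink as
# horizontal separation s grows (v and s values are illustrative).
def input_gain(v, s):
    return s / (s**2 + v**2)

v = 5.0
for s in [10, 20, 40, 80, 160]:
    h = 1e-6
    numeric = (math.atan((v + h) / s) - math.atan((v - h) / s)) / (2 * h)
    assert abs(numeric - input_gain(v, s)) < 1e-6   # analytic and numeric agree
    print(s, round(input_gain(v, s), 5))            # gain falls as s rises
```

The printed gains fall monotonically with s, which is exactly the loop-gain decline the text predicts.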

MT: Maybe you could send me the code?

RM: I did it in a spreadsheet. But the model is equivalent to the following pseudo-code:

s = read as input
o = 0
t = 0                       (reference value for the controlled perception)
for i = 1 to 50
   c = o
   v = c - d(i)
   p = arctan(v/s)
   o = o + slow*(k.o*(t-p) - o)
next i

This is a model of a pursuit tracking task, so v is the distance between the cursor (c) and target (d(i)); the target is the disturbance that is varying over time (i). The controlled variable, p, is the angle arctan(v/s). The gain (k.o) and slowing factor (slow) values are selected for best control at minimum horizontal separation (I ended up with k.o = 7000 and slow = .001) and remain the same as s varies on different tracking “runs” (of 50 time samples in this example).
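For concreteness, here is one way the pseudo-code above could be rendered as runnable Python. The disturbance d(i) is not given in the thread, so a made-up sinusoidal target path stands in for it; only the loop structure and the quoted parameter values (k.o = 7000, slow = .001) come from the text, and the reference for the controlled angle is assumed to be zero.

```python
import math

def run_model(s, n=50, k_o=7000.0, slow=0.001):
    """Sketch of the pursuit-tracking model controlling p = arctan(v/s)."""
    # Assumed disturbance: a slow sinusoidal target path (not from the thread).
    d = [10.0 * math.sin(2 * math.pi * i / n) for i in range(n)]
    o = 0.0                                   # output (cursor position)
    sq_err = 0.0
    for i in range(n):
        c = o                                 # cursor follows the output
        v = c - d[i]                          # vertical cursor-target difference
        p = math.atan(v / s)                  # controlled perception: angle alpha
        o = o + slow * (k_o * (0.0 - p) - o)  # leaky integration toward reference 0
        sq_err += v * v
    return (sq_err / n) ** 0.5                # RMS cursor-target error in pixels

# Control should degrade as horizontal separation s grows:
print(run_model(10.0), run_model(160.0))
```

Under these assumed parameters the RMS error at s = 160 comes out several times larger than at s = 10, matching the qualitative pattern in Rick's table.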

      RM:These models were fit to the human data you provided; the data

being the increase in RMS error with increase in separation.

MT: I thought the approved PCT method of fitting a model to human performance was to compare the model track with the human track. You could do that with the data you have. I hope to be able to do that shortly in Processing, as part of the running of the experiment.

I could (and should) do that too. But the results you presented are a good start. The falloff in control ability with increasing s suggests that something like the arctan of v/s is a better definition of the controlled variable (p) than just v. We should find that a model that controls arctan(v/s) will give a better fit to all the runs with different values of s than does a model that controls just v.

Best

Rick


[Martin Taylor 2013.03.09.10.50]

And what is your measure of quality of control? So far, I don't think you have mentioned it. The one you usually use is RMS(controlled)/RMS(no control) for the CEV, or some variance of that, isn’t it? Is that ratio the value that declines with increased separation?
Martin


[Martin Taylor 2013.03.09.11.16]

I meant "variant" not "variance" in "The one you usually use is RMS(controlled)/RMS(no control) for the CEV, or some variant of that, isn’t it?".

Anyway, that’s what I have assumed you have been reporting. Have I been right?
Martin


[From Rick Marken (2013.03.09.0950)]

Martin Taylor (2013.03.09.11.16)–

  Correcting a typo...

  MT: And what is your measure of quality of control? So far, I don't think you have mentioned it. The one you usually use is RMS(controlled)/RMS(no control) for the CEV, or some variance of that, isn’t it? Is that ratio the value that declines with increased separation?

  MT: I meant “variant” not “variance” in “The one you usually use is RMS(controlled)/RMS(no control) for the CEV, or some variant of that, isn’t it?”.

  Anyway, that's what I have assumed you have been reporting. Have I been right?

RM: Yes. I understood in the first place. I haven’t actually reported the values, but I’ve based my conclusions on RMS error; the average deviation of cursor from target over a trial run. Here are some data. The rows are the two different controlled variables: v and arctan(v/s). The columns are different values of horizontal separation (s). The values in the table are measures of RMS error (the units can be considered to be pixels). Hope the format comes out ok.

     s             10     20     40     80     160

     v             10.8   10.8   10.8   10.8   10.8

     arctan(v/s)   10.8   19.3   31.5   44.5   77.8

Best

Rick


[Martin Taylor 2013.03.09.13.40]

That changes the whole deal, doesn't it? You are comparing apples and oranges.

I didn’t report the RMS error of the CEV when the perception is controlled, though you can get those numbers for both height differential and angle from the data I sent. I reported the ratio of controlled RMS to uncontrolled RMS, in the form of uncertainty values. Those numbers are necessarily identical between height ratio and angle, at least they are for the measured human performance, because the values that go into the calculations are the same except for a factor of the separation, which enters equally into both numerator and denominator of the ratio.

The interesting thing would be if you had a model fitted to the human tracks and found that the fit was better for one of those candidate variables than for the other.

Format is fine. Thanks.
Martin

[From Rick Marken (2013.03.09.1300)]

Martin Taylor (2013.03.09.13.40)–

      RM: Yes. I understood in the first place. I haven't actually reported the values but I've based my conclusions on RMS error; the average deviation of cursor from target over a trial run.

MT: That changes the whole deal, doesn't it? You are comparing apples and oranges.

RM: What are the apples and what are the oranges?

MT: I didn't report the RMS error of the CEV when the perception is controlled, though you can get those numbers for both height differential and angle from the data I sent.

RM: I didn’t report RMS error for the CEV either. The numbers I show are the RMS deviation of cursor from target. There is no assumption about what the CEV is when measuring RMS error; RMS error is what is observed in the data.

MT: I reported the ratio of controlled RMS to uncontrolled RMS, in the form of uncertainty values.

RM: I could do that too as long as you tell me what “uncontrolled RMS” is. I take it to be the variance of c - t if the person didn’t control at all (c was constant) which would make “uncontrolled RMS” equal to the variance of t (which is the variance of the disturbance). Is that right?
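A small sketch may help pin this bookkeeping down: the "uncontrolled RMS" is what the cursor-target error would be if the cursor sat still, i.e. the RMS of the disturbance itself. The target and cursor traces below are invented numbers for illustration, not data from the experiment.

```python
# Illustrative control-ratio calculation. The target (t) and cursor (c)
# traces are made-up numbers, not data from the experiment.
def rms(xs):
    return (sum(x * x for x in xs) / len(xs)) ** 0.5

t = [0.0, 3.0, 6.0, 3.0, 0.0, -3.0, -6.0, -3.0]   # target = disturbance trace
c = [0.0, 2.5, 5.0, 2.8, 0.3, -2.4, -5.1, -2.9]   # cursor while controlling

rms_uncontrolled = rms(t)                          # cursor constant -> error tracks t
rms_controlled = rms([ci - ti for ci, ti in zip(c, t)])
print(rms_uncontrolled / rms_controlled)           # ratio > 1 means control is present
```

With these invented traces the ratio comes out well above 1, the direction Martin's "control ratio" uses: bigger means better control.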

MT: Those numbers are necessarily identical between height ratio and angle, at least they are for the measured human performance, because the values that go into the calculations are the same except for a factor of the separation, which enters equally into both numerator and denominator of the ratio.

RM: I take this to mean that measuring RMS error as the variance of c-t will be the same regardless of what the person is actually controlling and I guess I agree. The RMS error I measured is just the observed variance of c-t in one case where the model is controlling v and in the other when the model is controlling arctan(v/s).

MT: The interesting thing would be if you had a model fitted to the human tracks and found that the fit was better for one of those candidate variables than for the other.

RM: I agree. Maybe I’ll do that for one of your runs at each separation.

      RM: Here are some data. The rows are the two different controlled variables: v and arctan(v/s). The columns are different values of horizontal separation (s). The values in the table are measures of RMS error (the units can be considered to be pixels). Hope the format comes out ok.

     s             10     20     40     80     160

     v             10.8   10.8   10.8   10.8   10.8

     arctan(v/s)   10.8   19.3   31.5   44.5   77.8

MT: Format is fine. Thanks

RM: Great. You seem to think there is something wrong with these numbers; that they are not comparable to the measures of control you presented in your graph. Can you tell me exactly how you obtained the measures of control used in that graph? Then I can see whether my results look like those in your graph once I obtain the correct measures. Maybe you could re-post the graph too?

Thanks

Best

Rick


[Martin Taylor 2013.03.09.19.48]

[From Rick Marken (2013.03.09.1300)]

        Martin Taylor (2013.03.09.13.40)–

            RM: Yes. I understood in the first place. I haven’t actually reported the values but I’ve based my conclusions on RMS error; the average deviation of cursor from target over a trial run.

        MT: That changes the whole deal, doesn't it? You are comparing apples and oranges.

      RM: What are the apples and what are the oranges?

It was a bad metaphor. I should have said apples to oranges to tomatoes.

Apples: Control ratio -- RMS(uncontrolled)/RMS(controlled), valid for both height differential and “angle alpha”.

Oranges: Controlled perception is height differential; measure is RMS height differential.

Tomatoes: Controlled perception is “angle alpha”; measure is RMS height differential in pixels.

        MT: I didn't report the RMS error of the CEV when the perception is controlled, though you can get those numbers for both height differential and angle from the data I sent.

      RM: I didn't report RMS error for the CEV either. The numbers I show are the RMS deviation of cursor from target. There is no assumption about what the CEV is when measuring RMS error; RMS error is what is observed in the data.

I _really_ misunderstood you, then. I thought you were comparing the case in which the controlled perception was the height differential to the case where the controlled perception was the “angle alpha”. In the first case the CEV is height differential, in the second it is the displayed “angle alpha”. But now you say this is not what you were thinking of, despite the diagram you showed [From Rick Marken (2013.03.02.1000)]:

  RM: Here's a diagram that might make it clearer.

        MT: I reported the ratio of controlled RMS to uncontrolled RMS, in the form of uncertainty values.

      RM: I could do that too as long as you tell me what “uncontrolled RMS” is. I take it to be the variance of c - t if the person didn’t control at all (c was constant) which would make “uncontrolled RMS” equal to the variance of t (which is the variance of the disturbance). Is that right?

I don't know if you are trying to be funny or deliberately obtuse, one of which I suspect to be the case, but for now I’m taking at face value your earlier statement that you are really trying to understand.

So yes, you are right. I am doing what one normally does when one wants to measure the quality of control. You look at what the RMS of the presumed CEV would be if the output had no effect (equal to the effect of the disturbance alone on the CEV) and at the measured RMS when the control loop is closed. You then take their ratio, commonly known as the “control ratio”. It is the usual measure of control quality, at least it has been on CSGnet for the last 20 years (though sometimes the ratio is of variance, sometimes it is inverted, and sometimes it is treated more casually as “control compensating for 98% of the variance”).

                RM: Here are some data. The rows are the two different controlled variables: v and arctan(v/s). The columns are different values of horizontal separation (s). The values in the table are measures of RMS error (the units can be considered to be pixels). Hope the format comes out ok.

                  s             10     20     40     80     160

                  v             10.8   10.8   10.8   10.8   10.8

                  arctan(v/s)   10.8   19.3   31.5   44.5   77.8
I don't understand the bottom line. You say that it is “arctan(v/s)”, which is an angle, but you also say its unit is pixels. That doesn’t make sense.

If what you report is really an angle in degrees, the RMS values are unbelievably enormous. Even if it is indeed pixels (v), rather than arctan(v/s), the RMS values are huge compared to the human values, and they imply rather large, but decreasing, RMS variation of angle with increasing separation, which is contrary to your thesis.

Here are the RMS angles in degrees, supposing that the measure you reported is actually pixels of height difference (the order is that of your columns):

57, 44, 38, 29, 26.

Lower numbers mean more accuracy, so the RMS accuracy of control _of angle_ increases with increasing separation, as would the control ratio (since the disturbance waveform is the same in all cases, if the constant 10.8 on the “v” line is to be believed).

But then I could be quite wrong about what your numbers represent.

MT: Format is fine. Thanks

      RM: Great. You seem to think there is something wrong with these numbers; that they are not comparable to the measures of control you presented in your graph. Can you tell me exactly how you obtained the measures of control used in that graph? Then I can see whether my results look like those in your graph once I obtain the correct measures. Maybe you could re-post the graph too?

Here are two graphs. The first is what you saw before (log2(control ratio)), the second is the human pseudo-jnd (the RMS variation in the cursor position during the very slow compensatory tracking portion of the run). These are from the very first 18 runs (3 at each separation) that I did in my programming exercise, so the actual numbers may not be comparable to the results from similarly named files in the data files I sent you.

For control performance, 4 bits is an RMS reduction by a factor of 16 as a consequence of control, and the range of RMS reduction is roughly 23 down to 12. Variance reduction is the square of that (roughly 530 to 150). In other words, control was excellent at all separations, because the disturbance was very slow (but not as slow as in the compensatory part of the run).

![CtrlPerfBits2.jpg|522x408](upload://e7yQoT3b3FyEPfoRkvtvd3QjYLN.jpeg)

![EllipseResolution.jpg|542x311](upload://i6by2ZrGZ5sif89tm9kFaxmwPWW.jpeg)

By the way, these and a few other odd graphs are all in the Libre Office spreadsheet file named “data/Sp0Sep50_1graphs.ods”. The other graphs aren’t labelled but you might be able to figure out what they are by pretending to edit them. I used this spreadsheet as a kind of scratchpad for data from elsewhere, so you may not be able to figure out where some of the numbers come from.

Martin

[From Rick Marken (2013.03.10.1540 PDT)]

Martin Taylor (2013.03.09.19.48)–

      RM: I didn't report RMS error for the CEV either. The numbers I show are the RMS deviation of cursor from target. There is no assumption about what the CEV is when measuring RMS error; RMS error is what is observed in the data.

MT: I _really_ misunderstood you, then. I thought you were comparing the case in which the controlled perception was the height differential to the case where the controlled perception was the “angle alpha”.

RM: No misunderstanding there. I was doing exactly that. I was comparing a control model’s ability to control (control measured in the usual way, as RMS deviation of cursor from target) when the controlled perception was height differential (v) versus angle alpha (arctan(v/s)).

MT: In the first case the CEV is height differential, in the second it is the displayed “angle alpha”. But now you say this is not what you were thinking of, despite the diagram you showed [From Rick Marken (2013.03.02.1000)]:

RM: The diagram shows the behavior of the model (in terms of RMS deviation of cursor from target) when the CEV is height differential (v) and when it is angle alpha (arctan(v/s)). I think you may be confusing theory and data. The data is how well people control at different separations (measured as RMS deviation of cursor from target). The theory is that poorer control when s increases results from the fact that people are controlling arctan(v/s) rather than just v. My table shows that, indeed, a model that controls arctan(v/s) accounts for this behavior better than a model that controls v.

        MT: I reported the ratio of controlled RMS to uncontrolled RMS, in the form of uncertainty values.

      RM: I could do that too as long as you tell me what “uncontrolled RMS” is. I take it to be the variance of c - t if the person didn’t control at all (c was constant) which would make “uncontrolled RMS” equal to the variance of t (which is the variance of the disturbance). Is that right?

MT: So yes, you are right.

RM: Then the numbers reported in my earlier table were precisely proportional to the ratio of controlled RMS to uncontrolled RMS that you report. The numbers in the table were just controlled RMS, and since the same disturbance was used in all runs of the model, uncontrolled RMS was the same for all entries, 72.8. So the ratios of RMS controlled to RMS uncontrolled in the table I sent earlier become:

     s             10     20     40     80     160

     v             .15    .15    .15    .15    .15

     arctan(v/s)   .15    .26    .43    .61    1.06

MT: I don't understand the bottom line. You say that it is “arctan(v/s)”, which is an angle, but you also say its unit is pixels. That doesn’t make sense.

RM: The labels in the rows of the table (v and arctan(v/s)) are the variables controlled by the model. All the numbers in the earlier table are measures of control in terms of RMS deviation of cursor from target in pixels. But since the table above now presents a ratio of RMS values, these measures of control are, of course, dimensionless.

      RM: Maybe you could re-post the graph too?

MT: Here are two graphs.

RM: Great, thanks! Since you are getting bit measures of performance ranging from 3.5 to 4.5, the RMS error ratios on which these are based must range from about 12 to 24. This means that the RMS ratios you are computing are (RMS uncontrolled/RMS controlled) rather than vice versa; I report (RMS controlled/RMS uncontrolled) in the table above. I’ll convert it to your measures and plot it along with your data. OK, here it is.
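The unit conversion being done here can be sketched briefly. The 0.15 ratio is from the table above; everything else is an illustrative assumption.

```python
import math

# Martin reports log2(RMS uncontrolled / RMS controlled) in bits;
# Rick reports the inverse ratio (RMS controlled / RMS uncontrolled).
def bits_to_ratio(bits):
    return 2.0 ** bits            # e.g. 4 bits -> RMS reduction factor of 16

def rick_ratio_to_bits(r):
    return math.log2(1.0 / r)     # invert Rick's ratio, then take log2

print(bits_to_ratio(4.0))
print(round(rick_ratio_to_bits(0.15), 2))
```

The second line converts a value from Rick's table into Martin's bit units, which is what the combined plot requires.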

Clearly the behavior of the model that controls angle gives a much better match to the data than the model that controls v.

MT: By the way, these and a few other odd graphs are all in the Libre Office spreadsheet file named “data/Sp0Sep50_1graphs.ods”. The other graphs aren’t labelled but you might be able to figure out what they are by pretending to edit them. I used this spreadsheet as a kind of scratchpad for data from elsewhere, so you may not be able to figure out where some of the numbers come from.

RM: I’m starting to do some detailed analysis of your data now; detailed in terms of fitting the model using v or arctan(v/s) as the controlled variable. (By the way, I don’t like the term CEV (controlled environmental variable); it implies that the controlled variable is a variable out in the environment. While the controlled variable is a function of environmental variables, I think it’s misleading to imply that the variable exists in the environment. But that’s just a nit; I can live with CEV.) I think the main goal here is to understand each other. Do you think you are getting a better understanding of how I have been using your experiment to determine what perception might be under control in this task?

Best

Rick


[From Bill Powers (2013.03.10.1640 MDT)]
   Rick Marken (2013.03.10.1540 PDT)

RM: The diagram shows the behavior of the model (in terms of RMS deviation of cursor from target) when the CEV is height differential (v) and when it is angle alpha (arctan(v/s)). I think you may be confusing theory and data. The data is how well people control at different separations (measured as RMS deviation of cursor from target). The theory is that poorer control when s increases results from the fact that people are controlling arctan(v/s) rather than just v. My table shows that, indeed, a model that controls arctan(v/s) accounts for this behavior better than a model that controls v.

BP: I think your approach is the closer to correct. The quality of control is determined by loop gain. The gain of the input function becomes less as s increases, going to zero at alpha = 90 degrees. Since the other gains around the loop remain the same, the loop gain is lower (as you said) if the controlled variable is alpha.

Since the model using alpha as the controlled variable shows errors becoming larger as s increases, and since that is the model that fits the data the best, the real control system is most probably controlling alpha, not height difference. You draw the same conclusion:

RM: Clearly the behavior of the model that controls angle gives a much better match to the data than the model that controls v.
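Bill's loop-gain argument can be sketched in a toy tracking simulation. Everything here is a hypothetical illustration (the gain, the random-walk disturbance, the frame count), not the actual model either Rick or Martin is running; it only shows why a system controlling alpha = arctan(v/s) loses control as s grows while a system controlling v does not:

```python
import math
import random

def simulate(s, control_angle, frames=4096, dt=1.0 / 60.0, gain=50.0):
    """Toy pursuit-tracking loop. The system perceives either v
    (height difference) or alpha = atan(v/s), and integrates that
    perception to move the cursor. Hypothetical parameters throughout."""
    random.seed(0)                      # same disturbance for every run
    cursor, d, err_sq = 0.0, 0.0, 0.0
    for _ in range(frames):
        d += random.gauss(0.0, 1.0)     # slow random-walk disturbance
        target = 0.5 * d
        v = target - cursor             # height difference in pixels
        p = math.atan2(v, s) if control_angle else v
        cursor += gain * p * dt         # output integrates the perception
        err_sq += (target - cursor) ** 2
    return math.sqrt(err_sq / frames)   # RMS cursor-target deviation

for s in (45, 180, 720):
    rms_v = simulate(s, control_angle=False)
    rms_a = simulate(s, control_angle=True)
    print(f"s={s:4d}px  RMS(v-control)={rms_v:7.2f}  RMS(alpha-control)={rms_a:7.2f}")
```

Because d(alpha)/dv = s/(s² + v²) shrinks as s grows while the other gains stay fixed, the loop gain of the alpha-controlling system falls with separation and its RMS error climbs; the v-controlling system's error is unaffected by s.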

BP: I assume that the computer display in the task does not have a horizontal line drawn to show y = 0. This means that the height difference between target and cursor must be estimated from relationships with objects on the screen, or its boundaries. Are the two dashed line segments below vertically aligned with each other?


--- ---

I find it hard to tell even with the lines of text below and above. I have to imagine a line from one middle dash to the other middle dash and then look at its tilt on the page. This is much easier if the two objects are closer together:

                                           ---
                                    ---

The vertical separation is the same in both cases, but the angle is clearly different. And with the same change in one of the objects' vertical position, the perceived angle below clearly changes much more than the one above.

Best,

Bill P.

[Martin Taylor 2013.03.11.00.01]

[From Rick Marken (2013.03.10.1540 PDT)]

Your message makes me much clearer about your thinking, and we may be converging. I'd still be happier if you did the model optimizing by matching the track, but at least I now think I understand the data you presented and why you think they show what you say.

I don't agree, however, that it resolves the issue of what perception I was controlling. I'd like to agree, because from the start I have said that subjectively it feels as though I am controlling the angle alpha, but I can't agree because your model doesn't take into account the change in the ability of the subject (me) to perceive the difference in height (or in alpha) as a function of separation. That change was the reason for setting up the experiment, to test the prediction that the control ratio would get worse as the ability to discriminate decreases. No model that fails to take this into account can be accepted as a valid way to distinguish the two candidates.

I think I'm being a good scientist here, in that I am arguing against what I would like to be able to believe.

RM: I'm starting to do some detailed analysis of your data now; detailed in terms of fitting the model using v or arctan(v/s) as the controlled variable ... I think the main goal here is to understand each other. Do you think you are getting a better understanding of how I have been using your experiment to determine what perception might be under control in this task?

Yes, but as I mentioned, I don't think you yet have succeeded in making the discrimination, much as I would like to believe you have.


----------------

For your interest, I have put in my Dropbox a snapshot of my very naively coded experiment as it was a week ago. With luck, you can download running applications with source from my Dropbox link.

This version has interference on screen if you choose to have it. There are 5 applications: 32- and 64-bit Windows, 32- and 64-bit Linux, and Mac OS X. The source code comes with each, and is also separately available in the zip file you should get from that link. I hope it works for you. I can't really test it, because it is my own Dropbox, and maybe that allows me access where it would not allow others. But give it a shot. If it runs for you, you can at least see what I am talking about.

Supposing the link works and the application for your OS runs, here's what you should see.

When it starts, there is a 1000x1000 pixel window with a browny-orange background. In the middle are the two ellipses, a pinkish target and a green cursor. At the top are four pull-down menus and two buttons called "Go" and "Stop". The "Stop" button is lethal: it exits the program. The "Go" button starts the experiment after you are satisfied with your menu choices. You don't have to make any choices; if you don't, the experiment will run with default values.

The menus, from left to right are:

Speed: controls how fast the disturbance changes. All the speeds are rather slow.

Separation: chooses the lateral separation between the tips of the ellipses, in pixels.

DataFolder: chooses where to store the csv file of your data at the end of the run. If you don't choose anything, or if you choose the Default, the data will be in a "data" folder in the same folder as your application (at least it is on a Mac).

Interference Level: interference consists of N short lines at various orientations that flash different colours and move around the window. The idea was to try to interfere with the "angle alpha" perception without interfering with the height-differential perception, but I don't think it works, and I don't know how to test whether it works. So treat it as a bit of programming fun.

When you have made your selection from any of the menus you want to use to change from the defaults (which are Speed 1, Separation 180, Default data folder, zero interference), click on the "Go" button and use the mouse to move the cursor to start tracking the target. If during your tracking you are near to moving the mouse out of the window left or right, the mouse-cursor (normally hidden) will be displayed in a small white box, giving you time to bring it back toward the middle.

There is a 5-second run-in period during which the background is brownish grey, followed by approximately 68 seconds (4096 frames at 60 fps) with a grey-blue background during which the pursuit tracking data are being recorded, followed by 10 seconds in which compensatory tracking with a very slow disturbance (the same as Speed 0) is being recorded. At the end of that, the data are saved and the background returns to its start state, ready for you to begin a new run. If you simply click "Go" the new run will have the same settings, or you can change settings from the menus to do a different kind of run.

Data files are named SpXSepYBgZ_N, where X is the disturbance speed selection, Y is the pixel separation selection, Z is the number of interfering dazzle lines, and N is a serial number that avoids overwriting -- at least until you have ten repetitions of the same conditions. The tenth version of any particular setup will be overwritten by the eleventh and subsequent ones.
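The naming rule can be sketched as follows. This is a hypothetical reimplementation of the scheme described above, not Martin's Processing source; `data_filename` and its arguments are invented names for illustration:

```python
import os
import tempfile

def data_filename(folder, speed, sep, bg):
    """Return a SpXSepYBgZ_N.csv name whose serial N avoids
    overwriting earlier runs -- until ten repetitions of the
    same condition exist, after which _9 is reused, as the
    text notes. (Sketch of the rule, not the original code.)"""
    for n in range(10):
        name = f"Sp{speed}Sep{sep}Bg{bg}_{n}.csv"
        if not os.path.exists(os.path.join(folder, name)):
            return name
    # All ten serials taken: the eleventh run overwrites the tenth.
    return f"Sp{speed}Sep{sep}Bg{bg}_9.csv"

folder = tempfile.mkdtemp()                 # stand-in for the data folder
print(data_filename(folder, 1, 180, 0))     # -> Sp1Sep180Bg0_0.csv
```

A single-digit serial is why the tenth repetition of a condition is the last one that is safe from being overwritten.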

You may see some debugging messages from time to time. Ignore them. Remember that although I wanted to demonstrate the effect of reduced perceptual discrimination on control performance, this whole thing was for me more of an exercise in learning to program in Processing than an attempt at a well-built experimental setup. I know of many possible improvements, and I will probably rewrite the whole thing if it ever looks as though it might be useful. You will have the source when you download the zip file, so you can write your own improvements. If you are going to do that, you will need the controlP5 library at <http://www.sojamo.de/libraries/controlP5/> (many useful libraries are linked at <http://processing.org/reference/libraries/>).

One note: This version uses Perlin noise, which is provided with Processing. I don't know the actual spectral parameters of the noise, so it will be hard to compare the results against other experiments with better known noise parameters.
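One way to characterise the unknown noise after the fact is to estimate a power spectrum from a recorded disturbance track in the csv output. Here is a minimal sketch; the AR(1) signal standing in for the disturbance is hypothetical (Perlin noise itself lives inside Processing), and the naive DFT is only meant for short records:

```python
import cmath
import math
import random

def power_spectrum(samples):
    """Naive DFT periodogram, O(n^2) -- fine for short records.
    Returns power in frequency bins 0 .. n/2 - 1."""
    n = len(samples)
    mean = sum(samples) / n
    x = [v - mean for v in samples]          # remove DC before transforming
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(n // 2)]

# Hypothetical stand-in for a recorded disturbance column:
# a smoothed (AR(1)) random walk, which is low-pass like Perlin noise.
random.seed(1)
track, v = [], 0.0
for _ in range(256):
    v = 0.95 * v + random.gauss(0.0, 1.0)
    track.append(v)

spec = power_spectrum(track)
low, high = sum(spec[1:9]), sum(spec[-8:])
print(f"low-band power {low:.1f}  high-band power {high:.1f}")
```

For a smooth disturbance the low-frequency bins dominate; comparing such spectra across experiments would give the "better known noise parameters" the note asks for, even without Processing's internals.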

Enjoy, but don't complain that the programming is lousy and inefficient. I know that already.

Martin