Reading versus imagination (was Uncertainty, Information and control part I: U, I, and M)

[From Rick Marken (2013.03.11.1700)]

Martin Taylor (2013.03.11.00.01)–

[From Rick Marken (2013.03.10.1540 PDT)]

MT: Your message makes me much clearer about your thinking, and we may be converging. I’d still be happier if you did the model optimizing by matching the track, but at least I now think I understand the data you presented and why you think they show what you say.

RM: Great. As I said, I am in the process of matching the model to some of your tracking data. I will report back soon (well, before the end of the week) on the results of that effort.

MT: I don’t agree, however, that it resolves the issue of what perception I was controlling.

RM: I agree, the results of testing for controlled variables, regardless of how it’s done, never finally resolves the issue of what the controlled variable really is. But it gets us closer to a correct definition of that variable.

I think the results of my modeling efforts certainly suggest that the perception controlled in a tracking task is closer to arctan(v/s) than it is to v. But it can't really be arctan(v/s), for the simple reason that this value is not defined for s = 0 (when there is no lateral separation between cursor and target), and we know that people are perfectly capable of controlling the distance between cursor and target when s = 0. Maybe s is better defined as the distance between the midpoints of target and cursor?

So whatever perception is controlled, it is closer to arctan(v/s) than it is to v. What we need now is a clever mathematician like you to suggest functions other than arctan(v/s) as the perceptual function that defines the controlled variable in your task; this new function should produce the same results as arctan(v/s) when s > 0. (When the actual s in your experiment was 0 I used s = 10, which was the minimal s that let the model work; so this does assume that s is the distance between midpoints, I guess.)
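In code, the two candidate perceptual functions look like this (just a sketch to keep the definitions straight; the names are illustrative and the s = 10 floor is the workaround mentioned above, not part of anyone's actual program):

```
import numpy as np

def perceive_height_difference(v, s):
    """Candidate 1: the controlled perception is simply the vertical
    difference v between cursor and target (s plays no role)."""
    return v

def perceive_angle(v, s, s_floor=10.0):
    """Candidate 2: the controlled perception is the angle of the line
    joining cursor and target, arctan(v/s).  arctan(v/s) is undefined
    at s = 0, so s is floored at 10 pixels, the workaround described
    above.  (np.arctan2(v, s) would be defined even at s = 0.)"""
    return np.arctan(v / max(s, s_floor))
```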

MT: I’d like to agree, because from the start I have said that subjectively it feels as though I am controlling the angle alpha, but I can’t agree because your model doesn’t take into account the change in the ability of the subject (me) to perceive the difference in height (or in alpha) as a function of separation.

RM: So my model accounts for the behavioral data but not the subjective data? The evaluation of models based on subjective data is new to me. How is it done?

MT: That change was the reason for setting up the experiment, to test the prediction that the control ratio would get worse as the ability to discriminate decreases.

RM: I know why you set up the experiment. But I think my model shows that your correct prediction about what would happen can be accounted for without any assumptions about decreased ability to discriminate. My model accounts for the data by simply assuming that what was being controlled was arctan(v/s) rather than v. There was no need to explain the results in terms of s leading to decreased discrimination. And indeed, a model in which control gets worse because discrimination decreases -- the model you apparently advocate -- has never been presented.

It has simply been asserted that the ability to discriminate decreases as s increases and that this is the reason for the poorer control. What I showed is that there is no reason to assume that it is decreasing ability to discriminate that leads to decreased ability to control as s increases; my model just assumes that it is arctan(v/s) rather than v that is controlled, and it turns out that the angle control model produces poorer control as s increases, just like the human subject. If you believe that decreasing ability to discriminate with increasing s is the real reason for the decrease in the ability to control, then you should present your model that behaves this way and then we can test it against the data.
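A minimal simulation makes the point concrete. The sketch below is a toy loop, not the model I actually fitted to your data; the gain, disturbance, and run-length numbers are arbitrary. It controls either v itself or arctan(v/s) with fixed parameters and reports the RMS error in v at several separations. The angle-controlling loop gets worse as s grows because the same vertical error produces a smaller and smaller change in the perceived angle; the v-controlling loop does not.

```
import numpy as np

def smoothed_disturbance(n, seed=1, width=240, scale=100.0):
    """Slowly varying random disturbance (a crude stand-in for the
    experiment's Perlin-noise disturbance, whose spectrum is unknown)."""
    rng = np.random.default_rng(seed)
    return np.convolve(rng.normal(size=n), np.ones(width) / width, mode="same") * scale

def rms_error(perceive, s, k, n=4096):
    """One run of a simple control loop: the output o opposes the disturbance d,
    and v = o + d[t] is the vertical cursor-target difference.  Returns the RMS
    of v, i.e. the error from a reference of zero."""
    d = smoothed_disturbance(n)
    o, v_trace = 0.0, []
    for t in range(n):
        v = o + d[t]
        e = 0.0 - perceive(v, s)     # reference = 0
        o += k * e                   # integrating output function
        v_trace.append(v)
    return float(np.sqrt(np.mean(np.square(v_trace))))

def perceive_v(v, s):
    return v

def perceive_angle(v, s):
    return np.arctan(v / max(s, 10.0))

# Gains would normally be fitted to the data; here each is set so that its
# model controls well at the smallest separation.
for s in (10, 60, 120, 180, 240):
    print(f"s={s:3d}  v-control RMS={rms_error(perceive_v, s, k=0.5):7.2f}  "
          f"angle-control RMS={rms_error(perceive_angle, s, k=5.0):7.2f}")
```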

MT: No model that fails to take this into account can be accepted as a valid way to distinguish the two candidates.

RM: I think you are saying that a control model that accounts nearly perfectly for the decrease in ability to control with increasing s can't possibly be right because you just know that it must be the decrease in discrimination that is responsible for this result. Is that what you mean?

RM: I'm starting to do some detailed analysis of your data now; detailed in terms of fitting the model using v or arctan(v/s) as the controlled variable. I think the main goal here is to understand each other. Do you think you are getting a better understanding of how I have been using your experiment to determine what perception might be under control in this task?

MT: Yes, but as I mentioned, I don’t think you yet have succeeded in making the discrimination, much as I would like to believe you have.

RM: All I've done is show that a model controlling arctan(v/s) matches the data better than one controlling v, and that there is no need to assume anything about the ability to discriminate differences in cursor/target location in order to account for the decrease in ability to control with increasing s. The next step is to do some further tests of the model to see if we can refine it a bit. If you have an alternative model that accounts for the data based on a reduction in discrimination ability with increasing s, then it would be nice to see it so we could compare it to the angle control model.


MT: For your interest, I have put in my Dropbox a snapshot of my very naively coded experiment as it was a week ago. With luck, you can download running applications with source from <https://www.dropbox.com/sh/3fvum0pajdxqga9/cdfw-dzynk/TrackerExperiment_1a>.

RM: Thanks. Very impressive. I'll just work with the data I have from you for now, but it will be nice to see how the tracking task actually works.

Best

Rick

···

This version has interference on screen if you choose to have it. There are 5 applications: 32- and 64-bit Windows, 32- and 64-bit Linux, and Mac OS X. The source code comes with each, and is also separately available in the zip file you should get from that link. I hope it works for you. I can't really test it, because it is my own Dropbox, and maybe that allows me access where it would not allow others. But give it a shot. If it runs for you, you can at least see what I am talking about.

Supposing the link works and the application for your OS runs, here’s what you should see.

When it starts, there is a 1000x1000 pixel window with a browny-orange background. In the middle are the two ellipses, a pinkish target and a green cursor. At the top are four pull-down menus and two buttons called "Go" and "Stop". The "Stop" button is lethal: it exits the program. The "Go" button starts the experiment after you are satisfied with your menu choices. You don't have to make any choices; if you don't, the experiment will run with default values.

The menus, from left to right are:

Speed: Controls how fast the disturbance changes. All the speeds are rather slow.

Separation: Choose the lateral separation between the tips of the ellipses in pixels.

DataFolder: Choose where to store the csv file of your data at the end of the run. If you don’t choose anything or if you choose the Default, the data will be in a “data” folder in the same folder as your application (at least it is on a Mac).

Interference Level: Interference consists of N short lines at various orientations that flash different colours and move around the window. The idea was to try to interfere with the "angle alpha" perception without interfering with the height-differential perception, but I don't think it works and don't know how to test whether it works. So treat it as a bit of programming fun.

When you have made your selection from any of the menus you want to use to change from the defaults (which are Speed 1, Separation 180, Default data folder, zero interference), click on the “Go” button and use the mouse to move the cursor to start tracking the target. If during your tracking you are near to moving the mouse out of the window left or right, the mouse-cursor (normally hidden) will be displayed in a small white box, giving you time to bring it back toward the middle.

There is a 5 second run in period during which the background is brownish grey, followed by approximately 69 seconds (4096 frames at 60 fps) with a greyblue background during which the pursuit tracking data are being recorded, followed by 10 seconds in which compensatory tracking with a very slow disturbance (the same as Speed 0) is being recorded. At the end of that, the data are saved and the background returns to its start state, ready for you to begin a new run. If you simply click “Go” the new run will have the same settings, or you can change settings from the menus to do a different kind of run.

Data files are named SpXSepYBgZ_N, where X is the disturbance speed selection, Y is the pixel separation selection, Z is the number of interfering dazzle lines, and N is a serial number that avoids overwriting – at least until you have ten repetitions of the same conditions. The tenth version of any particular setup will be overwritten by the eleventh and subsequent ones.
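If it helps when collating runs, the condition parameters can be pulled back out of those names with something like this (a sketch only; it assumes the literal SpXSepYBgZ_N pattern described above, with integer fields, and the example name is made up):

```
import re

def parse_run_name(name):
    """Parse a run file name like 'Sp1Sep180Bg0_3' into its condition parameters."""
    m = re.match(r"Sp(\d+)Sep(\d+)Bg(\d+)_(\d+)", name)
    if m is None:
        raise ValueError(f"unexpected file name: {name}")
    speed, separation, dazzle_lines, serial = map(int, m.groups())
    return {"speed": speed, "separation": separation,
            "interference": dazzle_lines, "serial": serial}

print(parse_run_name("Sp1Sep180Bg0_3"))
```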

You may see some debugging messages from time to time. Ignore them. Remember that although I wanted to demonstrate the effect of reduced perceptual discrimination on control performance, this whole thing was for me more of an exercise in learning to program in Processing than an attempt at a well-built experimental setup. I know of many possible improvements, and I will probably rewrite the whole thing if it ever looks as though it might be useful. You will have the source when you download the zip file, so you can write your own improvements. If you are going to do that, you will need the controlP5 library at <http://www.sojamo.de/libraries/controlP5/> (many useful libraries are linked at <http://processing.org/reference/libraries/>).

One note: This version uses Perlin noise, which is provided with Processing. I don’t know the actual spectral parameters of the noise, so it will be hard to compare the results against other experiments with better known noise parameters.
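If that comparison ever matters, one workaround would be to estimate the disturbance spectrum from a recorded run rather than from the generator. A rough sketch (it assumes the target or disturbance trace can be read out of the saved csv as one column; none of this is from the actual program):

```
import numpy as np

def disturbance_spectrum(samples, fps=60.0):
    """Estimate the power spectrum of a recorded disturbance/target trace.
    `samples` is the 4096-frame sequence from one run; returns (freqs, power)."""
    x = np.asarray(samples, dtype=float)
    x = x - x.mean()
    window = np.hanning(len(x))            # reduce leakage from the finite record
    spectrum = np.fft.rfft(x * window)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    return freqs, np.abs(spectrum) ** 2

# e.g. freqs, power = disturbance_spectrum(target_column_from_csv)
```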

Enjoy, but don’t complain that the programming is lousy and inefficient. I know that already.

Martin


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2013.03.11.20.07]

[From Rick Marken (2013.03.11.1700)]

Martin Taylor (2013.03.11.00.01)--

[From Rick Marken (2013.03.10.1540 PDT)]

MT: Your message makes me much clearer about your thinking, and we may be converging. I'd still be happier if you did the model optimizing by matching the track, but at least I now think I understand the data you presented and why you think they show what you say.

RM: Great. As I said, I am in the process of matching the model to some of your tracking data. I will report back soon (well, before the end of the week) on the results of that effort.

MT: I don't agree, however, that it resolves the issue of what perception I was controlling.

RM: I agree, the results of testing for controlled variables, regardless of how it's done, never finally resolves the issue of what the controlled variable really is. But it gets us closer to a correct definition of that variable.

RM: So my model accounts for the behavioral data but not the subjective data?

No. On the contrary, it accounts for the subjective data as well as the behavioural data. My problem is that so far neither of us has shown that it is the only way to account for the behavioural data. I WANT to believe that the way it feels is the way it is, and that makes me especially suspicious.

RM: The evaluation of models based on subjective data is new to me. How is it done?

Dunno. Ask Bill. It's how he developed the levels of the hierarchy. It's why he always insists that perception is precise and we shouldn't be concerned with perceptual uncertainty. I agree with something you said early in this thread, that subjectively we don't really know what we are controlling. Different hypotheses need to be compared by testing. I agree that you have shown that if perception is precise, controlling angle gives a declining control ratio with increasing separation and controlling height differential doesn't (this last was something I had known before I started on the experiment, though I hadn't realized that the angle would give a declining control ratio with increasing separation).

MT: That change was the reason for setting up the experiment, to test the prediction that the control ratio would get worse as the ability to discriminate decreases.

RM: I know why you set up the experiment. But I think my model shows that your correct prediction about what would happen could be accounted for without any assumptions about decreased ability to discriminate.

Correct. But that doesn't invalidate the prediction based on perceptual accuracy.

RM: It has simply been asserted that the ability to discriminate decreases as s increases

No. I presented a graph with the data showing that it does.

RM: and that this is the reason for the poorer control. What I showed is that there is no reason to assume that it is decreasing ability to discriminate that leads to decreased ability to control as s increases;

Correct. There is no need. That doesn't mean it is false.

Consider, if it were not true that control performance depends on perceptual accuracy, why would jewellers and watchmakers use magnifying lenses? The fact that they can't make out tiny mechanical relationships with the naked eye would have no effect on their ability to make their precision masterpieces.

MT: No model that fails to take this into account can be accepted as a valid way to distinguish the two candidates.

RM: I think you are saying that a control model that accounts nearly perfectly for the decrease in ability to control with increasing s can't possibly be right because you just know that it must be the decrease in discrimination that is responsible for this result. Is that what you mean?

No. I'm saying that any model that doesn't take into account the perceptual ability to discriminate is incomplete, not that it is wrong. All models are incomplete in many ways, without necessarily being wrong. The issue is to determine whether that incompleteness matters given the noisiness of the data (and the data are very noisy :slight_smile: And I'd hardly say your model "accounts nearly perfectly" for my data. It has the same trend, and that's good. I'm perfectly willing to believe it may be the right answer. I'm just not convinced yet.

      ----------------



MT: For your interest, I have put in my Dropbox a snapshot of my very naively coded experiment as it was a week ago. With luck, you can download running applications with source from <https://www.dropbox.com/sh/3fvum0pajdxqga9/cdfw-dzynk/TrackerExperiment_1a>.

RM: Thanks. Very impressive. I'll just work with the data I have from you for now but it will be nice to see how the tracking task actually works.

I assume that means the link worked for you. That's a relief.

Thanks for the compliment, but if you look at the code, you will see it is pretty messy. A real program for such experiments would not look like that. But I'm learning bit by bit, and it might get better. I only sent that snapshot out so everyone could see just what we have been talking about. By the way, I did another series of 18 runs this afternoon to see whether the flashing lines made any difference. Again, the data are really noisy, but they suggest that having some flashing interference reduces control quality, while it doesn't matter in any obvious way how many of the flashing lines there are. The interference reduces the control ratio by a factor of nearly two, but doesn't affect the discrimination performance reliably.

Anyway, do your own trials without interference lines, and maybe we can share data from more than one subject.

Martin

[From Bill Powers (2013.03.11.1915 MDT)]

Martin Taylor 2013.03.11.20.07 --

MT: I don’t agree, however, that
it resolves the issue of what perception I was
controlling.

RM: I agree, the results
of testing for controlled variables, regardless of how it’s done, never
finally resolves the issue of what the controlled variable really is.
But it gets us closer to a correct definition of that variable.

RM: So my model accounts for the behavioral data but not the subjective
data?

MT: No. On the contrary, it accounts for the subjective data as well as
the behavioural data. My problem is that so far neither of us has
shown that it is the only way to account for the behavioural data. I WANT
to believe that the way it feels is the way it is, and that makes me
especially suspicious.

BP: The best way I know of to counteract that bias tendency is to decide
once and for all that we simply want the model that best fits the data.
We won’t get any closer than that to knowing what the real controlled
variable is. We have to avoid falling in love with any particular model,
and when we can, let the experimental procedures govern the
decision.

RM: The evaluation of models
based on subjective data is new to me. How is it done?

MT: Dunno. Ask Bill. It’s how he developed the levels of the hierarchy.
It’s why he always insists that perception is precise and we shouldn’t be
concerned with perceptual uncertainty.

BP: I never said that. What I said is that we can never measure the
perceptual uncertainty in any objective terms, because we know only the
perception, the appearance of the variable we are controlling. The
hierarchical levels I ended up with had to meet certain specific
criteria, such as showing that in order to control a higher-level
variable it is necessary to vary one or more of lower order. But they
didn’t depend on knowing what is actually Out There.

MT: I agree with something
you said early in this thread, that subjectively we don’t really know
what we are controlling. Different hypotheses need to be compared by
testing. I agree that you have shown that if perception is precise,
controlling angle gives a declining control ratio with increasing
separation and controlling height differential doesn’t (this last was
something I had known before I started on the experiment, though I hadn’t
realized that the angle would give a declining control ratio with
increasing separation).

BP: The control ratio (error with and without control) can be evaluated
only after we have found the best definition of the controlled variable.
Without that constraint, we can assume different controlled variables
that yield the same control ratio, or that, with small changes, can be
made to show either variable has the better ratio. The first question
must always be, what is our best guess about what this system is
controlling?
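For any candidate definition, the arithmetic is simple. Here is a generic sketch, not anyone's actual analysis code; the "uncontrolled" trace is whatever the disturbance alone would have produced with the cursor left alone, and cv() is whichever definition of the controlled variable is being tried:

```
import numpy as np

def rms(x):
    return float(np.sqrt(np.mean(np.square(np.asarray(x, dtype=float)))))

def control_ratio(cv, cursor, target, uncontrolled_cursor, reference=0.0):
    """Control ratio for one candidate definition of the controlled variable:
    the RMS error the disturbance alone would have produced (cursor left
    uncontrolled) divided by the RMS error actually observed.
    cv(cursor, target) is the candidate perceptual function; the traces are
    per-frame arrays from one run."""
    with_control = rms(np.asarray(cv(cursor, target)) - reference)
    without_control = rms(np.asarray(cv(uncontrolled_cursor, target)) - reference)
    return without_control / with_control
```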

The next question is whether the system is controlling this variable with
small or large errors, or in ways that remain consistent even with
different patterns of disturbance. We can never compare the apparent
controlled variable with the “actual” one.

MT: That change was the reason for setting up the experiment, to test
the prediction that the control ratio would get worse as the ability to
discriminate decreases.

BP: I predict that when the horizontal separation S is small, the
controlled variable will turn out to be something close to the difference in
height between cursor and target; for large separations, it will be
closest to the angle of the line connecting cursor and target centroids.
The reasons should be fairly obvious from the subjective
experience.

I think that someone else’s “ability to discriminate” might be
somewhat difficult to define and measure in this experiment. It seems to
me that one indicator of difficult discrimination would be a lack of
repeatability in the performance, because one run might indicate one
controlled variable while a repetition would indicate a different one as
the best definition, more or less at random.

RM: It has simply been asserted
that the ability to discriminate decreases as s
increases

MT:No. I presented a graph with the data showing that it
does.

BP: How did you measure the ability to discriminate? From what I know so
far, it seems that this is a prediction from some theory of
discrimination, not a direct measure of it.

MT: Consider, if it were not
true that control performance depends on perceptual accuracy, why would
jewellers and watchmakers use magnifying lenses? The fact that they can’t
make out tiny mechanical relationships with the naked eye would have no
effect on their ability to make their precision
masterpieces.

BP: Increased magnification, according to optical theories, increases the
sensitivity of the perception to changes in the small forms involved or
their relationships. The loop gain is increased by the amount of
magnification, and that could be what improves performance (up to the
point where instability sets in or noise becomes predominant).

MT: I’m saying that any model
that doesn’t take into account the perceptual ability to discriminate is
incomplete, not that it is wrong.

BP: How do you “take it into account” if you can’t measure it?
As I just indicated, the explanation might be that loop gain changes
rather than discrimination, and discrimination limits might show up as
increased variability in control, not repeatable changes in the amount of
error.

MT: All models are incomplete in
many ways, without necessarily being wrong. The issue is to determine
whether that incompleteness matters given the noisiness of the data (and
the data are very noisy :slight_smile: And I’d hardly say your model “accounts
nearly perfectly” for my data. It has the same trend, and that’s
good. I’m perfectly willing to believe it may be the right answer. I’m
just not convinced yet.

BP: Let’s focus on getting the right answer, not on promoting any
particular hypothesis.

Best,

Bill P.

[Martin Taylor 2013.03.12.00.36]

Replying to Bill Powers (2013.03.11.1915 MDT)--

BP: The best way I know of to counteract that bias tendency is to decide once and for all that we simply want the model that best fits the data. We won't get any closer than that to knowing what the real controlled variable is. We have to avoid falling in love with any particular model, and when we can, let the experimental procedures govern the decision.

MT: Yes, that is what I am saying.

BP: The hierarchical levels I ended up with had to meet certain specific criteria, such as showing that in order to control a higher-level variable it is necessary to vary one or more of lower order. But they didn't depend on knowing what is actually Out There.

MT: Nor did I suggest that they did. But you have in the past several times said that you derived the levels from your subjective experience. The criteria you mention came later, or at least so I have understood you to say.

BP: The next question is whether the system is controlling this variable with small or large errors, or in ways that remain consistent even with different patterns of disturbance. We can never compare the apparent controlled variable with the "actual" one.

MT: Agreed. I do think that my discussion that Warren put on PCTweb remains relevant.

BP: How did you measure the ability to discriminate? From what I know so far, it seems that this is a prediction from some theory of discrimination, not a direct measure of it.

MT: The measure I used was a very crude one, and I would not put any reliance on the numerical value, as I said to Rick when I first mentioned the "pseudo-jnd" and when I explained it again. However, I would put a bit more reliance on the notion that the pseudo-jnd varies proportionately to a reasonable measure of discriminability, such as d'.

I did the easiest thing I could think of to program. I switched the task from pursuit to compensatory and made the disturbance very slow, so that the subject (me) would move the mouse when the height error (or non-zero-ness of the angle alpha) was detectable. I do plan to use a rather better task in future, possibly more than one, but that requires changing the programming in ways I understand, while I am trying to learn new facets of the language.

BP: Increased magnification, according to optical theories, increases the sensitivity of the perception to changes in the small forms involved or their relationships. The loop gain is increased by the amount of magnification, and that could be what improves performance (up to the point where instability sets in or noise becomes predominant).

MT: Are you saying anything other than the consequences of improved discrimination?

BP: How do you "take it into account" if you can't measure it? As I just indicated, the explanation might be that loop gain changes rather than discrimination...

MT: I would suggest substituting "as a consequence of" for "rather than".

BP: ...and discrimination limits might show up as increased variability in control, not repeatable changes in the amount of error.

MT: How would you get one without the other? Maybe I misunderstand, but "increased variability in control" sounds like deviations between the error traces from trial to trial, which would show up as repeatable changes in the amount of error, wouldn't it?

BP: Let's focus on getting the right answer, not on promoting any particular hypothesis.

MT: That's the point I've been making to Rick. I don't want to promote any particular hypothesis, and neither do I want to reject any hypothesis that remains compatible with the data. When I like a hypothesis, I usually feel I must try to discredit it.

Martin


[From Rick Marken (2013.03.12.0855)]

[Martin Taylor 2013.03.11.20.07]

MT: No. On the contrary, it [the angle control model--RM] accounts for the subjective data as well as the behavioural data. My problem is that so far neither of us has shown that it is the only way to account for the behavioural data.

RM: So far I have shown that the angle control model accounts for the data and the difference control model doesn’t. I am not trying to show that it is the only model that can account for the data; just that it is a model that can account for the data. So while it is true that neither of us has shown that the angle control model is the only way to account for the data, only one of us has shown any model at all that can account for the data.

MT: I WANT to believe that the way it feels is the way it is, and that makes me especially suspicious.

RM: I think you doth protest too much. If you really want to believe it then I suggest that you do some more tests to either make the angle control model more believable to yourself or reject it, so that you don't have to want to believe it any more.

RM: I know why you set up the experiment. But I think my model shows that your correct prediction about what would happen could be accounted for without any assumptions about decreased ability to discriminate.

MT: Correct. But that doesn't invalidate the prediction based on perceptual accuracy.

RM: But you have presented no model of control that involves perceptual accuracy. So the prediction based on perceptual accuracy is of a very low quality compared to the quantitative predictions based on the angle control model. Once we see the “perceptual accuracy” model then we can compare the models not only in terms of the accuracy of their predictions but also in terms of parsimony. It may turn out, for example, that while the two models make the same predictions one is more parsimonious (fewer parameters required, for example) than the other. So until I see a working control model based on perceptual accuracy I think the angle control model has to be considered the current best model of the behavior in your experiment.
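Once there is a working "perceptual accuracy" model, the accuracy-versus-parsimony comparison can be made mechanical. One standard bookkeeping device for this (my suggestion here, not something either of us has been using) is an AIC-style score that charges each model for its extra fitted parameters:

```
import numpy as np

def aic_score(human_trace, model_trace, n_params):
    """Akaike-style score under a Gaussian error assumption:
    lower is better; each extra fitted parameter costs 2 points."""
    resid = np.asarray(human_trace, float) - np.asarray(model_trace, float)
    n = resid.size
    return n * np.log(np.mean(resid ** 2)) + 2 * n_params

# e.g. compare aic_score(track, angle_model_track, n_params=2)
#          with aic_score(track, accuracy_model_track, n_params=3)
```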

RM: It has simply been asserted that the ability to discriminate decreases as s increases

MT: No. I presented a graph with the data showing that it does.

RM: Is that the graph that shows "Average RMS of static resolution" as a function of separation in bits? If so, isn't it your interpretation that the increase in this measure is a measure of discrimination? If "Average RMS of static resolution" is a measure of RMS deviation of cursor from target in a tracking task, then the angle control model should account for these data just as well as what you call the "Performance (bits)" data in the other graph. Let me know how you computed "Average RMS of static resolution" and I'll see if the angle control model can handle it.

MT: No model that fails to take this into account can be accepted as a valid way to distinguish the two candidates.

RM: I think you are saying that a control model that accounts nearly perfectly for the decrease in ability to control with increasing s can't possibly be right because you just know that it must be the decrease in discrimination that is responsible for this result. Is that what you mean?

MT: No. I'm saying that any model that doesn't take into account the perceptual ability to discriminate is incomplete, not that it is wrong. All models are incomplete in many ways, without necessarily being wrong.

RM: Then I do wish you would show me what a “complete” control model – one that takes the perceptual ability to discriminate into account – looks like. I would like to compare that model to the simple angle control model.

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2013.03.12.13.00]

RM: So far I have shown that the angle control model accounts for the data and the difference control model doesn't.

MT: I agree with this, but only for models that don't take into account perceptual resolution.

RM: I am not trying to show that it is the only model that can account for the data; just that it is a model that can account for the data. So while it is true that neither of us has shown that the angle control model is the only way to account for the data, only one of us has shown any model at all that can account for the data.

MT: True. But remember "the data" consists of rather more than the set of control ratios as a function of separation.

RM: I think you doth protest too much. If you really want to believe it then I suggest that you do some more tests to either make the angle control model more believable to yourself or reject it, so that you don't have to want to believe it any more.

MT: The angle control model isn't in question any more, now I understand what you were trying to explain. What is in question is its uniqueness.

I think I mentioned this previously, but if I didn't, I do now. I am learning Processing, using the experiment as a medium, because I thought a general tracking experiment that could be run on several platforms might be a useful contribution to CSG. One of the objectives of writing the experiment was to provide a variety of tasks (the ellipse track being a simple first cut) within which a variety of models could be tested and optimized. At this stage, I have not started on the modelling part, but it is in the plan. If I do it right, anyone should be able to write their own model and plug it into the program, but I'm a long way from being able to do that at the moment.

RM: Then I do wish you would show me what a "complete" control model -- one that takes the perceptual ability to discriminate into account -- looks like. I would like to compare that model to the simple angle control model.

MT: Well, here's a simple-minded approximation. The "simple-minded" aspect is that it treats perceptual discrimination as a simple threshold, and that is almost certainly oversimplified. The question is whether the difference between perception and reference can be discriminated. If it can't, the effective error is zero. So instead of saying e = r-p, say e = sign(r-p)*max(|r-p|-threshold, 0), where "sign(x)" = ±1 or 0 according to whether (r-p) is positive, negative, or zero. "Threshold" could be a fitting parameter, or could be taken from the compensatory tracking part of the run. The RMS value might be a good place to use for "threshold" as a start, but it's probably not the most correct value. However, the most correct value should be a fixed proportion of the RMS value, and once that proportion is found, it should be valid for all the runs. (Another caveat -- it is impossible to control closer than 1 pixel in these runs, so there is a lower limit on the achievable RMS value, which in my results was exactly 1 pixel.)
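In code, the whole suggestion amounts to a change at the comparator. A rough sketch (illustrative only, not code from the experiment; the loop around it is the usual integrating output function):

```
import numpy as np

def thresholded_error(r, p, threshold):
    """e = sign(r - p) * max(|r - p| - threshold, 0): differences smaller
    than the discrimination threshold produce no effective error."""
    return np.sign(r - p) * max(abs(r - p) - threshold, 0.0)

def step(o, d, r, k, threshold, perceive=lambda v: v):
    """One iteration of a simple control loop with the thresholded comparator.
    o: current output, d: disturbance value this frame, r: reference,
    k: output gain.  Returns the updated output."""
    v = o + d                       # the input quantity being disturbed
    e = thresholded_error(r, perceive(v), threshold)
    return o + k * e                # integrating output function
```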
I used a form like e = sign(r-p)*max(|r-p|-threshold, 0) in analyzing some of my "sleepy tracking" trials in 1994, with fits that were always a little better than fits without the threshold. The statistical artifact is that when you add an extra parameter, you usually find that the fit improves, so I don't know whether the reason for the better fit is that the model better matches the human or that it is simply a statistical artifact. One way it might be tested is to see whether the fit with the threshold shows the little overshoots we often see when the target changes direction abruptly, which are not fitted by the "standard model", or whether there is any other consistency in the manner that the fit improves.

Another way to avoid the artifact is by using data from another source, which is why I suggested using the data from the compensatory tracking part of the run when fitting the tracks from the pursuit tracking part. One potential problem in doing that is that it is likely to be harder to see the difference between r and p when both target and cursor are changing than when one is fixed, so the appropriate threshold would change. But since thresholding is a simplification anyway, that may not matter much.
Martin


[From Rick Marken (2013.03.12.1550)]

Martin Taylor (2013.03.12.13.00)-

RM: So far I have shown that the angle control model accounts for the data and the difference control model doesn't.

MT: I agree with this, but only for models that don't take into account perceptual resolution.

RM: I don’t understand this. The angle control model accounts for the data without taking perceptual resolution into account. I think this is pretty strong evidence that the results of your experiment where you get decreasing levels of control with increasing horizontal separation between target and cursor (s) can be explained without taking perceptual resolution into account. Yet you continually say that any model is inadequate if it does not take perceptual resolution into account. Can a model be inadequate if it accounts for the data perfectly? If so, how would a researcher know that the model is inadequate?

RM: I am not trying to show that it is the only model that can account for the data; just that it is a model that can account for the data. So while it is true that neither of us has shown that the angle control model is the only way to account for the data, only one of us has shown any model at all that can account for the data.

MT: True. But remember "the data" consists of rather more than the set of control ratios as a function of separation.

RM: So what is “the data” that can be accounted for only by a model that takes perceptual resolution into account?

RM: Then I do wish you would show me what a "complete" control model -- one that takes the perceptual ability to discriminate into account -- looks like. I would like to compare that model to the simple angle control model.

MT: Well, here's a simple-minded approximation. The "simple-minded" aspect is that it treats perceptual discrimination as a simple threshold, and that is almost certainly oversimplified. The question is whether the difference between perception and reference can be discriminated. If it can't, the effective error is zero. So instead of saying e = r-p, say e = sign(r-p)*max(|r-p|-threshold, 0), where "sign(x)" = ±1 or 0 according to whether (r-p) is positive, negative, or zero. "Threshold" could be a fitting parameter, or could be taken from the compensatory tracking part of the run. The RMS value might be a good place to use for "threshold" as a start, but it's probably not the most correct value. However, the most correct value should be a fixed proportion of the RMS value, and once that proportion is found, it should be valid for all the runs. (Another caveat -- it is impossible to control closer than 1 pixel in these runs, so there is a lower limit on the achievable RMS value, which in my results was exactly 1 pixel.)

RM: OK, this is a start. But remember that the model must implement an explanation of why control declines as s increases. Why would the control performance of your model decline as s increases? It should be pretty easy to implement a working version of your model, by the way. Once we have that we can see how well your model that takes perceptual resolution into account compares to mine that doesn't.
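Just to be concrete about the missing piece: something has to make the effective threshold grow with s. A sketch of what I mean is below; the linear threshold-versus-separation function is pure illustration, not your proposal, and the numbers are arbitrary.

```
import numpy as np

def threshold_for_separation(s, base=1.0, slope=0.02):
    """Hypothetical discrimination threshold that grows with the lateral
    separation s -- this is the assumption that needs independent support."""
    return base + slope * s

def run_threshold_model(d, s, k=0.5, r=0.0):
    """Control of the height difference v, with the error zeroed whenever
    |r - v| is below the separation-dependent threshold.  d is the
    per-frame disturbance; returns the simulated v trace."""
    th = threshold_for_separation(s)
    o, trace = 0.0, []
    for dt in d:
        v = o + dt
        e = np.sign(r - v) * max(abs(r - v) - th, 0.0)
        o += k * e
        trace.append(v)
    return np.asarray(trace)   # compare this to the human cursor-target trace
```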

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2013.03.12.20.14]

[From Rick Marken (2013.03.12.1550)]

Martin Taylor (2013.03.12.13.00)--

RM: So far I have shown that the angle control model accounts for the data and the difference control model doesn't.

MT: I agree with this, but only for models that don't take into account perceptual resolution.

RM: I don't understand this. The angle control model accounts for the data without taking perceptual resolution into account.

Yes. That's why I agreed with your statement. What don't you understand? That I would agree with you?

RM: I think this is pretty strong evidence that the results of your experiment where you get decreasing levels of control with increasing horizontal separation between target and cursor (s) can be explained without taking perceptual resolution into account. Yet you continually say that any model is inadequate if it does not take perceptual resolution into account. Can a model be inadequate if it accounts for the data perfectly?

That situation never comes up in science, so it is an abstract philosophical question. I think the answer is probably "yes". I think the phlogiston model accounted pretty well for a lot of the facts of heat. In the present case, it is a fact that if you can't see a disturbance cause a change in something, no control system can control against that disturbance.

We know that changes in perceptual discrimination ability can mimic changes in loop gain. What you showed is that changes in loop gain consequent on changes of angle will produce changes in control ratio that mimic the noisy data in the graph I sent. If you factor in the increased discrimination of angle as a function of the length of the line, you might do even better.

RM: If so, how would a researcher know that the model is inadequate?

Various ways. Again, this is an abstract philosophical question. One way is by comparison with other models past or future; another is that the researcher knows of some factor not incorporated in the model that might (or might not) affect the result; for me, the primary way is that to say a model is inadequate is a statement not about the model but about the use to which the model is to be put. "Adequate" means that the model serves its purpose. If it does not do that, it is "inadequate". For you, the angle control model is adequate. For me it is not. Newton's model of gravity is completely adequate for many purposes. When extreme accuracy is required, it is inadequate.

RM: I am not trying to show that it is the only model that can account for the data; just that it is a model that can account for the data. So while it is true that neither of us has shown that the angle control model is the only way to account for the data, only one of us has shown any model at all that can account for the data.

MT: True. But remember "the data" consists of rather more than the set of control ratios as a function of separation.

RM: So what is "the data" that can be accounted for only by a model that takes perceptual resolution into account?

How is that question relevant to the fact that the available data consists of 4096 tracking numbers for cursor and target for each of 18 runs of the experiment? You roughly fitted 6 points derived from those nearly 74 thousand measures. That's why "'the data' consists of rather more than the set of control ratios as a function of separation."

RM: OK, this is a start. But remember that the model must implement an explanation of why control declines as s increases. Why would the control performance of your model decline as s increases? It should be pretty easy to implement a working version of your model, by the way. Once we have that we can see how well your model that takes perceptual resolution into account compares to mine that doesn't.

For several days I have been expecting to see the results of your comparison of the two models when precision is not taken into account in matching the human tracks to the model tracks, but I have yet to see that. If you want to try it while taking accuracy into account, that makes four models to be compared. Go to it. I will be fascinated to see the results.

Martin

[From Bill Powers (2013.3.13.1000 MDT)]

Martin Taylor 2013.03.12.00.36 –

BP earlier: The hierarchical
levels I ended up with had to meet certain specific criteria, such as
showing that in order to control a higher-level variable it is necessary
to vary one or more of lower order. But they didn’t depend on knowing
what is actually Out There.

MT: Nor did I suggest that they did. But you have in the past several
times said that you derived the levels from your subjective experience.
The criteria you mention came later, or at least so I have understood you
to say.

They were always part of the search, though perhaps not clearly
articulated at first. The initial question was whether some perceptions
are composed of others at a lower level.

That is what the hierarchical control idea suggested to me. So I had to
look at my own perceptions to see if some of them seemed to depend on
others, such as motion of an object depending on information about
positions of the objects (as in "stroboscopic motion" or
"relative motion"). For hierarchical control, I had to try to
see if there were some perceptions that had to be altered as part of
controlling others (of higher order). When I found perceptions that fit
the criterion I took that as an affirmative answer. This had nothing to
do with direct knowledge of the external world – it applied only to
relationships among perceptions. And it didn’t imply that these
relationships existed anywhere but in my brain (though I’d love to be
able to prove that they do – or don’t).

BP earlier: Increased
magnification, according to optical theories, increases the sensitivity
of the perception to changes in the small forms involved or their
relationships. The loop gain is increased by the amount of magnification,
and that could be what improves performance (up to the point where
instability sets in or noise becomes predominant).

MT: Are you saying anything other than the consequences of improved
discrimination?

Yes. Increasing the magnification does not alter the noise level of the
perceptual signal as long as the magnification is less than required to
make the noise have a noticeable effect. If noise remains undetectable,
all that increasing and decreasing the magnification does is to increase
and decrease the loop gain. That affects the ability to keep error small,
but does not make performance any less repeatable.

MT: I’m saying that any model
that doesn’t take into account the perceptual ability to discriminate is
incomplete, not that it is wrong.

BP: How do you “take it into account” if you can’t measure it?
As I just indicated, the explanation might be that loop gain changes
rather than discrimination,

MT: I would suggest substituting “as a consequence of” for
“rather than”.

BP: But changes in loop gain that do not change the perceived noise level
will have no measurable effect on discrimination. I assume that a
failure of discrimination means that presenting the same objective
stimulus will result in different control errors each time. That will
happen only if the gain increase is enough to make the effects of noise
show up as different amounts of the perception.

BP: and discrimination limits
might show up as increased variability in control, not repeatable changes
in the amount of error.

MT: How would you get one without the other?

BP: If you view a micromanipulator through a low-power lens, then through
a lens with twice that power, the performance may show no random
component of importance, yet the control error will be greater with the
lower power. That’s simply the effect of the multiplier G/(1+G). No
random component is necessary.
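To put rough numbers on that (a toy calculation only): with loop gain G, the error that remains is the uncontrolled error divided by (1 + G) -- equivalently, the fraction of the disturbance that is opposed is G/(1 + G) -- so doubling the magnification, and with it G, just shrinks the same deterministic error pattern:

```
# Residual error under proportional control with loop gain G: error = d / (1 + G).
for G in (10, 20, 40):            # e.g. successive doublings of magnification
    disturbance_effect = 100.0    # arbitrary units
    print(G, disturbance_effect / (1 + G))
# 10 -> 9.09..., 20 -> 4.76..., 40 -> 2.44...: smaller error each time,
# and identical on every repetition, with no random component anywhere.
```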

MT: Maybe I misunderstand,
but “increased variability in control” sounds like deviations
between the error traces from trial to trial, which would show up as
repeatable changes in the amount of error, wouldn’t it?

BP: No, because the variability alone will not affect the perceived state
of the controlled variable unless it is large enough to create a
discernible error, which would not repeat from trial to trial. I am
assuming random variability, different on every trial.

BP: Let’s focus on getting the
right answer, not on promoting any particular
hypothesis.

MT: That’s the point I’ve been making to Rick. I don’t want to promote
any particular hypothesis, and neither do I want to reject any hypothesis
that remains compatible with the data. When I like a hypothesis, I
usually feel I must try to discredit it.

BP: I am feeling a lot of pressure from you to agree that quality of
control is affected only by discriminability, without dealing with the
objections I raise. It seems to me that you are insisting on this
hypothesis being true even though there is no way to measure
discriminability directly. In fact, you appear to be assuming a change in
discriminability as a premise in an argument that defends the idea that
there is a change in discriminability behind changes in quality of
control. Petitio principii.

Best,

Bill P.

[Martin Taylor, replying to Bill Powers (2013.3.13.1000 MDT)]

BP: Increasing the magnification does not alter the noise level of the perceptual signal as long as the magnification is less than required to make the noise have a noticeable effect.

MT: Why do you keep talking about "noise"? It seems to be a bit of an idee fixe that keeps getting in the way of understanding.

BP: If noise remains undetectable, all that increasing and decreasing the magnification does is to increase and decrease the loop gain. That affects the ability to keep error small, but does not make performance any less repeatable.

MT: Are you saying that the watchmaker would have just as repeatable errors in placing his little gears and jewelled bearings every time he tried to make a particular model of watch without using his loupe? I think that is a very dubious assumption.

BP: But changes in loop gain that do not change the perceived noise level will have no measurable effect on discrimination.

MT: I didn't suggest that they do. The effect is in the opposite direction. Reduced discrimination reduces loop gain, especially for low levels of error. I had assumed this was understood, but since you re-reversed my reversal of your statement, I have to assume I was wrong. So here is a quick analysis. It's very like (and could be substituted for) the argument in favour of a tolerance zone at the comparator. That has the same effect on loop gain for low error.
I assume you agree that there is a limit to sensory resolution,
whatever the sensor, mechanical or human. If not, I hope you will
explain how to avoid the apparent limit, because I would love to be
able to see nano-scale objects at will.
An oversimplified way of representing a resolution limit is as a
threshold. Resolution means the ability to tell the difference
between two states. With a microscope, one can see the difference in
position between two things a micron apart that would look to the
naked eye to be at the same place if they could be seen at all. The threshold concept is a zeroth approximation to what probably
happens, but the basic idea is that the perceived magnitude of an
input (qi in the following diagrams) is zero for |qi| < Threshold
and increases linearly for |qi| > Threshold, as in the top-left
figure. The actual relation of perceptual magnitude to qi is
presumably much more complicated, including functions akin to
logarithms, and so forth, but that doesn’t matter in this context,
since at least for compensatory tracking we are talking about very
small values of qi, not much above threshold. Nonlinearity might
well matter in pursuit tracking if the target makes substantial
changes abruptly.
The figures illustrate why poor discrimination mimics low loop gain.
They illustrate the effective gain multiplier at the perceptual
input when the input value is near threshold. (Gain Multiplier
seemed too long to use in the figures; you can think of it as the
segment gain of the input-to-perceptual-signal, if you like, but
that could be misleading because there is no guarantee that the
basic gain of that segment of the loop is unity).
Figure 1 (top left) just shows the assumed perceptual magnitude
function with a slope of 1.0 if qi is above threshold. Figure 2 (top
right) illustrates the effective gain multiplier for a given value
of qi and various values of the threshold. The slope of a blue line
is the corresponding gain multiplier. Figure 3 (bottom left) shows
the gain multiplier for various values of qi with a fixed threshold.
Finally, Figure 4 sketches the apparent gain multiplier derived from
Figure 3 as a function of qi for a fixed value of the threshold.
Figure 3 and 4 illustrate how the gain multiplier caused by a limit
on perceptual resolution is influenced by the magnitude of the
input. It should be possible to generate this curve from tracking
data by using step-function disturbances. Figure 2 shows how
decreasing discrimination ability reduces loop gain at a given input
value.
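The same curves can be generated numerically rather than graphically. Here is a short sketch (threshold and input values arbitrary) of the perceived-magnitude function of Figure 1 and the effective gain multiplier of Figures 3 and 4:

```
import numpy as np

def perceived(qi, threshold):
    """Figure 1: perceived magnitude is zero for |qi| < threshold and
    grows linearly (slope 1) above it."""
    return np.sign(qi) * np.maximum(np.abs(qi) - threshold, 0.0)

def gain_multiplier(qi, threshold):
    """Figures 3 and 4: effective input gain = perceived magnitude / qi,
    which is zero up to the threshold and approaches 1 for qi >> threshold."""
    qi = np.asarray(qi, dtype=float)
    out = np.zeros_like(qi)
    nonzero = qi != 0.0
    out[nonzero] = perceived(qi[nonzero], threshold) / qi[nonzero]
    return out

qi = np.linspace(0.0, 10.0, 11)
print(gain_multiplier(qi, threshold=2.0))   # 0 at and below 2, rising toward 1 above
```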
BP: That will happen only if the gain increase is enough to make the effects of noise show up as different amounts of the perception.

MT: What gain increase? Let's change the gedanken experiment and ask how long you wait for a bus if you know one arrives every half hour, but don't know when the last one came. Is your wait time the same on every occasion? Conceptually, that is the same situation. You have a resolution limit of half an hour. No noise is involved.

BP: I am feeling a lot of pressure from you to agree that quality of control is affected only by discriminability, without dealing with the objections I raise.

MT: And I am feeling a lot of pressure from you to treat uncertainty as if it was the same as noise variability. Your objections keep coming back to "noise" and "variation", which seem to me to be irrelevant except as potential reasons for uncertainty. Furthermore, I don't think I have claimed that the quality of control is affected by discriminability, let alone that it is affected only by discriminability. I have claimed, and do claim, that the quality of control is limited by discriminability.

BP: It seems to me that you are insisting on this hypothesis being true even though there is no way to measure discriminability directly.

MT: That is a very strange thing to say, unless you are going to fall back on your regular appeal to the impossibility of knowing "real reality". Discrimination ability is easily tested, at least to a tight approximation. You ask someone whether they can tell the difference between two things, or ask them which has more of whatever you are interested in. In tracking, you fix the target for a while and allow the subject to bring the cursor as close to it as he can. There are lots of ways you can estimate discriminability once you have an agreed measure of it. You do any of these things a few times and you get a pretty good idea of the resolution limits of that perceptual task.
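In this experiment, the crude version of that test is the fixed-target segment, and the "pseudo-jnd" is essentially this (a sketch; it assumes the cursor and target traces for that segment have been pulled out of the saved csv):

```
import numpy as np

def pseudo_jnd(cursor, target):
    """Crude static-resolution estimate: RMS of the cursor-target difference
    while the target is (nearly) fixed and the subject holds the cursor as
    close to it as possible.  It cannot go below 1 pixel in these runs."""
    diff = np.asarray(cursor, dtype=float) - np.asarray(target, dtype=float)
    return float(np.sqrt(np.mean(np.square(diff))))
```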
BP: In fact, you appear to be assuming a change in discriminability as a premise in an argument that defends the idea that there is a change in discriminability behind changes in quality of control. Petitio principii.

MT: I don't see it that way. In fact I don't even make the argument that there is a change in discriminability behind changes in quality of control. I argue that failure of discrimination puts a limit on the possible quality of control. I argue that no control system whatever can control against errors smaller than the differences that the sensor system can detect. I don't understand how you can argue that we can control to an accuracy that we cannot perceive.

Martin

(Attached figure: ThresholdLowGain.jpg -- the four threshold/gain-multiplier figures described above.)


[From Bill Powers (2013.03.13.1905 MDT)]

Martin Taylor 2013.03.12.00.36 --

MT: Are you saying anything other than the consequences of improved discrimination?

BP: Yes. Increasing the magnification does not alter the noise level of the perceptual signal

MT: Why do you keep talking about "noise"? It seems to be a bit of an idee fixe that keeps getting in the way of understanding.

BP: Because if there is no system noise, repeating a trial will result in exactly the same behavior every time at any one magnification. Control will be poorer if the magnification is low than if it is high, but it will be poorer in exactly the same way every time the run is repeated. The plot of behavior against time on one trial will lie exactly over the trace of behavior against time in any other run. If there is enough noise (and magnification) that noise becomes apparent as small random variations in the perceptual signal, then repeating the run with the same conditions will not result in the same behavior every time. That is the kind of result that I would interpret as a result of loss of discrimination.

MT: Are you saying that the watchmaker would have just as repeatable errors in placing his little gears and jewelled bearings every time he tried to make a particular model of watch without using his loupe?

BP: Yes, as long as the magnification remains the same in repeated runs and there is no noise anywhere in the watchmaker's control systems. The errors would be greater without the loupe, but they would be exactly the same errors on every trial. The errors would be due to low loop gain, not to random variations added to the perceptual signal. To get different errors on successive trials, you would have to introduce random fluctuations that are different on every trial and large enough to make a measurable difference.

Just write the equations for simulating some simple control task, with no random variables at all. Every time you run the simulation, you will get exactly the same fluctuations in all system variables. If you change the loop gain, this will still be true but the fluctuations will be different, though they will still repeat every time you rerun the same simulation with the new loop gain. You will get worse control (larger errors) with low loop gain, but the mistakes will be exactly the same every time you rerun the simulation.
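
For anyone who wants to try this, here is a minimal sketch of the kind of noise-free simulation BP describes; the integrating output, the gain values, and the sinusoidal disturbance are arbitrary choices for illustration, not anyone's actual tracking model.

```python
import numpy as np

def run(loop_gain, noise_sd=0.0, steps=2000, dt=0.01, seed=0):
    """One run of an elementary loop: p = qi (+ optional noise), e = r - p, integrating output."""
    rng = np.random.default_rng(seed)
    d = 10.0 * np.sin(np.linspace(0.0, 8.0 * np.pi, steps))   # smooth disturbance
    r, o = 0.0, 0.0
    qi_trace = np.empty(steps)
    for t in range(steps):
        qi = o + d[t]                        # environment: output plus disturbance
        p = qi + rng.normal(0.0, noise_sd)   # perceptual signal
        o += loop_gain * (r - p) * dt        # output integrates the error
        qi_trace[t] = qi
    return qi_trace

rms = lambda x: float(np.sqrt(np.mean(x ** 2)))
a, b = run(loop_gain=20.0), run(loop_gain=20.0)
print("noise-free runs identical:", np.array_equal(a, b))        # True: errors repeat exactly
print("rms error at gain 20 vs gain 5:",
      round(rms(a), 3), round(rms(run(loop_gain=5.0)), 3))       # lower gain, larger errors
```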

BP: But changes in loop gain that do not change the perceived noise level will have no measurable effect on discrimination.

MT: I didn't suggest that they do. The effect is in the opposite direction. Reduced discrimination reduces loop gain, especially for low levels of error.

BP: No, it doesn't. I am claiming that reduced discrimination results from noise, not low loop gain. You are reversing cause and effect. If you just introduce noise, loop gain is not affected.

MT: I assume you agree that there is a limit to sensory resolution, whatever the sensor, mechanical or human. If not, I hope you will explain how to avoid the apparent limit, because I would love to be able to see nano-scale objects at will.

BP: But what is it that limits sensory resolution? It is only the signal-to-noise ratio. If there is no noise, then any change in the inputs will result in a change in the perceptual signal when it is just at the threshold of detection. Now it is only the loop gain that limits how much an error will be reduced by the negative feedback.

MT: An oversimplified way of representing a resolution limit is as a threshold. Resolution means the ability to tell the difference between two states. With a microscope, one can see the difference in position between two things a micron apart that would look to the naked eye to be at the same place if they could be seen at all.

BP: If that is all that was involved, even with a nanometer of difference between the two states, the perceptual signal would vary in the same proportions that the input quantity varied. The size of the error signal would vary as 1/G, with one factor of G being the input sensitivity. But that error would repeat exactly on every trial, so discrimination would remain the same.

I think this whole disagreement turns on what you mean by discrimination. I am taking this term to mean the ability to perceive a change in qi reliably, rather than seeing changes due to actual changes in qi plus system noise. Isn't this how discrimination is usually measured? You give repeated trials in which the subject states whether the second presentation is the same as the first one or different from it. The score is the percentage of correct estimates. If there were no noise anywhere in the organism or its environment, changes would always be detected if greater than a threshold, and never detected if below it. You could add another threshold in the comparator, so errors would have an effect on action only if larger than that second threshold.

MT: The threshold concept is a zeroth approximation to what probably happens, but the basic idea is that the perceived magnitude of an input (qi in the following diagrams) is zero for |qi| < Threshold and increases linearly for |qi| > Threshold, as in the top-left figure.

BP: This diagram has no noise in it, so whatever the threshold, the perceptual signal begins to change as soon as the input rises above the threshold by any amount, however small. If there were no other thresholds around the loop, the output would start to increase and the error signal would begin to be less than the open-loop amount right at the threshold. How much less will depend on the loop gain. At high loop gain, the errors will be smaller than at low loop gain.

MT: The figures illustrate why poor discrimination mimics low loop gain.

BP: But it doesn't. With poor discrimination, the person will miss some real changes and see some changes where there are none, these mistakes happening differently on every trial run. That can happen only if there is random noise in the system -- otherwise the person would always behave the same way under the same conditions. The mistakes would repeat exactly on successive trials. Your diagram shows only one sloping line, indicating that there are no random variations in the system.

I think your analysis of the figures is incorrect; you are showing only the open-loop relationship between input and perceptual signal and you are ignoring the effects of system noise. The best way to investigate this question is to set up a simulation and actually compare what happens with and without noise, and with high and low input gain. You are skipping over steps that need to be shown in detail.
...

BP earlier: I assume that a failure of discrimination means that presenting the same objective stimulus will result in different control errors each time. That will happen only if the gain increase is enough to make the effects of noise show up as different amounts of the perception.

MT: What gain increase?

BP: The loop gain increase (due to increasing Ki by using the loupe rather than the naked eye).

MT: Let's change the gedanken experiment and ask how long you wait for a bus if you know one arrives every half hour, but don't know when the last one came. Is your wait time the same on every occasion? Conceptually, that is the same situation. You have a resolution limit of half an hour. No noise is involved.

BP: This is a totally different case, with discrete variables and uncertainties. But yes, if there is no system noise, your behavior will repeat exactly. What could cause it to change?

BP earlier: It seems to me that you are insisting on this hypothesis being true even though there is no way to measure discriminability directly.

MT: That is a very strange thing to say, unless you are going to fall back on your regular appeal to the impossibility of knowing "real reality". Discrimination ability is easily tested, at least to a tight approximation. You ask someone whether they can tell the difference between two things, or ask them which has more of whatever you are interested in.

BP: Yes, and you are attributing any errors to --- what? You are looking at the consequence, not at the cause.

MT: In tracking, you fix the target for a while and allow the subject to bring the cursor as close to it as he can. There are lots of ways you can estimate discriminability once you have an agreed measure of it.

BP: That's what we don't have here.
  ...

MT: I don't understand how you can argue that we can control to an accuracy that we cannot perceive.

BP: I am not arguing that way. I am looking at the implications of assumptions. If we assume no noise, then behavior has to repeat exactly. That is not what we see in tests of discrimination. If there is no noise, then the quality of control can still vary as loop gain varies, but the uncorrected errors will repeat exactly on successive runs of a simulation, whereas if noise is responsible, they will not repeat.

In a simulation you can actually show that the noise-free system does exactly the same thing every time the simulation is run, whereas the noisy system does not. Rick Marken has been looking at that.

Best,

Bill P.

[From Rick Marken (2013.03.13.2100)]

Martin Taylor (2013.03.12.20.14)–

MT: I agree with this, but only for models that don't take into account perceptual resolution.

RM: I don't understand this. The angle control model accounts for the data without taking perceptual resolution into account.

MT: Yes. That's why I agreed with your statement. What don't you understand? That I would agree with you?

RM: No, it's the part about agreeing but "only for models that don't take into account perceptual resolution".

RM: So what is "the data" that can be accounted for only by a model that takes perceptual resolution into account?

MT: How is that question relevant to the fact that the available data consists of 4096 tracking numbers for cursor and target for each of 18 runs of the experiment? You roughly fitted 6 points derived from those nearly 74 thousand measures. That's why "'the data' consists of rather more than the set of control ratios as a function of separation."

RM: Yes, but, again, what is "the data" that can be accounted for only by a model that takes perceptual resolution into account?

RM: OK, this is a start. But remember that the model must implement an explanation of why control declines as s increases. Why would the control performance of your model decline as s increases? It should be pretty easy to implement a working version of your model, by the way. Once we have that we can see how well your model that takes perceptual resolution into account compares to mine that doesn't.

MT: For several days I have been expecting to see the results of your comparison of the two models when precision is not taken into account in matching the human tracks to the model tracks, but I have yet to see that. If you want to try it while taking accuracy into account, that makes four models to be compared. Go to it. I will be fascinated to see the results.

RM: I am still working on it. I will get something to you tomorrow. But in the meantime maybe you could tell me what you would like to see analyzed. What aspect of the data will the angle control model be unable to account for that a perceptual resolution model will?

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2013.03.14.00.00]

[From Rick Marken (2013.03.13.2100)]

Martin Taylor (2013.03.12.20.14)–

MT: I agree with this, but only for models that don't take into account perceptual resolution.

RM: I don't understand this. The angle control model accounts for the data without taking perceptual resolution into account.

MT: Yes. That's why I agreed with your statement. What don't you understand? That I would agree with you?

RM: No, it's the part about agreeing but "only for models that don't take into account perceptual resolution".

Sigh...

English is an ambiguous language. I suppose all natural languages are.

Let me try to rephrase. I agree with your statement ("RM: So far I have shown that the angle control model accounts for the data and the difference control model doesn't.") for models that do not take perceptual resolution into account (as is the case for your models). I cannot agree with it for models that do take perceptual resolution into account, because I have no data on such models. Hence "only for models that do not take perceptual resolution into account".

RM: So what is "the data" that can be accounted for only by a model that takes perceptual resolution into account?

MT: How is that question relevant to the fact that the available data consists of 4096 tracking numbers for cursor and target for each of 18 runs of the experiment? You roughly fitted 6 points derived from those nearly 74 thousand measures. That's why "'the data' consists of rather more than the set of control ratios as a function of separation."

RM: Yes, but, again, what is "the data" that can be accounted for only by a model that takes perceptual resolution into account?

I have not suggested that any such data exist. I have pointed out that "the data" consist of nearly 74K points, of which you roughly fitted four (not four thousand). Whether these 74K points can be accounted for by any particular model has not yet been demonstrated. I have no opinion on the matter.

RM: I am still working on it. I will get something to you tomorrow.

I look forward to it, and I hope the result is clearcut. I also am working on my programming, and I think that after I finish creating the set of different kinds of disturbance that I am now working on, control modelling must come next.

RM: But in the meantime maybe you could tell me what you would like to see analyzed. What aspect of the data will the angle control model be unable to account for that a perceptual resolution model will?

Not having tried it, I have no idea. I'm expecting that the angle model will account for it rather well. I'm also expecting that a model based on the graphs I sent a few hours ago will also account for it rather well, though I hope one of the models will be appreciably better than the other and we can stop this thread.

However, let me suggest a topic for future simulation work. It's one I mentioned before – the consistent little overshoots when something changes abruptly. Here's an example run in which the disturbance is a constant rate of change with the sign switching at random (Poisson-distributed) moments. If you look closely, you will see that there is a consistent overshoot at each change of sign, with a recovery that looks to my eye to be similarly shaped on many of the switching occasions. Red = target, green = cursor, yellow = error.

![TriangleTrack.jpg|665x178](upload://5V2ufjvzOar36FkF2SG89jTHojP.jpeg)

Martin

[Martin Taylor 2013.03.14.13.17]

[From Bill Powers (2013.03.13.1905 MDT)]

Martin Taylor 2013.03.12.00.36 --

MT: Are you saying anything other than the consequences of improved discrimination?

BP: Yes. Increasing the magnification does not alter the noise level of the perceptual signal

MT: Why do you keep talking about "noise"? It seems to be a bit of an idee fixe that keeps getting in the way of understanding.

BP: Because if there is no system noise, repeating a trial will result in exactly the same behavior every time at any one magnification. Control will be poorer if the magnification is low than if it is high, but it will be poorer in exactly the same way every time the run is repeated.

To me, talking about "noise" in this context is akin to talking about the source of a disturbance when trying to analyze the behaviour of a control loop in the presence of the disturbance. It doesn't matter where the disturbance came from; what matters is the effect of the disturbance on the environmental variable that is defined by the perceptual input function.

Although noise is definitely a cause of uncertainty, it isn't the only cause, and uncertainty is what matters. That's why I mentioned the bus example as a case in which the uncertainty about one's perception of the time until the next bus is not caused by noise.

I have avoided talking about noise, specifically because you have always been so keen to make it clear that you believe there is no noise in the perceptual system. Now you invoke noise to explain why the watchmaker will not always put the gear in the same wrong place each time he tries to make a particular model of watch without using his magnifying loupe. Well, I agree with you that there is always noise in the perceptual signal, and that noise is one factor in determining perceptual resolution. But I don't agree that the noise itself is the essential element; the _effect_ of the noise is what matters.

You are right that simulation should be used to test the analysis I presented in verbal form. What I think you have ignored in saying that the simulation without noise will give the same results every time is that control loops do not operate in the absence of disturbances. You often complain about this omission when talking about "feed forward control" discussed by other writers, so I don't think you should ignore it here. Simulation without noise does not mean running the same disturbance waveform in each trial.

If a change of the disturbance is so small that it does not influence the value of the perceptual signal, that change will not affect the error signal or the output value. Changing disturbance with no corresponding change of output means complete loss of control, which results in a change in the way output and disturbance combine to affect future perceptual signal values. And even if the disturbance value is large enough to change the CEV enough to affect the perceptual signal, if the change is near the threshold value, its influence on the perceptual signal is less than it would have been if the perceptual threshold had been smaller.

So, I am not reversing cause and effect when I suggest that poor perceptual resolution reduces loop gain. And I don't need to consider why the resolution might be poor, so I don't need to ask whether the reason might be system noise or something else that limits the resolution (such as in the bus example, where the uncertainty is caused by not knowing the time of the previous bus).
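
Here is a minimal sketch of the situation MT describes: a completely noise-free loop whose perceptual function has a dead zone (threshold), controlling against a slowly drifting disturbance. The threshold widths, gain, and disturbance waveform are assumptions made only for illustration, not MT's program.

```python
import numpy as np

def run(threshold, gain=20.0, steps=4000, dt=0.01, seed=1):
    """Noise-free loop; the perceptual signal is zero whenever |qi| is below the threshold."""
    rng = np.random.default_rng(seed)
    d = np.cumsum(rng.normal(0.0, 0.05, steps))    # slowly drifting disturbance (same every run)
    o = 0.0
    err = np.empty(steps)
    for t in range(steps):
        qi = o + d[t]
        p = 0.0 if abs(qi) < threshold else np.sign(qi) * (abs(qi) - threshold)
        o += gain * (0.0 - p) * dt                  # reference is zero
        err[t] = qi                                 # uncorrected error in qi
    return float(np.sqrt(np.mean(err ** 2)))

for T in (0.0, 0.5, 2.0):
    print(f"threshold {T}: rms error {run(T):.3f}")
# The wider the dead zone, the poorer the control, with no noise anywhere in the loop --
# though, as BP notes, the errors repeat exactly if the same disturbance is replayed.
```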

Martin

[From Rick Marken (2013.03.14.1200)]

Martin Taylor (2013.03.14.00.00)

  RM: I am still working on it. I will get something to you

tomorrow.

MT: I look forward to it, and I hope the result is clearcut.

Well, I don't want to seem like I'm evading you. I worked on this all morning and I will just give you the quick results because I have other work to do and I'll discuss it in more detail later. I analyzed the data in Sp0Sep0_3, Sp0Sep20_3, Sp0Sep50_3, Sp800Sep0_2. I presume these are trials where the separation between cursor and target was 0, 20, 50 and 800 pixels, respectively. There is very little difference between the control of v and control of arctan(v/s) models for separations less than 50 pixels. The correlation between model and human cursor variations is ~.99 and the rms deviation of model from human cursor variations is ~5 pixels for both models at separations of 0, 20 and 50. For the 800 separation the control of arctan(v/s) model is somewhat better than the control of v model in terms of correlation between model and human cursor movements (.94 vs .92) and, more importantly, in terms of rms deviation (5.76 vs 6.38). The difference in rms deviation may seem small, but the smaller rms error for the arctan(v/s) model results from the fact that it captures the consistent offset of the human trace from the target trace, which is clear from inspection of the plots of the target and cursor traces. The increase in horizontal separation, s, between target and cursor doesn't increase the variability of the cursor (which suggests that there is no decrease in resolution involved); increases in s only increase the constant offset between cursor movements and target movements. This offset is produced by the human as well as the model controlling arctan(v/s) but not by the model controlling just v.
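
For readers who want to reproduce this kind of comparison, here is a minimal sketch of the two fit measures reported here (correlation and rms deviation between model and human cursor traces); the traces below are made-up stand-ins, not Martin's data files.

```python
import numpy as np

def fit_stats(human_cursor, model_cursor):
    """Correlation and rms deviation between model and human cursor traces."""
    h = np.asarray(human_cursor, dtype=float)
    m = np.asarray(model_cursor, dtype=float)
    return np.corrcoef(h, m)[0, 1], float(np.sqrt(np.mean((h - m) ** 2)))

# Stand-in traces the length of one run (4096 samples):
t = np.linspace(0.0, 10.0, 4096)
human = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
model = np.sin(t)
r, rms = fit_stats(human, model)
print(f"correlation {r:.3f}, rms deviation {rms:.3f} (same units as the cursor)")
```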

I’ll do some more analysis later; I have real work to do now. But so far the results are kind of amazingly consistent with the control of arctan(v/s) model.

Best

Rick

···


Richard S. Marken PhD

rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2013.03.14.15.43]

Sounds good. I look forward to more, and I hope it won't be too long before I can embed model-optimization into my program.
Martin


[From Bill Powers (2013.03.16.1835 MDT)]

Martin Taylor 2013.03.14.13.17 --

MT: To me, talking about "noise" in this context is akin to talking about the source of a disturbance when trying to analyze the behaviour of a control loop in the presence of the disturbance. It doesn't matter where the disturbance came from; what matters is the effect of the disturbance on the environmental variable that is defined by the perceptual input function.

BP: It does make a difference, because noise generated in the input function affects the perceptual signal without (directly) affecting qi. Expressing the noise in units of the perceptual signal, we can see that it shows up as equivalent to a superimposed random variation in the reference signal, not as a second disturbance (because it doesn't act directly on qi as a real disturbance does).
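
A one-line way to see this point, assuming a perceptual signal of the form p = Ki*qi + n, where n is noise added in the input function:

```latex
e \;=\; r - p \;=\; r - (K_i q_i + n) \;=\; (r - n) - K_i q_i
```

From the rest of the loop's point of view, the noise term is indistinguishable from a fluctuation added to the reference signal r; it never acts on qi itself, so it is not a second disturbance.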

As a result of the input noise, the control system will continually be producing outputs to correct changes in the perceptual signal, outputs which will cause changes in qi. The quality of control will be accordingly reduced.

I claim that that is where discrimination is reduced, too -- the subject will be causing errors rather than correcting them, and will (a) perceive changes in p that do not correspond to actual changes in qi and (b) fail to perceive changes that do. So judgments of "same" and "different" will be affected by the noise -- there will be more false positives and false negatives.

I will try to work out the mathematical model tomorrow or soon after. It's not hard -- just messy.

On further study of the figures you sent, I realize that I have no idea how you got Figs. 2, 3, and 4. You're taking too many shortcuts in imagination that aren't visible to anyone else.

Best,

Bill P.

[From Rick Marken (2013.03.17.1510)]

Here are two figures that summarize the results of my analysis of some
data from Martin's pursuit tracking task where the main independent
variable was the horizontal separation, s, between cursor and target.
I analyzed the data for 5 different separations (values of s measured
in pixels): 0, 20, 50, 450 and 800. I analyzed two tracking runs for
each separation; one run had a hard (high frequency) disturbance and
the other had an easy (low frequency) disturbance.

Figure 1 shows the subject's performance, measured in "bits" (-log2((var(c-t)/var(d))^(1/2))); the higher the bit measure, the better the performance (the smaller the average deviation of cursor, c, from target, t, relative to the disturbance, d). So control declines as horizontal separation increases.
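
A minimal sketch of that bits measure as I read the formula, with d taken to be the disturbance; the array names are placeholders:

```python
import numpy as np

def control_bits(cursor, target, disturbance):
    """Performance in 'bits': -log2 of the ratio of rms tracking error to rms disturbance."""
    err = np.asarray(cursor, dtype=float) - np.asarray(target, dtype=float)
    return float(-np.log2(np.sqrt(np.var(err) / np.var(disturbance))))

# Example: if the rms error is 1/8 of the rms disturbance, control is worth 3 bits.
rng = np.random.default_rng(0)
d = rng.normal(0.0, 8.0, 4096)
print(control_bits(cursor=d / 8.0, target=np.zeros(4096), disturbance=d))   # ~3.0
```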

Also shown in Figure 1 is the rms deviation (RMS error) from the human
data of the model controlling the vertical distance between cursor and
target -- "v" model deviation -- and the rms deviation of the model
controlling the angle arctan(v/s) -- "a" model deviation; the
deviations are the average difference between the model and the human cursor movements. The Figure shows that the "a" model does better than
the "v" model (lower RMS deviation of model from human data) at all
separations; but as separation increases, the RMS error for both
models increases.

By the way, both the "v" and "a" models fit the data extremely well in
terms of R^2; the average R^2 for both models -- the proportion of
variance in the detailed human cursor movements that is accounted for
by the model cursor movements -- is .999.

So both models move the cursor in exact parallel to the human cursor
movements; the difference between the models is in terms of the RMS
separation of these cursor movements from the human movements, the "a"
model giving a better fit to the human movements than the "v" model.
Figure 2 suggests the reason why; the performance of the human (in
bits) is compared to that of the two models (in bits). The decline in
performance of the "a" model, with increasing s, is similar to that of
the human, while there is no corresponding decline for the "v" model.
The decline for the "a" model reflects increases in the deviation of
cursor from target that result from over or undershooting the target
during the slow tracking movements. The decline in performance of the
"a" model with increasing s is not a result of increases in high
frequency cursor jitter.

So the angle control model does better than the vertical distance control model at accounting for the decline in performance with increasing s. The superiority of the angle control model at accounting for this decline seems to result from the reduction of loop gain produced by increasing s (and, thus, decreasing the derivative of arctan(v/s)). But Figure 1 suggests that there is an increase in RMS deviation of both models from the human data as s increases. This suggests that for both models there is an increase in unexplained cursor variance as s increases. I can think of two possibilities; one is that system noise is amplified as s increases; the other is that the same level of existing noise in the system "shows through" in performance more when the loop gain goes down (as it does as s increases). This will be the next thing I try to test with the models.
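
The loop-gain reduction mentioned above can be made explicit: if the controlled perception is a = arctan(v/s), its sensitivity to the vertical difference v is

```latex
\frac{\partial}{\partial v}\,\arctan\!\left(\frac{v}{s}\right)
  \;=\; \frac{1/s}{1+(v/s)^{2}}
  \;=\; \frac{s}{s^{2}+v^{2}}
  \;\approx\; \frac{1}{s} \quad\text{for } |v| \ll s,
```

so for the same change in v the perceptual effect, and with it the loop gain, shrinks roughly in proportion to 1/s as the horizontal separation grows.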

Best

Rick

Figure1.PNG

Figure2.PNG

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com


[From Rick Marken (2013.03.18.2130)]

Rick Marken (2013.03.17.1510)--

RM: Here are two figures that summarize the results of my analysis of some
data from Martin's pursuit tracking task where the main independent
variable was the horizontal separation, s, between cursor and target...
I can think of two possibilities; one is that system noise is amplified as s
increases; the other is that the same level of existing noise in the system
"shows through" in performance more when the loop gain goes down
(as it does as s increases). This will be the next thing I try to test with the models.

RM: Well, not much interest in this apparently. But I find it
fascinating. I did try adding a fixed level of noise to the simulation
(a random number between -.4 and .4 that was added to the input
variable of the v and a model). As expected the noise degraded
performance equally for all separations for the v control model; but
the noise degraded performance more as separation increased for the a
control model.

I've attached the revised results as Figures 1 and 2. These are very similar to the ones I already posted, but they show the result when _the same level of noise_ is added to the models at each separation. Figure 1 shows that both models do equally well in terms of RMS deviation; the a model is no longer particularly superior. The R^2 fits for both models still average >.99. The interesting results are in Figure 2, which shows that by adding the appropriate level of noise -- the SAME level of noise at each separation -- the behavior of the model, in terms of quality of control, fits the observed data nearly perfectly for the a model, and not at all for the v model. This means that the decrease in performance seen as s increases is not necessarily due to an increase in noise (and consequent decrease in discriminability) but to a reduction in loop gain that lowers the system's ability to filter out the noise.
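
A minimal sketch of the manipulation described here: the same uniform noise (±0.4, as in the post) added to the input variable of both models, with the a model's input gain falling as s grows. The loop structure, gain, and disturbance are assumptions for illustration, not the actual model code.

```python
import numpy as np

def track(separation, control_angle, gain=50.0, noise_amp=0.4, steps=4096, dt=1/60):
    """Track a smoothed random target; return the rms of the cursor-target difference."""
    rng = np.random.default_rng(7)
    d = np.convolve(rng.normal(0.0, 1.0, steps), np.ones(120) / 120, mode="same") * 100.0
    c = 0.0
    err = np.empty(steps)
    for t in range(steps):
        v = c - d[t]                                   # vertical cursor-target difference
        noise = rng.uniform(-noise_amp, noise_amp)     # same noise level at every separation
        if control_angle:
            p = np.arctan((v + noise) / separation)    # "a" model: perceives the angle
        else:
            p = v + noise                              # "v" model: perceives the difference
        c -= gain * p * dt                             # reference = 0 (cursor on target)
        err[t] = v
    return float(np.sqrt(np.mean(err ** 2)))

for s in (20, 50, 450, 800):
    print(f"s={s:3d}   v model rms {track(s, False):6.2f}   a model rms {track(s, True):6.2f}")
# With the same noise at every separation, the a model's error grows with s because its
# input gain (roughly 1/s) and hence its loop gain falls; the v model's error does not.
```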

These results seem to me to be more evidence that it is a rather than v that is controlled in a tracking task. Not necessarily a big discovery, but a good example, I think, of how to do research (using modeling) to figure out what perception is under control.

I think this could make a nice little paper; how about a joint paper
Martin? Bill?

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com


[From Rick Marken (2013.03.21.1510)]

Happy Bach's birthday!!

Rick Marken (2013.03.18.2130)--

Rick Marken (2013.03.17.1510)--

RM: Well, not much interest in this apparently.

RM: Apparently! :wink:

Where are you Martin? I thought you were interested in the results of this analysis. Do I have to have a two-part invention with myself? Well, I guess that's the way I always have them, now that I think of it ;-)

Best

Rick

···


--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com