Re: Steering task model, help needed

[From Bill Powers (2011.02.04.1008 MST)]

AM: On Thu, Feb 3, 2011 at 5:31 AM, Bill Powers powers_w@frontier.net wrote:

Ok. So I’ll have a few experimental situations (4-5?), each with a
different constant speed and different path curvature. Does that sound
ok? If the speed is greater, then the path should be longer, so I could
get the same number of data points (equal duration of all runs)?

BP: From your description, there are no extra points for all your work
with writing programs – the object is to do an experiment. So attached
is a start on the program that I had in mind, from which you can probably
develop the program you had in mind. Note that the DisturbFR2 unit has
been simplified to make it easier to use. I have generated target
movements in the simplest way possible, just using one disturbance table
for x and another one for y. The velocities change instead of staying
constant. The problem with the suggestion I sent you previously, in which
the x and y position are integrations, is that you can’t be sure the
target will remain on the screen. This way it always does. As you will
see, the disturbance tables are smooth enough to give easy tracking. It’s
set up for difficulty 1 but you can make that a run-time choice. You may
want to modify the way the disturbance tables are read into the program
if you want to be able to use exactly the same disturbance patterns for
different subjects, for comparison. Just remove the comments, as noted in
the program listing.
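The actual DisturbFR2 unit is in the attachment, not shown here, but the idea of a smooth disturbance table — bounded random values, low-pass filtered so tracking stays easy — can be sketched as follows. This is an illustrative Python sketch, not Bill's Pascal code; the table length (3600 samples at 60/s), window size, and screen dimensions are assumptions for the example.

```python
# Illustrative sketch (not the actual DisturbFR2 unit): build a smooth
# disturbance table by low-pass filtering random noise with repeated
# moving averages, then scaling it into a fixed screen range. Because
# the values are bounded by construction, the target stays on screen.
import random

def make_disturbance(n=3600, smoothing_passes=3, window=61, seed=None,
                     lo=0.0, hi=1.0):
    """Return a list of n smoothed values scaled into [lo, hi]."""
    rng = random.Random(seed)
    table = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    half = window // 2
    for _ in range(smoothing_passes):
        smoothed = []
        for i in range(n):
            a, b = max(0, i - half), min(n, i + half + 1)
            smoothed.append(sum(table[a:b]) / (b - a))
        table = smoothed
    tmin, tmax = min(table), max(table)
    span = (tmax - tmin) or 1.0
    return [lo + (v - tmin) / span * (hi - lo) for v in table]

# One independent table for x and another for y, as in the message.
# The 800x600 screen size is a hypothetical value for the example.
dx = make_disturbance(seed=1, lo=0, hi=800)
dy = make_disturbance(seed=2, lo=0, hi=600)
```

Fixing the seed is one way to reuse exactly the same disturbance pattern across subjects, which parallels the comparison use mentioned above.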

As of now, the data for target and cursor position are left in an array,
with no storing to a file. Mousex, mousey, targetx, and targety are saved
in the array (as mox, moy, tax, tay).

I’ve made idleloop a little easier to understand by separating the
program being iterated into a different procedure instead of putting it
inside idleloop itself.

Let me know what you think.

Best.

Bill P.

If you use constant speed, and 3600 data points at 60 per second, the
length of the path will be the same no matter what the curvature. The
end-points of the path will be closer together if the path is more curved,
but along the path the distance traveled will be the same.
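That claim is easy to check numerically. The sketch below traces a constant-speed path for 3600 samples at 60 per second, once straight and once with a constant turning rate; the speed and turning rate are made-up example values.

```python
# Numeric check: at constant speed, 3600 samples at 1/60 s each give the
# same traveled distance whatever the curvature, while the endpoints of
# a more curved path end up closer together.
import math

def arc_stats(speed, omega, n=3600, dt=1/60):
    """Trace a constant-speed path turning at omega rad/s; return
    (path length, straight-line distance between endpoints)."""
    x = y = heading = 0.0
    length = 0.0
    for _ in range(n):
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        length += speed * dt
        heading += omega * dt
    return length, math.hypot(x, y)

len_straight, end_straight = arc_stats(speed=100, omega=0.0)
len_curved, end_curved = arc_stats(speed=100, omega=0.2)
# Both path lengths are 100 * 60 s = 6000 units; only the
# endpoint separation differs.
```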

No trick up my sleeve, just an
idea I tried to implement and got stuck. I’ve read somewhere that the
direction of movement could be predicted from eye movements, and I was
wondering exactly where a subject might be looking when changing
direction of movement.

How accurately do you think you could measure where on the screen the
subject is looking? Within 2 pixels? 20 pixels? 200 pixels? I would guess
something at the high end of this range. And the subject isn’t
necessarily always looking at the object being attended to. Further, the
changes in fixation point will probably be saccades most of the time, but
if the subject is watching the moving cursor, it can also be smooth
pursuit tracking. I just don’t think you will get very good data from
measuring eye movements.

If you have a moving target, at least you can be pretty sure it is the
distance between the cursor and the target that is being controlled, with
the target being moved independently. I can’t program in C# yet,
buyt

···

At 07:13 PM 2/3/2011 +0100, Adam Matić wrote:

I wrote a small function
that returns the “slope of longest straight line going from a given
point toward the end of path”. Something like a tangent of the inner
curve on a path. It seemed to me my eyes were doing that. It’s all really
vague to me. No need to complicate things just yet, perhaps when I finish
this one, I’ll go play with it.
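The original function isn’t shown, so the following is only a guess at a reconstruction of the idea: from path point i, keep extending a straight segment toward the end of the path as long as every intermediate point stays within a tolerance of it, then return that segment’s slope. The tolerance value and all names are assumptions for the example.

```python
# Hedged reconstruction (the original function is not shown): the
# "slope of the longest straight line going from a given point toward
# the end of the path".
import math

def point_segment_dist(p, a, b):
    """Distance from point p to the segment a-b."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    denom = dx * dx + dy * dy
    if denom == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / denom))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def longest_straight_slope(path, i, tol=1.0):
    """Slope of the longest straight run from path[i] toward the end."""
    best_j = i + 1
    for j in range(i + 1, len(path)):
        if all(point_segment_dist(path[k], path[i], path[j]) <= tol
               for k in range(i + 1, j)):
            best_j = j
        else:
            break
    (x0, y0), (x1, y1) = path[i], path[best_j]
    return (y1 - y0) / (x1 - x0) if x1 != x0 else math.inf
```

On a straight diagonal path the whole run is one segment and the slope is 1; where the path bends, the straight run stops at the bend, which is roughly the "tangent of the inner curve" described above.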

BP: You’re going to have one data point for every 1/60 second. Don’t
mess with that: it’s your basic connection between the physics of motion
in physical time and the computational iterations.

AM:

I wasn’t clear on that. I meant data points per run. I’ve got two options:

  • a longer path for greater speed, to get the same number of data points; or
  • the same length and a different number of data points.

I like the first option more, but don’t really know why :smiley:

BP: I wouldn’t worry about the damping – you could just set it to a
constant and get a decent fit. We probably won’t whittle down the last
sources of variability until we put a realistic muscle into the model,
like Hill’s muscle model.

Ok. Thanks.

Best,

Adam

BP: From your description, there are no extra points for all your work
with writing programs – the object is to do an experiment. So attached
is a start on the program that I had in mind, from which you can probably
develop the program you had in mind.

AM:

I received no attachment. It also seems your email was interrupted in mid-sentence. Perhaps there is a problem with the server. Could you send the attachment to my email?

BP: How accurately do you think you could measure where on the screen the
subject is looking? Within 2 pixels? 20 pixels? 200 pixels? I would guess
something at the high end of this range. And the subject isn’t
necessarily always looking at the object being attended to. Further, the
changes in fixation point will probably be saccades most of the time, but
if the subject is watching the moving cursor, it can also be smooth
pursuit tracking. I just don’t think you will get very good data from
measuring eye movements.

If you have a moving target, at least you can be pretty sure it is the distance between the cursor and the target that is being controlled, with the target being moved independently. I can’t program in C# yet, buyt

There’s the mid-sentence interrupt. Was there more text?

I see what you mean. I’ll do the experiment as you suggested in the first e-mail; it makes sense that way, and now I have a clear idea of what to do.

Just to describe where my initial ideas went: if the subject sees a whole drawn path that he needs to go through, then he will have a constant “way of looking” when changing direction. I wouldn’t directly measure eye movements, but somehow make a model that could be fitted to “look” at various points along the path. There were some studies of eye movements (like this one: http://www.journalofvision.org/content/3/11/3.full ) that I got the idea from (it was actually a study of eye movements during car-driving).

Best

Adam