Btw. there is a missing parenthesis in the output function on the diagram,
The text gives a different equation
Maybe that is where the confusion is with the leaky integration.
RM: Yes, but the improvement in tracking ability with respect to delays is very small relative to what you get just from control using a leaky integrator. And it’s especially small when a disturbance is added to the cursor.
MP: The point in the paper is that the feedback delays are likely to be around 100 ms as a minimum, and that for sinusoid targets and ellipses, participants do not show a response delay even this long, meaning that their feedback delay has been comprehensively compensated in that case.
RM: But we already know that from all the modeling that has already been done. A leaky integrator control model typically fits the data extremely well (r > .99) with just a leaky integration in the output and no transport lag. Adding a transport lag produces very little change in the fit of the model. So the fact that there is a neural transport lag on the order of 100 msec in a tracking task has already been shown to be no problem for a control model.
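For readers following along, the leaky integrator control model RM describes might be sketched roughly as below. This is my own minimal Python illustration, not code from either paper; the function name `run_leaky_model` and all parameter values (`Kp`, `leak`, `dt`, `delay_steps`) are made up for the example.

```python
import math

def run_leaky_model(target, Kp=8.0, leak=0.2, dt=0.005, delay_steps=25):
    """Leaky-integrator position control: the output integrates the
    delayed (target - cursor) error, with a leak term that slows it."""
    cursor = [0.0]
    output = 0.0
    for t in range(1, len(target)):
        td = max(0, t - delay_steps)          # index of the delayed perception
        error = target[td] - cursor[td]       # delayed T - C error
        output += dt * (Kp * error - leak * output)
        cursor.append(output)
    return cursor

# Track a slow (0.2 Hz) sine target for 20 s
dt = 0.005
target = [math.sin(2 * math.pi * 0.2 * i * dt) for i in range(4000)]
cursor = run_leaky_model(target, dt=dt)
```

Even with a 125 ms transport delay baked in, this kind of model tracks a slow sinusoid closely, which is the point RM is making about fit quality.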
RM: The only reason I posted my comment about your article was to try to encourage you to use your research skills to do PCT-based research – research aimed at testing for controlled variables.
MP: Just to be clear, in the EBR article both models had a leaky integrator (slowing factor) and so the comparison there is exactly as you describe (though without a disturbance as you have pointed out). In contrast, the spreadsheet was a simplification and did not include a leaky integrator in either model.
RM: Yes, I’m afraid the spreadsheet misled me. Because of the huge effect of the disturbance to the cursor, I assumed that your model was not controlling T - C when, in fact, it was. The huge effect of the disturbance that I found for your extrapolation model, compared to my non-extrapolation leaky integration model, was a function of the difference in the nature of the output functions.
MP: The main findings of that paper were:
1. For pseudorandom signals the position extrapolation model fits no better than the usual position control model (both with leaky integrators), indicating that participants do not use extrapolation when tracking these targets.
RM: It looked to me like you were measuring the fit of model output variations to target variation. I wasn’t able to find any reports of measures of the fit of the model to individual subject behavior.
MP: 2. In contrast, the position extrapolation model is a better fit to the participant data when loop delays are sufficiently long.
RM: So you did fit the model to the participant data? I can’t find reports of such fits in the report.
MP: The breakdown of results into phase differences and amplitude ratio shows that the benefit is entirely due to delay compensation - indicating that it may be the case that participants make use of target velocity (in addition to position) when tracking these targets.
RM: If that’s the case then the “extrapolation” model should provide the best fit to both the sine and pseudorandom target tracking data for the subjects. This is because the subjects’ neural transport lag was the same in both sine and pseudorandom target tracking, and if extrapolation is a hedge against transport lag it would be controlled in both cases. Is that what you found? If the extrapolation gave a better fit in the sine than in the pseudorandom tracking situation then it certainly has nothing to do with delay compensation.
RM: At the risk of eliciting more profanity from Mr. Matic I predict that “extrapolation” provides the best fit to the subjects’ behavior only when they are controlling the sine wave target, not the pseudo random target. And that’s because with the sine wave target the subjects are able to control for a high level perception of “extrapolation” – probably something like dT/dt – by varying cursor position appropriately.
Rick, [insert profanity] read the paper. You’re mixing (1) phase delays between the target and cursor sinusoids with (2) feedback transport delay. No point in discussing the paper if you are not going to read the thing. There are also comparisons of fit between the model and subjects for random and sinusoid inputs [more profanity].
The [C-T distance + Tv] is a pretty regular type of a controlled variable, no need for higher levels or nothing like that. The target is visible, the cursor is visible, all information is available on the screen from vision.
A great demonstration of how to do the TCV for tracking sinusoids! It never occurred to me to try target velocity, even though I tried cursor velocity and (T-C) velocity.
Frustrating that I missed this in the proofing process! Thanks for pointing that out Adam!
Yes, the comparisons of model-simulated cursors and participant cursors were conducted and are presented in the line graphs. The bar graphs show the fit of the participant cursors to the target.
I don’t disagree that the position control model fits the behaviour well as per the correlations; however, I would say that correlations are a poor measure of temporal asynchrony in tracking (as is RMS, as demonstrated in the paper). The leaky integrator position control model is unable to compensate transport delays effectively, whereas the extrapolation model performs substantially better under conditions of delay. In fact, this is a challenge for the position control model.
I disagree that the extrapolation model should necessarily have a more accurate fit to both sinusoid and pseudorandom cursor data. I expected that it would, because velocity information is available in both cases. However, given that participants don’t exhibit zero-phase tracking when tracking pseudorandom signals (they track with a phase delay of about 180 ms), this demonstrates that they are not compensating their intrinsic transport delays (as they must be when tracking with <100 ms phase delay for sinusoids). The fact that participants do not appear to utilise velocity information when tracking pseudorandom targets is noteworthy, and several potential explanations for this are set out in the discussion…
So yes, the neural transport lag is the same in both cases, but the participants only compensate it when tracking sinusoids (as is seen in their behaviour, irrespective of the model), and suitably, the extrapolation model only fits the cursor data better in that condition.
It may be the case that controlling higher level perceptions could also provide this ability, controlling for the long term pattern of the repeating sinusoid, for instance. One interesting thing, as Adam said, is that controlling cursor velocity - target velocity error does not confer the same advantage (I tried this before simply the target velocity).
I’ve been trying out some things with elliptic target tracking data. This was just me, tracking the target on the screen, using a graphics tablet and a pen, like this one:
Some example data:
The “two-thirds” in the title is related to the power law stuff; for the x and y data it means that the trajectories were pure sinusoids when plotted against time, as in this plot. This was one of the slower targets, vel 3.0, which looks like 0.2 Hz or so.
The situation is not quite like tracking sinusoids in one dimension, since there might be more information in the visual scene for the participants to see and control, but let’s say it is similar if we decompose the trajectory to x and y over time.
The phase difference is near zero, with a larger variance at higher speeds.
You also got fairly small phase differences, but were they negative for any cases? I seem to get negative ones about as often as positive ones, especially for high-frequency targets. This would mean that on average the subject was leading in front of the target.
Btw. I used curve-fitting to find the phase, frequency and amplitude of the cursor and of the target wave, and then calculated the difference of the phases. I did not use FFT. The amplitude seems to be 1:1 between cursor and target.
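The curve-fitting procedure described above (fit amplitude, frequency, and phase to each wave, then take the phase difference) might look roughly like this in Python, assuming scipy is available. The signals and the 0.15 rad lag below are synthetic stand-ins, not the tablet data.

```python
import numpy as np
from scipy.optimize import curve_fit

def sine(t, amp, freq, phase):
    return amp * np.sin(2 * np.pi * freq * t + phase)

def fit_sine(t, signal, f0):
    # Initial guesses: amplitude from the data, frequency from the target spec
    p0 = [np.std(signal) * np.sqrt(2), f0, 0.0]
    params, _ = curve_fit(sine, t, signal, p0=p0)
    return params  # amp, freq, phase

dt = 0.005
t = np.arange(0, 10, dt)
target = sine(t, 1.0, 0.2, 0.0)
cursor = sine(t, 1.0, 0.2, -0.15)    # cursor lags the target by 0.15 rad

_, _, ph_t = fit_sine(t, target, 0.2)
_, _, ph_c = fit_sine(t, cursor, 0.2)
lag_rad = ph_t - ph_c                # positive = cursor lags target
lag_ms = 1000 * lag_rad / (2 * np.pi * 0.2)
```

Fitting each wave separately and differencing the phases, rather than using FFT, has the advantage of also returning the amplitude ratio directly.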
Next I tried running the two models against the target and comparing them. The extrapolation model seems better than just position difference. Here is one example of comparing the fit between the two models and the cursor:
The red line is closer to the yellow line than the green one is, meaning that model2 fits subject behavior better than model1. I’ve moved the curve to be centered at 0 px.
Here are the results for all tracks:
Lower is better, so all the model2 runs seem to fit better than model1 to subject behavior.
I’ve used Kp = 8, delay = 0.125 s, for model1 and model2. In model1 the damping gain is 0.2; in model2 the damping gain is 1.2 and Kv = -130.
Though, maybe Kv should be multiplied by dt (which was 0.005). I did not change these parameter values between runs.
What parameter values would you say fit subject behavior best for the extrapolation model?
So, looks like a pretty good improvement over the t-c model for sinusoid tracking.
I was digging through the archives too. I remember reading that Bill had some ideas about extrapolating velocity when tracking targets.
[From Bill Powers (950529.1230 MDT)]
Hans Blom (950529) –
Nothing of this is new, of course.
- Tracking a regular (“predictable”) pattern with approximately zero delay has frequently been encountered in human operator response speed studies.
You will find a demonstration of this effect in the 1960 paper by myself, Clark, and MacFarland. This can be explained, as I said, using a hierarchical model. The first level is an ordinary control system controlling the cursor used for tracking. The second level perceives the synchronization of the cursor and target movements (the principle of the phase-locked loop will serve as the perceptual function), and the output function consists of a variable-frequency variable-amplitude oscillator which varies the reference signal for the cursor-position control system. This will achieve approximately zero average delay, without needing the kind of world-model in your program.
If the target moves in a sine-wave, there is an even simpler way of achieving zero delay, even with a single-level system. Simply add some first derivative to the perceptual signal representing target-minus-cursor. This can actually result in the cursor movements occurring slightly ahead of the target movements – a negative delay! We would use that model, of course, only if the real human behavior also showed a negative delay.
[From Bill Powers (2005.02.15.0816 MST)]
What do we mean by “predicting?” One meaning is surely to make a statement or form an expectation about something that hasn’t happened yet. But another is simply to act on the basis of the extrapolated future state of some variable – i.e., its first derivative times some adjustable constant, added to its present state. That’s how “anticipation” works in heating and cooling systems, and elsewhere. The effect of prediction or anticipation in this latter sense is, as we often find in control systems, the opposite of what
Btw. Max, did you use Matlab’s Simulink for modeling?
I’m just learning it. The lower right part is the leaky integrator output; there is probably a better (simpler) way of modeling it. The target speed gain is 0.
And then in this one the target speed gain is larger, looks like zero phase difference.
Thanks everyone, this discussion is adding nicely to the research paper. It’s priceless reading Bill’s 2005 ideas and I wish we’d had them already to quote in the article, as he converged on a similar solution!
I do hope Rick has realised that the original paper already describes neatly everything he mentioned in detail - the TCV, the fact that it is a present time composite perception, the leaky integrator and the model fit with each individual participant. I think we all agree that adding an unseen disturbance and showing the PCT model advantage would have made a clearer point about the uniqueness and superiority of a PCT model. However, Rick has done plenty of that excellent research already!
All the best
Yes, some participants exhibited phase advances, and these tended to increase in likelihood as the cursor reached the sinusoid maximum (or minimum). That could be a result of delayed sampling of velocity during target deceleration, or a result of overshoot – though in your track, as you said, the amplitude ratio seems about 1:1. Other authors have also found shorter phase delays than I did (I found an average of 50 ms for sines).
It looks like the tracking method you’re using is much more stable than the joystick we used. Probably not surprising, given participants’ prior experience with these and the fewer weird dynamic properties of the joystick that come into play in our experiments. Later on we used a steering wheel for 1D tracking, which was much smoother.
With respect to optimal parameters, it depends heavily on the transport delay you give the model and the phase delay the participant is exhibiting. In matlab I tend to use lsqnonlin to fit the parameters to a given trace. I think there is a sample script using this algorithm to estimate parameters of a position control model in Matlab on my GitHub page: https://github.com/maximilianparker/Tracking_Matlab/blob/master/test_opt_pos.m
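A rough Python analogue of the lsqnonlin approach might use `scipy.optimize.least_squares` to recover the gain and leak (damping) of a position control model from a cursor trace. Here the “participant” trace is synthetic, generated by the same model, so this only demonstrates the fitting machinery; it is not the linked Matlab script.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate(params, target, dt=0.005, delay_steps=25):
    """Position control model: leaky integrator driven by delayed T - C."""
    Kp, leak = params
    cursor = np.zeros_like(target)
    out = 0.0
    for t in range(1, len(target)):
        td = max(0, t - delay_steps)
        out += dt * (Kp * (target[td] - cursor[td]) - leak * out)
        cursor[t] = out
    return cursor

def residuals(params, target, recorded):
    return simulate(params, target) - recorded

dt = 0.005
time = np.arange(0, 15, dt)
target = np.sin(2 * np.pi * 0.2 * time)
recorded = simulate([8.0, 0.2], target)   # synthetic "participant" cursor

fit = least_squares(residuals, x0=[4.0, 1.0], args=(target, recorded),
                    bounds=([0.1, 0.0], [50.0, 10.0]))
Kp_hat, leak_hat = fit.x
```

As MP notes, in practice the recovered parameters depend heavily on the transport delay assumed in the model, so the delay would typically be fitted (or swept over a grid) as well.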
In any case, I’m glad the model generalises well to your tablet task!
To add to the Bill quotes, Martin Taylor used a similar model in the modafinil and tracking conference paper ‘Effects of Modafinil and Amphetamine on tracking performance in sleep deprivation’, definitely worth a read!
Yes, I have looked at Simulink a little, definitely a good way to visualise the models. Bruce Abbott was experimenting with it a while ago too, and made simulations of both the position control and extrapolation models.
All the best,
MP: With respect to optimal parameters, it depends heavily on the transport delay you give the model and the phase delay the participant is exhibiting.
I’ve tried fitting the extrapolation model to pseudorandom tracking. I also got no improvement or very slight improvement over the distance control only model (both findings replicated). It is strange. There is the information of target velocity available to the subjects, and they are not using it?
In the TrackAnalyze program, fitting a model is something like measuring the properties of the subject. So, if the model with 130 ms of transport delay fits best, we say that the subject probably has ~130 ms of transport delay in the distance control loop. That is what we get for pseudorandom targets. For step-jumps or square waves (if they are not periodic) we get larger estimates of transport delay using the distance control model. The target just appears up or down; there is no velocity information.
Maybe that is the “real” transport delay. Could it be that in procedures such as TrackAnalyze, we are underestimating the transport delay in pseudorandom tracking, because we are using a model that does not use velocity extrapolation, while the subjects are using it? Well, just guessing.
Excellent that both findings were replicated. Yes, exactly, it seems a strange proposition that the participants do not use velocity for pseudorandom targets. I had assumed that the utility of extrapolation should be negatively correlated with acceleration or deceleration. If the target velocity is changing quickly and often, then a delayed estimate of velocity will be inaccurate, and so the extrapolated position becomes less accurate. Based on this assumption, increasing target signal frequency (or frequencies, for pseudorandom targets) should perhaps lead to reduced use of velocity extrapolation. However, your finding that phase delay decreases as frequency increases in sinusoids doesn’t really support that. I can’t imagine phase reduces for pseudorandom targets in that way, though.
Yes, we also collected some unpredictable step wave data. Though I haven’t analysed it much yet, it is clear that the times between target jump and the beginning of detectable movement are long. As you say, these are ‘pure’ in the sense that there can only be positional feedback. However, they do seem a little too long to be just the transport delay; perhaps there is an additional time cost in driving movement from still (as well as inertia, which obviously affects the delay across the movement).
I think it could be the case that we are underestimating transport delays, but there is some strong evidence in reaching studies that transport delays can be pretty fast. I think 80 ms was the fastest estimate I’ve seen, but usually they are in the region of 100-150 ms.
If the phase difference is in milliseconds and not in degrees (like in the previous plot), it remains near zero on average, with maybe a slight delay. Looking at just the x waveforms of cursor and target:
Looks pretty random, but also short.
Another thing - did you ever have a trial with a sudden target jump during tracking a sinusoid? As in, tracking a sinusoid for half a minute in one place, then the target jumps to another place? Or disappears?
No, not really. Although I remember when I was speaking to Joost Dessing at that Measuring Behavior conference, he thought it would be good to look into it. We were talking about the idea that the overshoot in tracking is due to inertia in the limb rather than part of the control strategy per se. He was suggesting testing the idea with ‘jumps’ in target location during the sinusoid/pseudorandom. I can’t remember exactly what he expected to see in behaviour there to test that assumption, though. Any idea? I think he was also suggesting a sort of impulse away from the sine and then back to it, rather than adding a step input signal to a sinusoid, if that makes sense?
A small note about the entire discussion so far. Two points:
(1) Bill’s own later simulations often had either velocity or acceleration as the lowest level of control.
(2) In off-line spreadsheet work with Bill P 10 or 20 years ago, I compared Bill’s own tracking data with band-limited “white noise” or “pink noise” disturbances (I don’t remember which, or whether the tracking was pursuit or compensatory, or how much transport lag we used or found) against the performance of an ideal tracker, using the autocorrelation function of the disturbance to generate the ideal using just position; position and velocity; and position, velocity, and acceleration. Bill’s actual performance exceeded the ideal tracker’s performance unless at least velocity was included. Including acceleration made the ideal tracker perform a little better. Both velocity and acceleration imply prediction.
These historical facts seem relevant, despite not having been published. I hope they help further discussion.
Hi Martin, nice to hear from you. I think that if we say that velocity and acceleration ‘imply prediction’ when used as controlled input variables, then we might as well throw in the towel! The way that most non-PCT researchers use the term ‘prediction’ is not the same…
MParker: He was suggesting testing the idea with ‘jumps’ in target location during the sinusoid/pseudorandom. I can’t remember what exactly he expected to see in behaviour there though to test that assumption. Any idea? I think also he was suggesting a sort of impulse away from the sine and then back to it,
I don’t remember much, but the reactions to jumps might be different when the CV is at a lower level, and when it is at a higher level. Sometimes we might be controlling c-t distance, or c-t distance + t velocity; this would be the lower level. And other times, we might be controlling amplitude and frequency of movement by varying some oscillator and the output of the oscillator is a reference to the c-t distance system.
Assuming that the transport delay is longer in the higher-level loop than in the lower level loop, the phase shift between the disturbance input step (step in the target) and the output cursor step (step in the cursor) will be longer when controlling for amplitude and frequency, than when controlling for c-t distance + t velocity.
The problem is that phase shifts are already very long in step tracking, so if there is a long shift after a step during sine tracking, we wouldn’t know if it comes from controlling at a higher level, or from whatever process is causing the long phase shifts in regular step tracking, like inertia or some muscle nonlinearities or whatever.
That’s an interesting idea. Could you remind me a little of how you model the oscillator?
I have just tried to implement a similar thing, maybe different to what you mean here. I thought that if the model keeps a memory of the previous positions of the target, it can effectively fit a sinusoid to this ‘memory trace’ and then look up the target position on this trace (accounting for the transport delay), nulling the distance. I had imagined this operating in parallel to position control, which would be the default option. But it is effectively two position controllers, one c-t and one c-t_hat, where t_hat is the oscillator memory estimate of the current position of the target.
I imagine that the memory system would have to introduce some noise because in target occlusion studies the participants are not good at replicating the frequency of the sinusoid over the occlusion. However, it would presumably be less affected by noise in the input signal than the c-t controller.
Of course, even in the parallel control version they could have different transport delays…
Interesting. I wonder whether there is such a long phase shift following a step within a sinusoid target signal. If it was shorter than the ‘pure’ delay when tracking step signals maybe that would help us exclude possibilities for the long delay in step tracking.
The way most non-PCT researchers use the term “perception” is not the same either. Surely “prediction”, to everybody, implies using expectation about things that have not yet happened as part of the current set of perceptions that affect control. How far ahead those predictions go depends on the distribution of probable states x milliseconds, hours, years, or mega-years in the future. In simple tracking, the transport lag sets the minimum prediction time to be used, even in simple position control.
What limits the accuracy of prediction for, say, simple pursuit tracking, is the autocorrelation function of the disturbance, the variance accounted for by knowledge of the current value of a variable. The variance unaccounted for is the limit to prediction accuracy x time-units in the future, counting from the time of the most recent observation. Of course, the autocorrelation function itself cannot be determined without prior observations, just as the velocity and acceleration need prior observations for their determination.
In an experiment, however, the experimenter who designed the disturbance noise knows the autocorrelation function, so can be sure of the actual limits of prediction possible to an ideal tracker. It depends on the equivalent white noise bandwidth. For a sine-wave disturbance, for example, the equivalent bandwidth from the experimenter’s point of view is zero, and the ideal would be able to predict infinitely far into the future. With such a sine-wave disturbance, any failure of a tracker in practice must include the inability of the perceiving apparatus to determine the equivalent disturbance bandwidth. The “failure to predict” noise of inability to use the experimenter-constrained bandwidth should be included along with other noise sources around the control loop when analyzing deviations from ideal tracking performance.
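Martin’s point that the disturbance’s autocorrelation function bounds prediction accuracy can be illustrated numerically. Here an AR(1) process stands in for band-limited noise (my own simplification, not the disturbance used in any of the experiments discussed): the best linear k-step-ahead predictor is just `rho**k` times the current value, and its error variance matches the theoretical unpredictable variance at that horizon.

```python
import random

random.seed(0)

# AR(1) process as a stand-in for band-limited noise:
# d[t] = rho * d[t-1] + gaussian noise
rho, sigma = 0.95, 1.0
n, k = 200000, 20                  # samples, prediction horizon in steps
d = [0.0]
for _ in range(n):
    d.append(rho * d[-1] + random.gauss(0.0, sigma))

# Best linear k-step-ahead predictor given the autocorrelation: rho**k * d[t]
errs = [d[t + k] - (rho ** k) * d[t] for t in range(n - k)]
mse = sum(e * e for e in errs) / len(errs)

# Theoretical unpredictable variance at horizon k
theory = sigma ** 2 * (1 - rho ** (2 * k)) / (1 - rho ** 2)
```

As the horizon k grows, `theory` approaches the full stationary variance of the process: beyond the autocorrelation length, nothing is predictable, which is the limit Martin describes.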
Now, perhaps we can extrapolate this to vastly different time scales, and recognize that it is in the everyday sense of “prediction” that astronomers tell us that in (If I remember correctly) 5 billion years, the Andromeda Galaxy will collide with our Milky Way. For the observations that lead to such a prediction, the disturbances are slow, with autocorrelation functions stretching billions of years into the future (so far as astronomers can determine).
Not all disturbances can be reduced to an equivalent white noise. For example, in most countries an election is a source of disturbance to the perceived structure of the Government. In some countries, such as the USA, it is almost certain there will be no federal election in the next few days after one was completed, or in the next few months, but in the few days around exactly two and exactly four years after one election, another is almost certain to occur. That disturbance is highly predictable in its timing, but its magnitude and direction are not; they are the only uncertainties in predicting how the structure of the Government may change. In other countries, the time of the next election is less precisely predictable (i.e. the probability that an election will be held during a specified week is near zero for a while after an election event, then rises until it becomes near unity by the legally limited life of the Government).
Maybe I have made the point. When the disturbance statistics are not rhythmic, velocity and acceleration are reasonable proxies for predictability, and controlling them as well as position in the kind of hierarchy sometimes used by Powers in his later simulations can lead to improved tracking performance. The work with Powers on his own tracking performance that I mentioned demonstrated that he could not have been tracking position alone, but that he could have been using at least velocity-based prediction, and maybe acceleration as well.
There is, of course, no guarantee that living control systems use any of these variables (position, velocity, and acceleration) in pure form. Most nerves, AFAIK, seem to change their firing rate right after a change to their input, and then trend back toward their resting state. To me, this suggests that some combination of position, velocity, and acceleration is what is reported by a “neural current”, not a bare position, a bare velocity, and a bare acceleration. Whatever is actually reported up through the hierarchical input levels, the output from muscles can only apply force to the environment. Force affects mass motion, and affects position only by way of its influence on velocity, both through F=ma and through the effect of force on steady velocity in a viscous medium.
Do you still think that the way “velocity” and “acceleration” affect control is not everyday “prediction”, at least as closely as a controlled “perception” is an everyday perception?
The oscillator itself is the “integral oscillator”. It probably has a more common name than that; I found it in some old posts in the archives. f sets the frequency in Hz, and a and b are two sinusoids. It is like a spinning circle, with a and b being the sin(phase) and cos(phase).
```
a = 1
b = 0
loop:
    a = a + f * b * dt * 2 * pi
    b = b - f * a * dt * 2 * pi
```
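A runnable Python version of the same oscillator, for anyone who wants to try it. One detail worth noting: the `b` update uses the freshly updated `a` (semi-implicit Euler), which is what keeps the amplitude from drifting over long runs. The function name and default values are my own.

```python
import math

def integral_oscillator(f=1.0, dt=0.001, steps=1000):
    """Two coupled integrators producing a cosine-like wave (a) and a
    negative-sine-like wave (b) at frequency f Hz. Updating b with the
    freshly updated a (semi-implicit Euler) keeps the amplitude stable."""
    a, b = 1.0, 0.0
    w = 2 * math.pi * f
    trace = [(a, b)]
    for _ in range(steps):
        a = a + w * b * dt
        b = b - w * a * dt
        trace.append((a, b))
    return trace

trace = integral_oscillator()   # 1000 steps of 1 ms = one full period at 1 Hz
a_end, b_end = trace[-1]
```

After exactly one period, (a, b) returns to near its starting point (1, 0), and a² + b² stays close to 1 throughout the run.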
The problem is how to make the frequency and phase of the oscillator equal to the frequency and phase of the input or target, assuming amplitude is equal. One solution is to make the frequency proportional to the phase difference. When the phase difference is constant, this means the frequencies are equal.
When a target moves in a circle or ellipse the phase difference is easy to measure, it is the angle closed by target > center of the circle > cursor. If that angle is constant, the frequencies are equal.
When the target movement is one-dimensional, then that is a bigger problem.
Here is my first barely working model, only follows a small range of frequencies. I’ve been fighting with it for a while:
Tell me more. How do you account for the direction of the target in memory? If the target is moving in a sinusoid, it will be in the same place twice, but first moving up, and then moving down. Or am I missing something?
Yeah, I guess we could try designing some target waveforms for start.
Step references with random time distances between steps? A sinewave reference. Then a sinewave plus the random-spaced step reference?