The feedforward compensation in the vestibulo-ocular reflex is a little different, but still similar: head motions are detected rather than predicted from efferent signals, and the resulting signal is added to the output to the extraocular muscles to counteract (to some degree) the effect of voluntary or involuntary head motion on the direction of gaze. There is some small delay in this system, but not much, because the signal pathways are very short. The most important and suggestive aspect of this reflex, though, is that it has to be continually recalibrated, and that requires feedback information.
When image-stabilization techniques are used to alter the motion of an image on the retina relative to head motion, the vestibulo-ocular reflex goes out of calibration and is no longer accurate enough to prevent blurring. But after only a few minutes of practice under the new conditions, the timing of the reflex changes until the eye movements are once again in the right relationship to head movements. The only possible basis for this retraining is visual information, probably simply the blurring that results from miscalibration. I seem to recall that 20 seconds of practice is sufficient for recalibration, but that seems a bit short. The reorganization principle could easily explain how this works, since it relies on no information but the average error signal over some period of time.
I think the general rule must be that for every open-loop feedforward compensation, there must be a superordinate negative feedback loop that keeps the calibration accurate. It is not sufficient just to have qualitative feedforward; the feedforward must be timed and sized correctly to reduce the effect of the disturbance and not make it worse. This is especially true for visual effects, since the eye's resolution is around one minute of arc, or 1/60 of a degree, and any miscalibration greater than that will immediately worsen the quality of vision. Achieving that sort of precision in an open-loop reflex would be impossible without a feedback loop continually recalibrating it.
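A minimal sketch of that recalibration arrangement in Python (for illustration only, not a model from LCS3 or from the VOR literature; the optics factor, the adaptation rate, and the 1 Hz head motion are all assumed): an open-loop gain turns head velocity into a compensatory eye command, and a much slower superordinate process trims that gain whenever the average retinal slip, the "blur" signal, is not zero.

import math

def recalibrate_vor(optics=1.0, gain=1.0, adapt_rate=0.2, dt=0.01, steps=5000):
    """Feedforward path: eye velocity = -gain * head velocity.
    Retinal slip = optics * head velocity + eye velocity, where 'optics'
    stands for an image-stabilization manipulation that rescales image motion.
    A slow superordinate loop nudges the gain so that average slip goes to zero."""
    t = 0.0
    for _ in range(steps):
        head_vel = 2.0 * math.sin(2.0 * math.pi * t)   # 1 Hz head oscillation
        eye_vel = -gain * head_vel                     # open-loop feedforward compensation
        slip = optics * head_vel + eye_vel             # residual image motion ("blur")
        gain += adapt_rate * slip * head_vel * dt      # slow, error-driven recalibration step
        t += dt
    return gain

print(round(recalibrate_vor(optics=1.0), 3))   # stays near 1.0: calibration already correct
print(round(recalibrate_vor(optics=0.5), 3))   # converges toward 0.5 once image motion is halved

Changing the optics re-converges the gain to the new required value within a few tens of simulated seconds; the slip-driven adjustment here simply stands in for reorganization driven by the average error.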
When we recognize the need for the higher-order recalibration process, the whole problem reduces to the normal operation and adjustment of control processes involving more than one level. An example is the model for the behavior of extending a hand in a straight line away from the shoulder. To do this properly, a higher-order system must increase the reference signal for the vertical shoulder angle at one rate and simultaneously decrease the reference for the external angle at the elbow at twice that rate (assuming the upper and lower arm segments have the same length). This moves the hand straight away from the shoulder if done perfectly, but of course perfection does not last long, and reorganization is necessary to maintain good stabilization of the vertical angle of the hand as the reaching is repeated. This compensation can be done either on the perceptual side or the output side of the higher-order system; I have modeled it both ways.
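A quick numerical check of that 2:1 relationship (a standalone sketch, not the arm model itself; the angle convention and the 0.3 m segment length are assumptions): with equal segment lengths, holding the elbow's external angle at exactly twice the upper arm's angle relative to the reach line keeps the hand on that line, so sweeping the two reference signals in a 2:1 ratio extends the hand straight away from the shoulder.

import math

L = 0.3  # length of each arm segment in metres (assumed equal, as stated above)

for deg in range(80, -1, -10):
    beta = math.radians(deg)               # upper-arm angle above the reach line
    elbow_external = 2.0 * beta            # elbow reference held at twice the shoulder angle
    elbow = (L * math.cos(beta), L * math.sin(beta))
    forearm_angle = beta - elbow_external  # forearm folds back toward the reach line
    hand = (elbow[0] + L * math.cos(forearm_angle),
            elbow[1] + L * math.sin(forearm_angle))
    print(f"shoulder {deg:2d} deg, elbow {2 * deg:3d} deg -> hand at ({hand[0]:.3f}, {hand[1]:.3f})")

# The hand's y-coordinate stays at 0.000 throughout while x grows toward 2*L:
# the hand moves in a straight line directly away from the shoulder.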
These compensations are transient and are needed only for a tenth of a second or so, during the short time before negative feedback control can become effective after a step disturbance. We know that the delay can be compensated for by increasing the time constant in the output function, and that after the initial perturbation the control system, even with a delay, can act very quickly and accurately. See the Live Block Diagram in the demos that go with LCS3.
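For anyone without the LCS3 demos at hand, here is a toy version of that point (a sketch, not the Live Block Diagram itself; the 50 ms delay, loop gain of 10, and other numbers are assumptions): a one-level loop with a transport delay in the perceptual path and a "slowing" first-order lag in the output function. With the output time constant as short as the delay the loop rings and diverges; lengthen the time constant and the same delayed loop settles quickly after a step disturbance.

from collections import deque

def run_loop(tau, gain=10.0, delay_steps=5, dt=0.01, steps=400):
    """One control loop: delayed perception, gain-with-lag output function.
    Returns the controlled quantity over time for a unit step disturbance."""
    reference = 0.0
    output = 0.0
    disturbance = 1.0                       # step disturbance, present from t = 0
    delay = deque([0.0] * delay_steps, maxlen=delay_steps)  # 5 * 10 ms = 50 ms transport delay
    trace = []
    for _ in range(steps):
        perception = delay[0]               # oldest stored value of the controlled quantity
        error = reference - perception
        output += dt * (gain * error - output) / tau   # output function with time constant tau
        qi = output + disturbance           # controlled (input) quantity
        delay.append(qi)
        trace.append(qi)
    return trace

fast = run_loop(tau=0.05)   # time constant equal to the delay: rings and diverges
slow = run_loop(tau=1.0)    # time constant much longer than the delay: settles in about half a second
print("end value, short time constant:", fast[-1])            # enormous: the loop is unstable
print("end value, long time constant :", round(slow[-1], 3))  # ~0.091 = disturbance / (1 + gain)

With this gain-with-lag output the residual steady-state error is disturbance/(1 + gain), so "accurate" here means within about ten percent for a loop gain of 10; an integrating output function would remove even that residual.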
This feedforward nonsense is just a play on words, motivated, I suspect, by the idea that if there is feed “back” there must also be feed “forward.” In a control-system circuit there are signals going both ways: the feedforward signals travel internally from input functions to output functions, while the feedback signals go through the external environment to get back from the output function to the input function. There is actually only one direction of signal travel around the loop; there is no part of the circuit in which signals sometimes go one way and sometimes the opposite way. Axons can carry signals either way, but the only time they go backward (“antidromic” conduction) is when an experimenter injects electrical stimulation into the middle of the axon. Synapses work in only one direction.
OK, I found this: http://en.wikipedia.org/wiki/Feedforward_(management). The article claims that Marshall Goldsmith originated the term in an article on management, which is hardly a good recommendation for a technical term.
Here’s a better one for a more technical, but clear, treatment:
http://www.clear.rice.edu/engi128/Handouts/Lec14.pdf
Best,
Bill P.
Background
Self-generated body movements require compensatory eye and head adjustments in order to avoid perturbation of visual information processing. Retinal image stabilization is traditionally ascribed to the transformation of visuovestibular signals into appropriate extraocular motor commands for compensatory ocular movements. During locomotion, however, intrinsic “efference copies” of the motor commands deriving from spinal central pattern generator (CPG) activity potentially offer a reliable and rapid mechanism for image stabilization, in addition to the slower contribution of movement-encoding sensory inputs.
Results
Using a variety of in vitro and in vivo preparations of Xenopus tadpoles, we demonstrate that spinal locomotor CPG-derived efference copies do indeed produce effective conjugate eye movements that counteract oppositely directed horizontal head displacements during undulatory tail-based locomotion. The efference copy transmission, by which the extraocular motor system becomes functionally appropriated to the spinal cord, is mediated by direct ascending pathways. Although the impact of the CPG feedforward commands matches the spatiotemporal specificity of classical vestibulo-ocular responses, the two fundamentally different signals do not contribute collectively to image stabilization during swimming. Instead, when the CPG is active, horizontal vestibulo-ocular reflexes resulting from head movements are selectively suppressed.
Conclusions
These results therefore challenge our traditional understanding of how animals offset the disruptive effects of propulsive body movements on visual processing. Specifically, our finding that predictive efference copies of intrinsic, rhythmic neural signals produced by the locomotory CPG supersede, rather than supplement, reactive vestibulo-ocular reflexes in order to drive image-stabilizing eye adjustments during larval frog swimming represents a hitherto unreported mechanism for vertebrate ocular motor control.
Best regards
Roger
Prof ROGER K MOORE BA(Hons) MSc PhD FIOA MIET
Chair of Spoken Language Processing
Speech and Hearing Research Group (SPandH)
Department of Computer Science, University of Sheffield,
Regent Court, 211 Portobello,
Sheffield, S1 4DP, UK
e-mail: r.k.moore@dcs.shef.ac.uk
web: http://www.dcs.shef.ac.uk/~roger/
tel: +44 (0) 11422 21807
fax: +44 (0) 11422 21810
mobile: +44 (0) 7910 073631
Editor-in-Chief: COMPUTER SPEECH AND LANGUAGE
http://ees.elsevier.com/csl/