Evidence for predictive (non-sensory) control

I thought some members of the list might be interested in a paper that's
just been published in Current Biology that shows gaze control in tadpoles
during swimming action using fast efferent (feedforward) predictive control,
as opposed to slower visual feedback.

The citation is ...
Lambert, F. M., Combes, D., Simmers, J., & Straka, H. (2012). Gaze
stabilization by efference copy signaling without sensory feedback during
vertebrate locomotion. Current Biology.

And the weblink is ...
http://www.sciencedirect.com/science/article/pii/S096098221200810X

Background
Self-generated body movements require compensatory eye and head adjustments
in order to avoid perturbation of visual information processing. Retinal
image stabilization is traditionally ascribed to the transformation of
visuovestibular signals into appropriate extraocular motor commands for
compensatory ocular movements. During locomotion, however, intrinsic
"efference copies" of the motor commands deriving from spinal central
pattern generator (CPG) activity potentially offer a reliable and rapid
mechanism for image stabilization, in addition to the slower contribution of
movement-encoding sensory inputs.

Results
Using a variety of in vitro and in vivo preparations of Xenopus tadpoles, we
demonstrate that spinal locomotor CPG-derived efference copies do indeed
produce effective conjugate eye movements that counteract oppositely
directed horizontal head displacements during undulatory tail-based
locomotion. The efference copy transmission, by which the extraocular motor
system becomes functionally appropriated to the spinal cord, is mediated by
direct ascending pathways. Although the impact of the CPG feedforward
commands matches the spatiotemporal specificity of classical
vestibulo-ocular responses, the two fundamentally different signals do not
contribute collectively to image stabilization during swimming. Instead,
when the CPG is active, horizontal vestibulo-ocular reflexes resulting from
head movements are selectively suppressed.

Conclusions
These results therefore challenge our traditional understanding of how
animals offset the disruptive effects of propulsive body movements on visual
processing. Specifically, our finding that predictive efference copies of
intrinsic, rhythmic neural signals produced by the locomotory CPG supersede,
rather than supplement, reactive vestibulo-ocular reflexes in order to drive
image-stabilizing eye adjustments during larval frog swimming, represents a
hitherto unreported mechanism for vertebrate ocular motor control.

Best regards

Roger


_____________________________________________________________

Prof ROGER K MOORE BA(Hons) MSc PhD FIOA MIET

Chair of Spoken Language Processing
Speech and Hearing Research Group (SPandH)
Department of Computer Science, University of Sheffield,
Regent Court, 211 Portobello,
Sheffield, S1 4DP, UK

e-mail: r.k.moore@dcs.shef.ac.uk
web:    http://www.dcs.shef.ac.uk/~roger/
tel:    +44 (0) 11422 21807
fax:    +44 (0) 11422 21810
mobile: +44 (0) 7910 073631

Editor-in-Chief: COMPUTER SPEECH AND LANGUAGE
http://ees.elsevier.com/csl/
________________________________________________________________

[From Bill Powers (2012.08.20.1006 MDT)]

I thought some members of the list might be interested in a paper that's
just been published in Current Biology that shows gaze control in tadpoles
during swimming action using fast efferent (feedforward) predictive control,
as opposed to slower visual feedback.

The citation is ...
Lambert, F. M., Combes, D., Simmers, J., & Straka, H. (2012). Gaze
stabilization by efference copy signaling without sensory feedback during
vertebrate locomotion. Current Biology.

And the weblink is ...
http://www.sciencedirect.com/science/article/pii/S096098221200810X

I have no problem with organism-initiated actions involving feed-forward compensation. The organism can determine when the action is to be initiated and how large and in what direction it is to be. This means that delays may be present, but will not be apparent to an external observer because the compensations can be launched at the same time as or even earlier than the output is generated. Use of a central pattern generator makes this easy, since all the preparatory steps are available to help synchronize the final result. This would be true of swimming motions that disturb the direction of gaze, as in the article. The CPG can be in the output function of a higher-order control system.
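
Here is a minimal numerical sketch of that timing point, in Python. It is only an illustration: the swim frequency, amplitude, and sensory delay are invented numbers, not values taken from the paper.

import math

# Compare gaze error when the compensatory eye command comes from an efference
# copy (available the moment the CPG issues the swim command) with the same
# command derived from sensed head motion arriving after an assumed 50-ms delay.
dt = 0.001                       # time step, seconds
lag = int(0.050 / dt)            # assumed sensory/motor delay, in steps
steps = int(2.0 / dt)            # two seconds of simulated swimming

head = [10.0 * math.sin(2 * math.pi * 5.0 * k * dt) for k in range(steps)]

worst_copy, worst_sensed = 0.0, 0.0
for k in range(steps):
    eye_copy = -head[k]                                 # efference copy: no delay
    eye_sensed = -head[k - lag] if k >= lag else 0.0    # feedback path: late
    worst_copy = max(worst_copy, abs(head[k] + eye_copy))
    worst_sensed = max(worst_sensed, abs(head[k] + eye_sensed))

print("worst gaze error, efference copy:   %.1f deg" % worst_copy)
print("worst gaze error, delayed feedback: %.1f deg" % worst_sensed)

With these made-up numbers the perfectly timed copy cancels the head oscillation exactly, while the same signal delayed by 50 ms on a 5 Hz swim rhythm actually enlarges the error, which is why the timing of the compensation matters as much as its size.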

The feedforward compensation in the vestibulo-ocular reflex is a little different, but still similar in that head motions are detected rather than predicted from efferent signals, and the resulting signal is added to the output to the extraocular muscles to counteract (to some degree) the effect of voluntary or involuntary head motion on the direction of gaze. There is some small delay in this system, but not a lot, because the signal pathways are very short. But the most important and suggestive aspect of this reflex is that it has to be continually recalibrated, and that requires feedback information.

When image-stabilization techniques are used to alter the motion of an image on the retina relative to the head motions, the vestibulo-ocular reflex goes out of calibration and is no longer accurate enough to prevent blurring. But after only a few minutes of practice under the new conditions, the timing of the reflex changes until the eye movements are once again in the right relationship to head movements. The only possible basis for this retraining is the visual information, probably simply the blurring that results from miscalibration. I seem to recall that 20 seconds of practice is sufficient for recalibration, but that seems a bit short. The reorganization principle could easily explain how that works, as it does not rely on any information but the average error signal over some period of time.

I think the general rule must be that for every open-loop feedforward compensation, there must be a superordinate negative feedback loop that keeps recalibration accurate. It is not sufficient just to have qualitative feedforward; the feedforward must be timed and sized correctly to reduce the effect of the disturbance and not make it worse. This is especially true for visual effects, since the eye's resolution is around one minute of arc or 1/60 degree, and any miscalibration greater than that will immediately worsen the quality of vision. Achieving that sort of precision in an open-loop reflex would be impossible without the feedback loop continually recalibrating the reflex.
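
As a toy illustration of that rule, here is a sketch in which a single open-loop, VOR-like feedforward gain is kept calibrated by a slow, error-driven reorganization process (random steps kept when the average slip shrinks). The required gain, the step size, and the trial count are all invented.

import math, random

random.seed(1)
required_gain = 1.3    # gain the optics now demand (say, after new spectacles)
ff_gain = 1.0          # current open-loop gain: eye velocity = -gain * head velocity
step = 0.05            # size of each reorganization step

def mean_slip(gain):
    """Average retinal slip over one head oscillation for a given gain."""
    total = 0.0
    for k in range(100):
        head_vel = math.sin(2 * math.pi * k / 100)
        total += abs(required_gain * head_vel - gain * head_vel)
    return total / 100

direction = random.choice([-1, 1])
err = mean_slip(ff_gain)
for trial in range(200):
    ff_gain += direction * step
    new_err = mean_slip(ff_gain)
    if new_err >= err:                 # slip grew: pick a new random direction
        direction = random.choice([-1, 1])
    err = new_err

print("recalibrated gain: %.2f (required: %.2f)" % (ff_gain, required_gain))

The feedforward path itself stays open-loop; only the slower outer process, driven by the residual image slip, keeps it accurate, and it needs no information beyond the average error over some period of time.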

When we recognize the need for the higher-order recalibration process, the whole problem reduces simply to the normal operation and adjustment of control processes involving more than one level. An example is the model for the behavior of extending a hand in a straight line away from the shoulder. To do this properly, it is necessary for a higher-order system to increase the reference signal for the vertical shoulder angle at one rate and simultaneously decrease the reference for the external angle at the elbow at twice that rate (assuming the upper and lower arm segments to have the same length). This will move the hand straight away from the shoulder if done perfectly, but of course perfection does not last long and reorganization is necessary to maintain good stabilization of the vertical angle of the hand as the reaching is repeated. This compensation can be done either on the perceptual side or the output side of the higher-order system. I have modeled it both ways.
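
Here is a short numerical check of that two-joint example. The segment length, rates, and starting angles are arbitrary, and the angle conventions are my reconstruction of the description above.

import math

L = 0.3          # upper-arm and forearm length, metres, assumed equal
r = 0.2          # change in the shoulder reference per step, radians

shoulder = 0.3   # shoulder angle of the upper arm, radians from straight down
external = 2.4   # external elbow angle (pi minus the interior elbow angle)

for step in range(6):
    # elbow position, then hand position, with angles measured from straight down
    ex, ey = L * math.sin(shoulder), -L * math.cos(shoulder)
    hx = ex + L * math.sin(shoulder + external)
    hy = ey - L * math.cos(shoulder + external)
    bearing = math.atan2(hx, -hy)   # direction of the hand as seen from the shoulder
    reach = math.hypot(hx, hy)      # distance of the hand from the shoulder
    print("bearing %.4f rad, reach %.3f m" % (bearing, reach))
    shoulder += r                   # shoulder reference raised at one rate
    external -= 2 * r               # elbow reference lowered at twice that rate

Run as written, the bearing stays fixed at 1.5000 radians on every step while the reach grows, which is the straight-line hand path described above; make the elbow rate anything other than twice the shoulder rate and the bearing drifts.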

These compensations are transient, and are needed only for a tenth of a second or so, during the short time before negative feedback control can become effective after a step-disturbance. We know that the delay can be compensated by increasing the time constant in the output function, and that after the initial perturbation the control system even with a delay can act very quickly and accurately. See the Live Block Diagram in the demos that go with LCS3.
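
A minimal sketch of that behaviour, loosely in the spirit of the Live Block Diagram; the 100-ms delay, the gain, and the slowing time constant are assumptions chosen only so that this particular loop is stable.

# A control loop with a transport delay in the perceptual path and a slowed
# (leaky-integrator) output function, hit by a step disturbance at t = 1 s.
dt = 0.01
lag = int(0.1 / dt)              # assumed 100-ms perceptual delay
gain, tau = 20.0, 5.0            # output gain and slowing time constant (assumed)

reference, disturbance, output = 0.0, 0.0, 0.0
history = [0.0] * (lag + 1)      # past values of the controlled quantity

for k in range(300):             # three simulated seconds
    if k == 100:
        disturbance = 1.0                        # step disturbance at t = 1 s
    qi = output + disturbance                    # controlled quantity
    history.append(qi)
    perception = history[-1 - lag]               # seen only after the delay
    error = reference - perception
    output += (gain * error - output) * (dt / tau)   # slowed output function
    if k % 50 == 0:
        print("t = %.1f s   controlled quantity = % .3f" % (k * dt, qi))

The controlled quantity jumps to 1.0 when the disturbance arrives and is brought back to a small residual (about 1/(1+gain) of the disturbance) within a second or so, delay and all; push the gain up far enough without also raising the time constant and the same loop breaks into oscillation.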

This feedforward nonsense is just a play on words, motivated, I suspect, by the idea that if there is feed "back" there must also be feed "forward." In a control-system circuit there are signals going both ways: the feedforward signals travel internally from input functions to output functions, while the feedback signals go through the external environment to get back to the input function from the output function. There is actually only one direction of signal travel around the loop; there is no part of the circuit in which signals sometimes go one way and sometimes the opposite way. Axons can carry signals either way, but the only time they go backward ("antidromic" conduction) is when an experimenter injects electrical stimulation in the middle of the axon. Synapses work in only one direction.

OK, I found this: http://en.wikipedia.org/wiki/Feedforward_(management). This Wiki claims that Marshall Goldsmith originated this term in an article on management, which is hardly a good recommendation for a technical term.

Here's a better one for a more technical, but clear, treatment:

http://www.clear.rice.edu/engi128/Handouts/Lec14.pdf

Best,

Bill P.


[From Rick Marken (2012.08.20.1810)]

Bill Powers (2012.08.20.1006 MDT)]

I have no problem with organism-initiated actions involving feed-forward compensation. The organism can determine when the action is to be initiated and how large and in what direction it is to be. This means that delays may be present, but will not be apparent to an external observer because the compensations can be launched at the same time as or even earlier than the output is generated. Use of a central pattern generator makes this easy, since all the preparatory steps are available to help synchronize the final result. This would be true of swimming motions that disturb the direction of gaze, as in the article. The CPG can be in the output function of a higher-order control system.

Thanks for this Bill. This would all be clearer to me if there were a diagram of what’s going on. I don’t suppose you have one handy?

Best

Rick

Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2012.08.20.2252 MDT)]

Rick Marken (2012.08.20.1810)]

BP: … The CPG can be in the output function of a higher-order
control system.

Thanks for this Bill. This would
all be clearer to me if there were a diagram of what’s going on. I don’t
suppose you have one handy?

No, but the idea isn’t hard to understand. Think of a CPG as a generator
(located in the output function) of a variable signal, or group of
related signals, which can be adjusted by inputs from a comparator. In
tracking large regular movements of someone else’s finger, you start
generating your own large movements and then adjust their speed and
amplitude until the speed error, amplitude error, and phase error are as
small as possible. You’re really creating those movements from scratch,
by yourself, and then altering the parameters or control inputs of the
CPG to shape the way it is working to make its output pattern match the
movements of another person’s finger.

This is the kind of system I think would be responsible for walking and
running.
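
Here is a minimal sketch of that idea, with the tempo assumed already matched, so that only the amplitude and phase of the generated movement are being adjusted. The target movement, the gains, and the adjustment rule (a simple error-driven, LMS-style nudge of the generator's parameters) are illustrative assumptions.

import math

dt = 0.01
omega = 2 * math.pi * 0.5            # shared movement tempo, 0.5 Hz (assumed)
a, b = 0.2, 0.0                      # adjustable parameters of the generator
k = 0.8                              # adaptation gain (assumed)

for step in range(4000):             # forty seconds of practice
    t = step * dt
    target = 1.5 * math.sin(omega * t + 0.7)                  # the other's finger
    own = a * math.sin(omega * t) + b * math.cos(omega * t)   # own generated movement
    error = target - own                                      # comparator
    a += k * error * math.sin(omega * t) * dt    # error nudges the generator's
    b += k * error * math.cos(omega * t) * dt    # amplitude and phase
    if step % 1000 == 0:
        print("t = %2.0f s  amplitude %.2f  phase %.2f rad"
              % (t, math.hypot(a, b), math.atan2(b, a)))
print("target: amplitude 1.50, phase 0.70 rad")

The generator keeps producing the movement on its own; the error signal only trims its parameters until the generated pattern matches the target's amplitude and phase. A speed (frequency) parameter could be trimmed in the same spirit, though making that adjustment converge takes more care.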

Best,

Bill

[From Rick Marken (2012.08.21.0915)]

Bill Powers (2012.08.20.2252 MDT)–


Thanks, this is much easier for me to understand without all the physiological gobbledygook. I understood “feedforward” to be basically equivalent to what we call a reference signal and I also understood a CPG to be an output function (at any level) that generates a temporal pattern of outputs. So I guess the thing is to try to get the “feedforward” folks to understand that feedforward is just a component of a closed-loop control system. I bet the way to do that is by having them produce a working model of, say, the tadpole behavior and see how their feedforward process actually works.

Best

Rick



Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2012.08.21.1217 MDT)]

Rick Marken (2012.08.21.0915) --

Thanks, this is much easier for me to understand without all the physiological gobbledygook. I understood "feedforward" to be basically equivalent to what we call a reference signal and I also understood a CPG to be an output function (at any level) that generates a temporal pattern of outputs.

BP: Actually, the feedforward effect is often shown coming from a copy of the reference signal that bypasses the comparator and goes directly into the output function. It's rather silly, because without that bypass, the reference signal would still pass through the comparator to the output function before there was any chance for the perceptual signal to start changing. And that initial fraction of a second is the only time during which feedforward can (slightly) improve performance.

In another version, the disturbance of the plant goes through a sensory feedforward connection to the same comparator where the reference signal comes in. But that path adds nothing to the PCT diagram because in PCT, the effect of the disturbance is already contained in the perception of the controlled quantity. There is no difference in the delays to be expected.

RM: So I guess the thing is to try to get the "feedforward" folks to understand that feedforward is just a component of a closed-loop control system.

BP: Yep, you don't have to add feedforward to a control system. It's already there.

Best,

Bill P.

[Martin Taylor 2012.08.21.15.18]

... The efference copy transmission, by which the extraocular motor
system becomes functionally appropriated to the spinal cord, is mediated by
direct ascending pathways.

Roger,

Do you have access to the full paper, and can you explain what they mean by "mediated by direct ascending pathways"?

Martin

[From Matti Kolu (2012.08.22.1945 CET)]

Martin Taylor wrote at 2012.08.21.15.18:

Do you have access to the full paper, and can you explain what they mean by
"mediated by direct ascending pathways"?

Here's the full paper as an attachment.

Regards,
Matti

lambert, combes, simmers, straka - 2012 - gaze stabilization by efference copy signaling without sensory feedback vertebrate locomotion.pdf (1.54 MB)