Springs and Muscles, Part 5

[From Bruce Abbott (2015.02.15.1030 EST)]

In Part 4 I described the Level 1 control system proposed by Bill Powers (1973) and later incorporated within his “Little Man” demo (Powers, 1999). This model was based on the anatomy and physiology of the system that operates the muscles. The two-level control system that Powers envisioned includes a bottom-level control system controlling force (as signaled by the Golgi tendon organs) and a top-level control system controlling muscle length, whose output provides the reference levels for the force controller. Although similar in some ways to Merton’s servomechanism proposal, the Powers model does not suffer from the difficulties that led to the rejection of Merton’s servo hypothesis.

Both the Merton and Powers models apply early cybernetic principles to the problem of motor control. However, as the field of cybernetics continued to develop and its newer principles and techniques were used to improve the design and performance of automated systems, these new approaches were soon applied to the problem of biological motor control as well. Lessons from the field of robotics seemed particularly relevant. This new approach found a home within cognitive psychology, where it came to be known as “computational motor control,” a subfield of “computational neuroscience.”

Computational Motor Control

The computational approach to motor control developed hand-in-hand with the growth of the digital computer – first the large “mainframe” computers of the 1950s and beyond, culminating in today’s powerful personal computers. These computers can perform complex computations at very high speed, allowing modelers to create simulations in which complex transformations, signal filtering, Fourier analysis, and other processes are carried out in real time. Without them, running a simulation that requires these processes would take an impractical amount of time, if it could be done at all.

In a sense, the computational approach assumes that the brain and spinal cord are capable of carrying out the sorts of processes that are included in the computational models, although not necessarily in the way that a digital computer does them. It assumes that cognitive processes such as planning, or developing and maintaining an internal model of some aspect of the external world are carried out in the biological “wetware” of the nervous system.

Computational motor control involves a number of stages and processes, beginning with the establishment of a task goal. To illustrate this process, consider the decision to reach for a coffee cup, located to your front-right on the table. Given your present position and posture, the location of the cup on the table, and the initial position of your arm and hand relative to the cup, there are a number of ways you could move your arm in order to contact the cup, involving different combinations of rotations of the shoulder, elbow, and wrist joints as well as different variations in speed, all of which will get you to the goal position. Selecting which of these solutions to implement involves the process of “motor planning.” This planning may take into account various constraints, such as there being an object in the path between your hand and the cup. It may also involve finding a solution that in some sense optimizes the process, such as minimizing energy requirements or related kinematic and dynamic costs of the movement.

Another aspect of the task is to make use of internal models that are built up from and modified by experience. Forward models predict the effects of motor commands on the “plant,” the environmental system that the control system is designed to control. For example, the motor system may learn the pattern of reaction forces that develop during movement of the arm as a consequence of its mass, rotational inertia, and so on. Control actions involve “transformations” from motor variables to sensory variables that are accomplished by the environment and the musculoskeletal system. The motor system can learn these transformations, which become “internal forward models” that are implemented by the neural circuitry. These models come in two types: forward dynamic models, which predict the next state (positions, velocities) given the current state and motor command, and forward output models, which predict the sensory consequences of those actions.
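To make the distinction between the two kinds of forward model concrete, here is a minimal toy sketch (my own illustration, not anything from the motor-control literature): a forward dynamic model for a 1-kg point mass driven by a force command, paired with a trivial forward output model. The time step, mass, and function names are all my own assumptions.

```python
DT = 0.01   # time step in seconds (assumed)
MASS = 1.0  # kilograms (assumed)

def forward_dynamic_model(pos, vel, force):
    """Predict the next (position, velocity) from the current state and motor command."""
    acc = force / MASS            # Newton's second law
    next_vel = vel + acc * DT     # simple Euler integration
    next_pos = pos + next_vel * DT
    return next_pos, next_vel

def forward_output_model(pos, vel):
    """Predict the sensory consequence (here, just sensed position)."""
    return pos  # a realistic model would include sensor dynamics and delay

# Predict where the mass will be after one second of a constant 2 N push.
pos, vel = 0.0, 0.0
for _ in range(100):
    pos, vel = forward_dynamic_model(pos, vel, force=2.0)
print(round(forward_output_model(pos, vel), 3))  # ≈ 1.01 m, near the analytic ½at² = 1.0
```

In a biological system the learned mapping would of course be far richer than f = ma, but the structure is the same: state and command in, predicted state and predicted sensation out.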

These forward models serve a variety of roles. For example, they may be used in a system that uses a copy of the command (efference copy) to anticipate and cancel the sensory effects of the resulting movement. They can also help to cope with delays in sensory feedback within negative feedback control systems.
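The efference-copy idea can be sketched in a few lines. This toy example (my own, with made-up numbers and a made-up one-to-one command-to-sensation mapping) subtracts the forward model’s prediction from the actual feedback, so that only the externally caused, unpredicted component remains.

```python
def predicted_sensation(command, gain=1.0):
    # Forward output model: a learned mapping from command to expected feedback.
    # The unity gain is an assumption for illustration.
    return gain * command

def cancel_reafference(command, actual_feedback):
    """Use a copy of the command to cancel the predicted sensory effect."""
    efference_copy = command
    return actual_feedback - predicted_sensation(efference_copy)

# Self-generated movement alone: feedback is fully predicted, residual is zero.
print(round(cancel_reafference(5.0, actual_feedback=5.0), 1))
# Same movement plus an external disturbance of 1.2 units: only the disturbance survives.
print(round(cancel_reafference(5.0, actual_feedback=6.2), 1))
```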

Presumably you have built up internal forward models that enable you to predict how much force you will have to exert to lift that coffee cup once you have it in your grip. An incorrect prediction can have comical results. I recall lifting a gallon jug of milk that I thought was full when it actually was nearly empty. I pulled up too hard and the jug hit the shelf above it in the refrigerator.

Inverse models are internal models that transform desired sensory consequences of motor commands into the motor commands designed to produce those consequences. Inverse models allow a control system to operate open-loop, i.e., without sensory feedback. This type of control can perform reasonably well so long as the model is at least roughly accurate and no unexpected disturbances to the controlled variables occur during command execution. In some models, they provide an initial action that is then corrected by a feedback controller.
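To make the open-loop point concrete, here is a toy sketch (my own assumptions throughout) of an inverse dynamic model driving a point mass without any feedback. When the model’s assumed mass matches the plant, the open-loop commands produce the intended motion; when the model is wrong, the error simply goes uncorrected.

```python
DT = 0.01        # time step in seconds (assumed)
TRUE_MASS = 1.0  # the actual plant's mass, in kg (assumed)

def inverse_model(desired_acc, assumed_mass):
    """Inverse dynamics: convert a desired acceleration into a force command."""
    return assumed_mass * desired_acc

def simulate(assumed_mass, steps=100, desired_acc=2.0):
    pos = vel = 0.0
    for _ in range(steps):
        force = inverse_model(desired_acc, assumed_mass)  # no sensory feedback used
        acc = force / TRUE_MASS   # what the plant actually does with that force
        vel += acc * DT
        pos += vel * DT
    return pos

accurate = simulate(assumed_mass=1.0)  # model matches the plant
wrong    = simulate(assumed_mass=0.5)  # model underestimates the mass
print(round(accurate, 3), round(wrong, 3))  # the second reaches only half the distance
```

This is the milk-jug situation from a few paragraphs back: the commands were computed from an inverse model calibrated for a full jug, and with no time for feedback correction the jug hit the shelf.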

Inverse kinematics determine the changes in joint parameters (joint angles, velocities, etc.) that will bring about a given motion from the current position to an end position. Inverse dynamics determine the forces that will need to be applied by the muscles in order to produce those changes. In robotics the kinematic and dynamic equations of the model may be established by the designer of the robot or there may be algorithms that allow the robot’s controller to learn from its own experience with control. Models of biological motor control may also incorporate learning algorithms for this purpose.
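For a flavor of what an inverse kinematic computation involves, here is the standard closed-form solution for an idealized two-link planar arm (shoulder and elbow only, no wrist); the link lengths and target position are my own assumptions for illustration.

```python
import math

L1, L2 = 0.3, 0.25  # upper-arm and forearm lengths in meters (assumed)

def inverse_kinematics(x, y):
    """Return (shoulder, elbow) joint angles in radians (one of the two solutions)."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle from the target distance.
    cos_elbow = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    shoulder = math.atan2(y, x) - math.atan2(L2 * math.sin(elbow),
                                             L1 + L2 * math.cos(elbow))
    return shoulder, elbow

def forward_kinematics(shoulder, elbow):
    """Hand position implied by the joint angles (used here as a check)."""
    x = L1 * math.cos(shoulder) + L2 * math.cos(shoulder + elbow)
    y = L1 * math.sin(shoulder) + L2 * math.sin(shoulder + elbow)
    return x, y

# Reach toward a cup-like target to the front-right of the shoulder.
target = (0.35, 0.20)
angles = inverse_kinematics(*target)
reached = forward_kinematics(*angles)
print(all(abs(a - b) < 1e-9 for a, b in zip(target, reached)))  # True
```

Even this stripped-down arm has two solutions (elbow-up and elbow-down) for most targets; a real arm, with its extra joints, has infinitely many, which is exactly why motor planning must select among them.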

Getting back to the coffee cup example, you’ve decided to have a sip of coffee, developed a motor plan that will get your hand from its current location to the cup, consulted an internal model to determine the forward kinematics and dynamics that will produce the required motion, taking account of the geometry and physical characteristics of your arm such as mass and rotational inertia, then, using internal inverse models, computed the inverse kinematics and dynamics to find the motor commands that will accomplish those movements, and finally, executed the motor plan. Whew!

Yet another method that may be incorporated within a computational motor-control system is state estimation. A state estimator or “observer” monitors both the motor commands and the sensory feedback to provide a better estimate of the state of the system at any given moment than sensory feedback alone may provide. Such estimators are robust against problems such as sensor failure (they can substitute estimated sensor values derived from the internal models) or delays in sensory feedback.
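A minimal sketch of such an observer (my own toy example, with a deterministic stand-in for sensor noise; every numeric value is assumed): each step it predicts the new state from the motor command via a forward model, corrects the prediction by a fraction of the sensor error when a reading is available, and simply coasts on the model when the sensor is silent.

```python
DT, MASS = 0.01, 1.0  # time step and plant mass (assumed)

def estimate_step(est_pos, est_vel, force, sensed_pos, gain=0.2):
    # 1. Predict from the efference copy, using the forward model.
    est_vel += (force / MASS) * DT
    est_pos += est_vel * DT
    # 2. Correct by a fraction of the prediction error, if a reading arrived.
    if sensed_pos is not None:
        est_pos += gain * (sensed_pos - est_pos)
    return est_pos, est_vel

true_pos = true_vel = 0.0
est_pos = est_vel = 0.0
for step in range(200):
    force = 2.0
    true_vel += (force / MASS) * DT            # the actual plant
    true_pos += true_vel * DT
    noise = 0.02 if step % 2 == 0 else -0.02   # stand-in for sensor noise
    sensed = None if step % 3 else true_pos + noise  # a reading only every 3rd step
    est_pos, est_vel = estimate_step(est_pos, est_vel, force, sensed)
print(abs(est_pos - true_pos) < 0.05)  # True: the estimate tracks the true state
```

Even with two out of every three sensor readings missing, the estimate stays close to the true state, because the forward-model prediction fills the gaps.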

In this brief overview I’ve only been able to convey a small part of the methods and their applications in computational motor control. The diagram illustrates schematically how these different elements may interact; it is Figure 1 from an excellent review of computational motor control by Michael Jordan and Daniel Wolpert (1999). The paper also presents a number of simpler system diagrams illustrating various kinds of control systems that use selected components, such as a system employing both feedforward and feedback controllers and feedback error-learning.

As you can see, the computational approach to motor control has evolved to include a diverse set of processes assumed to underlie motor control. On the one hand, critics have argued that many of these processes require the nervous system to perform complex mathematical operations, such as those involved in inverse kinematic and dynamic computations, and that such operations are unlikely to be carried out with the necessary speed and accuracy by the nervous system, unlike the robotic systems for which these techniques were originally developed. On the other hand, the developers of these computational systems have applied their methods toward solving a number of difficulties in motor control that other approaches are just beginning to address, if they address them at all.

Perhaps the most successful of these other approaches that do not rely on such complex computational processes is the equilibrium point, or “EP” hypothesis developed by Russian physiologist Anatol Feldman and his colleagues. Up next: The EP hypothesis.