Arm model speed; inverse calculations

[From Bill Powers (930117.1430)]

Some added comments on Ave Andrews' 930116.--:

"Recruitment" of neurons is not a special process unto itself. It
results when a signal carried by many redundant neurons enters a
pool of receiving neurons where the neurons have different
threshold biases. For low-frequency inputs, the receiving neurons
with the lowest thresholds start producing output first, with
higher-threshold neurons starting to respond at greater and greater
signal levels. Characterizing this process as "recruitment" is
strictly a metaphor. I have read somewhere that for stretch
feedback signals, the result of this recruitment effect is to
linearize the overall response of motor neurons to a multiple-
pathway neural signal.
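
To illustrate the linearizing effect, here is a rough numerical
sketch (Python; nothing from the model itself -- the all-or-none
units and the uniform spread of thresholds are idealizations):

    import numpy as np

    # Idealized pool: many receiving neurons share one input signal but
    # have different threshold biases (assumed uniformly spread here).
    rng = np.random.default_rng(0)
    thresholds = rng.uniform(0.0, 1.0, size=1000)

    def pooled_output(signal):
        # each neuron's own response is all-or-none, but the fraction of
        # the pool that has been "recruited" rises smoothly with the signal
        return (signal > thresholds).sum() / thresholds.size

    for s in (0.1, 0.25, 0.5, 0.75, 0.9):
        print(f"signal {s:4.2f} -> pooled output {pooled_output(s):4.2f}")

The pooled output tracks the input almost linearly even though each
unit by itself is a crude threshold device.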

RE: speed of the arm model

The speed of the arm model is not limited by the computer speed
(although on a slower 286 computer it is). Computer speed makes no
difference to the results anyway, because one iteration of the computer
program is defined as equal to 0.01 second of real time, and all
statements about speed of response of the model are in terms of
that definition -- whether one iteration takes 0.01 sec or 0.01
day. The inertial properties of the arm are computed with dt =
1/4 of the main program's dt -- but are computed four times as
often. So the dynamics of the arm are properly represented on the
time scale of 1 iteration = 0.01 sec no matter how long an
iteration of the program actually takes.
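
In outline, the timing scheme looks something like this (a sketch
only; the placeholder one-joint dynamics are not the Little Man's
actual equations):

    DT = 0.01        # one program iteration is defined as 0.01 s of model time
    SUBSTEPS = 4     # arm dynamics integrated at DT/4, four times per iteration

    def step_dynamics(angle, velocity, torque, dt):
        # placeholder Euler update for a single unit-inertia joint
        velocity += torque * dt
        angle += velocity * dt
        return angle, velocity

    def run_iteration(angle, velocity, torque):
        # one pass always represents 0.01 s of model time, however long
        # the computer actually takes to execute it
        for _ in range(SUBSTEPS):
            angle, velocity = step_dynamics(angle, velocity, torque, DT / SUBSTEPS)
        return angle, velocity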

In fact about 90% of the computing time in the Little Man model
is devoted to plotting the display. Without the need to display
the results, the program could run faster than real time even on
an XT. For all fast computers, the iterations are synchronized to
the vertical frame rate of the display screen; the program
actually waits for the next frame before starting the next
iteration. This is to keep bands of light and dark from moving
across the display.
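
The waiting itself is nothing elaborate; roughly like this (using a
timer here as a stand-in for the display's actual retrace signal,
and an assumed 60 Hz refresh rate):

    import time

    FRAME_PERIOD = 1.0 / 60.0   # assumed display refresh rate

    def wait_for_next_frame(last_frame):
        # the next iteration does not start until the next frame boundary,
        # so the redraw never lands mid-refresh and no bands crawl across
        # the screen
        next_frame = last_frame + FRAME_PERIOD
        while time.perf_counter() < next_frame:
            pass
        return next_frame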

···

--------------------------------------------------------------

Chris Malcolm (930117) --

Are you saying that the designers of these robots are
computing inverse kinematics? They're not using direct
feedback control of joint angles?

They are doing both. Each motor is PID speed and position
controlled by direct feedback of joint angle (actually motor
rotation). If well done, this by itself is sufficient to get the
arm to move on a smooth and reversible trajectory from A to B,
with all motors starting and finishing together, along a curve
close to the minimum energy curve, known as joint-interpolated
motion.
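
For concreteness, that arrangement might be sketched like this
(illustrative Python; the gains and the cubic time-scaling are made
up, not any particular controller's values):

    def cubic_blend(t, T):
        # smooth 0-to-1 profile with zero velocity at both ends, shared by
        # all joints so every motor starts and finishes together
        s = min(max(t / T, 0.0), 1.0)
        return 3 * s**2 - 2 * s**3

    def joint_targets(start_angles, goal_angles, t, T):
        b = cubic_blend(t, T)
        return [a + b * (g - a) for a, g in zip(start_angles, goal_angles)]

    class JointPID:
        # per-motor position servo; the gains are arbitrary numbers
        def __init__(self, kp=50.0, ki=5.0, kd=8.0):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def command(self, target, measured, dt):
            error = target - measured
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative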

Are the speed profiles calculated in advance? The Little Man
doesn't demand any particular speed profile; it takes whatever
comes out of the control process. Basically, velocity at a joint
is controlled, but the reference signal for speed is the position
error signal. The velocity error sets the reference signal for
torque (which is proportional to angular acceleration). The basic
output is a torque at a joint.
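
In skeleton form, for one joint (the gains are arbitrary
illustrations, not the model's values):

    def joint_cascade(position_ref, position, velocity, kp=10.0, kv=2.0):
        # position error sets the speed reference; velocity error sets the torque
        velocity_ref = kp * (position_ref - position)
        torque = kv * (velocity_ref - velocity)
        return torque

    def step(position_ref, position, velocity, dt=0.01):
        # unit-inertia joint: torque is proportional to angular acceleration
        torque = joint_cascade(position_ref, position, velocity)
        velocity += torque * dt
        position += velocity * dt
        return position, velocity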

By adjusting time constants in the various loops, it's possible
to make compound movements cause the fingertip to approach the
final state in a curve that moves more or less directly toward
the target position, so all torques come to their final values at
once. The fingertip trajectories with a square-wave change of
position reference signals are quite like the trajectories in a
real human arm movement at maximum speed -- neither the human nor
the modeled movements are straight lines, but they're reasonably
close.

If the upper arm vertical position reference signal varies as 1/2
the elbow-angle reference signal, the fingertip moves in an
exactly straight line away from the shoulder as the position
reference signal changes (provided the two arm segments are equal
in length). No inverse kinematic calculations are needed. Higher
systems can then control the fingertip position in a shoulder-
centered spherical coordinate system.
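
This is easy to check numerically with planar forward kinematics
(a sketch; the segment length and base angles are arbitrary):

    import math

    L = 0.3   # assumed length of each (equal) arm segment, in metres

    def fingertip(shoulder_angle, elbow_angle):
        # planar two-segment arm; elbow_angle is the angle between the
        # segments (pi = fully extended)
        forearm_dir = shoulder_angle + math.pi - elbow_angle
        x = L * math.cos(shoulder_angle) + L * math.cos(forearm_dir)
        y = L * math.sin(shoulder_angle) + L * math.sin(forearm_dir)
        return x, y

    base_shoulder, base_elbow = 0.5, 1.2
    for d in (0.0, 0.3, 0.6, 0.9):
        # the shoulder reference changes by half the elbow-angle change
        x, y = fingertip(base_shoulder + d / 2, base_elbow + d)
        print(f"bearing {math.degrees(math.atan2(y, x)):6.2f} deg,"
              f" distance {math.hypot(x, y):5.3f} m")

The bearing from the shoulder stays fixed while the distance grows:
a straight radial line, with no inverse calculation.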

In general, I don't see a need for the end of the arm to move in
exactly straight lines. Of course if you wanted to do that you
could compute how the position reference signals should change,
but I don't see the advantage.

3 degrees is trivially easy to do almost any way you like. It's
the last two degrees of freedom (5 & 6) that create the
computational difficulties.

Actually, I think that the dynamic interactions among the 3
degrees of freedom in the Little Man model are far from trivially
easy to overcome. In a fast motion from a fingertip position to
the left and far from the body to a position to the right and
close to the body (for example), there are strong Coriolis forces
operating both ways, and for vertical movements there are strong
interactions between damping forces in the vertical plane at the
shoulder and at the elbow. With the arm in various
configurations, the moments of inertia change radically. I was
dumbstruck at how good nature's solution to this problem was: the
feedback arrangements in the combined stretch and tendon reflexes
allow stable, independent, fast control about each degree of
freedom with a bare minimum of residual interaction.

But if you want the end of the robot to move in a straight line
you need to solve the inverse kinematics. Because it is
hard/expensive to do this within the usual 10/20 msec motor
control loop cycle times, the usual procedure is to approximate
the straight line out of lots of little joint-interpolated
curves, by calculating enough intermediate points along the
straight line, and feeding the next set of target joint angles
into the motor control loops before the motors have got to the
previous target.
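
In sketch form, that procedure might look like this for a planar
two-joint arm (a simplification; the segment lengths are arbitrary
and the closed-form inverse is the textbook elbow-down solution):

    import math

    L1 = L2 = 0.3   # assumed segment lengths

    def inverse_kinematics(x, y):
        # closed-form elbow-down solution for the planar two-link arm
        cos_elbow = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
        elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
        shoulder = math.atan2(y, x) - math.atan2(L2 * math.sin(elbow),
                                                 L1 + L2 * math.cos(elbow))
        return shoulder, elbow

    def straight_line_waypoints(start, goal, n):
        # intermediate Cartesian points along the straight line, each turned
        # into a joint-angle target; the next target would be handed to the
        # joint servos before the motors have settled on the previous one
        return [inverse_kinematics(start[0] + (goal[0] - start[0]) * i / n,
                                   start[1] + (goal[1] - start[1]) * i / n)
                for i in range(1, n + 1)]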

I haven't done this yet, but I'm pretty sure you could achieve
the same result by suitable perceptual calculations. Basically
it's a transformation from a spherical to a Cartesian coordinate
system (the basic joint-angle control systems can easily create
the spherical coordinate system centered on the shoulder).
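
The transformation itself is only a few lines either way (a sketch,
with the shoulder taken as the origin):

    import math

    def spherical_to_cartesian(r, azimuth, elevation):
        # shoulder-centred (distance, azimuth, elevation) to Cartesian
        x = r * math.cos(elevation) * math.cos(azimuth)
        y = r * math.cos(elevation) * math.sin(azimuth)
        z = r * math.sin(elevation)
        return x, y, z

    def cartesian_to_spherical(x, y, z):
        r = math.sqrt(x * x + y * y + z * z)
        return r, math.atan2(y, x), (math.asin(z / r) if r else 0.0)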

As to "expensive," that's a different problem. I think that the
basic problem is the lack of good visual perception in three
dimensions. Without the visual system you DO have to calculate
trajectories, because you can't control convenient relationships
between the fingertip and other objects. You can't perform a
straight-line motion by maintaining a constant distance between
fingertip and some straight edge, for example. On the other hand,
I think that with accurate angle transducers on the joints, you
can calculate where the finger is in space and control that
perception with fewer calculations than you would need to
produce, blindly, a known position in space. There would be no
inverse calculations to do: just a few lines of trigonometric
calculation.
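
Something like the following would do for a simplified
three-degree-of-freedom geometry (shoulder yaw and pitch plus elbow
flexion in the same vertical plane; the segment lengths are made up):

    import math

    UPPER, FORE = 0.30, 0.28   # assumed segment lengths, in metres

    def fingertip_from_joint_angles(yaw, pitch, elbow):
        # forward, not inverse, kinematics: where the fingertip is in
        # shoulder-centred Cartesian space, straight from the angle
        # transducers; elbow = 0 means the arm is fully extended
        ex = UPPER * math.cos(pitch) * math.cos(yaw)
        ey = UPPER * math.cos(pitch) * math.sin(yaw)
        ez = UPPER * math.sin(pitch)
        fore_pitch = pitch - elbow     # forearm bends in the same vertical plane
        tx = ex + FORE * math.cos(fore_pitch) * math.cos(yaw)
        ty = ey + FORE * math.cos(fore_pitch) * math.sin(yaw)
        tz = ez + FORE * math.sin(fore_pitch)
        return tx, ty, tz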

Concerning the addition of degrees of freedom: this should not
cause computational difficulties unless you are trying to find
unambiguous solutions. If you're controlling perceptions,
basically you don't care which solution of a multiple-valued
equation yields the desired perception, as long as one of them
does. If you only want the hand to be palm-down and horizontal at
a fixed distance from the shoulder, you don't care where the
elbow is or how the wrist is cocked. If you do care, you add a
control system for each loose variable you care about.
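
For a planar two-link arm the point can be made concrete (arbitrary
lengths and target): the inverse relation has an elbow-up and an
elbow-down solution, and a system controlling only the fingertip
perception is satisfied by either one.

    import math

    L1 = L2 = 0.3

    def both_solutions(x, y):
        c = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
        c = max(-1.0, min(1.0, c))
        configs = []
        for elbow in (math.acos(c), -math.acos(c)):   # elbow up or elbow down
            shoulder = math.atan2(y, x) - math.atan2(L2 * math.sin(elbow),
                                                     L1 + L2 * math.cos(elbow))
            configs.append((shoulder, elbow))
        return configs

    def fingertip(shoulder, elbow):
        return (L1 * math.cos(shoulder) + L2 * math.cos(shoulder + elbow),
                L1 * math.sin(shoulder) + L2 * math.sin(shoulder + elbow))

    for s, e in both_solutions(0.35, 0.25):
        print([round(v, 3) for v in fingertip(s, e)])  # same fingertip either way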

Your remarks about BCP came as a very pleasant surprise. Thanks.
---------------------------------------------------------------
Best to all,

Bill P.