Notes on arm model

[From Bill Powers (920330.0800)]

Greg Williams, Joe Lubin --

Some solutions to the trajectory problem came together this morning. The
model needs at least one more level, the transition level, which I now
am convinced could also be called the "path" level. I've been rejecting
the idea of path planning because those who use it in the literature
have coupled it to the use of inverse kinematic calculations to produce
torques as if the situation were open loop. But I now see that in order
to create the kinds of tangential velocity profiles shown in the Atkeson
and Hollerbach article, it's necessary to generate movements along a
path in perceptual space (inside the model) with that velocity profile.
Otherwise the distortions created in going from external Cartesian space
to joint-angle space will (and do) create very strange trajectories of
the fingertip.

The spinal systems in the model see to it that the actual joint angles
follow reference signals for muscle stretch very closely. The actual
kinematic properties of the arm are wiped out for all motions that take
longer than about 0.15 sec to complete. So this level, which I suppose
encompasses intensities (force) and sensations (stretch), is working
satisfactorily.
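
To pin down what I mean by that lowest pair of levels, here is a
minimal sketch of a single stretch-control loop, in Python purely for
illustration; the integration rate and step size are made-up numbers,
not the values in the model, and the "muscle" here responds instantly:

    # One spinal-level loop, stripped to its bare bones: the output
    # function integrates the error between the stretch reference and the
    # sensed stretch, and in this toy version the sensed stretch simply
    # equals the output.  All constants are illustrative.
    dt = 0.001               # integration step, seconds
    gain = 40.0              # integration rate of the output function

    reference = 1.0          # stretch reference from the level above
    stretch = 0.0            # sensed muscle stretch
    for step in range(200):              # 0.2 simulated seconds
        error = reference - stretch
        stretch += gain * error * dt     # output acts on the "muscle"
    # stretch ends up within a few hundredths of a percent of the
    # reference in 0.2 s; any reference that changes more slowly than
    # this is tracked essentially without error, which is the sense in
    # which the arm's own kinematics get wiped out.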

The next level should control in configuration space. We could have a
visual and a kinesthetic configuration space, but I'm using only a
visual space. Including configuration space based on actual sensed joint
angles and combining it with visual space is a project for the future.

In configuration space we have r, theta, and phi coordinates for the
target and fingertip, the coordinates being angles and depth as reported
by the visual perceptual system at this level.
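
For definiteness, here is roughly the kind of computation I mean, in
Python just as a sketch; the axis conventions and the eye position at
the origin are my choices for the example, not necessarily what's in
the model:

    import math

    def visual_config(point, eye=(0.0, 0.0, 0.0)):
        """Report a point as (r, theta, phi) relative to the eyes:
        r = depth (distance from the eyes), theta = horizontal visual
        angle, phi = vertical visual angle.  Axis conventions: x to the
        right, y straight ahead, z up (illustrative choices)."""
        dx, dy, dz = (p - e for p, e in zip(point, eye))
        r = math.sqrt(dx*dx + dy*dy + dz*dz)
        theta = math.atan2(dx, dy)                  # left-right angle
        phi = math.asin(dz / r) if r > 0 else 0.0   # up-down angle
        return r, theta, phi

    # The same function applied to the target and to the fingertip gives
    # the two sets of configuration-level perceptual signals.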

By adding head angles (and later eye angles) into this information the
visual configuration space can be defined relative to the body; I
haven't done that yet. Kinesthetic configuration information, when
that's added, can also be computed relative to the body, and then the
perceptual functions in visual and kinesthetic modalities can be
modified so they yield a common body-centered configuration space. All
these are projects for the future.

The problem in controlling the path of the fingertip during rapid
movements is that when the fingertip moves in a straight line in
objective space, it doesn't move in a straight line in visual or
kinesthetic perceptual space -- and vice versa. If the fingertip is to the right
of the eyes and the target jumps horizontally to the left of them, a
straight-line movement to the target would bring the fingertip closer to
the eyes at first, then farther away again. As the fingertip gets closer
to the eyes, the control systems push it farther away, so the path bows
outward. But then, as the fingertip keeps moving to the left, it now is
too far away relative to the target distance from the eyes, and the path
bows inward again. The net result is that the fingertip doesn't
naturally want to move in straight lines.

At first I thought this was a kinematic problem. Then I introduced the
switch that allows the dynamics to be turned on and off, and found the
same problem with dynamics off. It's simply a problem with the geometry
of vision, coupled with the geometry of arm movement.
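
That geometry is easy to check with a few lines of arithmetic. This
Python fragment uses made-up positions, but the dip in visual depth
along a straight objective-space path is plain:

    import math

    # Sample a straight objective-space path from a point to the right of
    # the eyes to a point at the same depth on the left, and watch the
    # visual depth r dip in the middle.  Coordinates are invented.
    eye   = (0.0, 0.0, 0.0)
    start = ( 0.3, 0.5, 0.0)    # fingertip to the right of the eyes
    end   = (-0.3, 0.5, 0.0)    # target to the left, same objective depth
    for i in range(11):
        s = i / 10.0
        x, y, z = (a + s * (b - a) for a, b in zip(start, end))
        r = math.sqrt((x - eye[0])**2 + (y - eye[1])**2 + (z - eye[2])**2)
        print("s = %.1f   r = %.3f" % (s, r))
    # r is about 0.583 at both ends but 0.500 at the midpoint, so a system
    # holding perceived depth at its reference pushes the path outward and
    # then lets it come back in -- exactly the bowing described above.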

In order to bypass this problem for the time being, I used a "phantom
target" that moved through intermediate positions from the initial to
final positions. This target moves in Cartesian (objective) space in a
straight line, because the x, y, and z coordinates of the target are set
by a common parameter that goes from 0 to 1, scaled for each coordinate.
If this phantom target moves slowly enough, the fingertip always stays
on the target, and all the nonlinearities are eliminated by the feedback
processes. And naturally, the fingertip moves in a straight line.
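
In code, the phantom target amounts to nothing more than this (the
names are mine, not from the program):

    def phantom_target(start, end, s):
        """Cartesian position of the phantom target for a parameter s
        that runs from 0 to 1.  Each coordinate is set by the same
        parameter, scaled to its own range, so the phantom moves along a
        straight line in objective space."""
        return tuple(a + s * (b - a) for a, b in zip(start, end))

    # Sweep s slowly from 0 to 1 and feed phantom_target(start, end, s)
    # to the fingertip reference inputs; the controlled fingertip then
    # traces the same straight line.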

The Atkeson and Hollerbach article shows that straight-line motion is
not required (at least when subjects aren't told to follow any
particular path). The "default" paths are somewhat curved, depending on
the placement of the end-points. Rough measurements of their plots show
that the radial distance from the shoulder simply changes from the
initial radius to the required final radius in a smooth curve, as does
the shoulder-to-fingertip angle. While the data aren't enough to verify
this, it would seem that the movement of the fingertip in radius and
vertical angle is controlled parametrically -- that is, as if the finger
reference position is being varied simultaneously in these two
dimensions by a change in a common parameter. Velocity profiles indicate
that this supposed parameter traverses its range in a way that
accelerates to the midpoint, then decelerates to the final point. The
authors indicate that this is a "minimum-jerk" movement, one that keeps
the rate of change of acceleration (the jerk) as small as possible, so
that the acceleration itself varies smoothly.
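
For reference, the usual minimum-jerk time course for a parameter that
runs from 0 to 1 is the polynomial below; this is the standard textbook
form, not something I've fitted to their plots:

    def min_jerk(tau):
        """Minimum-jerk position of a 0-to-1 parameter at normalized
        time tau in [0, 1]: zero velocity and acceleration at both ends,
        with a bell-shaped velocity profile peaking at the midpoint."""
        return 10*tau**3 - 15*tau**4 + 6*tau**5

    def min_jerk_velocity(tau):
        """Derivative of min_jerk with respect to tau."""
        return 30*tau**2 - 60*tau**3 + 30*tau**4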

All this is very suggestive. To me, it suggests that we need one more
level, a transition level, to control the parameter that moves the
fingertip reference position from the initial position to the target
position, along a straight line in perceptual space. This will be a
curved line in objective space, at least for some directions of movement
(radial changes will be along straight lines). The transition control
system can easily be set up to produce the same velocity profile for all
movements and speeds of movement; this will naturally come out of the
feedback dynamics. I can see now that we will be able to make a
prediction: the velocity profile, normalized as in the article, will
also be the same for different spatial separations of initial and final
positions, at least for rapid movements.
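
Here is the sort of thing I have in mind for that level, again just a
Python sketch with made-up constants; the point is that the
accelerate-then-decelerate profile falls out of the loop dynamics
rather than being computed anywhere:

    # Transition-level loop sweeping the path parameter s from 0 toward a
    # reference of 1.  The position error drives a rate signal through a
    # leaky integrator, and s integrates that rate.  The gain and slowing
    # constant are illustrative, chosen to give a smooth sweep without
    # overshoot.
    dt = 0.001
    gain = 5.0          # loop gain
    slow = 0.05         # rate-slowing time constant, seconds

    s, rate = 0.0, 0.0
    s_ref = 1.0                       # "be at the far end of the path"
    velocity_profile = []
    for step in range(800):           # 0.8 simulated seconds
        error = s_ref - s
        rate += (gain * error - rate) * dt / slow
        s += rate * dt
        velocity_profile.append(rate)
    # velocity_profile rises and then falls; normalized, its shape is the
    # same for any spatial separation of the endpoints, because changing
    # the separation only rescales the path that s is driven along.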

As the model is set up, a geodesic movement in perceptual space will be
transformed into an eye-centered movement in objective space. According
to the Atkeson and Hollerbach data, the actual movement seems to occur
in a space that compromises between shoulder-centered and eye-centered.
This is very hard to judge, though. In the end I think we will have some
sort of body-centered space with both visual and kinesthetic perceptual
functions being modified to reconcile them with a common space. Then a
geodesic movement will be straight in this common space, but still
probably curved in objective space. True straight-line movement in
objective space will have to wait for the relationship level to be
built, I think. Something will have to determine that a particular
curved path in configuration space has especially nice properties when
you follow it. Or maybe this space eventually adapts to achieve some
minimum-energy-of-action state, and distorts until straight lines are
really straight. All this ought to be great fun to work out, but I hope
my poor muddled brain isn't called on to do it all.
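
Meanwhile, the present eye-centered case is easy to see concretely:
interpolate in a straight line in (r, theta, phi), convert back to
Cartesian coordinates, and the objective-space path comes out curved.
A Python sketch, using the same made-up axis conventions as before:

    import math

    def to_cartesian(r, theta, phi, eye=(0.0, 0.0, 0.0)):
        """Inverse of the (r, theta, phi) report sketched earlier
        (same illustrative axis conventions)."""
        return (eye[0] + r * math.cos(phi) * math.sin(theta),
                eye[1] + r * math.cos(phi) * math.cos(theta),
                eye[2] + r * math.sin(phi))

    # Endpoints at equal visual depth, right and left of the eyes.
    start = (0.583, math.atan2( 0.3, 0.5), 0.0)
    end   = (0.583, math.atan2(-0.3, 0.5), 0.0)
    for i in range(11):
        s = i / 10.0
        r, th, ph = (a + s * (b - a) for a, b in zip(start, end))
        x, y, z = to_cartesian(r, th, ph)
        print("s = %.1f   x = %+.3f   y = %.3f" % (s, x, y))
    # y rises from 0.500 at the ends to 0.583 in the middle: the path
    # that is straight in perceptual space bows away from the eyes in
    # objective space.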

Of course once we can produce geodesic path control, we can produce any
other kind. All that has to be done is lay out a different path in
perceptual space, and let the transition-control system run the
reference signal settings for the fingertip along that path. This is
starting to sound pretty real, isn't it? Now we can have path-planning
that simply amounts to tracing out a path in imagination and then making
the fingertip position reference signals trace it in r, theta, and phi.
Everything will be under tight feedback control all of the time.
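
The machinery for that is small. Here is a Python sketch of what
"tracing out a path" could look like; the data structure and names are
invented for the illustration:

    def path_reference(waypoints, s):
        """Fingertip reference (r, theta, phi) for a parameter s in
        [0, 1], read off a pre-laid-out list of perceptual-space
        waypoints by linear interpolation between neighbors."""
        if s <= 0.0:
            return waypoints[0]
        if s >= 1.0:
            return waypoints[-1]
        pos = s * (len(waypoints) - 1)
        i = int(pos)
        f = pos - i
        a, b = waypoints[i], waypoints[i + 1]
        return tuple(p + f * (q - p) for p, q in zip(a, b))

    # The transition-level loop above runs s from 0 to 1, and the
    # fingertip control systems keep the perceived (r, theta, phi)
    # matching whatever path_reference hands them.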

I think this addition will make the model more acceptable to
conventional modelers, because it does allow for trajectory planning. It
eliminates the need for those awful inverse kinematics computations,
which means that a simulation or hardware model actually has a chance of
working (for indefinite periods of time). I don't think that anyone has
actually tried running a model based on the inverse kinematics approach,
at least not for any extended period of time. Those integrations would
get out of whack pretty fast!

I've had vague ideas that the transition level is involved in path
control, even in BCP (p. 133), but this concept seems closer to
realization now than it's ever been.

Joe Lubin, I hope you continue to be interested and can find more
students willing to extend this model-building. I have such mathematical
limitations that I really can't do all of the things that I can
envision. When I get into coordinate transformations and such I feel
that I'm whipping a reluctant old horse through a thick fog. It's really
getting to be time for some young sharp brains to get into this act.

Greg, how much of this do you think we actually need to get working
before we submit a paper on this model? I'll try to get some kind of
parametric path control into it, but I do think it's time to get into
print. The Science referees on that letter in effect challenged me to
put up or shut up, so I think there's a good chance of publication
there.

Joe, how about some details on what your student has accomplished so
far?

---------------------------------------------------------------
Best

Bill P.