Marken's conflict experiment

[From Bill Powers (930119.1615)]

Rick Marken (930118.2000)

The conflict experiment is great. It may lead to some unexpected
territory:

The coefficients in

x = a1Hx + b1Hy + dx
y = a2Hx + b2Hy + dy

look like a rotation matrix. If

   a1 = cos(t)
   b1 = -sin(t)
   a2 = sin(t)
   b2 = cos(t)

the determinant will be 1. But this is just a rotation of the
cursor x-y axes relative to the handle x-y axes. If you alter the
coefficients arbitrarily, some of the alteration will amount to a
rotation of the cursor coordinates relative to the handle
coordinates, and some will amount to changing the angle between
the transformed axes. If you use the rotation matrix, the angle
between the axes will remain 90 degrees.
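The determinant claim is easy to check numerically. A minimal sketch (the function name and test values are mine, not from the original exchange):

```python
import math

# Coefficients a1, b1, a2, b2 for a pure rotation by angle t (radians),
# as given above.
def rotation_coefficients(t):
    return math.cos(t), -math.sin(t), math.sin(t), math.cos(t)

t = math.radians(30)
a1, b1, a2, b2 = rotation_coefficients(t)

# The determinant of a pure rotation is always 1 ...
det = a1 * b2 - b1 * a2

# ... and the images of the handle axes stay orthogonal: the columns
# (a1, a2) and (b1, b2) have zero dot product.
dot = a1 * b1 + a2 * b2

# Cursor position from handle position (Hx, Hy) plus disturbances (dx, dy),
# as in the equations above:
Hx, Hy, dx, dy = 0.5, -0.2, 0.1, 0.0
x = a1 * Hx + b1 * Hy + dx
y = a2 * Hx + b2 * Hy + dy
print(det, dot)  # 1.0 and 0.0 (up to floating-point rounding)
```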

The model always perceives in the same x-y coordinates. However,
the human being may be able to rotate the perceptual space. This
would cause a difference between the model's behavior and the
person's; the loop gain for the model would fall off on both axes
while that for the person would not, when a significant rotation
angle occurred.

A quick way to check this would be to compute the four
coefficients by the rotation matrix above, using the angle t as
the parameter. This might create difficulties, but it will not
create conflict because the axes will remain orthogonal. If the
human being shows the same parameters for the rotated
transformation (after practice), it's pretty clear that there is
a rotation of axes taking place inside the person somewhere in
the loop. The model, which does not rotate axes, would show a
falloff in precision with an increase in angle t, while the
person would not.

This leads in two directions. First, you might try leaving one
axis alone and just varying the rotation of the other axis. You
could say a1 = 1, b1 = 0; a2 = 1*tan(theta), b2 = 1. This would
rotate just one axis, leaving the other the same. Presumably, if
the person is rotating the visual field or the output matrix, the
whole field would be rotated, not just one axis in it. So under
those conditions the person might not do any perceptual rotation
at all. If that's true, the model and the person should show the
same falloff in control as the angle between the axes changes. If
the person does do some rotation, the model will behave worse
than the person but not as much worse as in the pure rotation
where the axes are kept orthogonal.
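The one-axis variant (a1 = 1, b1 = 0, a2 = tan(theta), b2 = 1) can be checked the same way; theta = 20 degrees is an arbitrary choice for illustration:

```python
import math

# The single-axis rotation is a shear: the determinant is still 1, but the
# angle between the transformed axes shrinks from 90 to 90 - theta degrees.
theta = math.radians(20)
a1, b1, a2, b2 = 1.0, 0.0, math.tan(theta), 1.0

det = a1 * b2 - b1 * a2

# Angle between the images of the handle x and y axes:
ax, ay = (a1, a2), (b1, b2)
dot = ax[0] * ay[0] + ax[1] * ay[1]
angle = math.degrees(math.acos(dot / (math.hypot(*ax) * math.hypot(*ay))))
print(det, angle)  # 1.0 and 70.0 (up to rounding): axes no longer orthogonal
```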

Second, you might try adding a control system that acts by
altering the coefficients in a perceptual rotation matrix. I
don't know how this rotation would be related to error, but
perhaps it could be based on the difference or ratio of errors in
the two control systems. This is definitely a step away from our
simple canonical model. A way to proceed would be to set up the
two-system model as it is, and look at how it behaves as the axes
are rotated and kept orthogonal. The relationship between the
error signals and the perceptual signals might give a hint about
the way to organize another control system that would rotate the
axes.

I have an experiment that could be used to test whatever model
you come up with. You may have seen it; I don't remember. In this
experiment, a circle is drawn on the screen, and a dot moves at a
regular speed around the circle. A disturbance and the control
handle act on the moving spot in the RADIAL direction. This is a
bit of a mind-twister, because as the spot goes around the
circle, the relationship of the radial spot movements to the
handle movements is continually rotating. It sounds perfectly
simple -- if the spot is too far out, move the handle left, and
if it's too far in, move the handle right.

Actually doing it, however, makes you want to tilt your head 360
degrees -- "in" and "out" don't seem to be the natural perceptual
directions.

If there is a perceptual rotation control system, it would be
needed in a model of this experiment, I think. If a pure radius-
controller works, however, then the rotation model isn't needed.
All we can do is try it and see.
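The pure radius-controller can be sketched as a simple simulation (gains, time step, and disturbance are invented for illustration). The point worth noticing in the sketch is that the model never needs to know where the dot is on the circle; it only perceives and controls the radial deviation:

```python
import math, random

# Pure radius-controller model of the circular tracking task. The radial
# deviation r is the sum of a slow random disturbance and the handle's
# radial effect; the controller integrates the error to drive r to zero.
dt, gain, k = 0.01, 50.0, 1.0
handle, d = 0.0, 0.0
errors, disturbances = [], []
random.seed(1)
for step in range(5000):
    d += dt * (random.uniform(-1, 1) - 0.2 * d)  # smoothed random disturbance
    r = d + k * handle                           # radial deviation from circle
    error = 0.0 - r                              # reference: stay on the circle
    handle += dt * gain * error                  # integrating output function
    errors.append(r)
    disturbances.append(d)

rms = lambda xs: math.sqrt(sum(x * x for x in xs) / len(xs))
print(rms(errors), rms(disturbances))  # radial error well below disturbance
```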

I think it's important to compute the model's prediction error
for both axes.

-------------------------------------------------------------
Best,

Bill P.

[Martin Taylor 930120 15:45]
(Bill Powers 930119.1615 to Rick Marken 930118.2000)

>The model always perceives in the same x-y coordinates. However,
>the human being may be able to rotate the perceptual space. This
>would cause a difference between the model's behavior and the
>person's; the loop gain for the model would fall off on both axes
>while that for the person would not, when a significant rotation
>angle occurred.

I think it more likely that the human is not working with orthogonal
control systems at all. It seems more likely that there are many, rather
than two, directions of control, and that the observed actions of handle
(mouse) movement are the result of these many outputs in the different
directions. There is neurological as well as psychophysical support for
this kind of notion. So, to enhance the model, I would add more ECSs,
which look at error in different directions, rather than impute to the
human an ability to rotate the space.

Looking at my own performance on Rick's stack, I find that as the conflict
grows (the tracking ellipse gets narrower), the model produces a tracking
ellipse that is more or less tangent to my own, but is more upright in the
direction of its major axis. I suspect this might not happen if the model
were not so strongly oriented to the x and y directions.

If one does make a model with more ECSs, there are more degrees of freedom
with which to fit the data, which makes it more difficult to determine whether
the new model is an improvement over the old. So I would propose that the
degrees of freedom should be reduced by taking into account psychophysical
data on human ability to perceive things at different orientations, and
give the x-y ECSs about twice the gain of ECSs at 30 degrees away from the
major axes. If one did this, the proposed model system would have all its
gains set by a single number. If Rick would send his updated stack, I might
try to set up such a model, with four ECSs, ones at zero and 90 degrees
having equal gain, twice that of ones at 30 and 60 degrees. Or Rick
could try it himself.
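Martin's proposal can be sketched as a simulation (a modern illustration; the base gain, time step, and starting error are all invented). Four ECSs at 0, 30, 60, and 90 degrees each perceive the projection of the cursor error on their own axis and push back along that axis; the on-axis pair has twice the gain of the oblique pair, so a single number sets all the gains:

```python
import math

# Four elementary control systems (ECSs), one per direction. Gains at 0 and
# 90 degrees are twice those at 30 and 60, scaled by one free number.
base_gain = 4.0
ecss = [(0, 2.0), (30, 1.0), (60, 1.0), (90, 2.0)]
dirs = [((math.cos(math.radians(a)), math.sin(math.radians(a))), g * base_gain)
        for a, g in ecss]

dt = 0.01
cx, cy = 1.0, -0.7          # initial cursor error (target at the origin)
for _ in range(2000):
    vx = vy = 0.0
    for (ux, uy), g in dirs:
        p = cx * ux + cy * uy    # perceived error along this ECS's direction
        vx -= dt * g * p * ux    # each ECS pushes back along its own axis
        vy -= dt * g * p * uy
    cx += vx
    cy += vy

print(cx, cy)  # the summed outputs drive the cursor to the target
```

The summed outputs amount to a positive-definite matrix acting on the error, so the four systems cooperate rather than conflict.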

A side note: my first summer-student job in psychology was on the effect
of orientation on the ability to detect fine lines. We found that lines
near vertical and horizontal were much more readily detected than were
off-axis lines (near 30 degrees from horizontal was worst), but the axes
of best performance were not exactly 90 degrees apart for any of our
subjects; about 95 degrees was more common. As part of the study, we wanted
to find out whether the reference was retinal or gravitational, so we tilted
the subjects' heads 45 degrees. What happened was neither. The angle
between the peaks of best performance widened, though I've forgotten whether
it was the horizontal that stayed nearer the retinal reference and the
vertical nearer the gravitational, or the reverse. The effect was not
very big, but it was big enough to be noticed and puzzled over.

Martin

[From Rick Marken (930120.1800)]

I want to give a hearty thank you to both Bill Powers
and Martin Taylor for their suggestions on the conflict
experiment. I really appreciate it. This "networked"
research could be a very powerful use of CSG-L.

I'm going to have to go off and ponder these suggestions;
but based on a cursory perusal (and an even more cursory
understanding) I think I will take both of their advice
(advices?).

Martin -- Thanks for the personal note. A copy of the latest
stack will be in the e-mail tonight. Not terribly different
(or better) than the old one -- but you can use a random
disturbance and measure the model fit.

Best

Rick

[From John Gardner (930121.0000)]

Bill Powers (930120.1530)--

Whew! One of the reasons I've resisted speaking up before now is
that I can't imagine how you folks find the time to keep up with
this correspondence. I *knew* that I'd be sucked in eventually.
Oh well, here we go....

>There's one thought to hold firmly in mind: it is highly unlikely
>that the lower reaches of the brain and the cerebellum are
>computing the quantitative inverse of a 6x6 (or when you come
>down to it, a 26x26) matrix in order to move the limbs, walk,
>balance, and so on, all at the same time and in real time. SO
>THERE HAS TO BE A MUCH SIMPLER WAY.

I couldn't agree with you more. I only went into the 6x6 matrix
description because it typifies the industrial robot control problem and
there appeared to be a great deal of confusion about it. This very
question has been batted around the robotics circles for some time and
there have been some interesting concepts. I'd like to mention a few.

In 1975, Jim Albus (of NBS) published a series of papers (American
Society of Mechanical Engineers Transactions on Dynamic Systems,
Measurement and Control -- I don't have the complete reference in
front of me) in which he introduced the Cerebellar Model Articulation
Controller (CMAC) as a model and method of articulated arm control based
on some then-recently published ideas of human and animal motor control.
This concept (which is still the topic of research today) was about 15
years ahead of its time in that it is essentially a 'connectionist' or
Artificial Neural Network (ANN) approach to robot control. Without
going into the gory details, Albus suggests an approach in which the arm
states (angles and velocities) can be used as addresses to a large
memory system which contains values of muscle torques/forces. These
tables are built up, either through evolution or learning, and the system
runs as a funny kind of hybrid open/closed loop system. He also
describes methods in which the tables can be built (learning) and there
are also provisions for taking weighted averages of a number of
neighboring memory cells (generalization). I haven't really done the
technique justice here, but I hope I gave you a flavor of it.
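The flavor of the CMAC idea can be conveyed in a toy, one-joint version (purely illustrative; the class, cell counts, and the sine "torque profile" are all my inventions, not from Albus's papers). Quantized states index a table of torques, several overlapping cells are averaged (generalization), and the cells are nudged toward observed values (learning):

```python
import math

class TinyCMAC:
    """Toy table-lookup controller memory in the spirit of CMAC."""

    def __init__(self, n_cells=64, n_active=4):
        self.table = [0.0] * n_cells
        self.n_cells = n_cells
        self.n_active = n_active

    def _cells(self, state):
        # Map a state in [0, 1) to a handful of neighboring cells.
        base = int(state * self.n_cells)
        return [(base + i) % self.n_cells for i in range(self.n_active)]

    def predict(self, state):
        # Generalization: average the active cells.
        cells = self._cells(state)
        return sum(self.table[c] for c in cells) / len(cells)

    def learn(self, state, target, rate=0.5):
        # Learning: spread the error over the active cells.
        err = target - self.predict(state)
        for c in self._cells(state):
            self.table[c] += rate * err / self.n_active

cmac = TinyCMAC()
for _ in range(200):                       # train on a made-up torque profile
    for s in [i / 32 for i in range(32)]:
        cmac.learn(s, math.sin(2 * math.pi * s))

print(cmac.predict(0.25))  # roughly sin(pi/2) = 1
```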

More recently, the robotics crowd has gone positively crazy over ANN's,
especially feedforward models trained using backpropagation. There is
nothing very mysterious here. It has been proven that ANN's are
'universal approximators' -- that is, they are capable of learning any
continuous nonlinear function to an arbitrary degree of accuracy given
'enough' neurons and at least one hidden layer. The excitement for me
lies in the possibility of very fast robot controllers (nanosecond
range) once neural chips come out of the prototyping stage. I've
published a handful of papers in this area and have recently graduated a
Ph.D. student whose work entailed the on-line training of neural
networks to control a 2 DOF robot in the laboratory. There is of course
a large body of work in ANN's dealing with autonomy, sensory processing,
etc., but I'm focusing on their advantages for control, which brings us
back to Bill P's comment about the brain. I would argue that the lower
reaches of the brain do, indeed, compute these transformations; they just
do it in a massively parallel manner, similar to the way ANN's do.
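A toy one-hidden-layer network trained by backpropagation illustrates the point at miniature scale (pure Python; the architecture, learning rate, and the sine target function are invented for the sketch):

```python
import math, random

# One input, H tanh hidden units, one linear output, trained by plain
# per-sample gradient descent on a smooth 1-D function.
random.seed(0)
H = 8
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [random.uniform(-1, 1) for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
lr = 0.05

def forward(x):
    h = [math.tanh(w1[i] * x + b1[i]) for i in range(H)]
    return h, sum(w2[i] * h[i] for i in range(H)) + b2

data = [(x / 10.0, math.sin(x / 10.0)) for x in range(-10, 11)]

def epoch_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

loss_before = epoch_loss()
for _ in range(500):
    for x, t in data:
        h, y = forward(x)
        dy = 2 * (y - t)                         # d(loss)/d(output)
        for i in range(H):
            dh = dy * w2[i] * (1 - h[i] ** 2)    # backpropagate through tanh
            w2[i] -= lr * dy * h[i]
            w1[i] -= lr * dh * x
            b1[i] -= lr * dh
        b2 -= lr * dy
loss_after = epoch_loss()
print(loss_before, loss_after)  # the fit improves substantially
```

Each update follows the local gradient of the squared error, which is the "steepest local gradient of the error surface" mentioned below.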

Another researcher with some fascinating (and a bit wacky) ideas is Hans
Moravec at Carnegie-Mellon's Robotics Institute. He wrote a book a few
years ago entitled Mind Children: The Future of Human and Robotic
Intelligence (Again, I don't have it in front of me, but I believe it
was put out by MIT Press). In his book, he discusses the
disappointments of the robotics field. In the 1960's, we were teaching
computers to play chess, but in the 1990's walking is still pretty much
out of our reach, and running, jumping, etc. are out of the question. He
claims that the reason for this is that the so-called 'high-level' brain
functions like reasoning and logic have been highly over-rated. We (as
humans) have only been working on them for a few million years. However,
running, walking, seeing, balancing, etc., have had intense evolutionary
pressure (for survival) for thousands of times longer. We are veritable
Olympians at motor coordination and simpletons at logic (to heavily
paraphrase Dr. Moravec). Anyway, one of the points of his book is that
we need to approach robotics in a "bottom up manner"--by that he means
that we need to set up structures for learning and acquiring information
(like neural nets or CMAC), give the robots sensory systems (vision,
touch, etc.) and let 'em loose so that they can teach themselves how to
walk, etc. I happen to agree with the principle here; the "bottom-up"
approach to intelligent control has a great deal more intuitive appeal
to me than the "top-down" approaches of expert systems and the like.
Again, I'm highly paraphrasing Moravec, and I highly recommend the book;
it's rather short and not written on a very technical plane.

>The solution is to build one control system that controls the
>vertical shoulder angle and a second one that controls the
>EXTERIOR elbow angle. The y dimension of the fingertip position,
>as seen, is altered by varying the shoulder reference signal. The
>z dimension is controlled by sending a reference signal r to the
>elbow angle control system, and a reference signal -r/2 to the
>shoulder vertical angle control system (adding to the other
>reference signal).

This reminds me of performing dynamic system simulation using analog
computers. (Analog computers are simply a bank of amplifiers with
capacitors in the feedback loops; they act as integrators and can be
hooked up with other electronic components to behave in a manner
'analogous' to any linear (and some nonlinear) dynamic systems.) They
were very popular among electrical engineers before digital computers
became cheap, fast and reliable. The point I'm trying to make is that
by manipulating the reference signals in that way, you're performing
the exact same computations (inverse Jacobian); you're just doing them
'in the hardware' instead of in software. By your own arguments, isn't
it even less likely that the 'lower reaches of the brain' are capable
of the kind of high-level reasoning that was required for you to come
up with the solution you outlined above? Or are these relationships
'hard-wired' by evolution?

>I see no reason in principle why this approach can't be extended
>to four or more degrees of freedom.

Except for the fact that our limited brain capacity can't really see the
kind of relationships that you intuitively set out for the problem
above. I'd hate to rely on intuition and insight to work on anything
more complicated. And any systematic approach starts looking like
the Jacobian or kinematic-type solution.

>If we can agree that the brain is NOT computing quantitative inverses
>of 6x6 matrices to better than one percent accuracy, this seems to be
>the only remaining approach. Each degree of freedom is put under direct
>feedback control. Then the control systems are put under the
>control of a higher level of control that senses combinations of
>the variables, removing ambiguities not by literally solving
>simultaneous equations, but in the analog mode, by assuming a
>solution and using it to feed back to alter the variables in the
>right direction to bring about that solution. This process bears
>some resemblance to solving large sets of equations by successive
>substitution, but it does so without the use of any inverse
>calculations. It's more like a method of steep descent.

Sounds to me like you're describing a controller based on neural networks.
As I mentioned above, neural networks compute the required
transformations, etc., in a massively parallel analog mode; they learn by
iteration, and the backpropagation method tends to minimize the rms error
by following the steepest local gradient of the error surface.

>This is essentially how I thought it was done. Tell me something:
>in order to get, say, one percent accuracy in the final position
>in x, y, and z, what kind of computational accuracy must be
>maintained during the matrix calculations, and how accurately do
>parameters like muscle response to signals and arm mass
>distribution and geometry have to be known?

No simple answers here, but I'll give it a shot. There are actually two
different computations going on in the method I described. First, the
joint angles are sensed, and that information, combined with link lengths,
offsets, etc. (robot geometry), is used to find where the end effector is
in x-y-z space -- this is the forward kinematics computation. Then we can
use the inverse Jacobian to find us some joint rates to point us at the
desired location. The overall accuracy is dependent on the first
computation, not the second. (This has to do with the fact that I'm
commanding rates and desiring angles, which theoretically can give
perfect accuracy eventually, but I'm digressing.) We must know the
manipulator geometry VERY well, and perform the computations with
reasonable accuracy; FORTRAN single precision would probably be OK,
given the limitations of our sensors. This, by the way, is the limiting
factor in industrial robot payloads. We assume that the links don't
bend, and it doesn't take a very heavy payload to cause significant link
bending when the robot arm is extended. So the bottom line is that
geometry is important; arm mass and muscle response, while important for
dynamic behavior and contour path following, have essentially no impact
on steady-state accuracy.
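The two computations can be sketched for a two-link planar arm (link lengths, gains, and the target point are made up for illustration): forward kinematics from the sensed joint angles, then the inverse Jacobian turning a commanded tip velocity into joint rates.

```python
import math

# Two-link planar arm: forward kinematics plus inverse-Jacobian rate control.
L1, L2 = 1.0, 0.8

def forward(t1, t2):
    """End-effector (x, y) from joint angles (forward kinematics)."""
    x = L1 * math.cos(t1) + L2 * math.cos(t1 + t2)
    y = L1 * math.sin(t1) + L2 * math.sin(t1 + t2)
    return x, y

def jacobian(t1, t2):
    """2x2 Jacobian d(x, y)/d(t1, t2)."""
    s1, c1 = math.sin(t1), math.cos(t1)
    s12, c12 = math.sin(t1 + t2), math.cos(t1 + t2)
    return [[-L1 * s1 - L2 * s12, -L2 * s12],
            [ L1 * c1 + L2 * c12,  L2 * c12]]

def solve2(J, v):
    """Invert the 2x2 Jacobian and apply it to the tip-velocity command."""
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return ((J[1][1] * v[0] - J[0][1] * v[1]) / det,
            (-J[1][0] * v[0] + J[0][0] * v[1]) / det)

# Command tip rates toward a target and integrate the joint rates.
t1, t2 = 0.3, 1.2
target = (1.2, 0.6)
dt, k = 0.02, 2.0
for _ in range(1000):
    x, y = forward(t1, t2)
    d1, d2 = solve2(jacobian(t1, t2),
                    (k * (target[0] - x), k * (target[1] - y)))
    t1 += dt * d1
    t2 += dt * d2

print(forward(t1, t2))  # converges to the target (1.2, 0.6)
```

Commanding rates while desiring angles is why the scheme can converge on the desired position even with modest numerical precision, as long as the geometry in `forward` matches the real arm.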

>If gravity were being
>switched randomly on and off during this movement at
>unpredictable times, what would happen to the outcome?

Hmm, interesting. Since the system I described has a feedback loop, the
disturbances would eventually be rejected, and the tip would arrive at the
desired location. I think I see what you're driving at, though. The
scheme I described (and similar ones described on the CSGNET earlier)
are what we call 'local control schemes', based on local velocity or
position control loops at each motor. Since we are using geometry to
compute desired set points for those loops, things which cause
disturbing torques (like gravity) are compensated, more or less, by the
local loops. Of course, issues like stability and dynamic response are
very interesting here, but we're getting into it pretty deep. There's
another family of schemes, called inverse-plant approaches, which
actually attempt to solve the differential equations and compute motor
torques directly. These lead to computational nightmares which are
actually much worse than the local schemes.
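The "more or less" can be made concrete with a minimal sketch of one local loop (all constants invented): a proportional rate loop at a single joint, with a constant gravity-like disturbance added to the achieved rate. The loop leaves a steady-state offset of disturbance/gain, so stiffer local loops compensate better but never perfectly without integral action.

```python
# One joint under a local proportional rate loop; a constant disturbance
# adds to the achieved rate, like an uncompensated gravity torque.
def settle(gain, disturbance, steps=5000, dt=0.001, target=1.0):
    angle = 0.0
    for _ in range(steps):
        rate_cmd = gain * (target - angle)      # local proportional loop
        angle += dt * (rate_cmd + disturbance)  # disturbance adds to the rate
    return angle

offset_low = settle(10.0, 2.0) - 1.0    # ~ 2/10  = 0.2
offset_high = settle(100.0, 2.0) - 1.0  # ~ 2/100 = 0.02
print(offset_low, offset_high)
```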

>Do you see
>any way at all that the human nervous system, working with
>frequency-coded analogue signals having a dynamic range of
>something like 50:1, could do the necessary matrix inversions
>with enough accuracy to explain the behavior we see?

No real argument from me here. I think my discussion up to this point
deals with this pretty well. I've never seen that number on the dynamic
range of the neural signals. Where can I find that in the literature?

>I think it's
>very important to question whether the nervous system has the
>properties necessary to solve this problem as you describe. It's
>important for the reason I stated at the beginning: if the
>nervous system lacks the capacities needed to carry out this
>approach, yet produces the kind of exquisite control that we
>observe, it must be operating on a completely different, and much
>simpler, principle. I think that the PCT approach is at least a
>hint as to what that principle might be.

I think what it comes down to is the tool at hand. When we started
controlling robots we had (and still have) computers based on the 'von
Neumann' model. That is, instructions stored in memory and one
instruction being executed at a time. The only way to attack the
problem is by hacking at the geometry and coding it into the computer. The
robotics folks, I think, agree with the basic premise -- our brains aren't
doing that. But, on the other hand, our brains are certainly not von
Neumann model computers. I think this lies at the heart of the
excitement behind artificial neural networks and also the artificial
life folks (don't I remember some pretty nasty comments directed at them
some months back???). Anyway, the "engineer's answer" to the question is:
use the technique best matched to your tool. If you have a hammer, use
nails; if you have a screwdriver... well, you get the picture.

Wow, this is really interesting and challenging. I'm enjoying this
exchange. I believe there is some substantial common ground here and I
look forward to continuing the discussion.

John Gardner
Dept. of Mechanical Engineering
Penn State