nu14, strategies for design

[From Bill Powers (940802.0630 MDT)]

Avery Andrews (940802.1547) --

Is @qt( ) a function that defines a pair of quotation marks?

the excess df's can be left in the care of simple control systems (like
the one in my nu14.c which tries to apportion the total roll at the
wrists between the shoulder and elbow-roll df's).

The degrees of freedom are "excess" only until you need them. If you pin
your elbow against your side by holding a newspaper there, you suddenly
have only your forearm and wrist df to hold the glass of water level --
but it can be done, within limits. You involve your body and legs a lot
more, to be sure.
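
The apportioning scheme quoted above can be sketched in a few lines.
This is only a toy version with made-up gains and a fixed 50/50 split,
not the actual nu14.c code: two simple loops each take up a share of
the remaining roll error until the total matches the reference.

   /* Toy sketch (not the actual nu14.c code): apportioning a desired
      total wrist roll between shoulder-roll and elbow-roll df's. */
   #include <stdio.h>

   int main(void)
   {
       double roll_ref = 45.0;  /* desired total roll at the wrist, degrees */
       double shoulder = 0.0;   /* shoulder-roll contribution */
       double elbow    = 0.0;   /* elbow-roll contribution */
       double split    = 0.5;   /* fraction of the error given to the shoulder */
       double gain     = 0.2;   /* loop gain per iteration */

       for (int t = 0; t < 50; t++) {
           double total = shoulder + elbow;   /* perceived total roll */
           double error = roll_ref - total;   /* reference minus perception */
           shoulder += gain * split * error;          /* each df absorbs */
           elbow    += gain * (1.0 - split) * error;  /* part of the error */
       }
       printf("shoulder=%.1f elbow=%.1f total=%.1f\n",
              shoulder, elbow, shoulder + elbow);
       return 0;
   }

With equal shares the two df's end up at 22.5 degrees each; pin one of
them (give it a share of zero) and the other simply takes up the whole
45 degrees -- the "excess until you need them" point above.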

Less straightforward, I fear, is the problem of too few df's at the
higher levels, and the limited ranges within which the available ones
operate. Consider the problem of moving a full container of liquid around,
perhaps from one place to another. At least two control-systems must
successfully interact. One is responsible for keeping the container
level, so the liquid doesn't spill. This will be achieved to a
considerable extent by adjustments at the wrist pitch and yaw dfs, as
well as 'wrist roll' (the sum of roll from the elbow and shoulder
joints). But these df's have limited range, and can't always cope with
the attitude disturbances required to reach certain positions (hold a
glass in the normal way, and lower your hand -- eventually the glass will
have to tip). The other system(s) is/are responsible for attaining the
desired final position.

This, as you note, is what the higher levels of control are for -- and
at the same time why we even attempt actions that require higher-level
control. Sequencing (put glass down, clear a space on table, move glass
into space) is the eighth level in the HPCT model. It may have come into
being because of conflicts generated by degrees-of-freedom limitations
at lower levels (a simpler creature would have to give up trying to move
things or itself in certain ways). A dog on a long tether in a yard
experiences a loss of df outside a certain range. If it wraps its tether
around a tree, all it can do is bark until someone with higher levels of
perception comes along to unwind it (N = 2 dogs I have known). The
concept of reversing a sequence seems too difficult for a dog to
control.

One consideration that applies to this case is that the hand attitude
reference is one that needs to be maintained at all times (to some
degree of accuracy, depending on the circumstances), while the
positional reference is what one might call a @qt(some-time goal): it
doesn't need to be attained immediately, though sometime soon would be
good.

That's asking a lot of a reference signal. I would say that a goal is a
goal -- either it's set or it's not set. The "some-time" aspect of the
behavior isn't a property of the goal, but a strategy of the system that
sometimes sets the goal and sometimes doesn't -- for other reasons.

The control systems in the 14df or nu14 model don't have to achieve
perfect independence of degrees of freedom. All they have to do is
provide reasonably orthogonal modes of control for higher-level systems
to use. The higher systems will take up the slop. They, like you, are
always monitoring the results.

Therefore it is appropriate for the orientation control system to
either overwhelm or somehow inhibit the position system (I haven't
managed to build this into nu14 in a way that looks good on the screen,
but I haven't really tried very hard yet).

This introduces ad-hoc wiring between systems at the same level, and you
only want such wiring to exist under very special conditions. It's much
better to leave such things to higher levels of control, so when you
_don't_ want the lateral relationship, it goes away. You can get hung up
on trying to solve such problems at one level, forgetting that there are
higher levels.

The basic principle of HPCT as I understand it is that the control
systems at a given level are _simple_ general-purpose devices, and that
when they are combined to create more complex modes of control, it is a
simple higher-level system that does the combining by selecting which
reference signals to vary at the lower level. The df conflicts at the
lower level are simply facts of nature to the higher-level systems, just
as is the fact that you can't put two objects in the same place at the
same time in the environment. Instead of thinking of degrees of freedom
as a problem, think of them as natural constraints. We have to learn to
do everything we do under the constraints of natural laws. The hierarchy
consists of layers of solutions for problems of how to accomplish goals
under natural constraints. We can't solve all control problems. I still
can't put my computer monitor and my printer into the monitor box, even
though it would be handier to be able to do so when carting my computer
to a meeting.
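
To make the principle concrete, here is a minimal two-level sketch (my
own illustration, not code from the 14df or nu14 models). The lower
level is one general-purpose proportional loop, used twice; the higher
level keeps the perceived hand attitude level while a position goal is
being pursued, and it acts only by setting lower-level references.

   /* Two-level sketch: generic lower-level loops, a higher level that
      only adjusts their references. Gains and numbers are illustrative. */
   #include <stdio.h>

   typedef struct { double p; double gain; } Loop;  /* generic control loop */

   static void step(Loop *l, double ref)            /* one iteration */
   {
       l->p += l->gain * (ref - l->p);              /* reduce ref - perception */
   }

   int main(void)
   {
       Loop shoulder = { 0.0, 0.3 };
       Loop wrist    = { 0.0, 0.3 };

       for (int t = 0; t < 200; t++) {
           /* Higher level: perceived attitude = shoulder + wrist;
              keep it at zero while asking the shoulder to reach 30. */
           double attitude_err = 0.0 - (shoulder.p + wrist.p);
           step(&shoulder, 30.0);                   /* position reference */
           step(&wrist, wrist.p + attitude_err);    /* cancel the tilt */
       }
       printf("shoulder=%.1f wrist=%.1f attitude=%.2f\n",
              shoulder.p, wrist.p, shoulder.p + wrist.p);
       return 0;
   }

The same two lower loops would serve just as well for pouring the water
out; only the higher-level references change.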

But we want the positional goal to be achieved eventually, so what
seems to happen is some combination of grip or postural adjustments
(including perhaps fairly dramatic ones, such as putting the thing
down, getting a chair and dragging it into position, then
picking the thing up again and trying again to put it into the desired
position).

That's a problem for higher levels, isn't it?

So how is all of this going to work? My intuition is that the
straightforward position-control systems as in Little Man 2 won't
really do the job (here the new position-reference is fed through slowers
so that the hands move to the desired position). One consideration is
that we want to control the maximum speed and acceleration of, say, a
glass of liquid, so that it won't slop out of the container.

Again, a levels problem. The control systems at the levels of the 14df
or nu14 model simply have to make certain perceptions of the arm follow
reference signals, with as little lag, as much precision, and as much
resistance to disturbance as possible. Spilling water from a glass isn't
their problem. Trajectories aren't their problem. The orientations of
objects relative to other objects (or to gravity) aren't their problem.
Those are problems for higher-level systems to solve. It's far easier to
solve a problem that requires certain limb trajectories if limb position
can be set simply by setting a reference signal, without regard to
dynamics, loads, or trajectories. You just run the various reference
signals through the waveforms that create the desired perceived
trajectories. The people (like Massone in the reference Bruce Nevin gave
us yesterday, thanks) who are trying to solve trajectory problems all at
one level are making the design task extraordinarily difficult for
themselves. Not being able to solve it, they throw the burden onto an
adaptive neural net -- which really doesn't solve it very well, either.
Neural nets should work great if you give them simple enough problems to
solve. The point of HPCT is to simplify control problems, not to
complicate them.

Another is that we may need to plot a course that will avoid
obstacles.

Obstacle avoidance is a matter of controlling relationships among
objects (one's own limbs being objects, too). It's not often necessary
to plot the course in advance (as Rodney Brooks has been saying for some
time). The people in CROWD don't plan any paths, yet they avoid
obstacles and follow other people or get to goals. The fact that we see
a person following a certain trajectory doesn't mean that the trajectory
was planned. If your limbs will achieve the reference-configurations you
set for them, that's all you can ask of them. The rest is up to higher
levels.
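
A toy version of that kind of obstacle avoidance (my own sketch, not the
actual CROWD code) needs only two control processes acting on position:
one reduces the perceived distance to the goal, the other keeps the
perceived clearance from an obstacle above a reference value. No path is
computed anywhere; the detour is just what the two loops produce
together.

   /* Toy sketch: relationship control instead of path planning.
      Positions, gains, and the "safe" clearance are made up. */
   #include <stdio.h>
   #include <math.h>

   int main(void)
   {
       double x = 0.0, y = 0.0;           /* agent position */
       const double gx = 10.0, gy = 0.0;  /* goal position */
       const double ox = 5.0,  oy = 0.2;  /* obstacle position */
       const double safe = 2.0;           /* reference clearance */

       for (int t = 0; t < 600; t++) {
           /* Goal loop: move so as to reduce distance to the goal. */
           double dxg = gx - x, dyg = gy - y;
           double dg = sqrt(dxg * dxg + dyg * dyg);
           if (dg < 0.05) break;
           x += 0.05 * dxg / dg;
           y += 0.05 * dyg / dg;

           /* Clearance loop: if closer than "safe", move directly away. */
           double dxo = x - ox, dyo = y - oy;
           double d = sqrt(dxo * dxo + dyo * dyo);
           if (d < safe) {
               double err = safe - d;       /* clearance error */
               x += 0.1 * err * dxo / d;
               y += 0.1 * err * dyo / d;
           }
       }
       printf("final position: (%.2f, %.2f)\n", x, y);
       return 0;
   }

Watching x and y over time, you would see a curved "trajectory" around
the obstacle that no part of the program ever planned.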

A possibility might be something like this. Suppose the hand is at a
position, in some attitude (holding a cup, let's say). A new position-
reference for the hand-cup is established, in 'sometime mode' (there
would be other modes in which you wouldn't care at all about the cup,
and just get the hand to somewhere ASAP). The discrepancy between
actual and reference hand position induces an error signal (vectorial,
I presume; hopefully frequency-coded multiplexing isn't relevant here).
I'd suggest that this error signal gets transformed into a hand-
velocity reference vector. In this simplest case, this would point in
the same direction as the error-vector, but have its length limited and
temporally smoothed (it starts small, builds up slowly (I recall from
somewhere that cubic polynomials are good for this sort of thing),
levels off, then winds down slowly as the reference position is
attained).

Again, this is trying to get a low level of control to accomplish too
much -- and even as you do this, you're introducing your own higher
levels of perception and control. Put them into the model, if possible.

You never set a low-level reference signal to a new value in one
instantaneous jump. That would surely spill the glass of water. The
solution is not to slow down the lower-level systems, but to slow down
the changes in the reference signal. The lower level systems should
always be designed to act as fast as possible while they're in use. To
get the hand from A to B in a controlled fashion, a higher-level system
controls the perceived transition rate by slowly and smoothly changing
the reference signal for position from Ax to Bx, Ay to By, Az to Bz,
etc.
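
A quick numerical illustration of the difference (toy numbers of my
own): the same fast position loop is driven once by a stepped reference
and once by a smoothly ramped one. Only the peak speed of the controlled
variable changes; the loop itself is never slowed down.

   /* Toy comparison: step the reference vs. ramp it smoothly from A to B.
      The position loop (gain 0.8) is identical in both runs. */
   #include <stdio.h>
   #include <math.h>

   static double run(int ramp)       /* returns the peak speed observed */
   {
       const double A = 0.0, B = 10.0;
       const int T = 100;
       double pos = A, peak = 0.0;

       for (int t = 0; t <= T; t++) {
           double ref = B;                    /* stepped: jump at once */
           if (ramp) {                        /* ramped: ease from A to B */
               double s = 0.5 - 0.5 * cos(acos(-1.0) * t / T);
               ref = A + (B - A) * s;
           }
           double old = pos;
           pos += 0.8 * (ref - pos);          /* same fast loop either way */
           if (fabs(pos - old) > peak) peak = fabs(pos - old);
       }
       return peak;
   }

   int main(void)
   {
       printf("peak speed, stepped reference: %.2f\n", run(0));
       printf("peak speed, ramped reference:  %.2f\n", run(1));
       return 0;
   }

The stepped reference produces a lurch many times larger than the
ramped one -- the difference between spilling the water and not.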

Look how Little Man Version 1 draws slow circles around the target
(toggled by the "!" key). This is not done by slowing down the position
control systems, but by varying the position reference signals in slow
sine and cosine waves. The position control systems are just as fast as
ever; fast enough to follow a slowly-varying reference signal quite
accurately. By varying the x, y, and z reference signals slowly and
smoothly in various patterns, you can create any trajectories of
pointing movements at any speed you like, without loosening control at
any lower level. The accuracy is maintained until the speed approaches
the limits of the lower systems.
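
In sketch form (not the Little Man source, just the same arrangement
with made-up gains), the circle demonstration amounts to this: slow sine
and cosine reference waveforms, fast x and y position loops tracking
them, and the circle simply emerging from the references.

   /* Toy sketch: a circular trajectory produced only by varying the
      x and y reference signals; the position loops stay fast. */
   #include <stdio.h>
   #include <math.h>

   int main(void)
   {
       const double PI = acos(-1.0);
       const double radius = 5.0;
       double x = radius, y = 0.0;    /* controlled x and y perceptions */
       double worst = 0.0;            /* worst tracking error seen */

       for (int t = 0; t < 720; t++) {            /* two slow revolutions */
           double phase = 2.0 * PI * t / 360.0;
           double xr = radius * cos(phase);       /* x reference signal */
           double yr = radius * sin(phase);       /* y reference signal */
           x += 0.8 * (xr - x);                   /* fast x position loop */
           y += 0.8 * (yr - y);                   /* fast y position loop */
           double err = sqrt((xr - x) * (xr - x) + (yr - y) * (yr - y));
           if (err > worst) worst = err;
       }
       printf("worst tracking error over two circles: %.3f\n", worst);
       return 0;
   }

Speed up the phase sweep enough and the tracking error grows -- the
accuracy holds only until the reference changes approach the speed
limits of the loops, as described above.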

The hierarchy consists of _general-purpose_ control systems. The same
control system that is used (by higher systems) to keep a glass of water
level at one time is used to tip the glass and pour the water out on
another occasion, or to turn a screwdriver or to operate a hatchet. We
don't want ad-hoc connections that optimize the control systems at one
level for a particular task and leave them useless for accomplishing any
other task. We don't want to have to postulate extremely complex
switching networks to turn special-purpose interconnections on and off
depending on the task. That's a patchwork design -- adding new design
features to make up for mistakes in the original design. The object is
to come up with a simple and elegant design that allows each control
system to work in exactly the same way no matter what it's being used
for.

The architecture you start with determines the problems you run into
later. The problem with other approaches to behavioral modeling is that
the basic architecture is just sort of thrown together without any real
attempt to get an elegant structure; this is what leads into such
horrendous mathematical complexities. Like setting up the basic system
so the only way it can work is by computing inverse dynamics and
kinematics.

Of course we assume that a simple and elegant design is likely to be the
natural design as well.


-----------------------------------------------------------------------
Best,

Bill P.