Arm control

I am very much interested in the current discussion on arm control.
The reason is that we have built a behaviour-based robot arm
controller which enables a five-degree-of-freedom robot arm to grasp
a rolling ball using 2D vision input from a hand-wrist camera. (The
approach and technical details of this work are described in a
technical paper: Asteroth et al., "Tracking and Grasping of Moving
Objects - A Behaviour-Based Approach", GMD Working Paper No. 603,
December 1991.)

The basic idea is to have individual behaviours, interacting through
a subsumption-like architecture, control the robot arm. Each
behaviour is triggered by particular aspects of the visual input.
For example, light intensity at an edge of the image triggers the
orientation of the robot (base and wrist joints act); a centred
light spot triggers the robot arm to approach the rolling ball
(shoulder and elbow joints act); and a camera image with many white
pixels - caused by the ball being very close to the wrist camera -
triggers a grasp reflex. All behaviours operate in parallel. No
a priori assumptions are made about the velocity and direction of the ball.
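To make the arbitration scheme concrete, here is a minimal sketch of such a subsumption-style control loop. The behaviour names, feature extractors, and thresholds are hypothetical illustrations, not the actual implementation from our controller:

```python
# Sketch of subsumption-style behaviour arbitration (illustrative only;
# feature names, thresholds, and joint commands are hypothetical).

def grasp(features):
    """Triggered when many white pixels indicate the ball fills the view."""
    if features["white_pixel_ratio"] > 0.8:
        return {"gripper": "close"}
    return None

def approach(features):
    """Triggered when the light spot is centred: drive shoulder and elbow."""
    if features["spot_centred"]:
        return {"shoulder": 1.0, "elbow": -0.5}
    return None

def orient(features):
    """Triggered by light intensity at an image edge: steer base and wrist."""
    if features["edge_intensity"] > 0.5:
        d = features["edge_direction"]
        return {"base": d, "wrist": d}
    return None

# Behaviours run conceptually in parallel; on shared joints, a
# higher-priority behaviour subsumes (overrides) a lower one.
BEHAVIOURS = [grasp, approach, orient]  # highest priority first

def control_step(features):
    command = {}
    for behaviour in BEHAVIOURS:
        output = behaviour(features)
        if output:
            for joint, value in output.items():
                command.setdefault(joint, value)  # keep higher-priority value
    return command
```

In this sketch, each behaviour maps a small set of visual features to joint commands independently; the arbitration step merely merges their outputs by priority, so no behaviour needs a model of the ball's velocity or direction.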

It would be very interesting to us to have a more "natural" model of
decomposition into different behaviours. At the moment, our decomposition
is based on our intuition. Additionally, it would be interesting to find
out whether such a model of behavioural organisation can be described
from a systems-theory point of view, in order to compare the properties
and performance of our system to "classical" robot arm controllers.

Looking forward to receiving your comments.




Uwe Schnepf e-mail: usc@gmdzi.uucp
AI Research Division
German National Research Center phone: +49-2241-142704
for Computer Science (GMD) fax: +49-2241-142618
Schloss Birlinghoven
P.O. Box 1316
5205 Sankt Augustin 1