Small world convergences

[From Phil Runkel on 2000 January 24 about noon]

commenting on Bill Powers's post of (2000.01.24.1036 MST).

What a joy! I got lost at a place or two, but I got enough of it that my
spine responded with zings and wriggles. Not only does it sound like
sound progress, but it also sounds like a scientific community -- like some
people standing with hammers and screwdrivers in their hands, taking a
deep breath or two, and saying, "By golly, look at that, it's really
taking shape!"

Congratulations to all! --Phil R.

···

Some disparate lines of enquiry are coming together in an interesting way.
I'm sure that more practical-minded people have wondered what bugs,
disembodied arms, image reconstruction, inverted pendulums, and playing
with Gnu software have to do with the behavior of human beings. In fact, I
have learned a lot from these pursuits, and they're all converging toward
the same general model.

Back in the 1960s, I tried to devise an image sharpening program based on
the idea of a whole lot of little control systems, each one fuzzing out one
point in a trial sharp image and trying to match the blurred result to a
blurry region in the original image by adjusting the intensity at that one
point. I've now discovered that this method is known to others in the
imaging business as the van Cittert method, and that it was published in
1931. In 1998, realizing that I now had enough computing power to test my
method with fairly large images, I sold my old telescope and bought a newer,
cheaper one along with a CCD camera, and started trying to sharpen images
of the Moon. As I got the telescope to work right, and got enough good
seeing weather to record some decent images of the Moon, I began writing
the reconstruction program. It turned out that I could not handle a 324 x
242 pixel image with the Pascal compiler I had; it was necessary to crop
the image and work on only a portion of it.
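
In outline, the basic single-level scheme is something like the following
toy-sized sketch in C. This is only an illustration of the idea described
above: the image size, the blur kernel standing in for the telescope's
point-spread function, the loop gain, and the iteration count are all
made-up values, not anything from the actual program.

#include <stdio.h>

#define W 16      /* a toy image; the real work used 324 x 242 pixels */
#define H 16
#define GAIN 1.0f /* per-iteration adjustment gain (illustrative)     */

/* a small Gaussian-like kernel standing in for the point-spread function */
static const float K[3][3] = {
    { 1/16.f, 2/16.f, 1/16.f },
    { 2/16.f, 4/16.f, 2/16.f },
    { 1/16.f, 2/16.f, 1/16.f },
};

static void blur(const float in[H][W], float out[H][W])
{
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            float sum = 0.0f, wsum = 0.0f;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++) {
                    int yy = y + dy, xx = x + dx;
                    if (yy >= 0 && yy < H && xx >= 0 && xx < W) {
                        sum  += K[dy + 1][dx + 1] * in[yy][xx];
                        wsum += K[dy + 1][dx + 1];
                    }
                }
            out[y][x] = sum / wsum;
        }
}

int main(void)
{
    static float truth[H][W], observed[H][W], trial[H][W], blurred[H][W];

    truth[H / 2][W / 2] = 1.0f;   /* one bright point on a dark field   */
    blur(truth, observed);        /* what the camera would record       */

    /* start the trial sharp image as a copy of the observed image */
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            trial[y][x] = observed[y][x];

    /* each pass: blur the trial image, then let every pixel's little
       control system reduce its own error between the blurred trial
       and the recorded image */
    for (int it = 0; it < 30; it++) {
        blur(trial, blurred);
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++)
                trial[y][x] += GAIN * (observed[y][x] - blurred[y][x]);
    }

    printf("central pixel: observed %.3f, sharpened %.3f, true 1.000\n",
           observed[H / 2][W / 2], trial[H / 2][W / 2]);
    return 0;
}

Each pass nudges every pixel toward whatever value makes the blurred
trial match the recorded image at that point, which is all that "one
little control system per point" amounts to here; the sharpened central
value climbs back toward the true brightness of the point.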

I had heard about a C compiler from the Free Software Foundation, which I
thought might serve better. Having looked into it a year or so ago, I
seemed to remember that it had the capability of working with very large
stored arrays. So when the weather was bad (a weirdly large percentage of
the time lately, for the Southwest) I worked on getting the Gnu compiler,
called DJGPP, running on my computer. It had some bugs of its own, but
eventually I got it to work -- and discovered that it could indeed handle
very large arrays -- would you believe 32 megabytes? Now, suddenly, I could
easily handle the three 324 x 242 arrays that were needed. That was shortly
after I had distributed some of my initial efforts at image reconstruction.

At that point, with visions of sugarplums dancing in my head, I started
looking for "prior art", with the idea of protecting my software. Of course
I started finding it right away. My method was just one of many that had
already been invented.

Now let's back up about 15 years to the initial model of the Little Man.
Version 1 didn't have any dynamics in it, but with the help of Greg
Williams, version 2 did. I tried to equip the model with the most accurate
model of a muscle and of the stretch and tendon reflexes that I could find
in the literature, and this model worked quite well. I discovered something
of great interest: these reflexes form a three-level hierarchy of control.
The lowest level controls angular acceleration about a joint; the next
level controls the integral of angular acceleration, that is, angular
velocity; and the next level controls the integral of angular velocity,
that is, angle. The
first level of control is stabilized by the internal viscosity of the
muscle; the next two levels are simple proportional controllers that need
no further stabilization. In fact, the hierarchical arrangement naturally
generates a phase advance in the higher feedback signals, owing to the lag
in the feedback at the first level. This phase advance acts as a stabilizer
of higher levels.
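
In skeleton form, that cascade looks something like the following
single-joint sketch. It is not the Little Man code; the unit inertia,
viscosity, gains, and lag constant are illustrative assumptions, chosen
only to show the structure.

#include <stdio.h>

int main(void)
{
    /* plant: a joint with unit inertia and a viscous ("muscle") drag */
    const double inertia = 1.0, viscosity = 1.0;

    /* three proportional loops, highest first (gains are illustrative) */
    const double Kp = 2.0;   /* position (angle) loop                   */
    const double Kv = 4.0;   /* velocity loop                           */
    const double Ka = 4.0;   /* acceleration loop                       */
    const double tau = 0.02; /* lag in the acceleration feedback        */

    const double dt = 0.001, pos_ref = 1.0;
    double pos = 0.0, vel = 0.0, acc = 0.0, acc_fb = 0.0;

    for (int step = 0; step <= 5000; step++) {
        /* top level: angle error sets the velocity reference            */
        double vel_ref = Kp * (pos_ref - pos);
        /* middle level: velocity error sets the acceleration reference  */
        double acc_ref = Kv * (vel_ref - vel);
        /* bottom level: acceleration error (lagged feedback) sets torque */
        double torque  = Ka * (acc_ref - acc_fb);

        /* plant dynamics, Euler-integrated */
        acc  = (torque - viscosity * vel) / inertia;
        vel += acc * dt;
        pos += vel * dt;
        /* lagged acceleration feedback stands in for the reflex delay */
        acc_fb += (acc - acc_fb) * dt / tau;

        if (step % 1000 == 0)
            printf("t=%4.1f s  pos=%6.3f  vel=%6.3f\n", step * dt, pos, vel);
    }
    return 0;
}

The position loop sets the reference for the velocity loop, which sets
the reference for the acceleration loop, whose output is the torque; the
lag in the lowest loop is what gives the higher feedback signals their
stabilizing phase advance.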

Fast-forward. At the 1998 meeting of the European CSG, I presented a model
that controls a simulated inverted pendulum. The pendulum is mounted on a
cart, and three control systems organized just like the spinal reflexes
control the acceleration, velocity, and position of the cart on its rails.
Then three more levels organized the same way control the angular
acceleration, angular velocity, and angle of the pendulum, making a total
of six hierarchically organized levels of control. This hierarchical
control system worked very nicely, as I think those who have seen it
running agree.

While I was working on the inverted pendulum, Richard Kennaway was working
to apply hierarchical control to a six-legged bug model. He got this bug
running in time to present it at the same meeting in Germany where I
presented the inverted pendulum. Dag Forssell videotaped both of us talking
about these models.

Then Richard decided that he needed a more reliable way of treating the
dynamics of the bug's body and limbs, and started looking for a better
approach to the physical dynamics. Eventually he came across a British
program called MathEngine, which simulates the physical dynamics of bodies
linked together in various ways, and which is potentially capable of
supporting very complex models, like models of a whole human body. Just
recently, he sent me a MathEngine simulation of an arm with four degrees of
freedom, and I supplied the control systems which, after a few adjustments
back and forth between Durango and East Anglia, resulted in a stable model
of an arm, rendered as a solid 3-D model on the screen. This model consists
of 12 control systems at three hierarchically related levels
(acceleration, velocity, and position again -- is this sounding like a
broken record?). I had to buy Microsoft's Visual C++ to accommodate the
MathEngine software, and learn to use it (with a lot of help from Richard,
who seems to understand just about everything in mathematics, computing,
and physical dynamics). But we now have a 4-df arm model, and Richard is
busy absorbing it into the leg control systems of his bug, "archie".
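
Stripped of the physics, the organization of those 12 control systems is
roughly the following sketch. It is a sketch only: MathEngine supplies
the real arm dynamics, and the joint states, references, and gains here
are made-up placeholders used just to show how four joints each get the
same three-level stack.

#include <stdio.h>

#define NJOINTS 4

/* One three-level stack (position, velocity, acceleration) per joint. */
typedef struct {
    double pos_ref;                 /* commanded joint angle             */
    double pos, vel, acc_fb;        /* sensed angle, rate, lagged accel  */
} Joint;

static double control_step(Joint *j, double Kp, double Kv, double Ka)
{
    double vel_ref = Kp * (j->pos_ref - j->pos);   /* position level     */
    double acc_ref = Kv * (vel_ref   - j->vel);    /* velocity level     */
    return           Ka * (acc_ref   - j->acc_fb); /* accel level: torque */
}

int main(void)
{
    /* made-up starting states for the four joints */
    Joint arm[NJOINTS] = {
        {  0.5, 0.0,  0.0, 0.0 },
        {  1.0, 0.2,  0.1, 0.0 },
        { -0.3, 0.0, -0.1, 0.0 },
        {  0.0, 0.4,  0.0, 0.0 },
    };

    /* one pass through all 12 control systems: 4 joints x 3 levels;
       a physics engine would turn these torques into new joint states */
    for (int i = 0; i < NJOINTS; i++) {
        double torque = control_step(&arm[i], 2.0, 4.0, 4.0);
        printf("joint %d: torque command %6.3f\n", i, torque);
    }
    return 0;
}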

Now for the final vignette in this ongoing drama. A few days ago, still
looking into other people's ways of processing images, I came across a 1994
paper on the Web by Coggins, Fullton, and Carney called
"Iterative/Recursive Deconvolution with application to HST data" (that's
Hubble Space Telescope). Try the first three words in a Google search,
perhaps with "Coggins". All the other methods I had seen looked quite
similar to my basic method, but this one was a little different. It used
the same basic iterative deconvolution method, but it introduced the idea
of doing it using multiple levels (the "recursive" part).

In the "CFC" method, the highest level system, as usual, blurs out the
trial sharp image and compares the blurred result with the original image.
Now, however, instead of the difference being used in a feedback loop
directly, the difference is passed to a lower level in which it becomes the
target for matching a second difference, blurred out, to the target second
difference. And the new difference that results is passed to still a lower
level that deals with a third difference. If this is the bottom level, the
third-difference fuzzy image is processed for a few iterations, and the
resulting sharp(er) third-difference image is integrated (added) to the
second-difference sharp image, and that sharp image is integrated to become
the first-level sharp image. Just the way acceleration is integrated to
become velocity and velocity is integrated to become position.
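
In skeleton form, that recursive scheme reads something like the toy
one-dimensional sketch below. This is only one reading of the structure
described above, not the CFC authors' code; the array size, kernel,
gain, level count, and iteration counts are all illustrative.

#include <stdio.h>
#include <string.h>

#define N 32          /* length of the toy 1-D "image"           */
#define LEVELS 3      /* number of hierarchical levels           */
#define ITER 5        /* iterations run at each level            */
#define GAIN 1.0      /* adjustment gain at the bottom level     */

/* a simple 3-point blur (1/4, 1/2, 1/4) standing in for the PSF */
static void blur(const double in[N], double out[N])
{
    for (int i = 0; i < N; i++) {
        double l = (i > 0)     ? in[i - 1] : in[i];
        double r = (i < N - 1) ? in[i + 1] : in[i];
        out[i] = 0.25 * l + 0.5 * in[i] + 0.25 * r;
    }
}

/* Build an estimate whose blur approximates `target`.  At every level
   the residual (target minus blurred estimate) is handed down to the
   level below, and the lower level's sharpened result is integrated
   (added) back in as the correction; the bottom level just applies a
   plain gain, as in the single-level method. */
static void sharpen(int level, const double target[N], double estimate[N])
{
    double blurred[N], residual[N], correction[N];
    memset(estimate, 0, sizeof(double) * N);

    for (int it = 0; it < ITER; it++) {
        blur(estimate, blurred);
        for (int i = 0; i < N; i++)
            residual[i] = target[i] - blurred[i];

        if (level < LEVELS) {
            sharpen(level + 1, residual, correction); /* pass down   */
        } else {
            for (int i = 0; i < N; i++)
                correction[i] = GAIN * residual[i];   /* bottom level */
        }
        for (int i = 0; i < N; i++)                   /* integrate up */
            estimate[i] += correction[i];
    }
}

int main(void)
{
    double truth[N] = {0}, observed[N], recovered[N];
    truth[N / 2] = 1.0;        /* a single bright point...             */
    blur(truth, observed);     /* ...as the telescope would record it  */

    sharpen(1, observed, recovered);

    printf("center: observed %.3f, recovered %.3f, true 1.000\n",
           observed[N / 2], recovered[N / 2]);
    return 0;
}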

What we have here, if the observer has been brought up right, is an
easily recognizable three-level hierarchical control system, one such for
each image point, dealing with acceleration, velocity, and position -- or
the optically analogous quantities. This method works considerably better
than the single-level methods.

Oh, by the way, this method, with three levels, requires _nine_ images,
each holding 324 x 242 pixels, each pixel's brightness being represented as
a four-byte floating point number, for a grand total of just about 3
megabytes of stored information. Without DJGPP I couldn't have tried this
method at all.

And of course now that DJGPP is available, we can (eventually) port the
CrowdV3 software, which is still slowly coming along, to DJGPP, and handle
crowds of tens of thousands of people instead of just a hundred or so. And
when we're ready to expand the bug into a complete person with all 150 or
so degrees of freedom, we will be able to store all the control system data
and even a history of the values of the variables.

Nothing is ever wasted.

Best,

Bill P.