FDMB

[From Bill Powers (951103.0940 MST)]

Shannon Williams (951103) --

RE: vita.

All I can say, Shannon, is that you'd better fall in love and marry some
real nice PCT person very soon. You are too dangerous to be floating
around loose.


----------------------------------------------
Whenever I hear that someone is a mechanical engineer, my eyes light up
because I have a problem that only a really really good mechanical
engineer can solve. I don't know whether you've come across my Little
Man version 2 simulation of pointing behavior. It's available on the CSG
Web page or from Dag Forssell, or direct from me. It's a model of an arm
with three degrees of freedom: pitch and yaw at the shoulder, pitch at
the elbow. It uses a fairly good muscle model, and the arm itself is
modeled using physical dynamics. The simple control systems work very
nicely, very stably, and without using any inverse calculations at all.
They mimic the spinal reflexes. The model (programmed in C) runs on a
486-33 in real time.

My problem is this. The only reason I was able to incorporate the
physical dynamics of the arm into the model is that Greg Williams, who
skulks in the background of CSG-L nowadays (and with whom I promise to
get in touch Real Soon Now about the book in gestation), is still
mechanical engineer enough to have looked up the equations in several
robotics textbooks, checked the derivations (finding some mistakes), and
put the equations into a form that I could use. But I want to do more
than model one 3-df arm or leg at a time. I want to model the whole body
at once.

This, unfortunately, requires writing the forward dynamics equations for
all the major body segments as well as their interactions with each
other, the ground, other objects, and applied forces like gravity and
wind. All we need is a model that will convert all the muscle-generated
torques and external forces into a resulting angular acceleration,
angular velocity, and angle around each joint or pivot point. We don't
need the inverse calculations because the PCT model doesn't use them.
You'd think that somewhere in the world, somebody must have done this,
but what with everybody going gung-ho after models that use inverse
calculations, and what with this being a somewhat difficult problem, I
haven't been able to come up with a source. And I sure as hell don't
know how to do this myself.

What I need is a test bed into which I can build control systems.
Without the control systems, the modeled body would lie on the ground
like a collapsed marionette. My problem (and anyone else's who wants to
get in on the programming, like Avery Andrews and our other PCT
programmers) is to build the control systems that will make this body
straighten out, wave its arms and legs, roll over, crawl, stand up, and
walk, in roughly that sequence. This may be slightly more than I can
hope to accomplish in the rest of my life, but I'd like to get it
started and show that it's interesting enough for someone else with more
time ahead to take over. It's sort of like having a baby.

Both Dag Forssell and Greg Williams have mechanical engineering
backgrounds. There may be others hanging around CSG-L, for all I know.
There may be some physicists with good dynamics backgrounds, too. It
seems to me that building the Forward Dynamics Model of the Body -- FDMB
-- is a finite and ultimately doable project, and something that could
kick off a whole new approach to modeling physical behavior. With half a
dozen people working on it, how long could it take to come up with the
basic test bed?

Who knows, maybe someone on the net will have enough credentials to get
some money to do this, perhaps from NASA.

What do you think?
----------------------------------------------------------------------
Best,

Bill P.

[From Shannon Williams (951108)]

Bill Powers (951103.0940 MST)--

>Whenever I hear that someone is a mechanical engineer, my eyes light up
>because I have a problem that only a really really good mechanical
>engineer can solve.

I am not a mechanical engineer (yet). But I studied mechanical engineering
because I was interested in robotics. My main interest right now (or, my
second main interest, I should say), is AI. I would like to model
learning. I think that the way to model learning is to create a method for
creating control loops. Once this method is determined and implemented in
some environment, learning/intelligence will evolve from it.

>My problem (and anyone else's who wants to
>get in on the programming, like Avery Andrews and our other PCT
>programmers) is to build the control systems that will make this body
>straighten out, wave its arms and legs, roll over, crawl, stand up, and
>walk, in roughly that sequence.

You want to start in the middle of the evolving process. Sorta take a
snapshot of high level control loops that you want and artificially
develop the underlying control processes that allow the higher level
processes to succeed.

You want low-level control loops like move_arm_down and move_leg_up, which
could be motivated by roll_over and stand_up. And all four of these loops
could be motivated by _crawl_ or _walk_. _Crawl_ or _walk_ must be
artificially motivated by the main control loop or by some (fancy) trigger,
like another loop that recognizes commands for "crawl" or "walk".

I have not mentioned anything about forward dynamics equations and not
using inverse calculations. But it seems that if we use some control loops
and a little fuzzy logic, we could teach the little man how to move without
using a lot of complicated equations. How many processors does the Little
Man get to have?

This project sounds very fun.

-Shannon

[Avery Andrews 951112]
(Shannon Williams (951108))

Our problem is that we can't do good simulations, because we don't know
how to calculate the forward dynamics. So Bill & I have slightly different
versions of a kinematic model of an arm with 14dfs, but it's not a real
arm simulation, since it doesn't have dynamics. Bill's Little Man has
dynamics, but only 3 dfs. I reckon that a good place to start would be
the dynamics of a 4df arm, with 3 dfs at the shoulder (elevation,
azimuth and rotation), and one at the elbow (flexion); there is also
rotation at the elbow, but I'd be surprised if it made any significant
contribution to the dynamics, and likewise for the further df's at the
wrist (until you start modelling an arm swinging a club, which would be
a good next project ...)

With an accurate simulation of the forward dynamics, we could demonstrate
(perhaps), that you don't need sophisticated equations to control a 4+ df
arm in a human-like manner (which is, I might add, in case some
industrial roboticists are listening in, extremely different in nature
from the problem of controlling an industrial robot). I've actually got
a mechanics book sitting on the bookcase on my desk, but I haven't
managed to do much with it; it presumes somewhat more math than I
actually know.

Avery.Andrews@anu.edu.au

[From Bill Powers (951108.1025 MST)]

Shannon Williams (951108) --

Honeymoon's over. You're going to have problems, Shannon, if you try to
import AI, fuzzy logic, and other such stuff into PCT, at least right
off the bat. The main virtue of fuzzy-logic controllers, as the
Frenchman who first thought up the idea was cited on this net as saying
a couple of years back, is that they were patentable. As controllers,
they're pretty poor.

But that's not the main problem. The main problem is in

     You want low-level control loops like move_arm_down and
     move_leg_up, which could be motivated by roll_over and stand_up.
     And all four of these loops could be motivated by _crawl_ or
     _walk_. _Crawl_ or _walk_ must be artificially motivated by the
     main control loop or by some (fancy) trigger, like another loop
     that recognizes commands for "crawl" or "walk".

In outline, that's it, but we're starting a lot further down and we
would use a very different approach to the control loops. You say

     ... it seems that if we use some control loops and a little fuzzy
     logic, we could teach the little man how to move without using a
     lot of complicated equations.

We don't have any choice about the complicated equations. The equations
model the environment that has to be controlled. The ones I described
are those that compute the angular accelerations, velocities, and
positions at the joints as a function of torques applied (by muscles) at
the joints. They simulate the physical properties of the limbs and body.
We can't just say "move-arm-up" and solve the problem. In fact, that IS
the problem. The problem is how to generate the signal variations that
have to be sent to the muscles to create the torques that will cause a
jointed arm with several degrees of freedom to move in such a way that
for EACH JOINT the sensed position continuously matches a reference
signal that is changed continuously as a way of moving the arm. This is
a problem in analog control, not digital logic. And it starts with a
problem in physics -- or if you like, mechanical engineering.

Analog control does not work in terms of discrete events; it works in
terms of continuous relationships. The appropriate mathematics is not
logic, but differential equations. There are no discrete "states" in an
analog system; there are only the current values of continuous
variables, their time-integrals, and their time-derivatives. The
appropriate measure of a neural signal in an analog control system is
frequency, not on-off bits. The neurons themselves work like the neurons
in neural network theory; they are analog computers, not switches.

The AI approach doesn't come into PCT until you get way up to the levels
that deal with discrete symbols. There we DO expect to find systems that
generate symbol-strings like "move_arm_up". But between those systems
and the systems that actually control arm position there are more
detailed control processes, most of which I think can be modeled only as
analog systems. The statement that an arm is to move up has to be
converted into appropriate reference signals for continuous variables;
by themselves they can't cause the appropriate muscles to vary their
tension in just the right way for the arm to move up from one position
to another or at a specific rate, without oscillation or overshoot. By
themselves they can't direct the arm control systems to adjust their
muscle tensions according to dynamic interactions among the moving
masses of the arm, or according to changes in relative direction of
gravity, or according to changes in external forces acting unexpectedly
on the moving parts.

The FDMB -- forward dynamic model of the body -- project as I envision
it is to develop the differential equations that describe how the body
moves when arbitrary torques are applied to all the joints at the same
time. The Little Man Version 2 model does this with an arm that has only
three degrees of freedom; we need a model that handles at least 30
degrees of freedom, even if we assume that the hands are massless claws.

When my son took control theory in his mechanical engineering courses
three or four years ago (at Boulder), all they taught was the inverse-
kinematics approach. Given that the end of an arm is to describe a
specified trajectory to a specified end-point, how should the impulses
be sent to a set of stepper-motors to create the desired result? I saw
this as an enormous come-down, not to mention dumb-down, from the way
control processes were approached before the digital revolution. Given
enough computer power, such systems can be made to work, but at what
expense! Massive mountings, with massive arms on precision bearings;
vast amounts of computation to specify a path; vaster amounts of high-
speed computation to calculate the inverses of complex equations in real
time. And still more effort to prepare the environment so it will be
free of uncertainties and disturbances: the funniest sight in the world
is a huge machine going swiftly through its motions while the workpiece
lies on the floor where it fell off the jig.

This is probably why I can't find a source for my equations. Hardly
anybody is doing analog any more. Some are, but they've gone off on this
inverse-dynamics-and-kinematics kick, which I have demonstrated is
completely unnecessary in a control-system model. And nobody that I know
of has tried to handle the entire body.

Your basic idea of levels of control -- of "motivation" -- is the right
structure. But these ideas have to be translated into actual control
systems, not logical relationships. That is the easy part, or as you
say, the fun part (I agree about the fun). The hard part is to build a
realistic model of the device that these control systems have to
operate. If we had the technology and the money, we could build a
physical device, a robot. But it would be much quicker and cheaper to
build the simulation. Once we have that, we can start devising the
control systems that will handle all the dynamical problems and start us
on our journey up the hierarchy.


----------------------------------
I suppose that your programming experience is mostly on mainframes or
workstations. Us peons have grown up with desktop computers, and I am
limited to programming on PC-compatibles -- programming Macs is just too
hard. Do you have access to PCs? Could you buy one? At the moment, most
of us who program are programming in Turbo Pascal 7.0, with a few of us
being relatively fluent in C (Little Man v. 2 is in C). Rick Marken,
wouldn't you know, likes Macs, but he also has a PC and can program in
TP7 or even, with lots of loud complaints, C.

The reason I ask is that you really need to run some of our demos and
experiments to get the feel of this approach. Maybe you should run down
to Houston and let Tom Bourbon show you his arsenal of programs. You're
very quick to understand the principles, but I still think that seeing
the applications in living color and as a participant will be a
revelation to you. Or better yet, get your department to pay Tom to come
up and give a seminar with his portable computer. Tom could use the
money and your department could use the education.

Or you could come to Durango and I would show you. I THINK that Mary
might let a smart SWF through the door. But Tom actually has a bigger
selection of programs at his fingertips. Betty would probably let you
through the door, too.
-----------------------------------------------------------------------
Best,

Bill P.

[From Tom Bourbon (951109.0038 CST)]

[From Bill Powers (951108.1025 MST)]

Shannon Williams (951108) --

. . .

The reason I ask is that you really need to run some of our demos and
experiments to get the feel of this approach. Maybe you should run down
to Houston and let Tom Bourbon show you his arsenal of programs. You're
very quick to understand the principles, but I still think that seeing
the applications in living color and as a participant will be a
revelation to you. Or better yet, get your department to pay Tom to come
up and give a seminar with his portable computer. Tom could use the
money and your department could use the education.

Hi, Willie. If you don't have a PC, come on down. Or I will come on up.
Anything for money. :-) Or you can check with Isaac Kurtzer, a shy and
retiring young guy who hangs out in the Dallas area. ;-)

I would be happy to show you the Gulf Coast collection of PCT demos and
programs.

Tom

<[Bill Leach 951110.20:57 U.S. Eastern Time Zone]

[Shannon Williams (951110)]

>How does a system of loops work except as a network of fuzzy-logic?
>Each loop maintains its own reference level -- or subset of reality. When
>its reference level is disturbed, the loop responds. This sounds very
>fuzzy-logic-ish to me.

Trust me! There is nothing "fuzzy" about multiple control loops -- about
reorganization? maybe.

Each control loop "adjusts" its output to maintain its own perception
matched to its own reference. For muscle-related systems there will be
some "in phase" component between individual systems at the lower levels;
however, these in-phase components appear to be disturbances to the
respective perceptions and are handled accordingly.

"Arm" performance suggests that position and force (tendon tension) are
sufficient to result in smooth transition without any degree of
"anticipation signals" or other methods of "compensating" for output
interactions.

>Or maybe we need a third choice? We need something as controlled as
>digital logic, but as instantaneous/volatile(?) as analog control.

We, of course, use digital logic (Bill might be willing to sell
everything that he and Mary own to get his hands on some of the Analog
Computers that I have worked with in the past but since that probably is
not an option...).

The digital computer is attempting to emulate analog circuits. The
_tough_ calculations are NOT the control loop calculations but rather the
calculations necessary to properly and accurately simulate the
environment (where environment includes force generated by "actuators",
mechanics of bones and joints, mass effects and friction).

The additional problem is that we don't have the specific information
necessary to do the calculations... for example, specific mass density
gradients for various parts of the body, moment-arm values and joint
rotation limits, estimates of muscle force available for various muscles,
and the like.

The "ARM" program is pretty compelling evidence that control theory
provides the tools necessary to produce a simulation that appears to
behave very much as a human is observed to behave, but simulating the
environment with a complete physical-properties data set is vital to
prove that what appears to be true from observing ARM in operation is
true for the real system.

You asked in a previous message, "How many processors?". I suspect that
a pretty complete simulation of the control loops could be done on
something like five or six machines, but the environment could well
require many dozens of machines to simulate in real time.

>(After this post, you may deny that there ever was a honeymoon).

If you missed it, then it must not have been much!

>You are going to build the hundreds of little control loops
>required for the analog control of your Little Man? These
>control loops are what occur in nature, but it does not mean
>that we need to recreate them.

In principle, this is quite correct except that it is not "hundreds" but
more like many millions.

While ideally every muscle that has independent control in the living
organism should be simulated, it may well be sufficient initially to
"group" some muscles together and compute an effective force vector, as
opposed to simulating each and every muscle (this may or may not gain
anything, as what is saved in simple control loops costs in complexity of
the "grouped" control loop, though the grouped control loop also might
simplify the environmental simulation some).

>The control loops need to be able to act on and react to each other in
>real time. The only reason you would need analogue control is so that
>actions and reactions can be immediate. Why can't a dedicated digital
>controller react quickly enough to simulate a number of small loops?
>You have to admit, it is much easier to make a digital loop than an
>analogue one.

This IS what is done. The digital computer is simulating several analog
control loops at one time.

-bill

[From Shannon Williams (951112.before the Cowboys play)]

Bill Powers (951111.0630 MST)--

>The problem with fuzzy logic is that (as I understand it) it tries to
>approximate continuous relationships through using probability
>distributions.

We would not use it to approximate continuous relationships. We would use
it to understand the world from the control loop's point-of-view. In other
words, how do we know what a control loop needs to perceive? As humans we
can tell that if inputs A,B, and C to loop D are activated, then the robot's
arm is moving too fast. The output of loop D causes the arm to slow down.
I would call this fuzzy logic because the system is described/programmed
according to its (behavioral?) needs. There are no algorithms that say
"accelerate forearm for 2 seconds, coast for three seconds, and then
decelerate for three seconds." Or whatever.

Everything is programmed to trigger whatever particular behavior is needed
at any given moment.

>The other factor is that each relationship involved in the control
>system has to be represented by different rules over various regions of
>the range of variation of a variable, both as to its magnitude and its
>rate of change. Thus in a (simulated) fuzzy-logic controller I saw for
>balancing an inverted pendulum, you had conditions like small-amount-
>left-of-center, large-amount-left-of-center, left-of-center-and-moving
>left, left-of-center-and-moving-right, and so on and so on -- I believe
>there were more than 20 rules altogether, all with probability
>distributions.

Exactly! But these rules should be one or more control loops. They are
not rules, they are perceptions.

>All this to accomplish a control task that could be done
>with half a dozen lines of code representing continuous relationships,
>and done a hell of a lot better. The performance of this model was very
>poor: slow, wobbly, and greatly upset by disturbances.

I would need to see your balancing broom examples before I would want to
comment on what method was chosen and whether I would have programmed it
that way.

>I have a hunch that you're not used to working with systems in which the
>variables are continuous (real-number domain) and the functions that
>relate them are also continuous (continuous physical relationships). Is
>that so?

The closest thing that I have done to what you are talking about is
controlling the stepper motor to a label applicator. But it does not
really matter. If you can give me examples demonstrating the failure of
a digital system to control continuous variables, then I can visualize your
problem.

I can remember in my Dynamic Systems Modeling class that we had to model
for effects in 3-D. We modeled things like heat, compression, tension,
etc. In other words, we modeled the effect of the environment on a thing.
In PCT, we would be modeling the (domino) effect of an environment on the
reference levels of a bunch of interlaced control loops. But the concept
is the same.

>the reason we need analog control (or a digital approximation of it,
>which is what we use) is that analog control systems work differently
>from truly digital ones. Digital control systems work with variables
>that have only two values. Variables in analog systems are continuously
>variable.

? I do not understand. Digital systems can work with values that have from
two to 0xFF and more values.

>Ah, that makes you less threatening.

This is a funny statement. If we ever meet I need to remember to growl and
hiss.

>Can you work in Turbo Pascal? Can
>you get, or do you have, version 7.0? Actually, we haven't been using
>objects and any version from 3.0 up should work with the programs we've
>been writing.

>Can you browse the Web? If so, you can download a lot of programs that
>will run on your PC. Or give me the word (and your snail-mail address
>again) and I'll send a couple of disks to you.

I know Pascal (or at least I used to). And I will download from the Web.

-Shannon

[From Shannon Williams (951112.before the Cowboys play)]

Bill Leach 951110.20:57 U.S. Eastern Time Zone--

>Each control loop "adjusts" its output to maintain its own perception
>matched to its own reference. For muscle related systems there will be
>some "in phase" component between individual systems at the lower levels
>however, these in phase components appear to be disturbance to the
>respective perceptions and are handled accordingly.

This is a description of what needs to be done. Not how to do it. How do
you decide what 'phase components' need to exist?

>The _tough_ calculations are NOT the control loop calculations but rather
>the calculations necessary to properly and accurately simulate the
>environment (where environment includes force generated by "actuators",
>mechanics of bones and joints, mass effects and friction).

Ok. I know that there is something major that I am missing. This is the
second time that 'simulating the environment' has been mentioned. Why do
we need to simulate the environment?

I guess that you are saying that the Little Man only exists on a computer
screen, and we are looking for equations that simulate the effect of
telling him to move his arm, or whatever. Or something. I am confused.

-Billie "I took a wrong turn somewhere" Shannon

[Avery Andrews 951114]
(Shannon Williams (951112.before the Cowboys play))

>Ok. I know that there is something major that I am missing. This is the
>second time that 'simulating the environment' has been mentioned. Why do
>we need to simulate the environment?

If we can't simulate the environment reasonably well, we can't know that
we have an agent architecture that could actually function in the real
world. Of course it would be even better to build real robots, but that
would also be beyond our means. It is definitely a true fact about
arm control, for example, that things that `look like they ought to work'
on the basis of casual inspection can perform quite horribly in fact.
In the Little Man Demo, it took Bill Powers quite a long time to figure
out how to tune the various systems to manage the dynamics of moving
arms (coriolis forces, etc.) properly, and actually as far as I know
Bill's (and Greg Williams', I suppose) story is the only
not-obviously-hopeless account of what the spinal reflexes are actually
for (there is an enormous and inconclusive literature on the spinal
reflexes; in a cruise thru some of it a few years ago I didn't encounter
anybody proposing that what they did was handle the dynamic interactions
between the arm segments).

So far so good for the 3df arm, but a 4 or 6df demo would be much better.

  Avery.Andrews@anu.edu.au

<[Bill Leach 951113.00:18 U.S. Eastern Time Zone (undoubtedly after ...)]

[Shannon Williams (951112.before the Cowboys play)]

>>Each control loop "adjusts" its output to maintain its own perception
>>matched to its own reference. For muscle related systems there ...

>This is a description of what needs to be done. Not how to do it. How
>do you decide what 'phase components' need to exist?

Actually, believe it or not, this is a description of what happens when
two or more analog control loops are connected in such a fashion. The
mutual "compensation" for cross-disturbance just happens because of
what the systems are. While some "dynamics" trimming might actually be
necessary, there is no other "linkage" required between the control loops
at any other level.

>Ok. I know that there is something major that I am missing. This is the
>second time that 'simulating the environment' has been mentioned. Why
>do we need to simulate the environment?

>I guess that you are saying that the Little Man only exists on a
>computer screen, and we are looking for equations that simulate the
>effect of telling him to move his arm, or whatever. Or something.
>I am confused.

Yes, indeedy! We have not the wherewithal to actually build a physical
model, thus we find it necessary to simulate the environment.

Yes again. Little Man provides a nifty graphics display of what is
basically a "wire frame" man that moves his head, arm, and hand. The
"simple" part of the program is the control equations that direct the
movements. The complex part is in the environmental considerations.

Turn right, turn right! Come about to the starboard side.

-bill

<[Bill Leach 951113.00:29 U.S. Eastern Time Zone]

[Shannon Williams (951112.before the Cowboys play)]

fuzzy logic

>We would not use it to approximate continuous relationships. We would
>use it to understand the world from the control loop's point-of-view.

The control loop does not "understand" the world. Only that a perception
does or does not match the reference.

>In other words, how do we know what a control loop needs to perceive?

Basically, we do THE TEST on living systems to determine what perceptions
are required for a particular control operation. While the interest in
"learning" is high here, most of us would just like to see working
examples of control systems that function assuming the learning is
already there. Once we have that... who knows, it might then be
reasonable to attempt some learning schemes though verification could
prove to be tough.

>As humans we can tell that if inputs A,B, and C to loop D are activated,
>then the robot's arm is moving too fast. The output of loop D causes
>the arm to slow down. I would call this fuzzy logic because the system
>is described/programmed according to its (behavioral?) needs. There are
>not algorithms that say "accelerate forearm for 2 seconds, coast for
>three seconds, and then decelerate for three seconds." Or whatever.

In multi-level control loops, this sort of thing appears not to be required.
Little Man does demonstrate that "simple" control loops do achieve the
sort of output observed when humans do the same tasks. It is probably
VERY important to understand that these analog controllers differ
fundamentally from most digital controllers in that they don't "generate"
output. Digital controllers usually do such things as "accelerate for 2
seconds, coast for 3". It appears that living systems use a perception
of tendon tension for some acceleration control. Also, it is likely that
when visual perception is involved it also plays a part (probably a
rather complex part).

>Everything is programmed to trigger whatever particular behavior is
>needed at any given moment.

Hopefully the impending football game is responsible, but the preceding is
something that a control system will not do. Control systems of the type
that PCT is based upon do not "trigger behaviours" based upon need or
anything else.

>? I do not understand. Digital systems can work with values that have
>from two to 0xFF and more values.

Yes, this is correct (and 0xFFFF is now more common). There are a couple of
issues here.

For many control applications, the original digital simulation of older
analog controllers was not an efficient implementation of digital
processing to effect control. The easily obtainable, reproducible values
from digital systems allowed for open-loop operation where such schemes
were all but impossible with analog systems.

A great deal of actual digital control is based upon well-defined
environments and contains a great deal of "open loop" control (that is,
the system "assumes" that if certain conditions are present then other
conditions are also true). This works well in a limited environment.

What we are trying to model is a controller that does not control
accurately, is not particularly fast, and has an output system that is
very VERY inconsistent (when compared to virtually any engineered control
system). This system is amazingly adaptable, however.

It is also virtually irrefutable that the lowest levels of this control
system are simple analog closed loop negative feedback control systems
(based upon biological research). What we want to model is this system
and we want our models to behave just as does the real thing.

-bill

[Richard Plourde 951113 1:46AM EST <gasp>]

[Avery Andrews 951114]

                    ^^?

(Shannon Williams (951112.before the Cowboys play))

>Ok. I know that there is something major that I am missing. This is the
>second time that 'simulating the environment' has been mentioned. Why do
>we need to simulate the environment?

>If we can't simulate the environment reasonably well, we can't know that
>we have an agent architecture that could actually function in the real
>world. Of course it would be even better to build real robots, but that
>would also be beyond our means. It is definitely a true fact about
>arm control, for example, that things that `look like they ought to work'
>on the basis of casual inspection can perform quite horribly in fact.
>In the Little Man Demo, it took Bill Powers quite a long time to figure
>out how to tune the various systems...

There's something that might be useful in such an attempt. A couple
of years ago I went to a Texas Instruments seminar on applications
of DSPs (Digital Signal Processors). They're pretty fast (30E6+
instructions/second), pretty cheap, self-contained little
beasts. One application demonstrated was a control system, where
the DSP, in addition to performing the rather standard
Proportional/Integral/Differential 'loop tuning', also figured out
what the parameters had to be for stable behavior. It had several
analog to digital converter inputs and a motor-control
digital-to-analog output. You could hook up your position sensors and
tachometers without concern for the polarities each signal had. It
was pretty exciting when it first turned on, but it settled down
quickly.

(Just a guess here, but perhaps with regard to PCT we could say that
the control system was controlling not only its perception of where
the 'object' was located, it was also introspectively[?] controlling
its own stability as a second simultaneous task. Is this close to
right?)

That sounds like an exercise in how to be a showoff, but as it turns
out, for some robotic applications, abilities similar to this prove
quite desirable. For example, the control-loop 'constants' for a
robotic arm are not constants at all -- have the arm pick up a
swinging basket, and what the loop needed for stability with just the
arm isn't even close to what it needs with a reactive load. The DSP
had no problems adapting. A classical servo-system with fixed
parameters boggles at such a job -- or it's hopelessly slow.

(Multiple-degree-of-freedom, geometrically nonlinear systems would
seem addressable using similar techniques, but I don't know whether
such techniques have been applied. Two or three years is a long time
in the world of DSPs.)
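A toy version of that adaptation (a hypothetical Python sketch, nothing
to do with the actual TI part): a PD loop whose damping gain was tuned
for a light load overshoots badly when the mass increases tenfold,
while a variant that keeps a crude running estimate of the mass and
re-tunes its damping for critical damping barely overshoots at all.

```python
import math

def simulate_reach(mass, kp, kd, adapt=False, dt=0.001, steps=5000, ref=1.0):
    """Point mass driven toward ref by a PD loop.

    With adapt=True the loop keeps a crude running estimate of the mass
    and re-tunes its damping gain for critical damping. All numbers are
    made up for illustration.
    """
    x, v, m_est = 0.0, 0.0, 1.0
    peak = 0.0
    for _ in range(steps):
        if adapt:
            kd = 2.0 * math.sqrt(kp * m_est)      # critical damping for m_est
        u = kp * (ref - x) - kd * v               # controller output (a force)
        a = u / mass                              # plant: f = ma
        if adapt and abs(a) > 1e-9:
            m_est += 0.02 * (abs(u / a) - m_est)  # smoothed mass estimate
        v += a * dt                               # semi-implicit Euler step
        x += v * dt
        peak = max(peak, x)
    return peak, x
```

Gains kp=100, kd=20 are critically damped for a unit mass; run the same
fixed gains against a mass of 10 and the reach overshoots by roughly a
third, while the adaptive variant stays close to the target throughout.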

I don't know how adaptive loop behavior fits into what's being talked
about here -- I'm still pretty much lurking and trying to figure out
what's going on -- but it struck me at the time that *we* had better
have some kind of adaptive control-loop structure -- if only to jump
on pogo sticks. (I wonder if people who are drunk do a better job at
carrying clumsy loads.)

--R
----Richard Plourde --rplourde@scoot.netis.com

[From Shannon Williams 951113]

I am replying to Avery, who is not even going to write what I am replying
to until tomorrow.

Avery Andrews 951114--

If we can't simulate the environment reasonably well, we can't know that
we have an agent architecture that could actually function in the real
world.

But we can pick any real-world environment: the moon, freefall,
underwater, anything.

Could we begin by picking the easiest environment to simulate, like a
vacuum in freefall? Once we can make the control loops work in this
environment, then could we add more complexities?

-Shannon

[From Shannon Williams (951113)]

Bill Leach 951113.00:18 U.S. Eastern Time Zone--

Each control loop "adjusts" its output to maintain its own perception
matched to its own reference. For muscle-related systems there ...

This is a description of what needs to be done. Not how to do it. How
do you decide what 'phase components' need to exist?

This is a description of what happens when two or more analog control
loops are connected in such a fashion.

My question relates to how you determine what control loops need to be
connected. It doesn't address how they act once they are connected. So,
how do you decide what control loops need to exist?

Little Man provides a nifty graphics display of what is basically a
"wire frame" man that moves his head, arm and hand. The "simple" part
of the program is the control equations that direct the movements. The
complex part is in the environmental considerations.

Does the little man work fine in a 'simple' environment?

Turn right, turn right! Come about to the starboard side.

Yikes! I'm turning!

Shannon "Did you see that tanker!" Williams

[From Shannon Williams (951113)]

Bill Leach 951113.00:29 U.S. Eastern Time Zone--

Fuzzy Logic -

We would
use it to understand the world from the control loop's point-of-view.

The control loop does not "understand" the world. Only that a perception
does or does not match the reference.

Let me re-phrase my sentence: We would use it to understand what
information in the world the control loop needs to perceive.

In other words, how do we know what a control loop needs to perceive?

Basically, we do THE TEST on living systems to determine what perceptions
are required for a particular control operation.

Before you can do the TEST, you must have a variable to test for. How do
you guess what variables to test for?

Everything is programmed to trigger whatever particular behavior is
needed at any given moment.

Control systems of the type that PCT is based upon do not "trigger
behaviours" based upon need or anything else.

Are you reacting to the word 'need'?

If I design a control system, then each loop in that control system will be
triggered according to the function that that loop fills. In other words,
each loop fulfills certain system needs. If you have a better way for me
to express myself, then describe it. I am not picky about the words I use.

It is also virtually irrefutable that the lowest levels of this control
system are simple analog closed loop negative feedback control systems
(based upon biological research).

You have not succeeded in simulating biological control systems by using
analogue systems. How can you be so sure that analogue systems are the
only viable systems?

-Shannon

[Peter J. Burke 11/13/95 12:52 ]

From Shannon Williams (951113)

My question relates to how you determine what control loops need to be
connected. It doesn't address how they act once they are connected. So,
how do you decide what control loops need to exist?

This is not unrelated to my earlier posts about how the system "knows"
to make its inputs, or its reference signals, what is "needed". From the
modeler's perspective, you can make it whatever you want in order to
make it work, but from the organism's perspective it is another question
entirely. It's too bad that there is not more discussion of
(re)organization. Where do these control loops and their parts come from?

Peter

···

------------------------------------------------------------------------
Peter J. Burke Phone: 509/335-3249
Sociology Fax: 509/335-6419
Washington State University
Pullman, WA 99164-4020 E-mail: burkep@unicorn.it.wsu.edu
------------------------------------------------------------------------

<[Bill Leach 951113.18:29 U.S. Eastern Time Zone]

[Shannon Williams 951113]

(Replying to Shannon's reply to something not yet written) :-)
(Of course the way Avery sometimes travels... who knows?)

But we can pick any ...

... Once we can make the control loops work ... then could we add more
complexities?

That is ok except that the sort of control loops that we use do not
really need any further testing in other environments. Control theory
defined and characterized these loops rather precisely many years ago.

-bill

<[Bill Leach 951113.18:33 U.S. Eastern Time Zone]

[Shannon Williams (951113)]

... So, how do you decide what control loops need to exist?

You start with what biology tells us about what is actually there. The
modeling done on Little Man suggests that Bill has guessed correctly but
a more complex simulation would help a lot.

Does the little man work fine in a 'simple' environment?

Yes, it does. It is quite impressive. It is VERY impressive to watch (a
means of entertaining "simple" people -- I have to admit to watching for
extended periods of time). :-)

-bill

<[Bill Leach 951113.21:12 U.S. Eastern Time Zone]

[Shannon Williams (951113)]

Let me re-phrase my sentence: We would use it to understand what
information in the world the control loop needs to perceive.

Now I have a serious problem with trying to conceive of how we would use
"fuzzy" logic to "understand what information in the world the control
loop needs to perceive".

Before you can do the TEST, you must have a variable to test for. How
do you guess what variables to test for?

You just guess. Some careful reasoning can help. I would suggest that
Bill Powers might be better at this than most people -- he has a lot more
experience at "looking the fool" to himself than most of the rest of us!

Everything is programmed to trigger whatever particular behavior is
needed at any given moment.

Control systems of the type that PCT is based upon do not "trigger
behaviours" based upon need or anything else.

Are you reacting to the word 'need'?

No, not really. The problem is that your statement seems to say that
"something happens" and that this results in the activation of a
control loop.

As I understand the theory, the control loops already exist (and how they
come about is a whole 'nuther matter). If "something happens" and an
output is observed to "counteract" that something, then an active control
system already existed AND was in operation. That is, the disturbed
perception was already "under control" but the "something" disturbed that
perception so the control system did what it always does and that is
control perception.

I think that the reason why we often "feel" as though we are "triggered"
into action is that control systems that are controlling "just fine"
rarely "come" to our conscious attention. OTOH, when a sufficiently
large or fast disturbance occurs... large enough or fast enough to result
in a sudden increase in error level, one of the possible results is for
us to consciously perceive "that something is wrong".

It is also virtually irrefutable that the lowest levels of this control
system are simple analog closed loop negative feedback control systems
(based upon biological research).

You have not succeeded in simulating biological control systems by using
analogue systems. How can you be so sure that analogue systems are the
only viable systems?

One of the strongest claims about PCT is that IT HAS had amazing success
applying closed-loop negative feedback control theory to living systems
in a quantitative fashion.

Regardless of PCT's success, that the lower-level systems are analog in
nature is not really subject to debate. The analog nature of the systems
is measurable and quantifiable. Biologists might in many cases not
understand why what they measure and observe is what it is, nor why it
actually works, but they are not in much of a position to challenge
either. What is there is there, and even if it sometimes mystifies them,
their descriptions and measurements confirm analog control.

-bill

(Shannon Williams 951113)

>Could we begin by picking the easiest environment to simulate- like in a
>vacuum in freefall? Once we can make the control loops work in this
>environment, then could we add more complexities?

Actually, freefall in a vacuum is a bit tricky to manage; friction and
damping make things easier. I've got a little program called `astro'
(on one of the PCT net sites, I think) which simulates 1d position
control in a vacuum. What's tricky is not letting your velocity get
so high relative to the goal that you can't decelerate quickly enough
to meet the goal smoothly.
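A toy 1-D version shows why (hypothetical Python, not the actual `astro'
program): a pure proportional position controller on a unit mass
oscillates around the goal forever in a frictionless vacuum, while
either environmental friction or an added velocity term lets it settle.

```python
def late_error(kp, kv, damping, dt=0.001, steps=20000, ref=1.0):
    """1-D position control of a unit mass; damping models environmental friction.

    Returns the worst position error seen over the last quarter of the run,
    so an oscillation that never dies out shows up as a large value.
    """
    x, v = 0.0, 0.0
    worst = 0.0
    for i in range(steps):
        u = kp * (ref - x) - kv * v   # kv = 0 gives pure proportional control
        a = u - damping * v           # in a vacuum, damping = 0
        v += a * dt                   # semi-implicit Euler integration
        x += v * dt
        if i > 3 * steps // 4:
            worst = max(worst, abs(ref - x))
    return worst
```

The velocity feedback term plays the role friction would otherwise play,
which is one way to read Avery's point about managing velocity relative
to the goal.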

But anyway, if you want to demonstrate PCT's relevance to a specific issue,
such as arm control, you have to address the problems that exist there,
and keep in mind what Bill Leach just pointed out, that many of the real
easy problems have already been solved.

  Avery.Andrews@anu.edu.au