AvoidV1.pas

[From Bill Powers (20000106.0944 MDT)]
Bruce Abbott (20000106) --

Hi, Bruce --
This is of general enough interest that I'm replying via CSGnet. This is
what it's for, isn't it? We have a dozen or more people following the
progress of CrowdV3, so let's use the facilities available.

Bruce A:

... it seems to me that the
_programming_ problem has to do with how we represent directions. As I turn
to face the various compass points, I don't have any sense of some value
increasing as I rotate from west to south to east, then suddenly becoming
negative and decreasing as I continue to rotate from east to north and back
to west. Yet this is what the perceptual signal is doing in Avoid3.pas,
because that's a standard way to represent angles mathematically.

I agree completely. Lurking in the background here is a giant modeling
problem. As you say, it's convenient to represent angles this way, but in
the brain, I strongly suspect that there is a more complex scheme for
representing space, including directions. In fact, this same problem
extends to output processes.

Consider the output part first. As Bizzi found, in the frog's leg control
systems there are efferents which, when stimulated, cause the leg to deviate
by an amount that depends on the signal strength (frequency) in the efferent
nerve, but always in the same direction. To change the direction of
deviation, you have to stimulate a _different nerve fiber_.
Obviously the directional control is not done in Cartesian coordinates, but
through a whole mesh of nerve fibers and muscle fibers that can produce
forces in any direction around the clock. If you stimulate a nerve fiber
that is associated with movements in, say, the 2:00 position, what you get
is excitation of all the motor neurons operating muscles that pull more or
less in that direction, and inhibition in all motor nerves operating
muscles that pull more or less in the opposite direction. If you stimulate
a "4:00 nerve" it excites and inhibits a different set of motor neurons
with a net effect in the 4:00 direction. It's sort of the inverse of
receptive fields. So directional control is achieved by changing which
output neurons are carrying signals, not just by varying the signal
strength in orthogonally oriented x and y muscles.

On the input side, something similar happens, I believe. Like you, I don't
think we perceive directions in terms of north-south and east-west,
although we have invented handy systems (at higher levels) for doing that.
I think we perceive at lower levels directly in terms of a perceptual map
in which our orientation in a model of the environment is carried by the
_location_ of signals in the map rather than by the _magnitudes_ of
signals. The magnitudes give us other information.

I think that's what all that mapping is for, the mapping that runs in several
stages from the retina to the visual cortex. A direction is indicated not by the
magnitude of a signal, but by the fact that _these_ neurons light up
instead of _those_. I sense that this is pretty much what you're thinking.

So what do we do about this? I think we have to ask ourselves why we are
doing all this modeling. One of my main goals is to establish the
feasibility of certain principles -- to show _one way_ in which behavioral
phenomena could be reproduced by a collection of control systems. In
effect, I've chosen, for practical reasons, NOT to represent a brain in
which directional control is achieved by a collection of 360 (or 360,000)
control systems organized so that each one perceives and controls in a
direction one degree away from the directions of operation of the adjacent
systems. That's just too many to fit into my computer, which can execute
only a third of a billion computations per second. What I've done is to use
2 control systems operating at right angles to each other. In principle, I
could use 3 systems oriented 120 degrees from each other (you may recall
that in the Byte articles, I did exactly that). Or I could use more systems
at smaller angles -- but once you have more than 2 or 3 systems, you've set
foot on a road that leads to a building full of supercomputers. Or a real
brain.

In a way, the 2-axis system is just a limiting case of the multi-axis
system. As the number of spatial directions being sampled increases, you
need less and less precision in adjusting signal amplitudes, because for a
given system the output is connected mainly to muscles that pull in a
single direction. Suppose there were a system for every 1 degree around the
circle. Then one could control direction with an accuracy of 1 degree just
by shutting one system off and turning on the system next to it, sort of
like a stepper motor. You could still interpolate between the 1-degree
steps by varying the proportion of the signals going to each output: if
they had equal strength, the direction of force would be about halfway
between the 1-degree increments.
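
Here is that interpolation in miniature, just to make it concrete (a sketch
only, not anything from CrowdV3; the program and variable names are invented
for the illustration). Each one-degree unit pulls toward its own angle with a
strength given by its signal, and the net direction is simply the vector sum
of the pulls:

program PopulationDirection;

var
  sig: array[0..359] of real;   { output signal of each one-degree unit }
  i: integer;
  fx, fy, net: real;

begin
  for i := 0 to 359 do
    sig[i] := 0.0;
  sig[10] := 1.0;               { the "10-degree" unit is on }
  sig[11] := 1.0;               { its neighbor is on at equal strength }

  fx := 0.0;
  fy := 0.0;
  for i := 0 to 359 do
  begin
    fx := fx + sig[i] * cos(i * pi / 180.0);
    fy := fy + sig[i] * sin(i * pi / 180.0);
  end;

  { with the net pull in the first quadrant, a plain arctangent will do }
  net := arctan(fy / fx) * 180.0 / pi;
  writeln('net direction = ', net:6:2, ' degrees');   { about 10.5 }
end.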

The fewer systems there are, the more finely one has to adjust the relative
magnitudes of output signals to maintain highly accurate directional
control. With 2 systems oriented in X and Y, all the fine adjustment has to
come from the signal strengths, because switching between systems by itself
changes the direction only in 90-degree increments. Actually, since we're
talking about one-way control systems with muscles that can pull but not
push, the minimum is three one-way control systems -- X-Y control requires 4
one-way systems if they are oriented every 90 degrees. In the Byte article, I
showed how three one-way systems oriented
at about 120-degree intervals could be used to achieve x-y control. I was
thinking of this same problem then, but put it aside as premature. I was
still trying to explain control to people at that point.
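
For what it's worth, here is the arithmetic of that three-way arrangement in
miniature (again a sketch, not the Byte listing; all the names are invented).
Because the three pull directions sum to zero as vectors, a common bias added
to all three outputs keeps them non-negative without changing the net force,
rather like co-contraction in muscles:

program ThreeWayForce;

var
  ang: array[1..3] of real;    { pull directions, 120 degrees apart }
  o: array[1..3] of real;      { one-way (non-negative) output magnitudes }
  fx, fy: real;                { the force we want }
  nx, ny: real;                { the net force actually produced }
  deg, bias: real;
  i: integer;

begin
  deg := pi / 180.0;
  ang[1] := 0.0;
  ang[2] := 120.0 * deg;
  ang[3] := 240.0 * deg;

  fx := 1.0;                   { ask for this force }
  fy := -0.5;

  { project the desired force onto each pull direction; the factor 2/3
    makes the three contributions add back up to the original vector }
  bias := 0.0;
  for i := 1 to 3 do
  begin
    o[i] := (2.0 / 3.0) * (fx * cos(ang[i]) + fy * sin(ang[i]));
    if o[i] < bias then
      bias := o[i];
  end;

  { lift all three outputs by the same amount so none is negative }
  nx := 0.0;
  ny := 0.0;
  for i := 1 to 3 do
  begin
    o[i] := o[i] - bias;
    nx := nx + o[i] * cos(ang[i]);
    ny := ny + o[i] * sin(ang[i]);
  end;

  writeln('net force = (', nx:6:3, ', ', ny:6:3, ')');   { (1.000, -0.500) }
end.

The bias is where the "pull but not push" constraint shows up: it costs some
co-contraction, but it leaves the net force untouched.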

We need to be open about the fact that our models are probably not much
like the brain's actual organization. But the similarities are, in this
case, more important than the differences. With computers, we can get from
2 or 3 control systems the same overall effect that the less-precise brain
achieves with dozens or hundreds of systems. Of course we probably miss
representing some properties that are unique to the highly multiple systems
in the brain, but we are where we are, and we have to leave something for
our children to amuse themselves with when they grow up.

Bruce continues:

However,
this _may_ not be a bad way to represent angular _deviation_ from some
direction. If my initial direction with respect to my body front is always
taken to be zero, then with respect to this reference the farthest I can get
away from "front" is "back," or 180 degrees (Pi radians). To get there I
can turn ccw (+) or cw (-). Given that a neural signal cannot go negative,
this suggests that the two rotational directions should be handled by two
coordinated control systems that produce rotational accelerations in
opposite directions.

In my CrowdV3 model, only relative direction (between the direction of
travel or "straight ahead" and some other object) is perceived, just as you
describe it. The lowest level perceives not direction but rate of change of
direction. If the system maintains a constant perceived (and presumably
actual) rate of change of direction, the outside observer will see it
turning in a circle, the radius being inversely proportional to the rate of
change of direction. When the rate of change of direction signal is
maintained at zero, the person is moving in a straight line. So it's only
in the environment that we have to keep adjusting the angle measure to keep
it between pi and -pi, for computational convenience. There is no
corresponding direction signal inside the Crowd model.
If you look carefully at the code, you will find that absolute (laboratory)
angles are used only in computing what angular deviations from straight
ahead the person will perceive.
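
For anyone following along who hasn't opened the code, the wrapping amounts
to something like this (a sketch of the idea, not the actual Fixangle
listing):

function WrapAngle(a: real): real;
{ fold any angle back into the range -pi..pi; this bookkeeping lives
  entirely on the environment side of the model }
begin
  while a > pi do
    a := a - 2.0 * pi;
  while a < -pi do
    a := a + 2.0 * pi;
  WrapAngle := a;
end;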

I'll include the rest of your post because it touches on important
subjects. I have no urgent disagreements with any of it.

We're not very good at keeping track of directions using only vestibular and
proprioceptive information, relying more on visual cues and to some extent
(especially when vision is poor) on auditory and tactile ones. For this
simulation we might assume that there is some sense of angular acceleration
and speed, which together with a sense of time would yield change in angular
position, and/or have the system make use of visual landmarks. (The latter
sounds like it could be complicated.)
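
(Just to make the dead-reckoning idea concrete: given a sensed angular
acceleration and a sense of elapsed time, the update per iteration is nothing
more than two integrations. This is a sketch with invented names, not
proposed code:

  angvel := angvel + angaccel * dt;     { acceleration to rate of turn }
  heading := heading + angvel * dt;     { rate of turn to direction }

Our poor performance with vestibular information alone is what you would
expect when angaccel is sensed imperfectly and the errors accumulate through
both integrations.)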

With respect to representing physical variables separately from the system's
internal signals, it will be easy to add the physical variables to the
baseperson object and make the appropriate changes in code. I'll make these
physical variables properties of the baseperson object because that is what
they are -- the person's direction and speed of travel, angular
acceleration, etc. Placing these properties in a separate "environment"
object would needlessly complicate the code.

In the remaining post, which you hadn't yet received when you sent this, I
was already agreeing to this.

In my revision, I intend to create a new "baseperson" object (I may rename
it) that just has these physical properties and some procedures for
initializing its position etc. The current "baseperson" object will then
become a descendant of this new base object, inheriting these physical
variables from the base object and adding the variables unique to the active
person plus the relevant control-system procedures. I avoided doing this at
first as I'm not experienced at creating useful object hierarchies, but in
retrospect this seems like a good way to break the objects down: the static
objects will have all the physical properties they need in the base object
and the active objects will have these plus the rest.
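
Something like this is how I picture the hierarchy you're describing (a
sketch only; the type names and fields are placeholders, not a proposal for
the actual declarations):

type
  controldata = record
    r, p, e, o, g: real;         { reference, perception, error, output, gain }
  end;

  physperson = object                  { physical properties only }
    x, y: real;                        { position }
    speed, heading: real;              { speed and direction of travel }
    angvel: real;                      { rate of turn }
    procedure InitPosition(px, py, pheading: real);
  end;

  activeperson = object(physperson)    { inherits the physical variables }
    Direction: controldata;            { ...and adds the control-system data }
    procedure ControlDir;              { ...and the control procedures }
  end;

The static objects would be plain physperson instances; only the active
person needs the descendant type.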

I can't pass this up:

{NOTE: "DT" IN THE THIRD LINE OF THE BODY BELOW IS NOT NEEDED. IT IS NEEDED
ONLY FOR INTEGRATIONS. AS USED HERE, IT ONLY REDUCES THE EFFECTIVE GAIN.

procedure baseperson.ControlDir;
begin
Direction.r := Fixangle(mousey * 0.005);
Direction.e := Addangle(Direction.r, -Direction.p);
Direction.o := Direction.g * Direction.e {* dt};
end;

I wondered about that when I coded it. It didn't seem to belong there, but I
wasn't sure of your intent, so I left it in. (Yes, it's in your
Avoid3.pas.) It's a relief to know that I had it analyzed correctly -- at
least I'm thinking straight here.

A goof on my part -- sorry to mislead you.
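
To spell out the distinction in that note: dt belongs in an output function
that accumulates its value from one iteration to the next, not in one that is
simply proportional to the error. In miniature (a hypothetical variation, not
the Avoid3.pas code):

  { proportional output: multiplying by dt only rescales the gain }
  Direction.o := Direction.g * Direction.e;

  { integrating output: here dt is genuinely needed, so the loop behaves
    the same no matter how fast the program iterates }
  Direction.o := Direction.o + Direction.g * Direction.e * dt;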

{ONE THING I DIDN'T EXPLAIN WAS THAT I ASSUMED A CONTROL SYSTEM FOR
FORCING A CERTAIN RATE OF TURN. THE CONTROLLED PERCEPTION WOULD BE
SOMETHING LIKE THE LATERAL VELOCITY WITH WHICH THE ENVIRONMENT WAS
SEEN WHEELING AROUND, SO THERE COULD BE A PERCEPTION REFLECTING RATE
OF TURN. THEN DIRECTION.O WOULD SET THE REFERENCE LEVEL FOR THE RATE
OF TURN CONTROL SYSTEM, AND THE NECESSARY SIDEWARD ACCELERATION WOULD
BE PRODUCED TO GENERATE THE SPECIFIED RATE OF TURN. THE COMPUTATIONS
BELOW DEDUCE WHAT THAT SIDEWARD ACCELERATION MUST BE, TO PRODUCE THE
SPECIFIED RATE OF TURN GIVEN THE CURRENT FORWARD VELOCITY.}

Yes, I understood that the actual control system for this was only implied.
By the way, if the simulation is to be accurate, perhaps it would also need
to take angular momentum into account. At present it would seem that the
object's mass is all compressed to a point at its center.

You're right about that, but the effect will be buried in that assumed
control system for producing a rate of turn. If you want to program that
control system explicitly, you would need a system that creates a torque,
with angular acceleration equal to torque divided by moment of inertia.
Vestibular sensing is very good for angular acceleration, so that loop can
be closed. Then the angular acceleration would be integrated once to
produce angular velocity, and that in turn would be directly perceived
(visually, perhaps) for comparison with the reference signal from
"direction.o."

I believe that your suggested improvements will produce a very neat model.

Best,

Bill P.