Bob Clark's model; obstacle avoidance; attractors

[From Bill Powers (931015.1015 MDT)]

Bob Clark (931014.2130) --

If I'm doing anything resembling your interpretation, it
consists of "classifying" the "perceived behaviors" to which
those "names" refer. And further classifying the "perceived
behaviors" (named) resulting from combining other "perceived
behaviors" (also named).

Assuming that what you mean by "behaviors" is "what one can see
other people doing," we are in agreement about what you're doing.

My concern in trying to define levels has been to explain what is
required in a brain in order that people can be observed doing
what we see them doing. One of the requirements, if control is
what we basically see, is that certain perceptual capacities
exist; control systems, in the final analysis, control their
perceptions. I have tried to find basic perceptual capacities
that must be present to account for everything we can be seen or
find ourselves controlling, capacities that must be present in
order for any behavior of any kind to occur. I am not very
confident of having succeeded, but I think that my proposed
hierarchy is in the right ballpark (even if we should find that
"hierarchy" is the wrong name for the overall organization).

To shift the frame of reference, suppose we are analyzing the
organization of a college student. We might decide that this
student requires Study Skills, Library Skills, Writing Skills,
and Testing Skills. A case could be made for some such taxonomy
of skills, arrangeable more or less hierarchically, that would
cover student behavior. This would resemble in form the kinds of
higher level organizations you're proposing.

What I am attempting to do, analogously, would be to find
underlying capacities that are needed for all these skills. If
one studied the skills looking for common factors, eventually one
might realize that none of these skills is possible unless the
student can read. This is a simple and obvious requirement, but
if one is looking at behavior from a different sort of viewpoint,
it may be too obvious to notice. Reading is simply buried among
the details of what happens when the student carries out any of
these skills.

In dealing with physical systems and with people in complex ways,
it is certainly necessary to have specific skills relating to
things and others relating to interpersonal interactions. But in
explaining what these different skills are, one might easily take
it for granted, unwittingly, that in either of these situations
one is able to think. If you're unable to think, you can't
analyze a complex machine, and you can't understand interpersonal
interactions either. I have tried to break "thinking" down into
basic operations such as sequences of fixed operations (sequence
level), if-then contingencies (program level), and
generalizations (principle level): such classifications define
capabilities independently of the subject-matter.

What matters most to me is defining capabilities that are
independent of the area of skill you happen to be observing. I
don't believe that the brain has specialized layers devoted to
dealing with people and other layers devoted to dealing with
other complex but nonliving systems. Those classifications are
not a matter of how the brain works, but of the information
content that is being manipulated by the brain: the products of
the brain's activities, not the means by which such products are
created. We can speak of a level of organization at which signals
represent the truth-value of propositions and the circuitry obeys
rules of logic (or other kinds), without ever saying WHAT
propositions are being handled, whether they concern reasoning
about human interactions or reasoning about what is wrong with
your car. The content or meaning simply isn't pertinent to the
way this level of organization works. It works the same way
whatever the signals happen to stand for, and in fact this level
doesn't need to know what the signals stand for; that is not its
concern. It simply performs logical or rule-driven operations on
whatever signals it is presented with, and its outputs simply
represent the outcome of those manipulations.

I sympathize fully with your emphasis on communicating with
people like city managers in language that they already
understand. However, you should realize that by using the
taxonomy you use, you're communicating by examples, not teaching
the basic organization. Unless one of your city managers happens
to get interested in how this taxonomy works, your audience will
simply not learn about the organization of the brain itself; they
will learn an organized way of thinking about activities of the
brain. I'm not saying they should.

I do not communicate with city managers. I communicate with
theoreticians and experimenters concerned with the underlying
organization of the human brain and nervous system. I rely on
people like you and Ed Ford and Dag Forssell (for example) to
translate ideas about the brain into practical terms that
exemplify, but do not explain, the principles of hierarchical
control.

At this time, it seems to me that our differences arise from
using different viewpoints. It seems to me that we also
discussed this question 30 years ago.

Yes. These differences were largely the reason that we went our
separate ways in 1960. I couldn't clarify then what the problem
was, which made for considerable frustration, but we clearly were
approaching the subject from very different viewpoints and I was
unable to adapt mine to yours.

As I recall the matter, my position was a "relative" one: the
view from the ground differs from that from the airplane. Your
position, I think, was that there could only be ONE CORRECT
VIEWPOINT.

Not quite -- I'm still aware that there are other possible
organizations than the particular hierarchy I have proposed.
However, I admit to believing that the brain operates however it
operates, regardless of our guesses about it. I do believe that
there is a Boss Reality with stable properties, and while I'm
sure that we can never do better than discovering a human version
of it, I believe we can construct models that fit our experiences
of it better and better as time goes on. I definitely do not
believe that one viewpoint is just as good as another: the whole
point of experimentation is to discover which viewpoint fits the
results the best. Of alchemy and chemistry, only one viewpoint
survived. When we have differing viewpoints, the right approach
is not just to shrug and be democratic about it; it's to ask what
experiment we could do that would rule out one viewpoint (or
perhaps both!). So yes, I am (and have always been) impatient
about seeing ideas as "perspectives" on something having to do
with our shared world. The existence of alternative and
INCOMPATIBLE "perspectives" on the properties of nature hoists a
signal-flag that should call for experimenters to get busy.

My approach is primarily that of the User, who needs to have a
set of systems adaptable to his changing circumstances. That
is, I am examining the Output Hierarchy. I begin with systems
that act directly on the environment. I proceed by adding
higher level systems that can act by selecting and activating
("controlling") the lower systems.

But you know that simply producing a certain Output is not a way
to produce a reliable consequence in the world. Output must be
freely adjustable when disturbances tend to alter the outcome, as
they always do. Focusing on Output encourages the old way of
thinking, which assumes that if you go through all the right
motions, the right outcome will ensue. Higher systems must be
able to VARY the reference signals they send to lower systems in
any way required at any moment to keep the higher perception in
its momentarily intended state. And that can be done successfully
only through continual comparison of the perception with its
reference state.

A higher system can't simply "activate" a lower control system
and succeed in its control task. It must continually ADJUST the
reference level for that control system even to maintain the
higher perception in one state.
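The difference between "activating" a lower system and continually adjusting its reference can be sketched as two nested loops (a minimal sketch; all gains, values, and variable names are hypothetical, not taken from any actual model):

```python
# Two-level control sketch: the higher loop does not set the lower
# loop's reference once and walk away; it VARIES that reference on
# every step to keep its own perception near its own reference,
# which is what lets control survive a disturbance.

def simulate(steps=200, dt=0.05):
    higher_ref = 10.0   # state the higher system wants to perceive
    lower_ref = 0.0     # reference signal sent down to the lower loop
    env = 0.0           # environmental variable the lower loop acts on
    for t in range(steps):
        disturbance = 3.0 if t > steps // 2 else 0.0  # push appearing midway
        # Lower loop: act on the environment to reduce its own error
        env += (2.0 * (lower_ref - env) + disturbance) * dt
        # Higher loop: adjust the lower reference to reduce the higher error
        lower_ref += (higher_ref - env) * dt
    return env

print(simulate())
```

Despite the disturbance, the environmental variable ends up near the higher reference, because the higher loop keeps shifting the lower loop's reference to whatever value the moment requires. Freeze `lower_ref` instead, and the disturbance produces a permanent offset.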

I continue with additional systems operating similarly, but in
terms of more "complex" variables. By "complex" I mean a
larger number of identifiable components, interconnected in a
larger number of ways.

There is no need for a higher control system to be more complex
than lower ones or to contain more components. In fact I should
think that the general rule would be the opposite: simple control
systems of higher level act by organizing many control systems of
lower levels, with complexity and number of components enormously
increasing as you go down the levels. Higher systems are not
necessarily more "complex." They simply control variables of a
different type. If each level had to include all the capabilities
of the levels below it, then what you say might well be true, but
in my model of the hierarchy the levels are specialized and
perform only the functions typical of that level. A logic circuit
can have an elementary structure and organization, yet its
operation can result in enormously complex behavior by large
numbers of lower-level systems. The logic level does not,
however, have to know how to carry out its conclusions, or what
its inputs mean in lower-level terms.

This is the way I am interpreting the "Hierarchical Principle."

Then you had better say "a" hierarchical principle, because it is
not the same one I use.

Avery Andrews (931015.1639) --

Suppose that there was a `Proximity Boost Factor', whereby the
contribution of an object to perceived proximity (for CROWD
person obstacle avoidance) would be boosted when the object is
(a) in the direction of travel (b) intervening between the
CROWD person and the goal.

These are already built into CROWD, although in analog, not
either-or, fashion.

The proximity factor for a given object is computed as a constant
times the inverse square of the distance to that object, times
the cosine of twice the angle between the direction of travel and
the direction to that object (with negative cosines clamped at
zero and the angle of acceptance limited to 180 degrees on each
side). This simulates a visual field with a sensitivity pattern
like that of a directional microphone. This fulfils your
condition (a) without its being explicitly stated as a rule.
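The computation described above can be sketched as follows (a sketch only: the constant `k` and the 90-degree acceptance cutoff on each side are assumptions for illustration, not values read from the CROWD source):

```python
import math

# Proximity contribution of one object, as described: a constant times
# the inverse square of distance, times cos(2 * angle) with negative
# cosines clamped at zero. The rear lobe of cos(2a) is excluded here by
# an assumed acceptance cone, so only objects ahead are sensed.

def proximity(distance, angle_deg, k=1.0):
    """angle_deg: angle between direction of travel and direction to object."""
    if distance <= 0:
        raise ValueError("distance must be positive")
    if abs(angle_deg) > 90.0:      # assumed acceptance cone for this sketch
        return 0.0
    a = math.radians(angle_deg)
    directional = max(0.0, math.cos(2 * a))  # clamp negative cosines at zero
    return k * directional / distance ** 2

print(proximity(2.0, 0.0))    # dead ahead: full sensitivity
print(proximity(2.0, 60.0))   # off to the side: clamped to zero
```

The cosine weighting gives the directional-microphone sensitivity pattern: maximum straight ahead, falling to zero at 45 degrees off the travel direction.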

In the absence of nearby obstacles, the person heads directly
toward the goal, so straight ahead is normally the direction to
the goal. The greatest sensitivity to proximity is also in the
straight-ahead direction, so there is the greatest sensitivity to
obstacles intervening between the person and the goal. This
satisfies your condition (b), also without the need for an
explicit rule.

This ought to (he says, not having modelled it) lead to the
CROWD person turning off a course where an obstacle intervenes
between the person and the goal, and is sufficiently large (we
don't change our path on basis of a single tree 50 yds away, or
a house 399 yds away).

This is in fact how the CROWD person behaves. Proximity is an
inverse-square function: an obstacle 10 times as far away is
perceived as having 1% of the proximity. The half-max-proximity
distance is set by scaling, and for obstacles or other active
people it is roughly 5 times the diameter of the person (a very
steep function, plotted on the screen in the lower left corner).
A less steep proximity function is used for seeking the goal.

Also, proximity is cumulative over obstacles to one side. Thus a
group of 10 closely-spaced obstacles contributes 10 times the
proximity of an isolated obstacle at the same distance and
direction. An active person will begin turning to avoid a group
sooner than for an isolated obstacle.

This assumes some sophisticated perceptual functions, as in fact
do standard CROWD people.

The perceptual functions themselves are simple: they just add up
total sensed proximity. The model of physical properties is
somewhat complex, but it represents the fact that the physical
means by which we sense objects generally falls off as the
inverse square of the distance (sound intensity, apparent visual
solid angle, light flux, concentration of diffusing molecules,
etc.). That is not a property of the active system. The
dependence of sensitivity on direction is a property of the
receiving apparatus, and is only modestly complex.

It takes a lot longer to describe the side-effects of these
properties than it does for the active system to employ them in
behavior. The active system doesn't have to talk about them or
apply logical rules to make them effective.

For social situations, an independent parameter might be how
dense a crowd has to be before it is perceived as an obstacle
by this system, although I'm really thinking about things like
walls that are continuous, not groups that might be walked
through.

This, too, is already part of the CROWD program. No new parameter
is needed. There is nothing that specifically relates density of
the crowd as such to a threshold for perceiving an obstacle, but
adjusting the gain on the obstacle-avoidance control systems has
the side-effect of determining how large a group must be before it
is treated as an impassable obstacle. If you crank the
Proximity Avoidance gain up high enough, the person will escape
from the whole crowd and get to the goal by going all the way
around everybody (if the goal is in the open). If you turn it
low, the person will just smash right through everyone and go
straight to the goal.
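The gain side-effect can be sketched as a steering computation (purely illustrative: the combination rule, names, and gain values are assumptions, not the CROWD implementation):

```python
# Sketch of the gain trade-off: the turn rate combines a goal-seeking
# term with an avoidance term weighted by the Proximity Avoidance gain.
# Low gain: goal seeking wins and the person plows straight through.
# High gain: avoidance dominates and the person turns away.

def steering(goal_angle_error, obstacle_proximity, avoid_gain):
    """Net turn rate: positive turns toward the goal, negative away."""
    goal_term = 1.0 * goal_angle_error            # turn toward the goal
    avoid_term = avoid_gain * obstacle_proximity  # turn away from obstacles
    return goal_term - avoid_term

# Identical situation, three different avoidance gains:
for gain in (0.1, 5.0, 50.0):
    print(gain, steering(goal_angle_error=0.2,
                         obstacle_proximity=0.5,
                         avoid_gain=gain))
```

No explicit "impassable obstacle" threshold appears anywhere; the threshold behavior emerges from where the two terms happen to balance, which is the point being made about side-effects of gain.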

You can build a wall by designating a group of active people with
all their parameters set to zero and giving each one the
appropriate starting position. These people will be reassigned
the same positions on every run (only inactive people are placed
randomly on each new run and serve as random obstacles). They
will not move because their control systems have zero gain. An
active person arriving at a wall will turn one way or the other
and travel to an end that can be circumnavigated. The turning
will start sooner than it would for an isolated obstacle.

I trust, by the way, that you have a working copy of CROWD, so
you can verify all these statements.

Incidentally, you don't need a mouse to run CROWD

True. Did I imply that you did? Did I (horrors) leave in the
piece of code that tests for an installed mouse? I guess you do
have a copy.

Oded Maler (930915) --

The term "attractor" is a metaphor. When you plot the phase
diagram of a modestly-chaotic system you get orbits of the
plotted point that remain close to some mean orbit. The effect is
AS IF the moving point were a material object, and AS IF the mean
orbit were some physical thing that had an attractive influence
on the moving point. Alternatively, you could say that the center
of the orbit is at the bottom of a "basin of attraction" and that
the moving point is a material object like a marble in a bowl
being drawn by gravity toward the center (but being held away
from the center by the centrifugal force due to its velocity at
right angles to the direction of "attraction").

A physical analysis of the system whose behavior is presented in
such a phase plot would not reveal any attractive forces or
influences (at least not of the kind imagined). The only
attractor exists in the mind of the beholder, who is free-
associating on a certain arbitrary way of plotting the behavior
of a physical system. The behavior of the moving point reminds
the observer of the behavior of a mass moving under the influence
of attractive forces (themselves being equally a figment of the
imagination). So this is just a useful but poetic way of alluding
to the visual appearances in certain plots; it says nothing about
what causes this sort of behavior.
Best to all,

Bill P.