Growing Artificial Societies without PCT

[From Bruce Gregory (971103.1120 EST)]

Gary Cziko 971103.1557 GMT

What I find a bit puzzling is that the book makes no mention of (or explicit
use of) cybernetics, feedback control, perceptual control, reference
levels, or any of the other stuff that PCT modelers know, love and use.

So how is it that Epstein and Axtell are able to get all kinds of neat
things to happen, like skewed distributions of wealth and population
migrations? I could see someone using this research as an argument that
PCT isn't needed to model individuals interacting in societies.

I have my suspicions about how Epstein and Axtell had to "cheat" to avoid
using PCT, but I'd like some others to look at this work and confirm my
suspicions.

I believe that all the PCT machinery is implicit in the
goal-oriented nature of the agents. I too would be interested in
the reaction of others.

Bruce

[from Gary Cziko 971103.1659 GMT]

Bruce Gregory (971103.1120 EST) said:

I believe that all the PCT machinery is implicit in the
goal-oriented nature of the agents. I too would be interested in
the reaction of others.

But how can one make goal-oriented agents without explicit working control
systems? I think it's by using a very special (disturbance-free)
environment in which what the agents want is instantly achieved (or at
least on the next iteration) without having to actually act on the
environment.

I just discovered there are QuickTime animations of some of their
simulations on the Web. They can be found via
http://www.brookings.org/sugarscape/ under "Movies" (the animation on this
opening page is pretty neat, too).

--Gary

[from Gary Cziko 971103.1557 GMT]

Back in June Bruce Gregory (970630.1420 EDT) said:

I'm reading a book that seems to have possibilities when it
comes to modeling societies composed of intentional agents:
_Growing Artificial Societies: Social Science from the Bottom
Up_ by Joshua Epstein and Robert Axtell, (Brookings Institution
Press, MIT Press, 1996). Modelers might want to take a look at
it. Non-modelers will find it easy to read.

I've just discovered this book myself and find it quite interesting.
Sociological PCT modelers like Kent McClelland, Clark McPhail and Chuck
Tucker should find it of particular interest (see
http://www.brookings.org/pub/books/ARTIFSOC.HTM for more info).

What I find a bit puzzling is that the book makes no mention of (or explicit
use of) cybernetics, feedback control, perceptual control, reference
levels, or any of the other stuff that PCT modelers know, love and use.

So how is it that Epstein and Axtell are able to get all kinds of neat
things to happen, like skewed distributions of wealth and population
migrations? I could see someone using this research as an argument that
PCT isn't needed to model individuals interacting in societies.

I have my suspicions about how Epstein and Axtell had to "cheat" to avoid
using PCT, but I'd like some others to look at this work and confirm my
suspicions.

--Gary

P.S. A CD-ROM version of this work is available with full text and
programs that run on both Macintosh and Windows platforms. See the URL
above for more info.

[From Bruce Gregory (971103.1220 EST)]

Gary Cziko 971103.1659 GMT

Bruce Gregory (971103.1120 EST) said:

>I believe that all the PCT machinery is implicit in the
>goal-oriented nature of the agents. I too would be interested in
>the reaction of others.

But how can one make goal-oriented agents without explicit working control
systems? I think it's by using a very special (disturbance-free)
environment in which what the agents want is instantly achieved (or at
least on the next iteration) without having to actually act on the
environment.

Yes, I think that's right. Using PCT you could build a much more
realistic model. The results might not be all that different, at
least in the simplest cases. When disturbances are important, my
guess is that the limitations of the simpler model would soon
become apparent.

Bruce
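
A minimal Python sketch of the contrast Gary and Bruce are drawing (toy rules and numbers, not Epstein and Axtell's code) puts the two kinds of agent side by side: one whose wants are simply granted on the next iteration, and one that has to keep acting against disturbances to hold a perception near its reference.

import random

REFERENCE = 10.0   # desired sugar level, in hypothetical units

def wish_fulfillment_agent(steps=5):
    """'Disturbance-free' agent: whatever it wants simply happens."""
    sugar = 0.0
    for _ in range(steps):
        sugar = REFERENCE                         # the want is achieved on the next iteration
        print(f"wish agent:    sugar = {sugar:5.2f}")

def control_agent(steps=10, gain=0.5):
    """Control agent: acts on an environment that also pushes back."""
    sugar = 0.0
    for _ in range(steps):
        disturbance = random.uniform(-2.0, 2.0)   # metabolism, theft, bad harvests
        error = REFERENCE - sugar                 # reference minus perceived sugar
        sugar += gain * error + disturbance       # action and disturbance both act on it
        print(f"control agent: sugar = {sugar:5.2f}")

if __name__ == "__main__":
    wish_fulfillment_agent()
    control_agent()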

[From Rick Marken (971103.1300)]

Gary Cziko (971103.1659 GMT) -

But how can one make goal-oriented agents without explicit working
control systems? I think it's by using a very special (disturbance
free) environment

That's one possibility. The other is that the agents actually
_are_ control systems. Based on the very sketchy description of
the agents at http://www.brookings.org/pub/books/ARTIFSOC.HTM it
seems very possible that Epstein and Axtell have built control
systems that are controlling at least one perceptual variable: amount
of sugar consumed. Epstein and Axtell seem to view (or describe)
their systems as "rule-driven" cognitive systems, but this
doesn't mean that they _are_ cognitive systems. We have run
into this problem several times before. The "flocking" program
is one example; the author of that program described the "boids"
(the bird-like agents in the program) as information processors;
in fact, they were control systems that controlled a perception
like "average distance to all other boids"; the control loop was
stabilized because "boid" movement involved an integration (slowing
factor).

The S-R robots are another example. These were software agents
that fired left and right "rocket" motors (the response, R) in
proportion to the disparity in the intensity of light at two
sensor "eyes" (the stimulus, S). The robots were described as S-R
devices but they operated in a loop (the rocket output, R, affected
what was happening at the sensor eyes, S) and the relationship
between R and S again contained an integral so that the loop was
stabilized. The S-R robots were actually control systems,
controlling for having the perception of the difference in
intensity at the two eyes be equal to a fixed reference of zero.

Bruce Abbott's reinforcement model of E. coli navigation is still
another example. Bruce imagined that his model was controlled
by contingencies. In fact, the contingencies _in the model_
allowed E. coli to control input perception of nutrient
concentration relative to a fixed reference.

It's pretty easy to build control models without knowing it. If
you build an agent from an S-R perspective (and all the above
models were built from an S-R perspective) the agent will control
the S variable as long as you set up the S-R relationship so that
there is negative feedback in the loop (which includes the R-S
relationship) and so that there is sufficient time damping relative
to system gain so that the system is dynamically stable. The people
building the "boids", S-R robots and "reinforced" E. coli just
fiddled around with the equations until the agent worked (controlled
its position relative to the other boids, moved to the source
of the light or moved to the source of the nutrient). These agents
were real control systems, too. They could not only produce
the goal result from the observer's perspective (the goal
perception from the agent's perspective), they could do so in
the presence of continuously varying disturbances.
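
Rick's recipe can be sketched in a few lines of Python (the constants and names are invented for illustration; this is not code from the boids, robot, or E. coli programs): an "S-R" rule whose output is integrated into a loop closed through the environment ends up controlling S against a continuously varying disturbance, and loses control when the loop gain is too high for the damping.

import math

def run(gain, slowing, steps=60):
    """Loop: S = O + disturbance; O accumulates slowing * gain * (reference - S)."""
    reference = 0.0
    s = 0.0   # the "S" variable the agent senses
    o = 0.0   # the "R" output, integrated over time (the slowing factor)
    for t in range(steps):
        disturbance = 2.0 * math.sin(t / 15.0)   # continuously varying push on S
        s = o + disturbance                      # environment: R affects S
        o += slowing * gain * (reference - s)    # negative feedback, damped by "slowing"
    return abs(reference - s)                    # how far S ends up from the reference

if __name__ == "__main__":
    print("damping adequate for the gain:", run(gain=5.0, slowing=0.1))    # small residual error
    print("gain too high for the damping:", run(gain=25.0, slowing=1.0))   # error grows without bound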

If the developers of these systems had understood control theory,
they would have learned a lot more about what their invented
agents were actually doing. In particular, they would have known
that the behavior of their agents was organized around the control
of particular perceptual variables. They would also have seen that
determining the perception(s) an agent controls is one of the most
important considerations in modeling (and understanding) purposive
agents.

Best

Rick

--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

Pardon me for jumping in. But since your mailing list is indexed by
reference.com, I assume that you don't mind outsiders reading and
contributing to your discussions. You mentioned one of the keywords my
search bots are looking for. ("Say the secret woid and win a prize!")

Richard Marken wrote:

...We have run into this problem several times before. The
"flocking" program is one example; the author of that program
described the "boids" (the bird-like agents in the program) as
information processors; in fact, they were control systems that
controlled a perception like "average distance to all other boids";
the control loop was stabilized because "boid" movement involved an
integration (slowing factor)...

I can assure you that the author of boids is quite comfortable with
describing his boid-brains as "control systems" (in fact he usually
calls them "steering controllers") and has no problem with the
perspective that the goal of the control system is to place certain
perceptual variables into their preferred ranges. This is not the way
he chooses to think about the design requirements of the controllers,
but hey, it's a free country.

When designing a bridge, the important thing is that the model used by
the engineer is sufficiently robust so that the bridge supports the
specified weight without collapsing. If two different engineers use
two different mental models, but each produces a suitable bridge, it's
hard to fault either on the basis of their thought processes.

...If the developers of these systems had understood control theory,
they would have learned a lot more about what their invented
agents were actually doing. In particular, they would have known
that the behavior of their agents was organized around the control
of particular perceptual variables. They would also have seen that
determining the perception(s) an agent controls is one of the most
important considerations in modeling (and understanding) purposive
agents.

In the end, the proof is in the pudding, or the purposive agent. What
really matters is the quality of the behavior that is obtained, and
not the philosophy of design that was used to create it. This is
what Turing said about intelligence in The Imitation Game. It's what
is behind "I may not know anything about art, but I know what I like."

For anyone who is interested, there is information about boids and
some related "steering behaviors" on my Web pages.

(If you want to respond to this message you should probably CC me on
the message to make sure I see it.)

[From Rick Marken (971105.0800)]

Hi Craig --

I'm kind of busy today, but I might have time to reply to you
later. I hope Bill Powers replies to you first, though. He's
built several models of behavior that illustrate the engineering
advantages (for roboticists) of understanding behavior as the
control of perception.

Best

Rick

--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

[From Bill Powers (971105.0936 MDT)]

Craig Reynolds (971104) --

Pardon me for jumping in. But since your mailing list is indexed by
reference.com, I assume that you don't mind outsiders reading and
contributing to your discussions.

We welcome it. Join the crowd. Tell us a bit about yourself so we know
where you're from, and coming from.

You mentioned one of the keywords my
search bots are looking for. ("Say the secret woid and win a prize!")

Richard Marken wrote:

...We have run into this problem several times before. The
"flocking" program is one example; the author of that program
described the "boids" (the bird-like agents in the program) as
information processors; in fact, they were control systems that
controlled a perception like "average distance to all other boids";
the control loop was stabilized because "boid" movement involved an
integration (slowing factor)...

I can assure you that the author of boids is quite comfortable with
describing his boid-brains as "control systems" (in fact he usually
calls them "steering controllers") and has no problem with the
perspective that the goal of the control system is to place certain
perceptual variables into their preferred ranges.

Excellent. A lot of people who write models like this don't realize that
they have to be talking about perceptions -- the "distance between birds"
can't affect the way the birds fly unless it's perceived by the birds. Our
approach is generally (in terms of this example) to say that there is a
reference-distance set inside the birds, and that the direction of flying
is based on the difference between the perceived variable (which is the
controlled variable) and the reference setting. It's not just a matter of
upper and lower limits; there's a target value. Hard to tell the difference
between a target value and limit values sometimes, of course.

This is not the way
he chooses to think about the design requirements of the controllers,
but hey, it's a free country.

If the basic design is the same, it doesn't matter to me how it's
described. I have a particular way of setting up the analysis that seems to
make the relationships involved clear and easy to teach, as well as keeping
my own feeble thinking straight. It's not always easy to think in closed
loops -- the relationships can get very nonintuitive. I'd be interested in
knowing how the author of boids sees the process (haven't read anything
about it). How about a description?

When designing a bridge, the important thing is that the model used by
the engineer is sufficiently robust so that the bridge supports the
specified weight without collapsing. If two different engineers use
two different mental models, but each produces a suitable bridge, it's
hard to fault either on the basis of their thought processes.

When it comes to _simulating_ the physical process, there's a lot less
leeway. Every part of the simulation has to correspond to some aspect of
the physical process, as near as possible. Your mental model gets laid out
in public when you do that. You have to show how you think everything
works. It's not enough that the bridge holds up under the load -- you have
to show that it holds up because the stresses and strains are right, not
because there's an ogre with a strong back under it. In simulations, "then
a miracle occurs" doesn't work.

In the end, the proof is in the pudding, or the purposive agent. What
really matters is the quality of the behavior that is obtained, and
not the philosophy of design that was used to create it.

Depends somewhat on how you define "behavior." If you mean the actions of
the system, I'd disagree. What counts in modeling organisms is that the
_consequences_ of their actions be correct. Organisms vary their actions
all over the place in order to create repeatable consequences. It takes a
feedback control model to explain how they can do that. Birds don't control
their flapping; they control variables that are affected by the flapping
behavior. The birds vary their flapping as required by gusts of wind and
the positions of the other birds (or bugs).

This is
what Turing said about intelligence in The Imitation Game. It's what
is behind "I may not know anything about art, but I know what I like."

I think Turing was wrong if he thought that the external appearance of
behavior is all that counts. It isn't. I could detect a non-control-system
generated behavior in a couple of minutes even if it seemed identical to a
control-system generated behavior. It wouldn't correct disturbances.
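
A minimal sketch of the disturbance test Bill is describing (the agents and numbers here are hypothetical, not the code behind Rick's demos): feed both agents the same stream of pushes and see whether the pushes are opposed or simply accumulate in the variable.

import random

def playback_agent(_perception):
    """Ignores what it senses and just replays a fixed output."""
    return 0.3

def control_agent(perception, reference=0.0, gain=2.0):
    """Acts to oppose any deviation of its perception from its reference."""
    return gain * (reference - perception)

def final_deviation(agent, steps=300):
    """Drive the variable with the agent's output plus an identical stream of pushes."""
    random.seed(0)                                    # same disturbances for both agents
    value = 0.0
    for _ in range(steps):
        disturbance = random.uniform(-1.0, 1.0)
        value += 0.2 * agent(value) + disturbance     # output and push both act on the variable
    return abs(value)

if __name__ == "__main__":
    print("playback agent, final deviation:", round(final_deviation(playback_agent), 2))
    print("control agent,  final deviation:", round(final_deviation(control_agent), 2))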

For anyone who is interested, there is information about boids and
some related "steering behaviors" on my Web pages.

I'll look it up.

Best,

Bill Powers

[From Rick Marken (971105.1400)]

Craig Reynolds (971104) --

This is what Turing said about intelligence in The Imitation
Game. It's what is behind "I may not know anything about art,
but I know what I like."

Bill Powers (971105.0936 MDT) --

I think Turing was wrong if he thought that the external appearance
of behavior is all that counts. It isn't. I could detect a non-
control-system generated behavior in a couple of minutes even if
it seemed identical to a control-system generated behavior. It
wouldn't correct disturbances.

For Craig's sake (and for the sake of anyone else who might be
interested) my "Detection of Purpose" demo illustrates exactly
this point. The demo is at:

http://home.earthlink.net/~rmarken/ControlDemo/FindMind.html

The "Test for The Controlled Variable" demo:

http://home.earthlink.net/~rmarken/ControlDemo/ThreeTrack.html

illustrates the same point from the perspective of the behaving
system (the person doing the demo).

Best

Rick

--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

Warning: long rambling message follows. I'm trying to catch up with
several messages from Bill Powers and Rick Marken:

Bill Powers wrote:

[From Bill Powers (971105.0936 MDT)]
Craig Reynolds (971104) --
>...I can assure you that the author of boids is quite comfortable
>with describing his boid-brains as "control systems" (in fact he
>usually calls them "steering controllers") and has no problem with
>the perspective that the goal of the control system is to place
>certain perceptual variables into their preferred ranges.

Excellent. A lot of people who write models like this don't realize
that they have to be talking about perceptions -- the "distance
between birds" can't affect the way the birds fly unless it's
perceived by the birds. Our approach is generally (in terms of this
example) to say that there is a reference-distance set inside the
birds, and that the direction of flying is based on the difference
between the perceived variable (which is the controlled variable)
and the reference setting. It's not just a matter of upper and lower
limits; there's a target value. Hard to tell the difference between
a target value and limit values sometimes, of course.

>This is not the way he chooses to think about the design
>requirements of the controllers, but hey, it's a free country.

If the basic design is the same, it doesn't matter to me how it's
described. I have a particular way of setting up the analysis that
seems to make the relationships involved clear and easy to teach, as
well as keeping my own feeble thinking straight. It's not always
easy to think in closed loops -- the relationships can get very
nonintuitive. I'd be interested in knowing how the author of boids
sees the process (haven't read anything about it). How about a
description?

The general approach is to measure properties of the boid's (agent's)
local environment (eg, "that flockmate to my left is a little too
close for comfort"), to transform these into behavioral desires ("I'd
prefer to be a bit to the right"), and then to convert these into
steering forces (this is trivial in the original boid model since they
steer by simple application of force in arbitrary directions). At
this level I think it's easy to make analogies between my approach and
the PCT approach. However, I think my behaviors have more of the flavor
of fuzzy-logic, stimulus-response systems based on 3-D vector values.
The connection between perceptual variables and controlled variables
(if those are the right terms) is rather indirect.

I don't ever compare the "distance between birds" to a target value.
Instead there is, for example, a desire (force) leading to separation
which varies with (the inverse of) the distance between two boids.
There is no "target inter-boid distance" parameter in my model.
Instead this value emerges from the interaction between Separation and
Cohesion, two of the component behaviors of the boids model.
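
A one-dimensional sketch of this point (toy constants of the sketch's own, not Craig Reynolds' boid code): Separation pushes apart with a strength that varies inversely with distance, Cohesion pulls together with a constant strength, and a preferred spacing emerges from their interaction even though no target distance appears anywhere in the rules.

SEPARATION_GAIN = 4.0   # strength of the inverse-distance "move apart" urge
COHESION_GAIN = 1.0     # strength of the constant "move together" urge
DT = 0.1

def steering_force(distance):
    """Net 1-D force between two boids (positive pushes them apart)."""
    separation = SEPARATION_GAIN / distance   # stronger the closer they are
    cohesion = -COHESION_GAIN                 # steady pull toward the flockmate
    return separation + cohesion

def settle(distance, steps=2000):
    """Let the spacing evolve under the two urges until it stops changing."""
    velocity = 0.0
    for _ in range(steps):
        velocity = 0.8 * velocity + DT * steering_force(distance)   # damped response
        distance += DT * velocity
    return distance

if __name__ == "__main__":
    for start in (0.5, 2.0, 10.0):
        print(f"start at {start:4.1f}  ->  settles near {settle(start):.2f}")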

Surely there are many ways to formulate the metrics and parameters
used in a model like this which would produce similar, if not identical,
behavior as observed from the outside. In fact several of the applets
written by others that I list on my boids page use somewhat different
rules.

(BTW, for the hard core, there is a draft paper available from me on
the "steering behaviors" described on my Web pages.)

>When designing a bridge, the important thing is that the model used
>by the engineer is sufficiently robust so that the bridge supports
>the specified weight without collapsing. If two different engineers
>use two different mental models, but each produces a suitable
>bridge, it's hard to fault either on the basis of their thought
>processes.

When it comes to _simulating_ the physical process, there's a lot
less leeway. Every part of the simulation has to correspond to some
aspect of the physical process, as near as possible. Your mental
model gets laid out in public when you do that. You have to show how
you think everything works. It's not enough that the bridge holds up
under the load -- you have to show that it holds up because the
stresses and strains are right, not because there's an ogre with a
strong back under it. In simulations, "then a miracle occurs"
doesn't work.

Whether doing simulations or building real bridges (and ignoring, for
the sake of this analogy, modern realities like obtaining permits and
conforming to acceptable practice) the only real question is "does the
bridge support the load?" If the designer's thought process involves
a theory based on a helpful ogre, and yet the bridge still stands,
then his theory worked. My point is that if you build a good bridge,
it doesn't matter that you built it for the "wrong reason". And of
course, if the bridge collapses, you need to rethink your theory.

>In the end, the proof is in the pudding, or the purposive agent.
>What really matters is the quality of the behavior that is
>obtained, and not the philosophy of design that was used to
>create it.

Depends somewhat on how you define "behavior." If you mean the
actions of the system, I'd disagree. What counts in modeling
organisms is that the _consequences_ of their actions be
correct.

Right, the consequences. The externally observable behavior. We
don't care that a car's driver appears to move the steering wheel
correctly; we care that the car stays on the road.

Organisms vary their actions all over the place in order to create
repeatable consequences. It takes a feedback control model to
explain how they can do that. Birds don't control their flapping;
they control variables that are affected by the flapping
behavior. The birds vary their flapping as required by gusts of wind
and the positions of the other birds (or bugs).

>This is what Turing said about intelligence in The Imitation Game.
>It's what is behind "I may not know anything about art, but I know
>what I like."

I think Turing was wrong if he thought that the external appearance
of behavior is all that counts. It isn't.

Well, I am reluctant to speak for Mr. Turing -- but he hasn't been
answering his email for quite a long time! The Imitation Game was his
alternative to the impossible task of defining intelligence. He
suggested that the way to decide if a machine possessed artificial
intelligence was to have a judge converse (via typed text) with the
machine and a human. If the judge could not determine which was which,
the machine passed the test.

I take this to mean that intelligence is independent of whether it is
"implemented" in neurons or silicon, and certainly independent of the
"design philosophy" used to create the implementation.

I could detect a non-control-system generated behavior in a couple
of minutes even if it seemed identical to a control-system generated
behavior. It wouldn't correct disturbances.

But if you *can* detect a difference between two sets of behaviors,
that implies to me that the behaviors must differ in their "external
appearance". In that case you are right, the non-reactive one is
failing to do its job correctly. But if both react correctly, then by
an operational definition they are the "same" behavior, even if they
are implemented quite differently.

Richard Marken wrote:

[From Rick Marken (971105.1400)]

For Craig's sake (and for the sake of anyone else who might be
interested) my "Detection of Purpose" demo illustrates exactly this
point. The demo is at:

http://home.earthlink.net/~rmarken/ControlDemo/FindMind.html

The "Test for The Controlled Variable" demo:

http://home.earthlink.net/~rmarken/ControlDemo/ThreeTrack.html

illustrates the same point from the perspective of the behaving
system (the person doing the demo).

I found the Test for The Controlled Variable demo very interesting,
almost spooky! The Detection of Purpose demo seemed to "tip its hand"
since before I interacted with it I could see that one of the squares
was moving along a diagonal line while the other two were tracing more
complicated trajectories.

However, I may be missing the Big Picture; I'm not sure what lesson to
take away from these demos. I will look at the rest of Rick's site as
time permits.

Bill Powers wrote:

[From Bill Powers (971106.0917 MST)]

I've looked at your Boids demos and am much impressed. Your ideas
seem to have become much more widely known than mine! Perhaps there
was something in the air back in the 1980s, because I came up with
something very similar -- a demonstration of crowd behavior based on
individual control systems. I'm attaching the three files we
distribute over the net (the usual FTP link went defunct and we're
still working out how to restore it). The main .EXE file is
self-extracting (PCs only), and expands into the runnable program,
some setup files (you can create your own, too) and a writeup. The
other files just explain how to do this and automate the process if
you don't know how to do it yourself. From seeing your programs I
deduce that there's not much you don't know how to do with a
computer.

Well, thanks but one thing I steadfastly avoided learning is how to
use PCs, so I can't conveniently run these demos now. But in fact I
did once see the Gather/Crowd program run. I first heard of PCT/CSG
in January of 1996 when I got email from Rick Marken. Somehow this
led to a call from Dag Forssell, who happened to be visiting near my
home. We got together and compared notes and he gave me a PC disk of
demos, and later a coworker ran these for me on a PC at work. I'm
afraid my memory of it is pretty sketchy now almost two years later.

Regarding your comment about there being "something in the air", this
is a phenomenon that has been noted many times in the history of
science. Conditions become "ripe" for an idea to come forth and it is
not uncommon for several independent researchers to make similar
advances at about the same time. I know of two groups in the computer
graphics area that did similar work around 1985: search for "eurythmy"
and "plasm" on Check out our sponsor - Chinesepod.com

[From Rick Marken (971107.1015)]

Craig W. Reynolds (971107) --

Warning: long rambling message follows.

I'll give a long rambling reply to your post when I get some
time. Right now I just want to thank you for pointing out a
bug in the "Detection of purpose" demo. You have remarkable
observational skills. You said:

The Detection of Purpose demo seemed to "tip its hand" since before
I interacted with it I could see that one of the squares was moving
along a diagonal line while the other two were tracing more
complicated trajectories.

What you saw was, indeed, happening and it was not supposed to
be happening. It was the result of a programming error (it's actually
amazing that my programs work at all given my mental abilities;-).
I was using the same pointer variable to change the reference for the
position of the purposeful square. This made the purposeful square
move on a diagonal. I have now corrected the code and it works much
better. Thanks, Craig.

Best

Rick

--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

[From Bill Powers (971107.1527 MST)]

The general approach is to measure properties of the boid's (agent's)
local environment (eg, "that flockmate to my left is a little too
close for comfort"), to transform these into behavioral desires ("I'd
prefer to be a bit to the right"), and then to convert these into
steering forces (this is trivial in the original boid model since they
steer by simple application of force in arbitrary directions). At
this level I think it's easy to make analogies between my approach and
the PCT approach. However, I think my behaviors have more of the flavor
of fuzzy-logic, stimulus-response systems based on 3-D vector values.
The connection between perceptual variables and controlled variables
(if those are the right terms) is rather indirect.

The loop is perforce closed, since a change in location relative to
another boid necessarily alters the "properties of the boid's (agent's)
local environment (eg, "that flockmate to my left is a little too
close for comfort")".

I don't ever compare the "distance between birds" to a target value.
Instead there is, for example, a desire (force) leading to separation
which varies with (the inverse of) the distance between two boids.
There is no "target inter-boid distance" parameter in my model.
Instead this value emerges from the interaction between Separation and
Cohesion, two of the component behaviors of the boids model.

Behaviorally, the "target" distance shows up as that distance at which the
boid would tend neither to increase nor decrease the distance. This can be
represented as a single reference signal, from which the current perceived
distance is subtracted, the difference or "error signal" driving the change
in behavior. You may handle the mathematics differently, but the effect
will be the same. If you're really using acceleration as the output
variable, then you also have to sense rate of change of distance, to
achieve stability of the loop.
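
A hedged sketch of the formulation Bill describes (illustrative numbers only, not anyone's actual code): a single reference distance, an error signal driving the output, and, because the output here is an acceleration, a sensed rate-of-change term that damps the loop. Without that term the loop never settles.

REFERENCE_DISTANCE = 4.0
GAIN = 1.5    # acceleration per unit of distance error
DT = 0.05

def swing_after_settling(rate_gain, distance=6.0, steps=4000):
    """Return the (min, max) of the distance over the last 200 steps."""
    velocity = 0.0                                          # rate of change of the distance
    history = []
    for _ in range(steps):
        error = REFERENCE_DISTANCE - distance               # reference minus perceived distance
        acceleration = GAIN * error - rate_gain * velocity  # rate term damps the loop
        velocity += DT * acceleration
        distance += DT * velocity
        history.append(distance)
    tail = history[-200:]
    return round(min(tail), 2), round(max(tail), 2)

if __name__ == "__main__":
    print("with the sensed rate of change:   ", swing_after_settling(rate_gain=2.0))
    print("without the sensed rate of change:", swing_after_settling(rate_gain=0.0))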

When it comes to _simulating_ the physical process, there's a lot
less leeway. Every part of the simulation has to correspond to some
aspect of the physical process, as near as possible. Your mental
model gets laid out in public when you do that. You have to show how
you think everything works. It's not enough that the bridge holds up
under the load -- you have to show that it holds up because the
stresses and strains are right, not because there's an ogre with a
strong back under it. In simulations, "then a miracle occurs"
doesn't work.

Whether doing simulations or building real bridges (and ignoring, for
the sake of this analogy, modern realities like obtaining permits and
conforming to acceptable practice) the only real question is "does the
bridge support the load?" If the designer's thought process involves
a theory based on a helpful ogre, and yet the bridge still stands,
then his theory worked. My point is that if you build a good bridge,
it doesn't matter that you built it for the "wrong reason". And of
course, if the bridge collapses, you need to rethink your theory.

I guess this depends on what you're trying to simulate. If you just want to
simulate load-supporting ability, then you put a 50 by 50 by 50 meter block
of steel into the gap, and just about any load short of a neutron star will
be held up. But if you also set yourself the task of doing it with real
components having real characteristics, at a specific cost, and in a
specific time, you end up with the sorts of analyses that bridge engineers
actually use, without much leeway.

In PCT, we try to model not just behavior, but the behaving organism, so
that, for example, the "distance between boids" would have to be measured
in a way that a boid could measure it -- from the viewpoint of the boid. Of
course there are limits on how well one can do this (the information is
rather incomplete), but it's possible to stick to this sort of rule in
principle, coming as close as possible.

------------------------

Well, thanks but one thing I steadfastly avoided learning is how to
use PCs, so I can't conveniently run these demos now.

They will run fine on a Power Mac -- they're basically DOS programs. A
pity, though, since all I've ever been able to afford have been PCs, so
that's all I can write programs for. Crowd happens to be written in C, and
the source code and header files are in the distribution program
(self-extracting zipped file, so you still need a PC for a few minutes). Or
failing that I could just send you the source files. You'd have to put in
your own graphics routines, but they're simple (circles or pixels, mainly).
If you're interested.

But in fact I
did once see the Gather/Crowd program run. I first heard of PCT/CSG
in January of 1996 when I got email from Rick Marken. Somehow this
led to a call from Dag Forssell, who happened to be visiting near my
home. We got together and compared notes and he gave me a PC disk of
demos, and later a coworker ran these for me on a PC at work. I'm
afraid my memory of it is pretty sketchy now almost two years later.

Wish I could program in Java, like Rick. There's a lot of interest in the
Crowd program, particularly the way one gets complex "social" patterns out
of basically simple interacting systems.

In the Crowd program, each active individual (up to 255 total, active and
inactive) senses with two receptors that cover a cardioid pattern on each
side (sensitivity of zero in the rearward direction). It's a 2-D program.
Proximity is perceived as an inverse-square function, which would cover
such variables as subtended visual solid angle, sound intensity, light
intensity, or odor intensity. For collision avoidance, the total proximity
is calculated for each sensor (from all other objects), and the left and
right proximity signals become the basis for slowing and turning when total
proximity becomes large. Other uses of proximity signals are seeking a goal
and following other active objects. "Following," for example, entails
bringing the left and right perceived proximities for a moving target to
equal magnitudes, and bringing the sum to a specific value that sets the
following-distance. The outputs are velocity and direction -- the detailed
propulsion systems are just assumed.
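
A simplified two-dimensional sketch in the spirit of this description (toy constants and names, not the Crowd source code): two forward-facing proximity sensors with roughly cardioid sensitivity and inverse-square falloff; the follower turns to equalize the left and right signals and adjusts its speed to bring their sum to a reference value, which is what sets the following distance.

import math

TURN_GAIN = 4.0     # heading change per unit of (left - right) proximity
SPEED_GAIN = 10.0   # speed per unit of proximity-sum error
REF_SUM = 0.45      # reference for total proximity; this sets the following distance
DT = 0.05

def sensor_signals(px, py, heading, tx, ty):
    """Left/right proximity of the target as the follower would sense it."""
    dx, dy = tx - px, ty - py
    dist = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx) - heading                           # target direction, follower-relative
    proximity = 1.0 / (dist * dist)                                  # inverse-square intensity
    left = proximity * max(0.0, math.cos(bearing - math.pi / 4))     # roughly cardioid,
    right = proximity * max(0.0, math.cos(bearing + math.pi / 4))    # blind to the rear
    return left, right, dist

def follow(steps=600):
    tx, ty = 6.0, 2.0                                    # target starts ahead and to the left
    px, py = 0.0, 0.0
    heading = math.atan2(ty - py, tx - px)               # start roughly facing the target
    for step in range(steps):
        tx += 0.5 * DT                                   # target drifts along at constant speed
        left, right, dist = sensor_signals(px, py, heading, tx, ty)
        heading += TURN_GAIN * (left - right) * DT       # turn to equalize the two signals
        speed = max(0.0, SPEED_GAIN * (REF_SUM - (left + right)))    # hold the sum at its reference
        px += speed * math.cos(heading) * DT
        py += speed * math.sin(heading) * DT
        if step % 100 == 0:
            print(f"t = {step * DT:4.1f}   distance to target = {dist:5.2f}")

if __name__ == "__main__":
    follow()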

Regarding your comment about there being "something in the air", this
is a phenomenon that has been noted many times in the history of
science. Conditions become "ripe" for an idea to come forth and it is
not uncommon for several independent researchers to make similar
advances at about the same time. I know of two groups in the computer
graphics area that did similar work around 1985: search for "eurythmy"
and "plasm" on Check out our sponsor - Chinesepod.com

I've been working on this stuff since 1953, but it was only in 1975 that I
built my first PC from kits and actually started trying simulations. And it
was another ten years before I got interested in something other than
simulating human behavior in tracking tasks. I've been trying to get
control theory "in the air" (for behavioral scientists) for a long time,
and meeting tons of resistance. How could somebody nobody has ever heard of
do something nobody else has thought of trying? But the stuff of PCT is
more in the air now than it ever used to be.

Best,

Bill P.