Warning: long rambling message follows. I'm trying to catch up with
several messages from Bill Powers and Rick Marken:
Bill Powers wrote:
[From Bill Powers (971105.0936 MDT)]
Craig Reynolds (971104) --
>...I can assure you that the author of boids is quite comfortable
>with describing his boid-brains as "control systems" (in fact he
>usually calls them "steering controllers") and has no problem with
>the perspective that the goal of the control system is to place
>certain perceptual variables into their preferred ranges.
Excellent. A lot of people who write models like this don't realize
that they have to be talking about perceptions -- the "distance
between birds" can't affect the way the birds fly unless it's
perceived by the birds. Our approach is generally (in terms of this
example) to say that there is a reference-distance set inside the
birds, and that the direction of flying is based on the difference
between the perceived variable (which is the controlled variable)
and the reference setting. It's not just a matter of upper and lower
limits; there's a target value. Hard to tell the difference between
a target value and limit values sometimes, of course.
>This is not the way he chooses to think about the design
>requirements of the controllers, but hey, it's a free country.
If the basic design is the same, it doesn't matter to me how it's
described. I have a particular way of setting up the analysis that
seems to make the relationships involved clear and easy to teach, as
well as keeping my own feeble thinking straight. It's not always
easy to think in closed loops -- the relationships can get very
nonintuitive. I'd be interested in knowing how the author of boids
sees the process (haven't read anything about it). How about a
description?
The general approach is to measure properties of the boid's (agent's)
local environment (e.g., "that flockmate to my left is a little too
close for comfort"), to transform these into behavioral desires ("I'd
prefer to be a bit to the right"), and then to convert these into
steering forces (this is trivial in the original boid model since they
steer by simple application of force in arbitrary directions). At
this level I think it's easy to make analogies between my approach and
the PCS approach. However, I think my behaviors have more of the
flavor of fuzzy-logic, stimulus-response systems based on 3D vector
values. The connection between perceptual variables and controlled
variables (if those are the right terms) is rather indirect.
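To make that three-step pipeline concrete, here is a rough sketch in
Python (my own illustrative code, not the original implementation; the
single-neighbor case, the names, and the constants are only for
exposition):

    import math

    def separation_steering(boid_pos, neighbor_pos, gain=1.0, max_force=1.0):
        # 1. Measure a property of the local environment: the offset and
        #    distance to a flockmate that may be too close for comfort.
        dx = boid_pos[0] - neighbor_pos[0]
        dy = boid_pos[1] - neighbor_pos[1]
        dist = math.hypot(dx, dy) or 1e-9   # avoid dividing by zero

        # 2. Transform the measurement into a behavioral desire: "I'd
        #    prefer to be a bit further away" -- a vector pointing away
        #    from the neighbor that grows as the neighbor gets closer.
        desire = (gain * dx / (dist * dist), gain * dy / (dist * dist))

        # 3. Convert the desire into a steering force.  In the original
        #    boid model this step is trivial: the desire is applied
        #    directly as a force, clipped to a maximum magnitude.
        mag = math.hypot(*desire)
        if mag > max_force:
            desire = (desire[0] * max_force / mag,
                      desire[1] * max_force / mag)
        return desire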
I don't ever compare the "distance between birds" to a target value.
Instead there is, for example, a desire (force) leading to separation
which varies with (the inverse of) the distance between two boids.
There is no "target inter-boid distance" parameter in my model.
Instead this value emerges from the interaction between Separation and
Cohesion, two of the component behaviors of the boids model.
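A tiny one-dimensional illustration of what I mean (again my own
sketch, not code from the model): there is no target-distance
parameter anywhere below, yet the spacing between two boids settles
wherever the two opposing desires balance.

    def separation(d, k_sep=4.0):
        # push apart; stronger when the boids are closer (inverse of distance)
        return k_sep / d

    def cohesion(d, k_coh=1.0):
        # pull together; stronger when the boids are farther apart
        return k_coh * d

    d = 0.5                      # initial spacing between the two boids
    for _ in range(500):
        d += 0.01 * (separation(d) - cohesion(d))
    print(d)                     # settles near 2.0: an emergent spacing,
                                 # never written down as a parameter

A PCT-style formulation would presumably compare the perceived spacing
to an explicit reference and act on the error; as observed from the
outside the two can behave much the same.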
Surely there are many ways to formulate the metrics and parameters
used in a model like this which would produce similar if not identical
behavior as observed from the outside. In fact, several of the applets
written by others that I list on my web page use somewhat different
rules.
(BTW, for the hard core, there is a draft paper available from me on
the "steering behaviors" described on my web page.)
>When designing a bridge, the important thing is that the model used
>by the engineer is sufficiently robust so that the bridge supports
>the specified weight without collapsing. If two different engineers
>use two different mental models, but each produces a suitable
>bridge, it's hard to fault either on the basis of their thought
>process.
When it comes to _simulating_ the physical process, there's a lot
less leeway. Every part of the simulation has to correspond to some
aspect of the physical process, as near as possible. Your mental
model gets laid out in public when you do that. You have to show how
you think everything works. It's not enough that the bridge holds up
under the load -- you have to show that it holds up because the
stresses and strains are right, not because there's an ogre with a
strong back under it. In simulations, "then a miracle occurs"
doesn't work.
Whether doing simulations or building real bridges (and ignoring, for
the sake of this analogy, modern realities like obtaining permits and
conforming to acceptable practice), the only real question is "does the
bridge support the load?" If the designer's thought process involves
a theory based on a helpful ogre, and yet the bridge still stands,
then his theory worked. My point is that if you build a good bridge,
it doesn't matter that you built it for the "wrong reason". And of
course, if the bridge collapses, you need to rethink your theory.
>In the end, the proof is in the pudding, or the purposive agent.
>What really matters is the quality of the behavior that is
>obtained, and not the philosophy of design that was used to
>create it.
Depends somewhat on how you define "behavior." If you mean the
actions of the system, I'd disagree. What counts in modeling
organisms is that the _consequences_ of their actions be
correct.
Right, the consequences. The externally observable behavior. We
don't care that a car's driver appears to move the steering wheel
correctly, we care that the car stays on the road.
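(A toy version of that driver, my own sketch and nothing from either
of our models: the steering action varies from moment to moment with
the crosswind, while the controlled consequence -- the car's position
on the road -- stays put.)

    import random

    lane_center = 0.0
    position = 0.0
    gain = 5.0

    for step in range(1000):
        crosswind = random.uniform(-1.0, 1.0)       # gust: the disturbance
        steering = gain * (lane_center - position)  # action, different each step
        position += 0.1 * (steering + crosswind)    # consequence of both

    # The steering history ends up looking like noise (it has to, to
    # oppose the wind), but the car never strays far from the lane center.
    print(position)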
Organisms vary their actions all over the place in order to create
repeatable consequences. It takes a feedback control model to
explain how they can do that. Birds don't control their flapping;
they control variables that are affected by the flapping
behavior. The birds vary their flapping as required by gusts of wind
and the positions of the other birds (or bugs).
>This is what Turing said about intelligence in The Imitation Game.
>It's what is behind "I may not know anything about art, but I know
>what I like."
I think Turing was wrong if he thought that the external appearance
of behavior is all that counts. It isn't.
Well, I am reluctant to speak for Mr. Turing -- but he hasn't been
answering his email for quite a long time! The Imitation Game was his
alternative to the impossible task of defining intelligence. He
suggested that the way to decide if a machine possessed artificial
intelligence was to have a judge converse (via typed text) with the
machine and a human. If the judge could not determine what was which,
the machine passed the test.
I take this to mean that intelligence is independent of whether it is
"implemented" in neurons or silicon, and certainly independent of the
"design philosophy" used to create the implementation.
I could detect a non-control-system generated behavior in a couple
of minutes even if it seemed identical to a control-system generated
behavior. It wouldn't correct disturbances.
But if you *can* detect a difference between two sets of behaviors,
that implies to me that the behaviors must differ in their "external
appearance". In that case you are right, the non-reactive one is
failing to do its job correctly. But if both react correctly, then by
an operational definition they are the "same" behavior, even if they
are implemented quite differently.
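In code, that operational test might look something like this (my own
toy example, not Bill's or Rick's demos): with no disturbance the two
systems are indistinguishable from outside, and only when you push on
the variable does the closed-loop one reveal itself by holding its
variable where it belongs.

    import random

    def run(closed_loop, disturbed):
        y = 0.0
        for _ in range(500):
            d = random.uniform(-1.0, 1.0) if disturbed else 0.0
            action = 5.0 * (0.0 - y) if closed_loop else 0.0
            y += 0.1 * (action + d)
        return y

    # Undisturbed, both finish at 0.0 -- the same external appearance.
    print(run(closed_loop=True, disturbed=False),
          run(closed_loop=False, disturbed=False))
    # Disturbed, the control system stays near 0.0 while the other
    # drifts away like a random walk.
    print(run(closed_loop=True, disturbed=True),
          run(closed_loop=False, disturbed=True))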
Richard Marken wrote:
[From Rick Marken (971105.1400)]
For Craig's sake (and for the sake of anyone else who might be
interested) my "Detection of Purpose" demo illustrates exactly this
point. The demo is at:
http://home.earthlink.net/~rmarken/ControlDemo/FindMind.html
The "Test for The Controlled Variable" demo:
http://home.earthlink.net/~rmarken/ControlDemo/ThreeTrack.html
illustrates the same point from the perspective of the behaving
system (the person doing the demo).
I found the Test for The Controlled Variable demo very interesting,
almost spooky! The Detection of Purpose demo seemed to "tip its hand"
since before I interacted with it I could see that one of the squares
was moving along a diagonal line while the other two were tracing more
complicated trajectories.
However, I may be missing the Big Picture; I'm not sure what lesson to
take away from these demos. I will look at the rest of Rick's site as
time permits.
Bill Powers wrote:
[From Bill Powers (971106.0917 MST)]
I've looked at your Boids demos and am much impressed. Your ideas
seem to have become much more widely known than mine! Perhaps there
was something in the air back in the 1980s, because I came up with
something very similar -- a demonstration of crowd behavior based on
individual control systems. I'm attaching the three files we
distribute over the net (the usual FTP link went defunct and we're
still working out how to restore it). The main .EXE file is
self-extracting (PCs only), and expands into the runnable program,
some setup files (you can create your own, too) and a writeup. The
other files just explain how to do this and automate the process if
you don't know how to do it yourself. From seeing your programs I
deduce that there's not much you don't know how to do with a
computer.
Well, thanks, but one thing I steadfastly avoided learning is how to
use PCs, so I can't conveniently run these demos now. But in fact I
did once see the Gather/Crowd program run. I first heard of PCT/CSG
in January of 1996 when I got email from Rick Marken. Somehow this
led to a call from Dag Forssell, who happened to be visiting near my
home. We got together and compared notes and he gave me a PC disk of
demos, and later a coworker ran these for me on a PC at work. I'm
afraid my memory of it is pretty sketchy now almost two years later.
Regarding your comment about there being "something in the air", this
is a phenomenon that has been noted many times in the history of
science. Conditions become "ripe" for an idea to come forth and it is
not uncommon for several independent researchers to make similar
advances at about the same time. I know of two groups in the computer
graphics area that did similar work around 1985: search for "eurythmy"
and "plasm" on Check out our sponsor - Chinesepod.com