[From Bill Powers (920804.1100)]
The meeting is over and I've recovered. About 25 people attended, with
Significant Others bringing the total to about 30. As usual, the flavor of
the meeting was different from all the others. The general theme seemed to
be the relationship of PCT to other fields. Wednesday night we did the
schedule, taking about half an hour.
Thursday: Clark McPhail, Chuck Tucker, and Kent McClelland (our
sociologists) spoke on teaching CT to their colleagues and students,
referring to the Gatherings (Crowd) program and its reception, classroom
approaches, and the merging of CT concepts into sociological topics such as
power. Tom Bourbon and Dick Robertson spoke of their ventures in France and
Belgium respectively, teaching CT to students who were generally quite
interested and attempting to teach it to various bigwigs who weren't much
interested. Rick Marken gave an excellent demonstration of his three-level,
six-system-per-level spreadsheet model. Greg Williams opened the subject of
a possible PCT journal, to include Closed Loop plus refereed articles. As
to refereeing, the idea was that if someone insisted on having a rejected
paper published, it would be published along with the reviews (unless ...).
Friday: Mark Lazarre (student of Tom Bourbon) described his master's thesis
work on two-person learning of a cooperative control task. I did a show-
and-tell of the new version of the arm model, a rubber-band version of the
Cognitive Control model, and a reprise of the Bucket of Beans experiment
for those who hadn't seen it before. Gary Cziko presented part of his
compendium of Portable Demos, including one in which a large rubber-band
was used to interfere with a subject bouncing a ball off a wall, showing
that throwing does involve control despite its ballistic component. Dag
Forssell went through some of his methods for presenting the basics of PCT
to business executives. In the afternoon, Ed Ford showed a video tape of
his PBS program. Friday night after the banquet, we had a 20-minute
business meeting at which Wayne Hershberger was elected Vice-President of
the CSG, to succeed to the presidency as usual next year (Chuck Tucker,
elected to that post last year, automatically became President for 1992-
93). Mary continues as secretary-treasurer (what else?). Mark Olson
presented his Master's thesis work on analysis of causation and control
theory, ending up presiding over an hour-long discussion on angels,
pinheads, language, and other related topics.
Saturday: Isaac Kurtzer (student of Tom Bourbon) presented his video
analyses of a bucket-of-rice experiment in which he obtained quantitative
data showing the difference between the S-R version of the experiment (drop
the rice in the bucket in one lump) and the continuous-control version
(trickle the rice in over 10 seconds). Bill Williams presented his analysis
of corporate interlocks (directors of major banks who sit on boards of
major corporations where they can communicate), showing that in the world
of Big Business, directors come face-to-face with directors of other
corporations every few days, on the average (Hmm: potential explanation of
some aspects of social control? He wouldn't say). His main point, however,
was that everybody can learn to program if they have a good enough reason.
Brian D'Agostino presented his doctoral thesis work on PCT and militarism,
taking a lot of flak for his statistical approach but holding his own quite
bravely and making some convincing points about PCT in this kind of
research. Dick Robertson asked for advice in applying control theory to
analyzing his own classes and their behavior as control systems controlling
for grades. And finally, Wayne Hershberger asked what people meant by the
term "Controlling for ...", setting off a discussion that went past closing
time.
If I've forgotten someone, please fill in the blanks.
All these talks, as is our custom, were really more like discussions with
one person standing up. Not a single person read a paper this time,
although the students are given license to make more formal presentations
if they want to. The non-hostile atmosphere is conducive to relaxed talks
that are more like conversations.
Every evening after the evening session, most of the people retired to the
lounge in the residence hall and resumed the discussion just for the fun of
it; the last die-hards staggered off to bed at 2 or 3 AM, appearing
somewhat bedraggled at breakfast at 7:30 AM but making it through the day
in fine shape. Every year I wonder whether the group will maintain the kind
of intensity it generated the last time; every year it does. Everyone talks
to everyone else. This is the only interdisciplinary conference I know of
where that works.
Bruce Nevin (920730) --
Your post on Studies in Artificial and Natural Intelligence was
interesting, coming to me after the CSG meeting and contrasting so strongly
with what goes on at that meeting. There are large numbers of people, it
seems to me, who want to sit back and talk about understanding brains,
intelligence, and so on, but who aren't willing to do much more -- they
think they ARE doing something, I suppose. Armchair or think-tank science
somehow impresses me less this week.
Avery Andrews (920803) --
Congratulations on venturing into the world of modeling. I think you've
tackled a do-able problem. Let me offer a little help.
It's a good idea to get a working control system before tackling
reorganization. If you just set up your Astro problem a little differently,
it will work fine.
The first thing to do is set up the physical situation. You have two
objects in space, Astro and Mother Ship. Mother Ship can move at some
velocity in x, and so can Astro. Mother moves independently. Astro moves by
using thrusters that exert a force in x.
Astro controls two relationship variables: velocity relative to Mother's
velocity, and position relative to Mother's position.
You're postulating that Astro wants to perceive a velocity c*x^(1/2)
relative to Mother. This poses a problem, because you can't take the square
root of a negative number -- the program will blow up if x ever becomes
less than zero. I suppose you're using the square root to cause slowing
near the goal. But there's an easier way -- a control system will
automatically slow down near the goal without any special precautions.
It doesn't really make sense to speak of a "perceptual system that sets a
reference level for velocity..." because perceptions aren't reference
levels. They're reports on the actual state of affairs, not the desired
state. I think Astro will work better if you make the perception equal to
the relative velocity: that is, vrel = vm - va, where vrel = relative
velocity, vm = velocity of Mother, and va = velocity of Astro. You then
have this control loop:

             vrel* (reference relative velocity)
                |
                v
     vrel ---> (C) ---> error = vrel* - vrel
    +^    -^                       |
     |     |                       v
     vm    va <--- integral <--- force

(C is the comparator; vm and va enter the perception with the signs shown.)
Now vrel = vm - va, with no square roots. This control system can control
the velocity of Astro relative to Mother. It can't control position,
however, because the velocity difference can be zeroed at any distance.
This control system will be stable.
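Here is a minimal runnable sketch of this one-level velocity loop in Python
(my own rendition, with arbitrary constants; note that since raising va
lowers the perception vm - va, the thrust is computed as k2*(vrel - vrel*)
so that the loop's net feedback is negative):

```python
# One-level velocity control: Astro matches Mother's velocity.
dt = 0.01        # integration step
M = 1.0          # Astro's mass (arbitrary)
k2 = 5.0         # loop gain (arbitrary)

vm = 2.0         # Mother's velocity (constant here)
va = 0.0         # Astro starts at rest
vrel_ref = 0.0   # desired relative velocity (vrel*)

for _ in range(5000):
    vrel = vm - va                  # perception: relative velocity
    force = k2 * (vrel - vrel_ref)  # thrust opposes the velocity error
    va += (force / M) * dt          # environment: force integrates to velocity
```

Run it and va settles on vm: relative velocity is controlled at its
reference, while position remains uncontrolled.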
You need a higher level of control to control for position, using
variations in the velocity reference signal vrel* as the means of control.
        xrel* (reference relative position)
           |
           v
  xrel ---> (C) ---> error = xrel* - xrel
                         |
                         v
                        k1 (gain adjustment)
                         |
                         v
                       vrel* (reference relative velocity)
                         |
                         v
            vrel ---> (C) ---> error = vrel* - vrel
                                   |
                                   v
                                  k2 (gain adjustment)
                                   |
                                   v
                                 force
  ---------------------------------------------------- (Environment)
   MOTHER:  vm ---> integral ---> xm      (vm set independently)
   ASTRO:   force ---> integral ---> va ---> integral ---> xa
   Perceptions:  xrel = xm - xa,   vrel = vm - va
The variables xm and xa represent Mother's position and Astro's position.
xrel is the relative position, xm - xa. xrel* is the relative position that
Astro is to maintain (the reference). Mother's velocity vm is set
arbitrarily. Mother's position, as a program step, is xm = xm + vm*dt,
where dt is set to some small value like 0.01.
This integrates velocity to position. You can make Mother bounce off the
ends of the screen by reversing vm.
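Mother's bookkeeping alone might look like this in Python (the screen
limits and speed here are my own arbitrary choices):

```python
# Mother moves independently: integrate vm to xm, reversing vm at the edges.
dt = 0.01
vm = 3.0                  # Mother's velocity
xm = 50.0                 # starting position
XMIN, XMAX = 0.0, 100.0   # assumed "screen" limits

history = []
for _ in range(20000):
    xm += vm * dt         # xm = xm + vm*dt
    if xm <= XMIN or xm >= XMAX:
        vm = -vm          # bounce off the ends of the screen
    history.append(xm)
```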
Astro's velocity is the integral of the applied force divided by Astro's
mass M:
va = va + (force/M)*dt
Astro's position is the integral of Astro's velocity:
xa = xa + va * dt.
Thruster force is proportional to Astro's velocity error:
force = k2*(vrel* - vrel)
(One caution: with vrel defined as vm - va, increasing va decreases the
perception, so for the loop's feedback to be negative the force must enter
Astro's equation of motion with a minus sign, or equivalently be computed
as k2*(vrel - vrel*).) And of course vrel is just vm - va, unless you want
to play with perceptual functions of other kinds.
For position, we have simply
xrel = xm - xa
where xrel = relative position
xm = Mother's position
xa = Astro's position
The velocity reference signal vrel* is proportional to the higher-order
error signal, so
vrel* = k1*(xrel* - xrel).
If you want Astro to seek the same position as Mother, just set xrel* to
zero. If it's nonzero, Astro will tag along at a fixed distance.
In each part of the diagram, there's a potential scaling factor. You could
make one or more scaling factors the target of reorganization. The
intrinsic variable that could be monitored might be total error at both
levels of the diagram (with an intrinsic reference level of zero).
If you use the constant k1 as the target of reorganization, you don't want
to vary k1 itself at random. That will give large jumps in performance and
there's no guarantee that you won't blow the system up.
It's better to use a delta that goes with each target of reorganization.
Delta is a small number between, say, -0.1 and 0.1, chosen at random. This
number is added to k1, the target of reorganization, on every iteration, so
k1 is continually changing. But wait ...
The intrinsic error doesn't depend on the sign of the difference between
the intrinsic variable and its reference level, but only on the magnitude.
So you can use |i* - i| as the error (in this case simply |i|, because the
intrinsic reference level i* is zero). Then you compute the rate of change
of i, which is just i(now) - i(previous). If this rate of change ever becomes
greater than zero, you institute a reorganization, which amounts to picking
a new value of delta at random, in the range -0.1 to 0.1. Between
reorganizations you just keep adding this delta to k1, the constant that is
being reorganized.
You may want the changes to get smaller as k1 approaches the optimum value
(assuming there is one). To do this, you simply multiply delta by the
absolute intrinsic error before adding it to k1. As the error gets smaller,
delta will decrease in size, going to zero when intrinsic error is zero.
You're still reorganizing, but the amount of reorganization of k1 becomes
smaller and smaller.
So: if you choose total error as the intrinsic variable i,
i = |vrel* - vrel| + |xrel* - xrel|
if (i(now) - i(last) > 0) delta = 0.2*(random - 0.5); (reorganize)
k1 = k1 + 0.001*i*delta; (change k1 on every iteration)
The random function is assumed to return a value between 0 and 1. I put the
0.001 in the last line to indicate that you'll want to make the changes
rather slow -- how slow is a matter for experiment.
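Here is the reorganizing rule in runnable form, applied to a toy intrinsic
variable rather than the full Astro model: i is just the distance of k1
from an assumed optimum of 5.0, the step constant is larger than the 0.001
above so the demo converges quickly, and the seed, optimum, and starting
delta are all my choices.

```python
import random

# E. coli-style reorganization of a single parameter k1: keep adding delta
# while the intrinsic error falls; pick a new random delta whenever it rises.
random.seed(0)                           # repeatable run

K_OPT = 5.0                              # assumed optimum (toy stand-in)

def intrinsic_error(k):
    return abs(k - K_OPT)                # the intrinsic "error" (i* = 0)

k1 = 0.0
delta = 0.2 * (random.random() - 0.5)    # initial delta in (-0.1, 0.1)
i_last = intrinsic_error(k1)

for _ in range(5000):
    k1 += 0.05 * intrinsic_error(k1) * delta   # scale the step by the error
    i = intrinsic_error(k1)
    if i - i_last > 0:                   # error got worse: reorganize
        delta = 0.2 * (random.random() - 0.5)
    i_last = i
```

With this seed the first delta already points downhill, so k1 glides to the
optimum; a bad first delta would be replaced as soon as i rose.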
You can set up control systems for Astro in one, two, or three dimensions.
Each one will be a one-dimensional two-level system as above, operating in
x, y, or z. You need only one reorganizing system, however -- just have it
sense the sum of the six absolute error magnitudes (two in each system).
When a reorganization is called for (rate of change of intrinsic error is
positive), just pick new values for three deltas, one for each axis. Use
the random function for each one individually, so the deltas will vary
independently.
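The multi-axis bookkeeping might look like this (a fragment only; the
per-axis control loops are as in the one-dimensional sketch, and all the
names and the sample error values are mine):

```python
import random

# One reorganizing system for three axes: sense the summed absolute errors
# of all six loops (two levels x three axes); on a reorganization, pick a
# fresh delta for each axis independently.
random.seed(42)

axes = ['x', 'y', 'z']
deltas = {a: 0.0 for a in axes}

def total_intrinsic_error(errors):
    # errors: per-axis (velocity_error, position_error) pairs
    return sum(abs(ev) + abs(ex) for ev, ex in errors.values())

def reorganize(deltas):
    for a in deltas:
        deltas[a] = 0.2 * (random.random() - 0.5)   # independent draws
    return deltas

# Example: made-up per-axis errors, just to exercise the bookkeeping.
errors = {'x': (0.5, -1.0), 'y': (0.0, 2.0), 'z': (-0.25, 0.25)}
i = total_intrinsic_error(errors)
deltas = reorganize(deltas)
```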
I'll leave it up to you to get this to work. You shouldn't have any trouble
with the control systems. The reorganizing part might get tricky.
Best to all,