Sonja; dues

[From Bill Powers (920811.2000)]

Avery Andrews (920811) --

That is a most informative post about Sonja. Your attempt to cast what's
going on in PCT terms makes it so obvious that there are control systems
there that I wonder why Chapman doesn't see them. On the other hand, I
think Chapman is managing to get some orderliness into his scheme, which is
to be admired.

Your proposition about the gated control system is a good start. Remember,
though, that in a neural model, to turn off a one-way control system (no
negative error signals) it is sufficient to set the reference signal to
zero (negative feedback alone can't cause a positive error signal if the reference
signal is zero). The implication is that the sensing of the "condition" (a
logical condition) is the input function of a higher-level control system,
the output of which sets the reference signal for the "scalar control
system." The higher system, being program-level, can actually select which
lower-order perceptions to adjust by choosing which reference signals to
bring above zero. Rick Marken has a logic level in his spreadsheet demo
that does just this.
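Here is a minimal sketch of the gating idea (illustrative Python; the gain and signal values are arbitrary): a one-way control system is switched off simply by holding its reference signal at zero, and switched on by raising the reference above zero.

```python
# One-way control system: the output function passes only positive error.
# With neural (non-negative) signals, a zero reference guarantees a
# non-positive error, so the system produces no output -- it is gated off.
# (Gain and signal values here are made up for illustration.)

def one_way_control(perception: float, reference: float, gain: float = 2.0) -> float:
    """Return the non-negative output of a one-way control system."""
    error = reference - perception          # comparator
    return max(0.0, gain * error)           # no negative error signals pass

print(one_way_control(perception=3.0, reference=0.0))   # gated off: 0.0
print(one_way_control(perception=3.0, reference=5.0))   # active: 4.0
```

A higher-level system need do nothing more elaborate than choose which of these references to bring above zero.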

... to make it worthwhile to kill a monster, a registrar has to decide
that it is dangerous. This is achieved by having an autonomous process
put a `marker' on the closest monster (a marker can be thought of as a
register connected to the visual system, which automatically tracks
something, and from which a certain amount of information about that
something can be extracted). When the marker returns the information that
the monster is within a certain distance, the `the-monster-is-dangerous'
registrar gets activated (think of the registrar as a perceptual circuit).

This is verging on hierarchical perceptions. A "marker," for example,
clearly has to include a perceptual function if it's to "return information
that the monster is within a certain distance." The recognition of the
monster is one perception; the distance is another. Those perceptions
become inputs to a registrar. Why not just say that there's a system
that senses and controls the distance of a particular object? This would
provide for approaching or avoiding any object of whatever kind by
adjusting the reference-distance signal. If a higher system classifies this
object as dangerous, it can set the reference signal for proximity to a low
value to avoid it, or to a high value if the decision is to attack it. If
the object is classified as desirable, the same reference signal can be
adjusted for maximum proximity in order to approach the object (so as to do
whatever one does with desirable objects).
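The same point can be put in code (my own sketch; the gain and reference values are made up): a single distance-controlling system serves for approach, avoidance, or attack, depending only on the reference signal a higher system hands it.

```python
# One proximity-control system serves approach, avoidance, and attack;
# only the reference distance changes. Illustrative sketch; gain and
# reference values are arbitrary assumptions.

def distance_control_step(distance: float, reference: float, gain: float = 0.5) -> float:
    """One iteration: output moves the agent to reduce the distance error."""
    error = reference - distance
    return distance + gain * error      # movement feeds back into perception

def settle(distance: float, reference: float, steps: int = 50) -> float:
    """Run the loop until the perceived distance matches the reference."""
    for _ in range(steps):
        distance = distance_control_step(distance, reference)
    return distance

print(round(settle(10.0, reference=2.0), 2))    # approach: 2.0
print(round(settle(2.0, reference=20.0), 2))    # back away: 20.0
```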

When the alignment is good enough, another proposer, `face-the-
monster' will kick in, which can be thought of as a control system that
tries to zero the difference between the-direction-the-amazon-is-facing
and the-direction-toward-the-monster.

If the amazon is given real perceptual functions, the-direction-the-amazon-
is-facing won't be perceived. The amazon will simply perceive everything
else relative to the direction of looking. To align with something, the
amazon simply turns and translates until the object to be faced is
centered. The calculation of what the amazon can see is part of the model
of the environment, not of the behaving system.
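A sketch of what real perceptual functions would mean here (my own construction, with deliberately simplified one-line kinematics): the agent controls the target's bearing relative to its own gaze, with a reference of zero, and never perceives its own facing direction as such.

```python
import math

# The agent never perceives "the-direction-it-is-facing" as such; it
# perceives the target's bearing RELATIVE to its gaze and turns until that
# relative bearing (the controlled perception) reaches a reference of zero.
# Function names and kinematics are illustrative assumptions.

def relative_bearing(agent_xy, heading, target_xy):
    """Bearing of target relative to the agent's heading, wrapped to (-pi, pi]."""
    dx, dy = target_xy[0] - agent_xy[0], target_xy[1] - agent_xy[1]
    absolute = math.atan2(dy, dx)
    return math.atan2(math.sin(absolute - heading), math.cos(absolute - heading))

def face(agent_xy, heading, target_xy, gain=0.5, steps=60):
    """Control loop: turning is the output that drives the relative bearing to zero."""
    for _ in range(steps):
        error = 0.0 - relative_bearing(agent_xy, heading, target_xy)
        heading -= gain * error             # turn toward the target
    return heading

print(round(face((0.0, 0.0), 0.0, (0.0, 1.0)), 4))   # ~pi/2: now facing the target
```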

The problem here is that it's Sonja, not the amazon, that contains the
control systems. So Sonja is acting like the game player, being above the
fray looking down on it, using her own properties but trying to make the
amazon do things. This is analogous to Chapman sitting up in the sky
watching and describing and evaluating what Sonja is doing. Sonja's
properties get all mixed up with the properties of the amazon. The means by
which the amazon kills a monster are not the means by which Sonja makes the
amazon do things that Sonja interprets as killing the monster (for Sonja,
making it disappear). Sonja's proposer that says "face the monster" does
not make SONJA face the monster, because Sonja is already looking down on
the monster and can see it. This proposer is really proposing that an icon
on the screen be rotated and translated until a line projected along its
heading passes through the monster and corresponds to a joystick direction
(two conditions to be controlled).

I think that Chapman has a point-of-view problem here. He is actually
trying to solve a very much more complex problem than he has to solve. He's
trying to represent not only the game problem, but the player's
identification with the amazon, and the stories that he himself tells about
Sonja as she goes through the motions, and about the amazon as the action
proceeds. Those stories are Chapman's, not Sonja's. Unless he wants to give
Sonja the ability to make up such stories, they aren't part of the Sonja
model. If Chapman wants to model the game player, then he should model what
the game player is actually controlling, rather than jumping around among
Sonja's point of view and the amazon's and his own. The verbal
embellishments only make it harder to see what is actually being
controlled. The monsters on the screen are not "dangerous." They are simply
icons in certain spatial relationships with the amazon. Those spatial
relationships can be controlled relative to any reference-relationships
Sonja pleases. The task of a higher-level system is to prevent certain
relationships from occurring and to achieve other relationships. In order
to do either, it's necessary that the relationships be under control. Once
they're under control, particular sets of them can be named, and a higher-
order system can worry about maintaining certain sequences of named
relationships and certain logical conditions in which the named
relationships are elements.

Am I getting this across? I hope so, because this is as close as I've ever
come to saying what I find wrong with this sort of modeling. This is what I
mean when I say that too much of the modeler is outside the model, making it
work. There's nothing in the model of Sonja that could come up with the
label "dangerous," with all its meanings. The mechanism for doing that is
inside Chapman, as are the meanings. The best Sonja can do is avoid a
logical condition involving a configuration and a relationship: (monster
AND monster-close-to-amazon). From Chapman's point of view, if Sonja does
indeed manage to keep that condition false, it will seem as though Sonja
perceives the monster as dangerous and acts appropriately. But that's only
a description; it's not what makes Sonja work.
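The hierarchy being argued for can be sketched directly (my construction, not Sonja's code; the thresholds and reference values are invented): the higher system perceives only the logical condition and acts by setting the lower system's reference; nothing labeled "dangerous" appears anywhere in the mechanism.

```python
# Sketch (not Sonja's actual code): the higher-level system perceives only
# the logical condition (monster AND monster-close-to-amazon) and acts by
# setting the reference for the lower distance-control system. The word
# "dangerous" occurs nowhere in the mechanism, only in our description.

CLOSE = 3.0             # assumed threshold for "close"
FAR_REFERENCE = 10.0    # back away to this distance
DEFAULT_REFERENCE = 5.0

def higher_level(monster_present: bool, distance: float) -> float:
    """Return the reference distance handed down to the lower-level system."""
    condition = monster_present and distance < CLOSE    # the logical perception
    return FAR_REFERENCE if condition else DEFAULT_REFERENCE

print(higher_level(True, 2.0))     # condition true: raise the reference (10.0)
print(higher_level(True, 8.0))     # monster, but not close: 5.0
print(higher_level(False, 1.0))    # no monster: 5.0
```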

In the crowd program, an individual will sometimes go through an opening
into a pocket of closely-spaced obstacles. The moving person will turn this
way and that, going through all sorts of loops, and eventually leave
through the same opening. It looks as if the person is perceiving this-is-
a-way-through-the-obstacles, then deciding there's-no-way-out-of-this-
pocket and therefore i'd-better-look-for-the-way-I-came-in and finally
doing go-the-other-way-through-the-same-opening. If we set this up as a
logical problem with markers and proposers and arbiters, we could probably
reproduce something like this behavior -- but in fact nothing like that is
actually happening, or is actually necessary. Setting up the problem in
terms of those hyphenated phrases leads into doing something quite simple
in a horribly complex and unlikely way. That's because the phrases bring in
a whole background of the OBSERVER'S thoughts and associations and
descriptions, none of which the model is capable of perceiving or
producing. What you end up with is an extremely complex model that produces
simple behavior. What we want, of course, is a simple model that produces
complex behavior. I don't think that the approach taken by Chapman and Agre
is going to create a model simpler than the behavior it explains.
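For what it's worth, looping, doubling-back behavior of the crowd-program kind can come out of two simple control systems acting at once. The sketch below is a minimal reconstruction of that idea, not the actual crowd-program code; the gains, ranges, and positions are arbitrary.

```python
import math

# Two concurrent control systems: one controls distance-to-goal toward zero,
# the other controls distance-to-each-obstacle toward "at least avoid_range".
# No markers, proposers, or arbiters -- just summed outputs.
def step(pos, goal, obstacles, k_goal=0.2, k_avoid=0.5, avoid_range=2.0):
    x, y = pos
    vx = k_goal * (goal[0] - x)             # goal-seeking output
    vy = k_goal * (goal[1] - y)
    for ox, oy in obstacles:
        d = math.hypot(x - ox, y - oy)
        if 1e-9 < d < avoid_range:          # error only when too close
            push = k_avoid * (avoid_range - d) / d
            vx += push * (x - ox)           # avoidance output
            vy += push * (y - oy)
    return (x + vx, y + vy)

pos, goal = (0.0, 0.0), (10.0, 0.0)
obstacles = [(5.0, 0.5)]                    # one obstacle astride the direct path
for _ in range(200):
    pos = step(pos, goal, obstacles)
print(math.hypot(pos[0] - goal[0], pos[1] - goal[1]) < 0.1)   # True: reached goal
```

An observer could describe the resulting detour in hyphenated phrases about deciding and looking for a way out, but none of that machinery is in the model.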

But all this is still based on second-hand information. I've applied
through interlibrary loan for Chapman's book. When it gets here I'll have
another look and see if I still come up with the same critique.


Bruce Nevin and others who are students: At the CSG meeting, it was
proposed to increase student dues to $10 per year to cover the costs of
Closed Loop. A counterproposal was that we increase ordinary membership
cost from $40 to $45 to subsidize the students' subscriptions. The
counterproposal carried. Those who have already paid their full $40 for

Bill P.

(penni sibun 920811)

i've got lotsa comment on this thread, but i've been busy and my
typing capacity is limited. however, i saw an opportunity to make a
simple answer.

   [From Bill Powers (920811.2000)]

   The problem here is that it's Sonja, not the amazon, that contains the
   control systems.

this may be a problem and chapman may have problems, but this isn't
one of his. chapman's programs, pengi and sonja, are programs that
play video games. it does get occasionally confusing, but that in
fact is the nature of video games: the player identifies w/ the
character in the game. (that's why *we* get confused: i don't know
if sonja confuses itself with the amazon or pengi confuses itself
with the penguin.)

chapman (1991:59-61) explains the rationale for video games clearly:

pengi was...designed to demonstrate concrete-situated ideas about
activity. these ideas emphasize perceptual connection with the world,
so i wanted a domain which required substantial perceptual
interaction, but which would not require that i spend years solving
known difficult problems in vision research. video games have just
these properties....because video games are two-dimensional and
because the images involved are generated by a computer, i could bypass
hard problems in early vision such as noise tolerance, stereo, and

you may or may not consider this a reasonable choice, but chapman made
it knowledgeably.

   Am I getting this across? I hope so, because this is as close as I've ever
   come to saying what I find wrong with this sort of modeling. This is what I

can you try again, with the above confusion cleared up?