Simon

[From Bruce Abbott (960906.1625 EST)]

francisco arocha, 96/09/06, 15.27, EST

And in a more recent article by Vera & Simon (1993, page 9), we find
the following (Ants as well as people are physical symbol systems,
according to Simon):

. . .

Maybe Simon is not so simple, but this sure sounds like input/output.

Indeed it does. So what's your point?

Regards,

Bruce

[From Bruce Abbott (960906.1840 EST)]

Rick Marken (960906.1540) --

francisco arocha (96/09/06, 15.27, EST)

Maybe Simon is not so simple, but this sure sounds like input/output.

Bruce Abbott (960906.1625 EST)

Indeed it does. So what's your point?

Let me see if I can guess Francisco's point. Correct me if I'm wrong,
Francisco.

Excuse me, but I think Francisco is capable of speaking for himself. How
about it, Francisco?

What's your point?

Regards,

Bruce

[From francisco arocha, 96-09-07; 9:34 AM EST]

I'd rather send this before it gets really irrelevant. I was going to say
some other things, but I can't keep up with you guys. I'm kind of slow.

[From Bruce Abbott (960906.1840 EST)]

Rick Marken (960906.1540) --

francisco arocha (96/09/06, 15.27, EST)

Maybe Simon is not so simple, but this sure sounds like input/output.

Bruce Abbott (960906.1625 EST)

Indeed it does. So what's your point?

My point is that Simon's idea of an organism is not that of a control
system, but that of an input/output system, notwithstanding his occasional
suggestions to the contrary, such as the one in his '69 book. Sensory
information comes in, there is some processing in the middle where the
incoming information is matched to some old information in LTM, and
information is sent down for the proper action. This is the idea that I get
from reading not only his '93 paper I cited, but also his '72 book on
problem solving with Newell, his chapter in the handbook of cog sci, and
other papers (it is a long list).

When Simon writes in a more formal tone, trying to explain what information
processing theory says, he describes an input/output system. If Simon in
his '69 book seems to suggest, in his description of the ant's behavior,
that organisms are control systems, that description is not supported by
his "theory". If I have to choose between his informal description of the
ant and his more formal presentation, I will go by what his formal
explanation says, not by any informal remarks he may make, especially when
these are only suggestions, not explicit statements.

Many people describe the behaviour of a control system and then explain it
using S-R language. A few years ago there was a discussion on this list
with David Chapman and also with Rodney Brooks, two well-known researchers
in the so-called situated action perspective. If I remember correctly, the
descriptions of their systems (Pengi and Brooks' bug) were those of control
systems, but neither Chapman nor Brooks recognized them as such,
preferring to explain them in terms of S-R. Like Simon, they too described
the behaviour of a control system, but their explanations were not
consistent with their descriptions.

hasta pronto,

francisco

[From Bruce Abbott (960917.2120 EST)]

Bill Powers (960917.1300 MDT) --

Here's one of the paragraphs in question from Simon:

The outer environment determines the conditions for goal attainment. If
the inner environment is properly designed, it will be adapted to the outer
environment, so that its behavior will be determined in large part by the
behavior of the latter . . . . To predict how it will behave, we need only
ask "How would a rationally designed system behave under these
circumstances?" The behavior takes on the shape of the task environment.

I agree with Bruce Gregory that this is murky language. It's murky because
of the attempt to say something that is generally true without mentioning
any of the details that might make it true (or false).

That's probably because I left out the introductory material and supporting
examples; this is embedded in a larger context and serves simply as a
summary of the points discussed thus far.

(1) The outer environment determines the conditions for goal attainment.

Is it possible for this not to be true, under any understanding of the
meaning of "goal attainment?"

All Simon is saying here is that when things are designed to serve some
function, their characteristics must be selected so as to conform to the
requirements imposed by the environment; e.g., a bridge intended to carry
heavy truck traffic must be designed differently from one intended for
horse-and-buggy traffic only.

Control-system example: if a control system is to stabilize a heavy gun of a
battleship against the rolling of the ship, it must be built of components
having sufficient power to move the gun at the rate required by the ship's
rate of roll, along with many other such properties dictated by the task
environment.
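
To make this concrete, here is a minimal sketch of such a stabilizer as a
bare proportional control loop, in Python. It is my illustration, not
anything from Simon; all the numbers (roll amplitude and period, loop gain,
the actuator's slew-rate limit) are assumptions chosen only to show the
point: if the actuator cannot slew at least as fast as the ship rolls, no
amount of clever design elsewhere in the loop will keep the gun on target.

import math

DT = 0.01            # simulation step (s)
ROLL_AMP = 10.0      # ship roll amplitude in degrees (assumed)
ROLL_PERIOD = 8.0    # ship roll period in seconds (assumed)
GAIN = 50.0          # loop gain, 1/s (assumed)

def worst_error(slew_limit, seconds=30.0):
    """Proportional control of gun angle against sinusoidal roll.
    slew_limit is the actuator's maximum rate in deg/s -- the 'power'
    of the components."""
    gun = 0.0        # gun angle relative to the deck
    worst = 0.0
    t = 0.0
    while t < seconds:
        roll = ROLL_AMP * math.sin(2 * math.pi * t / ROLL_PERIOD)
        error = 0.0 - (gun + roll)   # world-frame aim vs. reference of 0
        rate = max(-slew_limit, min(slew_limit, GAIN * error))
        gun += rate * DT             # the actuator moves the gun
        worst = max(worst, abs(error))
        t += DT
    return worst

# Peak roll rate here is about 7.9 deg/s.
print(worst_error(15.0))   # adequate actuator: a fraction of a degree of error
print(worst_error(2.0))    # underpowered actuator: errors of several degrees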

(2) If the inner environment is properly designed, it will be adapted to the
outer environment, so that its behavior will be determined in large part by
the behavior of the latter.

This is a complex way of saying that if the system's behavior is to have a
particular effect via the environment, it must be organized so as to have
that effect via the environment. The problem is that "properly designed" and
"adapted to the outer environment" are empty phrases; what he's saying is
like saying that if a bridge is strong enough it will support the load it
supports.

What he's saying is that if a bridge is properly designed it will support
the loads it will encounter in its intended environment.

A properly designed bridge will have characteristics that meet these
requirements; that's what "properly designed" and "adapted" _mean_. (All
definitions are tautologies.) How it performs (its behavior) will be
determined in large part by the behavior of the environment -- in this case
whether the weight passing over the bridge is heavy or light.

Control-system example: If the gun-stabilization mechanism is properly
designed, i.e., adapted to its task environment, it will keep the gun aimed
at target under all conditions that fall within its design parameters. If
it isn't properly designed, it won't. How can you tell whether the system
has been _properly_ designed? It does the job it was designed to do.

This may be obvious, but it was necessary for Simon to state it in order to
set the stage for his conclusion. He's simply reminding us, "Isn't that
what we mean by 'properly designed' or 'adapted'?"

Also, I am at a loss to understand how my inner environment's "behavior" is
adapted to the "behavior" of a hammer I am using to remove a nail. Is the
meaning of "behavior" slipping and sliding around in the middle of the
sentence between its use to mean "actions" and its use to mean "properties"
and its use to mean "outcomes?"

Simon is using "behavior" in this context to refer to the performance of the
system. The behavior of the bridge is to support the traffic passing over it.

(3) To predict how it will behave, we need only ask "How would a rationally
designed system behave under these circumstances?" The behavior takes on
the shape of the task environment.

I really don't understand how behavior could take on the shape of the task
environment. To me this is pure gobbledygook. If I'm sawing a piece of wood
in two (the task), does my behavior take on the shape of the saw? The teeth
of the saw? The shape of the wood? My behavior is a reciprocating motion
combined with exertion of forces that keep the teeth in contact with the
surface being cut. What part of the environment does that behavior take on
the shape of? Is Simon saying anything more than that the behavior of the
inner system (somehow) becomes whatever it takes to get the task done?

The task environment is the environment in which the task is intended to be
performed. The "shape" of that environment consists of the features of the
environment to which the system must be adapted if the task is to be
accomplished in that environment. The behavior (of the system) is how it
performs in the task environment.

Control-system example: The gun-barrel's slew-rate is matched to the
rolling-rate of the ship, as determined by the ship's design, the nature of
the waves it normally encounters, and the laws of physics. Its direction of
motion is opposite to that of the deck on which it is mounted, as required
to keep the barrel steady against the roll. You don't have to know anything
about the physical implementation that accomplishes this to understand how
the gun will behave in this task environment: just ask yourself how a
"properly-designed" gun would behave.

Well, enough of that. I could do the same with the next paragraph, and every
paragraph you have cited.

I'm sure you could; it's possible to find a way to criticize anything if
that is your intention. The question then becomes, are those criticisms
reasonable? I think I've shown that, in this case, they are not.

Yes, when disturbances become large and fast, more errors occur and it
becomes easier to see the dynamics of the system. But that is all relative
to a model of the system; if you don't already have a model, you can't
interpret the dynamics.

We aren't restricted to observing the system in action only under these
conditions. We observe it performing within its design environment and then
begin -- slowly -- to push it until its performance begins to deteriorate.
How it deteriorates, and under what conditions, allows us to begin
speculating about how the inner system might be so constructed as to lead to
these particular failure-modes. By the time we're into failure-mode
analysis, we've developed a model, perhaps several, to account for the
system's behavior when operating within its design parameters. Now we're
looking for information that will tell us more, perhaps help us to decide
which model is best.

Control-system example: I observe a man performing a tracking task, mouse
in hand. I begin varying the disturbance faster and faster and observe his
ability to compensate deteriorate. I observe an increasing lag between
disturbance and mouse-movement, and an increased tendency both to over- and
under-compensate. From this I learn about certain properties of the system,
chiefly those that serve to limit performance.
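
A toy version of this experiment can be run in simulation. The sketch below
is my own illustration, not a validated model of a human tracker: the
"subject" is an integrating controller acting on a delayed copy of the
error, with an assumed 150-ms reaction-time lag and an assumed gain.

import math

DT = 0.01     # time step (s)
LAG = 0.15    # assumed reaction-time lag (s)
GAIN = 8.0    # assumed controller gain (1/s)

def rms_error(freq_hz, seconds=20.0):
    """Tracking task: the mouse must cancel a sinusoidal disturbance."""
    steps = int(seconds / DT)
    delay = int(LAG / DT)
    errors = [0.0] * steps    # history, so the subject sees a delayed error
    mouse = 0.0
    total = 0.0
    for n in range(steps):
        disturbance = math.sin(2 * math.pi * freq_hz * n * DT)
        cursor = mouse + disturbance       # what appears on the screen
        errors[n] = 0.0 - cursor           # target position is 0
        seen = errors[n - delay] if n >= delay else 0.0
        mouse += GAIN * seen * DT          # act on the *delayed* error
        total += errors[n] ** 2
    return (total / steps) ** 0.5

for f in (0.1, 0.5, 1.0, 2.0):
    print(f"{f:4.1f} Hz  RMS tracking error = {rms_error(f):.3f}")

In this toy, the lag alone reproduces both symptoms: the mouse movement
trails the disturbance, and as the disturbance frequency approaches the
point where the lag eats up the phase margin, the controller begins to
over- and under-compensate and the RMS error climbs steeply.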

What we get from inducing errors is the ability to measure details of the
design. But if we don't have the basic design right to begin with, those
details will tell us little.

We need only ask ourselves, "what rational designs would perform in this
way, given the task environment?" Now we have our models, but all of them
will do the job. The ways in which performance deteriorates when the system
is pushed to the limits and beyond will tell us more.

In a benign environment, we learn only what the control system has been
called upon to do -- it compensates beautifully for disturbances,
controlling beautifully. The result is an excellent fit between the
predictions of a large class of models (all those that control equally well)
and the data.
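
Here is a small demonstration of that point, again my own sketch with
assumed parameters: two deliberately different inner designs -- a pure
integrating controller and a leaky integrator with a different gain --
behave nearly identically against a slow disturbance and only begin to
diverge when the disturbance is made fast.

import math

DT = 0.01

def track(step, freq_hz, seconds=20.0):
    """Output trace of a controller `step` against a sinusoidal disturbance."""
    out, trace = 0.0, []
    for n in range(int(seconds / DT)):
        d = math.sin(2 * math.pi * freq_hz * n * DT)
        error = 0.0 - (out + d)       # controlled quantity vs. reference of 0
        out = step(out, error)
        trace.append(out)
    return trace

# Design A: pure integrating controller (assumed gain).
def design_a(out, error):
    return out + 20.0 * error * DT

# Design B: leaky integrator with a higher gain (assumed parameters).
def design_b(out, error):
    return out * (1 - 0.05 * DT) + 30.0 * error * DT

for f in (0.05, 2.0):
    a, b = track(design_a, f), track(design_b, f)
    gap = max(abs(x - y) for x, y in zip(a, b))
    print(f"{f:5.2f} Hz  max difference between the two designs = {gap:.4f}")

At 0.05 Hz the two traces differ by a few thousandths of a unit -- an
"excellent fit" for both models -- while at 2 Hz the gap is more than an
order of magnitude larger, which is where the data start to discriminate.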

Martin Taylor has said something similar: that you only learn about system
organization by stressing the system to the limits. But I claim that the
truth is the opposite: you must first observe the system under benign
conditions, and only then can you see how it really works. When you see a
control system "controlling beautifully," it is more or less self-evident
how that system does what it does. But if control were that easy to
recognize, PCT would have been discovered (?) a century ago. I first
understood control systems by studying artificial systems that behaved
almost ideally.

Unfair! You could take apart these systems and study their architecture.
Here, we're observing from the outside and trying to guess what the insides
are like.

I don't believe that either Martin Taylor or I has stated that you _only_
learn about system organization by stressing the system to the limits. When
you observe a control system controlling beautifully, it is indeed more or
less self-evident how that system does what it does -- you know what it
does, and there are only so many ways, in a general sense, to do it. Where
stressing the system comes in is after you have developed some specific
models, based on your observations of the system operating normally.
Unfortunately, when it comes to specific implementations, there are dozens
of designs that will do the same job, with equal facility, within a specific
task environment. To decide which is the most likely candidate, studying
how the design fails can be a valuable tool. Short of opening up the
system and peering inside, it may be the only way to narrow the field.

So where's your Nobel Prize?

I'm told it's being held up until yours is delivered. (;->

Regards,

Bruce