[From Bill Powers (960917.1300 MDT)]
Bruce Abbott (960913.2125 EST) et seq. --
Thanks for replacing my missing files.
Here's one of the paragraphs in question from Simon:
The outer environment determines the conditions for goal attainment. If
the inner environment is properly designed, it will be adapted to the outer
environment, so that its behavior will be determined in large part by the
behavior of the latter. . . . To predict how it will behave, we need only
ask "How would a rationally designed system behave under these
circumstances?" The behavior takes on the shape of the task environment.
I agree with Bruce Gregory that this is murky language. It's murky because
of the attempt to say something that is generally true without mentioning
any of the details that might make it true (or false). Just take the
sentences one at a time:
1. The outer environment determines the conditions for goal attainment.
Is it possible for this not to be true, under any understanding of the
meaning of "goal attainment"? Suppose we use Skinner's translation of goal:
a reinforcing consequence of behavior. Isn't it true that the outer
environment determines the conditions under which behavior has reinforcing
consequences? Or suppose we accept the open-loop control concept of a goal:
an outcome produced by a program of commands. Isn't it true that the outer
environment determines the conditions under which a program of commands will
lead to a specific goal-outcome? Or we could even say that goals, which
imply an effect of future events on present causes, can't exist because of
physical principles that apply in the outer environment. Isn't this also a
determination by the outer environment of the conditions for goal attainment?
If you come to this sentence thinking in terms of control systems, then you
can make sense of it by saying that the output of the controlled variable
must act through conditions set by the outer environment in order to produce
preselected goal-states. But Simon has all the other bases covered, too.
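That control-system reading can be made concrete with a toy simulation. What follows is my own minimal sketch, not anything from Simon's text or from any published model; the gain and slowing constants are arbitrary illustrative choices. The point is just that the output must act through a condition set by the outer environment (here, an additive disturbance on the feedback path) to bring the perceived variable to the goal-state:

```python
# Minimal sketch of one control loop: output acts through an
# environmental link (qi = output + disturbance) to bring a
# perceived variable to a preselected goal-state (the reference).
# All constants are arbitrary illustrations.

def run_control_loop(reference, disturbance, gain=50.0, slowing=0.02,
                     steps=200):
    """One-level control loop with a leaky-integrator output function."""
    output = 0.0
    qi_trace = []
    for _ in range(steps):
        qi = output + disturbance      # condition set by the environment
        perception = qi                # identity perceptual function
        error = reference - perception
        output += slowing * (gain * error - output)
        qi_trace.append(qi)
    return qi_trace

qi = run_control_loop(reference=10.0, disturbance=-4.0)
print(round(qi[-1], 2))   # settles near the reference: 9.73
```

With a loop gain of 50, the controlled variable ends up within a few percent of the reference despite the disturbance; the "condition for goal attainment" is nothing more than the feedback path the environment provides.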
And don't forget to look for counterexamples. How does the outer environment
determine the conditions for attainment of the goal of correctly solving a
set of simultaneous equations? Of remembering your great-grandmother's
middle name? Of thinking up a reply to a post?
2. If the inner environment is properly designed, it will be adapted to the
outer environment, so that its behavior will be determined in large part by
the behavior of the latter . . .
This is a complex way of saying that if the system's behavior is to have a
particular effect via the environment, it must be organized so as to have
that effect via the environment. The problem is that "properly designed" and
"adapted to the outer environment" are empty phrases; what he's saying is
like saying that if a bridge is strong enough it will support the load it
supports. Anyone can say that if a bridge supports a load, it is strong enough
to support that load. That boils down to "The bridge supports the load."
Simon's words boil down to "the behavior accomplishes the task."
Also, I am at a loss to understand how my inner environment's "behavior" is
adapted to the "behavior" of a hammer I am using to remove a nail. Is the
meaning of "behavior" slipping and sliding around in the middle of the
sentence between its use to mean "actions" and its use to mean "properties"
and its use to mean "outcomes"?
3. To predict how it will behave, we need only ask "How would a rationally
designed system behave under these circumstances?" The behavior takes on
the shape of the task environment.
I really don't understand how behavior could take on the shape of the task
environment. To me this is pure gobbledygook. If I'm sawing a piece of wood
in two (the task), does my behavior take on the shape of the saw? The teeth
of the saw? The shape of the wood? My behavior is a reciprocating motion
combined with exertion of forces that keep the teeth in contact with the
surface being cut. What part of the environment does that behavior take on
the shape of? Is Simon saying anything more than that the behavior of the
inner system (somehow) becomes whatever it takes to get the task done?
Well, enough of that. I could do the same with the next paragraph, and every
paragraph you have cited. But I'll just go on to your remarks concerning
this passage:
Often, we shall have to be satisfied with meeting the
design objectives only approximately. Then the properties of the inner
system will "show through." That is, the behavior of the system will only
partly respond to the task environment; partly, it will respond to the
limiting properties of the inner system.
Commenting on this, you say
I propose that the same principle holds for control systems: Only when it
is taxed do certain properties of the system "show through": those aspects
of its internal structure that are chiefly instrumental in limiting
Yes, when disturbances become large and fast, more errors occur and it
becomes easier to see the dynamics of the system. But that is all relative
to a model of the system; if you don't already have a model, you can't
interpret the dynamics. Suppose we're driving along, and I see a truck
coming in the other lane. "Look," I say, "Here's how a beginning driver acts
when hit by the backwash from a large truck." And I then proceed to make the
car overcorrect and veer back and forth after the truck passes. What are you
seeing? Is there some objective "task environment" that determines how I
will behave? Are my "limiting properties" showing through? Unless you know
what task _I_ have selected to accomplish, the environment tells you nothing
about my inner organization.
You go on:
Example: In testing a participant in a tracking task, we apply
a disturbance whose rate of change occasionally exceeds the ability of the
participant to follow. From these deviations we learn about the bandwidth
limitations of the system. We discover a certain lag in response to
disturbance; from this lag we learn about the size of the loop propagation
delay of the system.
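Bruce's example is easy to reproduce in simulation. Here is a sketch of my own construction (arbitrary constants, no one's published model): a tracking loop with a two-step transport delay, holding its controlled variable at zero against a sinusoidal disturbance. RMS error stays small for a slow disturbance and grows sharply as the disturbance rate approaches the loop's bandwidth:

```python
import math

def rms_tracking_error(freq_hz, delay_steps=2, gain=8.0, slowing=0.05,
                       dt=0.01, steps=4000):
    """RMS error of a delayed leaky-integrator control loop holding
    its controlled variable at 0 against a sinusoidal disturbance."""
    output = 0.0
    delayed = [0.0] * delay_steps        # loop transport delay
    sum_sq = 0.0
    for n in range(steps):
        d = math.sin(2 * math.pi * freq_hz * n * dt)
        qi = delayed[0] + d              # delayed output meets disturbance
        error = 0.0 - qi
        output += slowing * (gain * error - output)
        delayed = delayed[1:] + [output]
        sum_sq += error * error
    return math.sqrt(sum_sq / steps)

slow = rms_tracking_error(freq_hz=0.2)   # well inside the bandwidth
fast = rms_tracking_error(freq_hz=5.0)   # near the loop's limit
print(slow < fast)                       # prints True
```

The delay and the leaky integrator jointly fix the bandwidth; driving the disturbance past it is exactly the "taxing" condition that makes those properties show through as error.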
This, of course, I must admit is correct. It is simply a reflection of the
fact that all physical systems, including those that have been "adapted to"
environments and those that meet the "design objectives" either perfectly or
poorly, have limits of performance. But what general insight do we get from
that, beyond the fact itself?
What we get from inducing errors is the ability to measure details of the
design. But if we don't have the basic design right to begin with, those
details will tell us little.
In a benign environment, we learn only what the control system has been
called upon to do -- it compensates beautifully for disturbances,
controlling beautifully. The result is an excellent fit between the
predictions of a large class of models (all those that control equally well)
and the data.
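That model-equivalence point can be demonstrated with the same kind of toy loop (again my own sketch with arbitrary constants). Two versions whose loop gains differ by a factor of two, resisting an easy constant disturbance, produce controlled-variable traces that are nearly indistinguishable:

```python
def controlled_variable_trace(gain, slowing=0.01, steps=300,
                              disturbance=5.0, reference=10.0):
    """Trace of the controlled variable qi for a leaky-integrator loop."""
    output = 0.0
    trace = []
    for _ in range(steps):
        qi = output + disturbance
        error = reference - qi
        output += slowing * (gain * error - output)
        trace.append(qi)
    return trace

a = controlled_variable_trace(gain=50.0)
b = controlled_variable_trace(gain=100.0)
# After the brief transient, both models hold qi within about 0.1
# of the reference, so the two traces nearly coincide.
worst = max(abs(x - y) for x, y in zip(a[50:], b[50:]))
print(worst < 0.1)   # prints True
```

The benign data fit both models (and every other model that controls this well) equally; nothing in the traces picks out the true gain.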
Martin Taylor has said something similar: that you only learn about system
organization by stressing the system to the limits. But I claim that the
truth is the opposite: you must first observe the system under benign
conditions, and only then can you see how it really works. When you see a
control system "controlling beautifully," it is more or less self-evident
how that system does what it does. But if control were that easy to
recognize, PCT would have been discovered (?) a century ago. I first
understood control systems by studying artificial systems.
When you add complications by stressing the system, you start seeing
nonlinearities and delays and all sorts of inaccuracies. You don't get the
clear opposition of output to disturbance; you don't get the clear
stabilization of the controlled variable. All you do is introduce
distractions that make the basic organization even harder to see.
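The same toy loop illustrates this. Add one stress feature, an output limit (an arbitrary sketch of mine, not a claim about any particular organism), and drive the loop first gently, then hard:

```python
import math

def fraction_uncancelled(amplitude, freq_hz, limit=10.0, gain=50.0,
                         slowing=0.01, dt=0.01, steps=3000):
    """RMS of the residual disturbance left on the controlled variable,
    as a fraction of the disturbance RMS, for a loop whose output
    saturates at +/- limit."""
    output = 0.0
    resid_sq = 0.0
    dist_sq = 0.0
    for n in range(steps):
        d = amplitude * math.sin(2 * math.pi * freq_hz * n * dt)
        qi = output + d
        error = 0.0 - qi
        output += slowing * (gain * error - output)
        output = max(-limit, min(limit, output))   # saturation under stress
        resid_sq += qi * qi
        dist_sq += d * d
    return math.sqrt(resid_sq / dist_sq)

benign = fraction_uncancelled(amplitude=5.0, freq_hz=0.2)
stressed = fraction_uncancelled(amplitude=30.0, freq_hz=3.0)
print(benign < stressed)   # prints True
```

Under benign conditions only a few percent of the disturbance gets through, so the opposition of output to disturbance is plain to see. Under stress, most of the disturbance gets through, the saturation dominates the record, and the system hardly looks like a control loop at all.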
I can go from the practicalities of control system design to the
generalities that Simon produces. But could anyone go in the other direction?
Perhaps you learned things from Simon's writings that you didn't know
before. Fair enough. But I suspect that you didn't know them mainly because
you hadn't, in your EAB milieu, given them much thought. I find your
writings much clearer and much less ambiguous than Simon's. If I were to
judge by the capacity to communicate, I'd say you are smarter than Simon is.
So where's your Nobel Prize?