Simon on the mount

[From Bill Powers (960917.1300 MDT)]

Bruce Abbott (960913.2125 EST) et seq. --

Thanks for replacing my missing files.

Here's one of the paragraphs in question from Simon:

  The outer environment determines the conditions for goal attainment. If
  the inner environment is properly designed, it will be adapted to the outer
  environment, so that its behavior will be determined in large part by the
  behavior of the latter . . . . To predict how it will behave, we need only
  ask "How would a rationally designed system behave under these
  circumstances?" The behavior takes on the shape of the task environment.

I agree with Bruce Gregory that this is murky language. It's murky because
of the attempt to say something that is generally true without mentioning
any of the details that might make it true (or false). Just pick the
paragraph apart:

(1) The outer environment determines the conditions for goal attainment.

Is it possible for this not to be true, under any understanding of the
meaning of "goal attainment?" Suppose we use Skinner's translation of goal:
a reinforcing consequence of behavior. Isn't it true that the outer
environment determines the conditions under which behavior has reinforcing
consequences? Or suppose we accept the open-loop control concept of a goal:
an outcome produced by a program of commands. Isn't it true that the outer
environment determines the conditions under which a program of commands will
lead to a specific goal-outcome? Or we could even say that goals, which
imply an effect of future events on present causes, can't exist because of
physical principles that apply in the outer environment. Isn't this also a
determination by the outer environment of the conditions for goal attainment?
If you come to this sentence thinking in terms of control systems, then you
can make sense of it by saying that the output that controls the variable
must act through conditions set by the outer environment in order to produce
preselected goal-states. But Simon has all the other bases covered, too.

And don't forget to look for counterexamples. How does the outer environment
determine the conditions for attainment of the goal of correctly solving a
set of simultaneous equations? Of remembering your great-grandmother's
middle name? Of thinking up a reply to a post?

(2) If the inner environment is properly designed, it will be adapted to the
outer environment, so that its behavior will be determined in large part by
the behavior of the latter . . .

This is a complex way of saying that if the system's behavior is to have a
particular effect via the environment, it must be organized so as to have
that effect via the environment. The problem is that "properly designed" and
"adapted to the outer environment" are empty phrases; what he's saying is
like saying that if a bridge is strong enough it will support the load it
supports. Anyone can say that if a bridge supports a load, it is strong enough
to support that load. That boils down to "The bridge supports the load."
Simon's words boil down to "the behavior accomplishes the task."

Also, I am at a loss to understand how my inner environment's "behavior" is
adapted to the "behavior" of a hammer I am using to remove a nail. Is the
meaning of "behavior" slipping and sliding around in the middle of the
sentence between its use to mean "actions" and its use to mean "properties"
and its use to mean "outcomes"?

(3) To predict how it will behave, we need only ask "How would a rationally
designed system behave under these circumstances?" The behavior takes on
the shape of the task environment.

I really don't understand how behavior could take on the shape of the task
environment. To me this is pure gobbledygook. If I'm sawing a piece of wood
in two (the task), does my behavior take on the shape of the saw? The teeth
of the saw? The shape of the wood? My behavior is a reciprocating motion
combined with exertion of forces that keep the teeth in contact with the
surface being cut. What part of the environment does that behavior take on
the shape of? Is Simon saying anything more than that the behavior of the
inner system (somehow) becomes whatever it takes to get the task done?

Well, enough of that. I could do the same with the next paragraph, and every
paragraph you have cited. But I'll just go to your remarks concerning

  Often, we shall have to be satisfied with meeting the
  design objectives only approximately. Then the properties of the inner
  system will "show through." That is, the behavior of the system will only
  partly respond to the task environment; partly, it will respond to the
  limiting properties of the inner system.

Commenting on this, you say

  I propose that the same principle holds for control systems: Only when it
  is taxed do certain properties of the system "show through": those aspects
  of its internal structure that are chiefly instrumental in limiting
  performance.

Yes, when disturbances become large and fast, more errors occur and it
becomes easier to see the dynamics of the system. But that is all relative
to a model of the system; if you don't already have a model, you can't
interpret the dynamics. Suppose we're driving along, and I see a truck
coming in the other lane. "Look," I say, "Here's how a beginning driver acts
when hit by the backwash from a large truck." And I then proceed to make the
car overcorrect and veer back and forth after the truck passes. What are you
seeing? Is there some objective "task environment" that determines how I
will behave? Are my "limiting properties" showing through? Unless you know
what task _I_ have selected to accomplish, the environment tells you nothing
about my inner organization.

You go on:

  Example: In testing a participant in a tracking task, we apply
  a disturbance whose rate of change occasionally exceeds the ability of the
  participant to follow. From these deviations we learn about the bandwidth
  limitations of the system. We discover a certain lag in response to
  disturbance; from this lag we learn about the size of the loop propagation
  delay of the system.

This, of course, I must admit, is correct. It is simply a reflection of the
fact that all physical systems, including those that have been "adapted to"
environments and those that meet the "design objectives" either perfectly or
poorly, have limits of performance. But what general insight do we get from
that, beyond the fact itself?

What we get from inducing errors is the ability to measure details of the
design. But if we don't have the basic design right to begin with, those
details will tell us little.
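
To make the measurement concrete, here is a minimal Python sketch -- an
integrating control loop with a pure transport delay, which is just one
simple stand-in for the real organization, with the gain, delay, and
disturbance frequencies all invented for illustration. A slow disturbance is
opposed almost completely; a fast one outruns the delayed loop and the error
balloons, and that is exactly the kind of deviation from which you can
estimate the lag:

    import math

    def run_loop(disturbance, gain=10.0, delay_steps=8, dt=0.01):
        # Controlled quantity qv = output + disturbance; reference is 0.
        # The output integrates the error seen delay_steps samples ago
        # (a pure transport delay in the loop).
        output, buf, trace = 0.0, [0.0] * delay_steps, []
        for d in disturbance:
            qv = output + d
            trace.append(qv)
            buf.append(0.0 - qv)              # error enters the delay line
            output += gain * buf.pop(0) * dt  # delayed error drives output
        return trace

    n, dt = 4000, 0.01
    slow = [math.sin(2 * math.pi * 0.2 * i * dt) for i in range(n)]  # 0.2 Hz
    fast = [math.sin(2 * math.pi * 3.0 * i * dt) for i in range(n)]  # 3.0 Hz

    for name, dist in (("slow", slow), ("fast", fast)):
        err = run_loop(dist)
        rms = math.sqrt(sum(q * q for q in err) / n)
        print(f"{name} disturbance: RMS error = {rms:.3f}")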

In a benign environment, we learn only what the control system has been
called upon to do -- it compensates beautifully for disturbances,
controlling beautifully. The result is an excellent fit between the
predictions of a large class of models (all those that control equally well)
and the data.
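
For instance (same toy loop as above, parameters again invented), two models
with quite different gains and loop delays both flatten a slow disturbance
almost completely, and their predicted tracks differ by only a few percent
of the disturbance amplitude -- far too little to choose between them:

    import math

    def simulate(gain, delay_steps, disturbance, dt=0.01):
        # Same elementary loop: integrating output, pure transport delay.
        output, buf, trace = 0.0, [0.0] * delay_steps, []
        for d in disturbance:
            qv = output + d                  # controlled quantity, ref = 0
            trace.append(qv)
            buf.append(-qv)
            output += gain * buf.pop(0) * dt
        return trace

    def rms(xs):
        return math.sqrt(sum(x * x for x in xs) / len(xs))

    n, dt = 4000, 0.01
    benign = [math.sin(2 * math.pi * 0.1 * i * dt) for i in range(n)]  # 0.1 Hz

    a = simulate(gain=20.0, delay_steps=2, disturbance=benign)   # model A
    b = simulate(gain=8.0, delay_steps=10, disturbance=benign)   # model B
    print(f"model A error: {rms(a):.3f}   model B error: {rms(b):.3f}")
    print(f"difference between their tracks: "
          f"{rms([x - y for x, y in zip(a, b)]):.3f}")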

Martin Taylor has said something similar: that you only learn about system
organization by stressing the system to the limits. But I claim that the
truth is the opposite: you must first observe the system under benign
conditions, and only then can you see how it really works. When you see a
control system "controlling beautifully," it is more or less self-evident
how that system does what it does. But if control were that easy to
recognize, PCT would have been discovered (?) a century ago. I first
understood control systems by studying artificial systems that behaved
almost ideally.

When you add complications by stressing the system, you start seeing
nonlinearities and delays and all sorts of inaccuracies. You don't get the
clear opposition of output to disturbance; you don't get the clear
stabilization of the controlled variable. All you do is introduce
distractions that make the basic organization even harder to see.

I can go from the practicalities of control system design to the
generalities that Simon produces. But could anyone go in the other direction?

----------------------------------
Perhaps you learned things from Simon's writings that you didn't know
before. Fair enough. But I suspect that you didn't know them mainly because
you hadn't, in your EAB milieu, given them much thought. I find your
writings much clearer and much less ambiguous than Simon's. If I were to
judge by the capacity to communicate, I'd say you are smarter than Simon is.

So where's your Nobel Prize?

Best,

Bill P.
---------------------------------------------------------------------------

[Martin Taylor 960919 15:40]

Bill Powers (960917.1300 MDT) to Bruce Abbott

  Martin Taylor has said something similar: that you only learn about system
  organization by stressing the system to the limits. But I claim that the
  truth is the opposite: you must first observe the system under benign
  conditions, and only then can you see how it really works.

I think you misconstrue what I've been trying to say. What you say is
"the opposite" is what I would say is the first stage in a progression
of refinement. When you say "you must first..." I agree. If you change
that to "You must only..." then I disagree strongly. But you didn't write
"only," so I see no opposition between what you say and what I believe
and have said.

  When you see a
  control system "controlling beautifully," it is more or less self-evident
  how that system does what it does.

No, I wouldn't say that. I'd say that it is more or less self-evident
_that_ the system does what it does--control. _How_ it does what it does
is completely hidden when the system is "controlling beautifully."

But there is no point in trying to find out _how_ the system does what it
does until you have seen _what it is_ that it does. There's no point in
trying to distinguish between organizations that perform similar functions
until you have found out what that function is.

I'm working under the assumption that we long ago determined that the
function is _control_. Now I want to distinguish among different control
structures, and to do that one is stuck if one can look only at situations
in which all the structures would "control beautifully."

  When you add complications by stressing the system, you start seeing
  nonlinearities and delays and all sorts of inaccuracies. You don't get the
  clear opposition of output to disturbance; you don't get the clear
  stabilization of the controlled variable.

True. And this is what makes it possible to distinguish one control structure
from another.
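
A small sketch of what I mean, using a generic integrating loop with a
transport delay and invented parameters (the two model organizations are
hypothetical, not fits to anyone's data): two loops that are practically
indistinguishable under a benign disturbance come cleanly apart once the
disturbance is fast enough to stress them:

    import math

    def simulate(gain, delay_steps, disturbance, dt=0.01):
        # Integrating output opposing a disturbance, with a pure
        # transport delay of delay_steps samples in the loop.
        output, buf, trace = 0.0, [0.0] * delay_steps, []
        for d in disturbance:
            qv = output + d
            trace.append(qv)
            buf.append(-qv)
            output += gain * buf.pop(0) * dt
        return trace

    n, dt = 4000, 0.01
    for hz in (0.1, 2.0):                   # benign, then stressing
        dist = [math.sin(2 * math.pi * hz * i * dt) for i in range(n)]
        a = simulate(gain=20.0, delay_steps=2, disturbance=dist)
        b = simulate(gain=8.0, delay_steps=10, disturbance=dist)
        diff = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / n)
        print(f"{hz} Hz disturbance: RMS difference between models = {diff:.3f}")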

  All you do is introduce
  distractions that make the basic organization even harder to see.

Not "All you do." You do do that, for sure. But if by "basic organization"
you mean "that the system is controlling," we are working under the
assumption that this has been demonstrated and now we want to find out how
it is doing so. These "stresses" are not "distractions"; they are
dissecting scalpels and microscopes. When you want to look at a whole
organism you don't use a microscope. When you want to see a tiny bit, you do.

But you only use the microscope once you've assured yourself that there
is an organism to be looked at. You are asserting that I omit that first
step, whereas I am assuming it has been taken.

As I said, there's a progression. In the sleep study, we have two simple
variations of the disturbance waveform, and a good model should provide
an equally good fit to the human results with both kinds of variation
(difficulty and waveshape). The same human, at approximately the same time
and in the same state of "disrepair" is doing the tracks that differ only
in their disturbance waveform. The same model should apply to all the
different disturbances.

Martin