[From Bill Powers (960918.0700 MDT)]
Bruce Abbott (960917.2120 EST)--
>All Simon is saying here is that when things are designed to serve some
>function, then their characteristics must be selected so as to conform to
>the requirements imposed by the environment, e.g., a bridge designed to
>support heavy truck traffic must be designed differently from one designed
>for horse-and-buggy only.
If that's all he was saying, why didn't he say it? Perhaps because when
it's translated into simple direct language it doesn't sound nearly as
impressive, or as true.
It isn't the environment that imposes requirements (or tasks); it's people.
You are perfectly free to design a bridge to carry horses and buggies and
drive trucks (or at least one truck) over it: of course the truck will break
the bridge and the driver will be killed, but the environment doesn't care
about that; it has no requirements that the bridge carry the truck. When you
say how something "must be designed" you're speaking as if there's some
objective requirement, but there's always a contingent "if" implied -- if
you want the bridge to hold up trucks, if you want the bridge to cost as
little as possible, if you want the approaches to use up as little land as
possible or be as accessible as possible or limit the speed of vehicles
going onto the bridge, and so on. Any given design is embedded in a network
of human intentions. Of course there are physical requirements, in that IF
you want the bridge to carry a certain load, THEN you must use strong enough
materials for the configuration you have chosen. But why not build the
bridge strong enough to carry anything that might cross it? The only
objections would be based on goals regarding cost, size, esthetic
appearance, availability of materials, and so forth. All human goals, about
which the environment couldn't care less.
>Control-system example: if a control system is to stabilize a heavy gun of a
>battleship against the rolling of the ship, it must be built of components
>having sufficient power to move the gun at the rate required by the ship's
>rate of roll, along with many other such properties dictated by the task
>environment.
Not by the task environment; by the intentions of the people designing the
gun. The people want the gun to stay on the target; they want to move the
gun as fast as possible (not just as fast as the ship rolls); they don't
want to go over budget when buying the motors (unless they think they can
get away with it); they don't want the reaction forces to open the seams of
the ship and sink it. Simon got his Nobel Prize for saying that managers
don't "optimize" or "maximize" the performance of their economic units: they
"satisfice." This means that they set a goal for performance and do only what
is required to reach it. The same applies to designers; they don't optimize;
they satisfice. So you can't tell from the properties of the environment
what the actual requirements are; you have to know the goal of the designer.
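The difference between optimizing and satisficing can be sketched in a few lines (my illustration, not Simon's; the candidate designs and their load ratings are invented):

```python
# Invented example: candidate bridge designs with load ratings.
designs = [("A", 90), ("B", 120), ("C", 150), ("D", 200)]
goal = 100  # the designer's goal -- not the environment's

def satisfice(candidates, goal):
    # Accept the first candidate that meets the goal; stop searching.
    for name, rating in candidates:
        if rating >= goal:
            return name
    return None

def optimize(candidates):
    # Examine every candidate and take the maximum.
    return max(candidates, key=lambda c: c[1])[0]

print(satisfice(designs, goal))  # B: good enough, search stops
print(optimize(designs))         # D: the best available
```

The satisficer's answer depends entirely on the goal the designer chose; change `goal` and a different design becomes "adequate," with no change in the environment at all.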
>How can you tell whether the system has been _properly_ designed? It does
>the job it was designed to do.
Exactly my point. You can't know whether the system has been "properly" (I
would say "adequately") designed unless you know what job it was designed to
do. That means you have to know what the designer _wanted_ it to do. Why did
Victorian bathtubs have claw feet on them? What job were they designed to do?
What job was the little hole in the middle of a kiva designed to do? What
job were the fins on a Cadillac designed to do?
>Simon is using "behavior" in this context to refer to the performance of the
>system. The behavior of the bridge is to support the traffic passing over it.
If he's going to use "behavior" to mean action when he speaks of a human
being, but "passive properties" when speaking of the environment, he ought
to make it clear that he's switching meanings in mid-sentence. This is how
murky language (and thinking) is created: by letting word-associations and
puns lead you wherever they will.
>The task environment is the environment in which the task is intended to be
>performed. The "shape" of that environment consists of the features of the
>environment to which the system must be adapted if the task is to be
>accomplished in that environment. The behavior (of the system) is how it
>performs in the task environment.
I really don't like your way of using the passive voice. This whole
paragraph never once refers to the person who is responsible for all of
this. The environment in which the task is intended to be performed is the
environment on which someone must act in order to create a result intended
by that person. It's not a "task environment" until someone picks a task --
an outcome -- and tries to perform it or get someone else to perform it
using the available properties of the environment.
In PCT we make a distinction between behavior-the-action and
behavior-the-result. When you refer to behavior as "how it performs", are
you referring to "what regular actions the system emits" or "what regular
consequences of variable actions are repeated?" The term "performance" is
ambiguous in this regard. That's the purpose of words like this, isn't it?
To be used with different meanings in sentences that look like they have one
clear meaning. In my comment I asked how my behavior could "take on the
shape" of the task environment when I use a saw to cut wood. Your reply
says, basically, that it doesn't -- unless you use a word like "performance"
which can refer either to an action or to a property of the environment. I
perform, the saw performs, the wood performs, the task is performed, and the
whole performance satisfies the requirements. Five different meanings,
conveyed by a single word!
>Control-system example: The gun-barrel's slew-rate is matched to the
>rolling-rate of the ship, as determined by the ship's design, the nature of
>the waves it normally encounters, and the laws of physics. Its direction of
>motion is opposite to that of the deck on which it is mounted, as required
>to keep the barrel steady against the roll. You don't have to know anything
>about the physical implementation that accomplishes this to understand how
>the gun will behave in this task environment: just ask yourself how a
>"properly-designed" gun would behave.
You're driving me up the wall, Bruce. This paragraph says exactly nothing.
The gun's slew rate is matched to the rolling-rate of the ship because the
gun barrel remains stationary while the ship rolls underneath it; that is
what we _mean_ by saying that the slew rate matches the rolling rate. That
also means that the direction of motion is opposite to the motion of the
deck. What you're describing in three different ways is not the behavior of
the gun, but a _consequence_ of the behavior of a "properly designed"
system. The system is designed to keep the gun-barrel stationary in inertial
space; as an irrelevant side-effect it produces the relationship of the gun
barrel to the ship that you're describing. The gun isn't doing anything;
it's being moved around by something else. And the "properly designed
system" is not concerned with the relation of the gun barrel to the ship, or
the rolling of the ship, or the effect of the waves. It is concerned with
keeping the sensed angle between the gun and an inertial reference platform
constant. The observer of this gun control system is seeing only the more
obvious external disturbances and irrelevant consequences of the action of
this control system. Yes, the slew rate is matched to the ship's roll rate,
but only because the angle of the gun is being maintained constant relative
to something the naive observer can't see, the artificial horizon.
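A minimal simulation makes the point (my sketch, not any actual fire-control design; the gain and roll figures are illustrative). The controller senses only the gun's angle in inertial space; the ship's roll enters purely as a disturbance, and the "matched slew rate" falls out as a side-effect:

```python
import math

dt = 0.01        # time step, seconds
gain = 20.0      # loop gain (illustrative value)
reference = 0.0  # desired inertial gun angle, radians

rel_angle = 0.0  # gun angle relative to the deck (what the motor drives)
for step in range(3000):
    t = step * dt
    deck = 0.2 * math.sin(2 * math.pi * 0.2 * t)  # ship roll: a disturbance
    perceived = rel_angle + deck                  # gun angle in inertial space
    error = reference - perceived
    rel_angle += gain * error * dt                # motor output integrates error

# The gun stays nearly fixed in inertial space; its motion relative to
# the deck mirrors the roll -- a consequence of control, not its purpose.
```

Nothing in the loop represents the roll rate or the waves; the system controls one perceived variable, and the opposition between gun and deck is what an outside observer sees.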
>Control-system example: I observe a man performing a tracking task, mouse
>in hand. I begin varying the disturbance faster and faster and observe his
>ability to compensate deteriorate. I observe an increasing lag between
>disturbance and mouse-movement, and an increased tendency both to over- and
>under-compensate. From this I learn about certain properties of the system,
>chiefly those that serve to limit performance.
That's not how I would do it (or have done it). If you gradually increase
the difficulty of the disturbance, the human system you're observing will
gradually change its tactics, even its method of control or its perceptual
definition of the task. What I do is to use a disturbance containing a
randomly varying mixture of frequencies from zero to some maximum slightly
above the highest frequency at which good control can be maintained. Then I
match a model to the performance of the subject. From this I can state a
relationship between gain or lag and frequency of disturbance, in the form
of a simple differential equation. There is no particular "limit" to
performance, and it's not caused by any particular feature of the system.
Instead, there is a relationship between characteristics of performance and
characteristics of the disturbance, which is implicit in the model whether
the disturbance be "easy" or "hard."
In fact the lag between disturbance and mouse-movement is nearly independent
of the kind of disturbance, and there is usually no change in the tendency
to over or under compensate. Of course there are residuals; the model
doesn't behave exactly like the person. But by adding to the model we should
be able (in principle) to eliminate all _systematic_ residuals, leaving only
random variations.
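The method can be sketched as follows (my illustration, not the actual experimental code; the sampling rate, frequency band, and candidate gains are invented). The disturbance mixes random-phase frequencies up to a cutoff, the model is the simple integrating controller, and the model's gain is fit by minimizing the RMS difference from the "subject" -- here simulated with a known gain of 8 so the fit can be checked:

```python
import math
import random

random.seed(1)
dt = 1 / 60.0  # 60 samples per second
n = 3600       # one minute of tracking

# Disturbance: sum of sinusoids, 0.05 to 1.0 Hz, random phases
freqs = [0.05 * k for k in range(1, 21)]
phases = [random.uniform(0, 2 * math.pi) for _ in freqs]
dist = [sum(math.sin(2 * math.pi * f * i * dt + p)
            for f, p in zip(freqs, phases)) / len(freqs)
        for i in range(n)]

def simulate(gain):
    """Cursor = handle + disturbance; handle integrates gain * error."""
    handle, trace = 0.0, []
    for i in range(n):
        error = 0.0 - (handle + dist[i])  # reference: cursor at zero
        handle += gain * error * dt
        trace.append(handle)
    return trace

subject = simulate(8.0)  # stand-in for a recorded handle trace

def rms_diff(gain):
    model = simulate(gain)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(model, subject)) / n)

best_gain = min([2, 4, 6, 8, 10, 12], key=rms_diff)
```

With real data, `subject` would be the recorded handle positions and the search over gain (and lag) would be finer; the point is that one model, with one set of parameters, accounts for performance across the whole frequency mixture, "easy" and "hard" alike.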
What I'm saying is that Simon's conception of how we model systems is naive.
We don't just "look for limitations." We try to construct a model that has
the same characteristics as the real system. If this involves some sort of
limit -- for example, a limit in the speed at which actions can change --
then we put the same limits into the model. A limit or a nonlinearity is
just another feature of the system: it's something we have to account for.
If the real system has no important limitations in the region of behavior of
interest, then we don't have to put any limitations into the model. We don't
learn anything special from limits; we just learn about the limits.
I think I should say this, as a justification for abandoning this discussion
of Simon. I simply don't like the level of abstraction at which he speaks.
It's the sort of stuff that dilettantes love; big sweeping generalities that
can be interpreted to mean just about anything you want them to mean. They
aren't design principles; they're the sort of thing that would be said by a
bystander who doesn't know anything about design but still wants to be
included in the conversation. I can remember only too well doing this
frequently as a teenager working among adults; desperately looking for
something to say that would be acknowledged as true by these people who knew
so much more than I did. Of course what I got was a lot of covert rolling of
the eyes to heaven, and the sarcastic nickname of "Prof," with which I was