Simon sign-off

[From Bruce Abbott (960918.1530 EST)]

Bill Powers (960918.0700 MDT) --

Bruce Abbott (960917.2120 EST)

All Simon is saying here is that when things are designed to serve some
function, then their characteristics must be selected so as to conform to
the requirements imposed by the environment, e.g., a bridge designed to
support heavy truck traffic must be designed differently from one designed
for horse-and-buggy only.

It isn't the environment that imposes requirements (or tasks); it's people.

Ah, clever. I was speaking here of things people design. Of _course_
people impose the tasks or requirements of the artifact: what it must do.
But those things must be done in an environment that imposes on the designer
additional constraints if the artifact is to serve its purpose
satisfactorily. That is what _I_ meant by "requirements imposed by the
environment." I don't mean to imply that the environment has its own goals,
wishes, etc. that it imposes on the device; it's just simpler to state it as
I did and trust that the reader understands. My mistake; I'll try to choose
my words more carefully in the future.

Control-system example: if a control system is to stabilize a heavy gun of a
battleship against the rolling of the ship, it must be built of components
having sufficient power to move the gun at the rate required by the ship's
rate of roll, along with many other such properties dictated by the task
environment.

Not by the task environment; by the intentions of the people designing the
gun.

Yes, I agree. I know that; Simon knows that. Next issue, please.
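(Before moving on: just to put a number on "the rate required by the ship's
rate of roll," here is a back-of-the-envelope sketch in Python with made-up
figures for roll amplitude and period. The particular numbers don't matter;
they are only there to show the kind of constraint being talked about.)

import math

roll_amplitude_deg = 10.0   # assumed peak roll angle, degrees
roll_period_s = 12.0        # assumed roll period, seconds

# For a sinusoidal roll theta(t) = A*sin(2*pi*t/T), the peak roll rate is
# A*2*pi/T; the gun drive has to be able to slew at least this fast.
peak_roll_rate = roll_amplitude_deg * 2.0 * math.pi / roll_period_s
print(f"peak roll rate the drive must match: {peak_roll_rate:.1f} deg/s")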

How can you tell whether the system has been _properly_ designed? It does
the job it was designed to do.

Exactly my point. You can't know whether the system has been "properly" (I
would say "adequately") designed unless you know what job it was designed to
do. That means you have to know what the designer _wanted_ it to do. Why did
Victorian bathtubs have claw feet on them? What job were they designed to do?
What job was the little hole in the middle of a kiva designed to do? What
job were the fins on a Cadillac designed to do?

Exactly Simon's point, too. I'm sorry you missed it.

Simon is using "behavior" in this context to refer to the performance of the
system. The behavior of the bridge is to support the traffic passing over it.

If he's going to use "behavior" to mean action when he speaks of a human
being, but "passive properties" when speaking of the environment, he ought
to make it clear that he's switching meanings in mid-sentence. This is how
murky language (and thinking) is created: by letting word-associations and
puns lead you wherever they will.

No pun intended. Simon has been talking about performance -- as evaluated
against design requirements -- throughout when talking about designed
systems. I agree that his use of "behavior" in this context is sometimes
ambiguous and therefore confusing, but I think I've been able to
follow along by paying attention to the context in which the term is used.

The task environment is the environment in which the task is intended to be
performed. The "shape" of that environment consists of the features of the
environment to which the system must be adapted if the task is to be
accomplished in that environment. The behavior (of the system) is how it
performs in the task environment.

I really don't like your way of using the passive voice. This whole
paragraph never once refers to the person who is responsible for all of
this. The environment in which the task is intended to be performed is the
environment on which someone must act in order to create a result intended
by that person. It's not a "task environment" until someone picks a task --
an outcome -- and tries to perform it or get someone else to perform it
using the available properties of the environment.

We're talking about things people have designed to suit some purpose or
purposes, but "artifacts" resulting from the process of evolution have the
same quality. In the latter case it is much more difficult to determine to
what purposes some structure has become suited via variation and selective
retention; the answer must remain a matter of scientific inference. I chose
the passive voice to allow for this possibility -- after all, there is no
"designer" in the latter case to refer to in the active voice.

Control-system example: The gun-barrel's slew-rate is matched to the
rolling-rate of the ship, as determined by the ship's design, the nature of
the waves it normally encounters, and the laws of physics. Its direction of
motion is opposite to that of the deck on which it is mounted, as required
to keep the barrel steady against the roll. You don't have to know anything
about the physical implementation that accomplishes this to understand how
the gun will behave in this task environment: just ask yourself how a
"properly-designed" gun would behave.

You're driving me up the wall, Bruce. This paragraph says exactly nothing.
The gun's slew rate is matched to the rolling-rate of the ship because the
gun barrel remains stationary while the ship rolls underneath it; that is
what we _mean_ by saying that the slew rate matches the rolling rate. That
also means that the direction of motion is opposite to the motion of the
deck. What you're describing in three different ways is not the behavior of
the gun, but a _consequence_ of the behavior of a "properly designed"
system.

If you know what the gun is supposed to accomplish (its task), and what
design constraints arise from the environmental conditions under which the
system must work, you can describe what it must do to carry out that
function. You can describe how it must behave. That's not behavior?

The system is designed to keep the gun-barrel stationary in inertial
space; as an irrelevant side-effect it produces the relationship of the gun
barrel to the ship that you're describing.

You can tell this by observing its actions?

The gun isn't doing anything; it's being moved around by something else.

Cheap shot: the "gun" of which I speak includes its control system.

And the "properly designed
system" is not concerned with the relation of the gun barrel to the ship, or
the rolling of the ship, or the effect of the waves.

Of course not. It is concerned with keeping the gun barrel at a certain
angle with respect to horizontal. _How_ it does this is somewhat mysterious
to the external observer, who knows nothing of the internal workings of the
system.

It is concerned with
keeping the sensed angle between the gun and an inertial reference platform
constant. The observer of this gun control system is seeing only the more
obvious external disturbances and irrelevant consequences of the action of
this control system. Yes, the slew rate is matched to the ship's roll rate,
but only because the angle of the gun is being maintained constant relative
to something the naive observer can't see, the artificial horizon.

I thought we were talking about _inferring_ the structure/organization of
the inner system. The observer knows nothing about inertial platforms or
artificial horizons.

Observing external behavior doesn't tell you much about the inner system.
You don't know whether the gun is being driven by an electric motor or a
steam piston, whether its signals are implemented by an electronic brain or
by a system of gears and wheels. These details remain hidden.

What I'm saying is that Simon's conception of how we model systems is naive.
We don't just "look for limitations." We try to construct a model that has
the same characteristics as the real system. If this involves some sort of
limit -- for example, a limit in the speed at which actions can change --
then we put the same limits into the model. A limit or a nonlinearity is
just another feature of the system: it's something we have to account for.
If the real system has no important limitations in the region of behavior of
interest, then we don't have to put any limitations into the model. We don't
learn anything special from limits; we just learn about the limits.

But system performance is often limited in ways that reveal something of the
system's inner structure. There are tradeoffs and limitations in the choice
of materials and in their organization that may "tell" under certain
conditions and reveal something of the inner structure. Most people have
severe difficulty retaining more than a few items in consciousness when the
items are presented one after another, and this creates a serious
bottleneck in certain kinds of human performance. Any theory about how the
system works must explain this constraint. Limits are not just limits; when
they arise from the structure and organization of the system they place
constraints on how the system must be organized. The correct model will be
limited in the same way, and for the same reasons, as the real thing; limits
need not be "tacked on" to the model post hoc to make it behave realistically.
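(The earlier sketch, with one assumed change, illustrates what I mean by a
limit that "tells": clamp the rate at which the model's output can change,
and its performance matches the unlimited version against a slow roll but
breaks down against a fast one. The clamp value of 8 deg/s is arbitrary; the
point is that where performance breaks down constrains how the system must
be organized.)

import math

def peak_error(roll_period_s, max_slew_deg_per_s):
    dt, gain = 0.01, 20.0          # same assumed values as before
    gun_re_deck, worst = 0.0, 0.0
    for step in range(6000):       # 60 s of simulated time
        t = step * dt
        ship_roll = 10.0 * math.sin(2.0 * math.pi * t / roll_period_s)
        error = 0.0 - (ship_roll + gun_re_deck)    # reference is 0 deg
        # The one structural change: the output rate cannot exceed the limit.
        rate = max(-max_slew_deg_per_s, min(max_slew_deg_per_s, gain * error))
        gun_re_deck += rate * dt
        worst = max(worst, abs(error))
    return worst

for period in (12.0, 2.0):   # slow roll vs. fast roll, seconds
    print(f"roll period {period:4.1f} s -> worst stabilization error "
          f"{peak_error(period, max_slew_deg_per_s=8.0):5.2f} deg")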

I think I should say this, as a justification for abandoning this discussion
of Simon. I simply don't like the level of abstraction at which he speaks.
It's the sort of stuff that dilettantes love; big sweeping generalities that
can be interpreted to mean just about anything you want them to mean. They
aren't design principles; they're the sort of thing that would be said by a
bystander who doesn't know anything about design but still wants to be
included in the conversation.

Obviously I disagree with this assessment, but given the direction this
discussion has taken, I'm quite ready to drop the whole thing. I've already
lavished more attention on it than the few simple ideas it presents
deserved, and have more pressing matters to attend to. It is interesting,
however, to see how differently two people who come to the material with
quite different backgrounds, and with different agendas to pursue, interpret
the same written words.

Regards,

Bruce