Replies to Hans Blom

[From Bill Powers (930309.1800)]

Hans Blom (930309) --

RE: Evidence

Yes, people do disagree on what constitutes evidence. I don't
think that this is any reason to abandon the idea of experimental
test. Common sense has to come in here somewhere -- if I try to
sell you the Statue of Liberty, and you have any interest in
buying it, you'll want some evidence that I have the right to
sell it. You won't let me off just because I start a
philosophical argument about the nature of evidence.

Or will you? As a matter of fact, I happen to be selling shares
in some nice real estate in South Central Florida ....


RE: Optimal control systems

I don't think many evolutionists would agree with your statement

> ... in the billions of years of experimentation through
> evolution people (and organisms in general) have found superb
> ways to realize their goals.

Evolution doesn't optimize anything; it just weeds out unworkable
organisms. What's left is just barely good enough to survive --
for a while.

I would have to agree with your implication that organisms
control as well as they can. That's a matter of definition. But
in looking at the state of our world, I am not greatly impressed
with the way people control for social harmony, economic
viability, or maintenance of an environment fit to live in.

> In my world view, an organism's behavior is perfectly in line
> with its top-level goals.

I think you're defining top-level goals from outside the
organism. When I speak of goal-seeking I'm not normally thinking
of "goals" like maintaining the life-support system and combating
invasive microorganisms, or even "surviving," the unlearned goals
that I assume to drive reorganization. I'm thinking more in terms
of the learned goals, things like being a good person, making a
decent living, and so forth. I don't think that people are
particularly adept at constructing systems of goals that hang
together and are consistent with each other. Most of the people in
the world live in poverty, hunger, and illness. I don't see how
you can claim that they are optimal control systems.

In offering alternatives to the highest-level goals you suggested
(jumping as high as possible), I wasn't denying that some people
might actually have the goal of jumping as high as possible. I
was only pointing out that other goals are equally plausible, and
in my experience more common (particularly when you ask what the
_immediate_ goal is). In explaining to me that in different
persons identical actions may come from different motives, you're
simply echoing my point.

> You seem to have a weird conception of what pain is. Your
> "pain" sensors seem to be my pressure/deformation sensors.

Let someone put a fold of your skin in a pair of pliers, and then
start increasing the squeeze. At low levels of squeeze, you will
simply detect the amount of squeeze. But I predict that at some
level of that sensation, you will say "Ouch!"

> In my opinion, pain is a stimulus generated by our body when
> attention needs to be drawn to more or less massive ongoing or
> threatening destruction of bodily tissues. Pain, in this view,
> is not normally present. Pain, moreover, is a strong motivator
> to get away from the situation that brings it forth if still
> possible and, probably even more important, a very strong
> motivator to avoid similar situations in the future.

If a sensation rises above a certain reference level, you will
take steps to reduce it. Isn't that how we define pain? For some
sensations, like stimulation of a nerve in a tooth, that
reference level is set quite low. For others, like pinching the
skin, it is set fairly high (although the reference level varies
according to the part of the body involved).

It isn't the sensation that serves as a motivator (perceptions
don't motivate anything). The reference level determines what
level of the sensation will be enough to produce efforts to
reduce it.
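A minimal numerical sketch of this idea (the gain and all the numbers are invented for illustration, not taken from physiology): a one-way loop that produces no output until the sensation exceeds its reference level. The same squeeze that triggers a response for a tooth nerve (low reference) produces none for skin (high reference).

```python
def pain_response(sensation, reference, gain=2.0):
    """Return withdrawal effort: zero unless the sensation
    exceeds its reference level (illustrative parameters)."""
    error = sensation - reference
    return gain * error if error > 0 else 0.0

# The same squeeze, judged against two different reference levels:
squeeze = 5.0
tooth_effort = pain_response(squeeze, reference=1.0)  # low reference: strong effort
skin_effort = pain_response(squeeze, reference=8.0)   # high reference: no effort yet
```

Note that the sensation itself is identical in both calls; only the reference level decides whether any corrective effort appears.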

RE: zero reference levels

I don't see any particular problem here. As you say, setting a
reference level for a particular perception to zero is not very
practical, because you'll end up with the perception oscillating
above and below zero. Better to set it at a very low but nonzero
level, so you can maintain control of it at a harmless level.

In your anesthesia problem, you don't want any reaction from
the patient at all, so you have to try to guess how much
anesthesia is required without administering too much. And you
don't want to use the patient's reaction as a gauge, because you
don't want any reaction at all. Obviously, tight control is
impossible; you just do the best you can.
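A toy simulation of why a zero reference makes the reaction useless as a gauge (the numbers and the loop structure are invented for illustration; this is not a clinical model). The patient's reaction is bounded below by zero, so with the reference at zero the error can never go negative: once the reaction vanishes, the loop gets no signal to back the dose off, and when the patient's need later drops the dose just stays put. A small nonzero reference keeps the loop in a region where it can correct in both directions.

```python
def dose_after(ref, steps=400, k=0.5):
    """Integrating controller adjusting an anesthetic dose to hold a
    patient 'reaction' at the reference level (toy numbers)."""
    dose = 0.0
    for t in range(steps):
        need = 5.0 if t < 200 else 2.0    # assumed: the need drops halfway through
        reaction = max(0.0, need - dose)  # reaction cannot fall below zero
        dose += k * (reaction - ref)      # push reaction toward the reference
        dose = max(0.0, dose)
    return dose

overdose = dose_after(ref=0.0)  # dose stays stuck near the old need of 5
tracked = dose_after(ref=0.2)   # dose settles near 2 - 0.2 = 1.8
```

With ref = 0 the controller rides the dose up to meet the initial need and then never comes down; with ref = 0.2 the small residual reaction tells it when the need has fallen.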

> This is slightly paradoxical: we want to keep away from
> something that we do not feel. Isn't such a construct, control
> relative to an 'imagined' reference, possible in PCT?

I think you must mean controlling an imagined perception relative
to a reference level. Reference signals are just reference
signals, generated by the next level up, not by the environment.

Controlling an imagined perception is possible, but doesn't give
you any control over a real one. What you can do is interpret
signs from the environment as an indication that an unwanted
perception is likely to occur, and correct the environment to a
state in which you predict that it will not occur. Of course if
you continue to do this, the most likely outcome is superstitious
behavior. You have to check now and then to make sure that if you
don't take the preventive step, the unwanted perception will in
fact occur. The most persistent forms of superstitious behavior
arise when the unwanted perception implies great danger or
possible death. Then you're stuck with the superstition because
you don't dare test it.

To break out of that sort of superstitious behavior, all you can
do is improve your understanding of nature. You wouldn't try
experimentally stepping on a crack to see if it really would
break your mother's back as a way of testing that superstition.
But if you understood how the world works a little better, you
might decide that there's no basis for the rule, and simply
cancel it. The less you understand, of course, the harder it is
to give up any superstition.

RE: your control formula

> ... in control systems a CONSTANT never provides a problem.
> Control of CONSTANT disturbances is trivial; one might
> not even call a constant disturbance a disturbance.

But in your design, you have to assume that the average value of
the disturbance is zero. If you don't assume that, will your
formula work?

This is a prime candidate for a working model. I claim that your
design for a model of steering will end up with the driver
oscillating between the edges of the road, whereas a PCT-type
design will stabilize the car on some path well away from the
edges. The two become equivalent, by the way, if there is a
region on the road where both edge-avoiding systems are
simultaneously active. Then you get a conflict system with a
virtual reference level in the middle.
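That conflict prediction is easy to check numerically. Here is a small sketch (gains and positions invented): two one-way edge-avoiding controllers, one pushing the car rightward whenever it is left of position 6, the other pushing it leftward whenever it is right of position 4. In the overlap both are active, and the car settles at the midpoint, 5 -- a reference level that neither system specifies on its own.

```python
def settle(left_ref, right_ref, x=0.0, k=0.3, steps=500):
    """Two opposed one-way proportional controllers acting on position x
    (illustrative gains and geometry)."""
    for _ in range(steps):
        push_right = k * max(0.0, left_ref - x)   # avoid the left edge
        push_left = k * max(0.0, x - right_ref)   # avoid the right edge
        x += push_right - push_left
    return x

midpoint = settle(left_ref=6.0, right_ref=4.0)  # overlapping zones: settles at 5.0
no_overlap = settle(left_ref=2.0, right_ref=8.0)  # no overlap: just stops at 2.0
```

With overlapping zones the two opposing outputs balance exactly halfway between the two references; with non-overlapping zones the car merely comes to rest at the first position where neither system is active.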

RE: optimization again

> You ARE trying to optimize something: you are trying to find an
> optimal match between a model and a real human subject.

You're a pretty slippery customer. What you say is true: I'm
controlling for the best fit between the model and the real
behavior. Achieving this requires the same sort of trial and
error that tuning a radio or focusing a lens requires, because
the amount of error doesn't tell you which way to move, and
there's no a priori way to specify the magnitude of the effect at
the minimum (or maximum). This sort of control does happen. It's
not very common. And it's not very tight.
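The kind of trial and error meant here can be sketched as follows (the step-size schedule and the target frequency are invented for illustration). Since the size of the error says nothing about which way to turn the dial, the only recourse is to try a step, keep going in the same direction if the fit improves, and reverse with a smaller step if it gets worse.

```python
import random

def tune(badness, x0=0.0, step=0.5, iters=200, seed=1):
    """Trial-and-error minimization when error magnitude gives no
    direction: step, keep going while badness falls, else reverse
    and shrink the step (illustrative schedule)."""
    random.seed(seed)
    x = x0
    direction = random.choice([-1.0, 1.0])
    best = badness(x)
    for _ in range(iters):
        trial = x + direction * step
        b = badness(trial)
        if b < best:
            x, best = trial, b      # improvement: keep this direction
        else:
            direction = -direction  # worse: reverse
            step *= 0.8             # and home in with a smaller step
    return x

# "Tuning a radio": badness is the mismatch from an unknown best setting, 3.7
x = tune(lambda f: abs(f - 3.7))
```

This is the same logic as focusing a lens: overshoot, back up, overshoot by less, and settle -- slow and loose compared with ordinary continuous control.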

Same subject:

> My point is that the human perceptual + conceptual systems are
> so beautifully designed that they even extract information from
> very 'noisy' perceptions.

They do that only as well as the statistics and the accuracy-time
tradeoff permit. I don't worry much about extracting signal from
noise; most of the behaviors we observe work at signal levels
where noise can be neglected.

> Control must always be limited; the world is just too complex
> for our three pounds of brains to model it and our fifty pounds
> or so of muscles to subdue it.

Well, I won't be nasty and remind you of how wonderful our
evolved control systems are supposed to be.

What's really wrong with your statement is the implication that
it's hard to find instances of good control. Control is, to be
sure, limited -- but it's hard to find examples of behavior in
which control isn't pretty good by anyone's standards. "Limited"
is one of those qualitative terms; the importance of the limits
depends on quantitative definitions. Human motor behavior works
with a bandwidth of only about 2.5 Hz -- certainly too limited to
enable us to balance a one-inch stick on end. On the other
hand, this bandwidth seems to be just sufficient to handle most
of the disturbances that actually occur on scales that matter to
us. On those scales, the limitations are irrelevant.
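The arithmetic behind that comparison, using the simplest point-mass inverted-pendulum model (a uniform stick tips somewhat faster, by a factor of sqrt(3/2), which doesn't change the conclusion): the tilt of a balanced stick of length L grows at a rate of about sqrt(g/L) radians per second, or sqrt(g/L)/2*pi expressed in Hz.

```python
import math

def toppling_rate_hz(length_m, g=9.81):
    """Exponential growth rate (in Hz) of the tilt of an inverted
    point-mass pendulum of the given length: sqrt(g/L) / (2*pi)."""
    return math.sqrt(g / length_m) / (2 * math.pi)

one_inch = toppling_rate_hz(0.0254)  # ~3.1 Hz: beyond a 2.5 Hz controller
one_meter = toppling_rate_hz(1.0)    # ~0.5 Hz: comfortably within it
```

A one-inch stick falls on a time scale faster than a 2.5 Hz controller can track, while a meter stick falls slowly enough that the same bandwidth handles it with room to spare.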

RE: input and output

> In engineering, we take great liberty in defining inputs,
> outputs and systems.

I think this is one of the reasons that engineers failed to come
up with PCT. When you're focused on producing some outcome in
the environment, there's no organizing principle for laying out
the control system. You can put your stabilizing filters in the
input function, or add little loops anywhere you like that will
do the job. The result is that there are no real principles of
design in control engineering (that I know of). There are plenty
of principles, but none having to do with how to design the
functions of a control system in some systematic way. Basically,
you kludge up a design that looks as if it will work, and then
buckle down to analyzing what you designed.

The PCT approach is to define the problem in terms of sensed
variables: it is the sensed variable that will ultimately be
controlled, so it should represent something specific in the
environment to be controlled. The engineer can violate this
principle, because the engineer knows what is to be controlled.
But if the control system is in an organism, its perceptions have
to be useful in a variety of higher-level systems, and can't have
haphazard relationships to the outside world. This forces the
modeler to propose a consistent set of definitions of input,
output, system, and environment.

I think that a little more systematicity would also help control
engineers, but that's their business.

Bill P.