on control

[Hans Blom, 950124] On control

It is frequently expressed in these here csg-circles that an organism
_is_ a control system. It is not. Or that organisms _have_ control
systems, in contexts where we cannot meaningfully say so. This small
essay will attempt to clarify this obviously confusing issue. Let me
quote just a few of the remarks that prompted this contribution.

[Rick Marken (941223.0930)]: There are unquestionably control systems
in plants ("tropisms" ...

[Bill Powers (941225.1100 MST)]: A "grasping" control system can
optimize itself by means we can easily imagine, but the optimization
applies only to achieving the reference-state of grasping as well as
possible. What does such a control system know about grasping hot
objects, or grasping objects which are not only edible, but
nourishing?

[Rick Marken (941226.1745)]: In PCT the word "hierarchy" refers to
the structure of the relationship between control systems in the PCT
_model_

[Bill Powers (941227.0730 MST)]: In the same way, a control system of
a given level can't perceive in terms of higher-level perceptual
functions: a relationship-controlling system perceives nothing of
categories or any higher level of perception.

Let us be clear about what a control _system_ is. It always consists
of two parts, an inside and an outside, so to speak. Let us call the
"inside" a controller, even though it may not be able to control; and
let us call the "outside" the something to be controlled, even though
in reality it may not be controlled. A well-engineered controller and
the plant it controls. An organism and its environment. A human being
and her physical and social context. And it is only the two together
that are the _system_. It is only the two together that determine
whether we can talk about "control" occurring or not. Of course this
is not a new idea. Yet it, and some of its implications, are readily
forgotten.

Examples are easy to construct. Invert the sign of the environment's
transfer function and what was an excellent controller is now an
oscillator. Make the environment unresponsive (massive, sluggish),
and control is control no more. A human may be an excellent
controller on earth, but not in Jupiter's immense gravity or in an
asteroid's negligible one. It is the environment that determines
whether something that was designed to be a controller is in control
or not. Engineers are always concerned with these limits of control.
They have to establish the possible variations of the environment in
order to discover the stability or controllability limits of the
system as a whole.
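
A small numerical sketch, with arbitrary gains, may make this
concrete. The few lines of Python below are only a toy: a proportional
controller whose output reaches its perception p through a single
environment gain.

    def run_loop(k_ctrl, k_env, r=1.0, steps=20):
        p = 0.0                       # the controlled (perceived) variable
        for _ in range(steps):
            o = k_ctrl * (r - p)      # controller: output proportional to error
            p = p + k_env * o         # environment: turns output into an effect
        return p

    print(run_loop(5.0,  0.1))    # well-matched loop: p settles near r = 1.0
    print(run_loop(5.0, -0.1))    # sign-inverted environment: p runs away
    print(run_loop(5.0,  0.001))  # sluggish environment: p barely moves

In this first-order toy the sign inversion shows up as runaway rather
than oscillation, and the sluggish environment leaves p sitting near
zero; either way, a controller that was perfectly adequate in one
environment is helpless in another.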

Controllability limitations come in two shapes, depending upon the
loop gain. Notice the word _loop_: it, by itself, signals that both
inside and outside equally contribute to the characteristics of the
system as a whole. In the first flavor, a system is uncontrollable
when it oscillates. This is so when the loop gain is positive and
equal to one; a loop gain greater than one is not possible and will
forcibly be limited to one by the physical characteristics of
controller or environment or both. But in practice a loop gain
approaching plus one results in uncontrollability as well, due to the
extreme sensitivity of the system to both disturbances and setpoint
changes. The other flavor of controllability arises when the loop
gain is approximately zero. Now there will be no wild oscillations,
but no or hardly any response at all, however great the effort. So, in an
environment with a fixed contribution to the loop gain, the
controller is extremely restricted in its own choice of
characteristics. If it can choose.
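
The same toy loop, with the product of controller and environment
gains lumped into a single number G (the fraction of the current error
that gets corrected in one pass around the loop), shows both flavors.
The critical value in this discrete sketch happens to lie at G = 2
rather than at one, since the conventions differ, but the qualitative
picture is the point.

    def trace(G, r=1.0, steps=8):
        p, history = 0.0, []
        for _ in range(steps):
            p = p + G * (r - p)       # correct a fraction G of the error
            history.append(round(p, 3))
        return history

    print(trace(0.5))    # moderate loop gain: p settles smoothly toward r
    print(trace(2.2))    # too much gain around the loop: growing oscillation
    print(trace(0.02))   # loop gain near zero: p hardly responds at all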

We may assume that in a fully genetically determined organism -- if
such a thing exists -- there is an exquisite tuning of the
characteristics of the organism's modes of action to its environment.
But at least some organisms are too complex to be fully specified by
the genetic code. Here a different mechanism is required in addition.

Adaptation -- as in adaptive control systems and higher organisms --
is an attempt by the controller to adjust itself to its environment
in such a way that it can initiate, resume or improve its control.
Thus there is a kernel of humility in adaptation. Other types of
learning are not different in this respect. Learning of any type is
an attempt by the organism to adjust to the environment in such a way
that control can be (re)established or improved. But we should not
forget that, maybe, some environments are just too harsh to be
controlled, and despite all learning effort, control will prove to be
impossible -- or only slight control will ultimately be possible,
demanding the greatest possible effort. We all know human cases where
this happened: children with unresponsive or inconsistent parents are
a case in point. And since effort itself often is a precious resource
for an organism that must be tightly controlled, attempts to control
may cease altogether in such cases.
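
To see what adaptation adds, extend the toy loop with one crude rule:
when the error has clearly blown up, the controller changes its own
output function, here simply by flipping the sign of its gain, and
tries again. This is not offered as a model of any real reorganization
process, only as the smallest possible picture of a controller
adjusting itself to an environment it did not anticipate.

    def adaptive_run(k_env, r=1.0, steps=60):
        p, k_ctrl = 0.0, 2.0
        for _ in range(steps):
            if abs(r - p) > 10.0:     # control has evidently been lost:
                k_ctrl = -k_ctrl      #   try the opposite sign of action
                p = 0.0               #   (restart the toy run for clarity)
            p = p + 0.2 * k_env * k_ctrl * (r - p)
        return p

    print(adaptive_run(1.0))    # ordinary environment: p ends near r = 1.0
    print(adaptive_run(-1.0))   # inverted environment: after one runaway the
                                # flipped gain restores control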

Adaptation is the only possible strategy in all cases where the
characteristics of the environment change significantly. Without
adaptation, the system as a whole will either start to oscillate or
come to a grinding halt. Without adaptation, the "controller", now
out of control, does not realize that the world has changed. The old
behavioral patterns remain intact but have no effect anymore -- or
not the desired effect. We all know examples where this occurs in
humans, too: an example is the phase of denial and disbelief after
the death of a significant other, where a long time may pass before
adaptation develops.

This discussion has to do with what is called the organism's
autonomy. Autonomy should always be understood relative to an
environment. An "autonomous vehicle" can be autonomous only in a
relatively mild environment -- witness some spectacular failures.
Similarly an organism can be autonomous only if _its_ environment is
mild enough. To talk about an organism being autonomous or a control
system all by itself is sheer nonsense.

Why this discussion, which presents nothing new? To counteract
tendencies that I spot in some of our csg-l discussions to think of
organisms -- and humans -- as systems that, all by themselves, are in
full control, regardless of their environment. They are not. It is a
fairy tale that any newspaper boy can become a millionaire if only he
works hard enough. Control can arise only as an _interaction_. And in
this interaction the controller usually has to adjust to the demands
of the environment -- except where the environment can be changed and
given different characteristics. But that is an entirely different
story.

At the very least, we have a severe terminological problem. Maybe it
can be solved by introducing different names. But I have a feeling
that the misunderstanding is deeper than that. It leads to statements
like

[Bill Powers (941227.0730 MST)]: All human beings set their own
goals; there is no mechanism by which any external agency can directly
determine the goals of an individual.

This is misleading at best. Goals are relative to the environment,
and without the "external agency" goals cannot even be defined.

Greetings,

Hans

Tom Bourbon [950124.1130]

[Hans Blom, 950124] On control

It is frequently expressed in these here csg-circles that an organism
_is_ a control system. It is not. Or that organisms _have_ control
systems, in contexts where we cannot meaningfully say so. This small
essay will attempt to clarify this obviously confusing issue.

Enlighten us, please! :-)
. . .

Let us be clear about what a control _system_ is. It always consists
of two parts, an inside and an outside, so to speak. Let us call the
"inside" a controller, even though it may not be able to control; and
let us call the "outside" the something to be controlled, even though
in reality it may not be controlled. A well-engineered controller and
the plant it controls. An organism and its environment. A human being
and her physical and social context.

Actually, when we talk about people as though they were control systems, we
don't draw the boundary at the skin-air interface and certainly not at a
"person-context" interface, whatever that might be). Instead, on the output
side we put it at the gap between neurons and either muscles or glands. On
the input side, it is at the surface of the receptor cells or the tissues
that directly affect them.

And it is only the two together
that are the _system_. It is only the two together that determine
whether we can talk about "control" occurring or not.

But, of course.

Of course this
is not a new idea. Yet it, and some of its implications, are readily
forgotten.

As we must constantly remind people.

Examples are easy to construct. Invert the sign of the environment's
transfer function and what was an excellent controller is now an
oscillator.

In the systems you engineer, perhaps, but not if the controller is a person
in a tracking task, or a similar situation. After a few milliseconds, the
person reverses the sign of output and goes right on tracking. Isn't it
amazing how living control systems can do things engineered ones can't, at
least as you have described the engineered ones?

Make the environment unresponsive (massive, sluggish),
and control is control no more.

Obviously you mean this to be true only after broad limits have been
exceeded -- limits within which everything remains the same and the
person just keeps on tracking, if I can get away with using that
metaphor.

A human may be an excellent
controller on earth, but not in Jupiter's immense gravity or in an
asteroid's negligible one.

A problem all of us encounter every day! ;-) I must be missing your point,
Hans.

It is the environment that determines
whether something that was designed to be a controller is in control
or not.

And?

Engineers are always concerned with these limits of control.
They have to establish the possible variations of the environment in
order to discover the stability or controllability limits of the
system as a whole.

Yes. That's one of the big differences between engineers who design and
build control systems (er, "controllers") and perceptual control theorists
who must study living control systems (oops, "controllers") "as is." But we
have discussed all of this at great length, many times before, haven't we?

Controllability limitations come in two shapes, depending upon the
loop gain. Notice the word _loop_: it, by itself, signals that both
inside and outside equally contribute to the characteristics of the
system as a whole.

Right! It seems we never stop telling people that.

In the first flavor, a system is uncontrollable
when it oscillates. This is so when the loop gain is positive and
equal to one; a loop gain greater than one is not possible and will
forcibly be limited to one by the physical characteristics of
controller or environment or both.

What? Hans, haven't you been on csg-l for a couple of years now? Did you
forget all of the discussions on this point, or are you just trying to
tweak our noses and have a little fun? ;-) I'm sure you are just trying to
get us to say, "A living control system -- controller -- has a high-gain
negative-feedback interaction with its environment."
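
For readers who like the arithmetic: in the simplest static picture of
such a loop, with a disturbance d added to the environment's effect
and G the (positive) product of controller and environment gains in a
negative-feedback arrangement, the controlled variable settles at
p = (G*r + d)/(1 + G). The numbers below are arbitrary, and the static
algebra is only a cartoon of the real dynamics, but it shows why high
gain matters.

    def closed_loop_p(G, r=1.0, d=5.0):
        # steady state of p = k_env*o + d with o = k_ctrl*(r - p), G = k_ctrl*k_env
        return (G * r + d) / (1 + G)

    for G in (1, 10, 100, 1000):
        print(G, round(closed_loop_p(G), 3))
    # G = 1    -> 3.0    (the disturbance pushes p far from r = 1.0)
    # G = 1000 -> 1.004  (p stays essentially at the reference)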

We may assume that in a fully genetically determined organism -- if
such a thing exists -- there is an exquisite tuning of the
characteristics of the organism's modes of action to its environment.

Yes, and one of the truly exquisite "tunings" is the one that produces a
remarkably effective general-purpose, high-gain perceptual controller out
of rather imprecise and sloppy parts.

But at least some organisms are too complex to be fully specified by
the genetic code. Here a different mechanism is required in addition.

Some organisms are controllers and others are not? Are the "others" pure
reflex systems, finely tuned to their environments?

Adaptation -- as in adaptive control systems and higher organisms --
is an attempt by the controller to adjust itself to its environment
in such a way that it can initiate, resume or improve its control.
Thus there is a kernel of humility in adaptation. Other types of
learning are not different in this respect. Learning of any type is
an attempt by the organism to adjust to the environment in such a way
that control can be (re)established or improved.

You say organisms "adapt themselves." Do they ever adapt their environments
to suit their own (perceptual) ends?

But we should not
forget that, maybe, some environments are just too harsh to be
controlled, and despite all learning effort, control will prove to be
impossible -- or only slight control will ultimately be possible,
demanding the greatest possible effort. We all know human cases where
this happened: children with unresponsive or inconsistent parents are
a case in point.

Yes. And?

And since effort itself often is a precious resource
for an organism that must be tightly controlled, attempts to control
may cease altogether in such cases.

To paraphrase: "Effort is a resource. As a resource, it must be tightly
controlled." What kind of resource is the action called effort? Effort is
not a thing, an object. That issue aside, who or what must "control"
effort, and why, and in what quantity? Apparently I do not follow your
reasoning here.

. . .

This discussion has to do with what is called the organism's
autonomy. Autonomy should always be understood relative to an
environment.

Another thing we seem to repeat very often.

. . .

Why this discussion, which presents nothing new? To counteract
tendencies that I spot in some of our csg-l discussions to think of
organisms -- and humans -- as systems that, all by themselves, are in
full control, regardless of their environment. They are not. It is a
fairy tale that any newspaper boy can become a millionaire if only he
works hard enough.

Hans, what on earth does this story have to do with our discussions of
living perceptual control systems? No one who does any PCT modeling or
research has ever implied such a thing. Are you writing for some of the
people on csg-l who never do research or modeling, just to remind them of
something?

At the very least, we have a severe terminological problem. Maybe it
can be solved by introducing different names. But I have a feeling
that the misunderstanding is deeper than that. It leads to statements
like

[Bill Powers (941227.0730 MST)]: All human beings set their own
goals; there is no mechanism by which any external agency can directly
determine the goals of an individual.

This is misleading at best. Goals are relative to the environment,
and without the "external agency" goals cannot even be defined.

If you disagree with Bill, tell us how an external agency can directly set a
reference perception in a living controller. Many are those who make the
assertion; none are those who have provided the evidence.

Describe an "external agency" that can do what you say. In itself, that
would be a revolutionary contribution to PCT.

Later,

Tom
Department of Neurosurgery
University of Texas Medical School-Houston Phone: 713-792-5760
6431 Fannin, Suite 7.138 Fax: 713-794-5084
Houston, TX 77030 USA tbourbon@heart.med.uth.tmc.edu

[Hans Blom, 960206]

(Bill Powers (960202.1030 MST))

It helps if you specify the purpose for which a decision is to be
made. We could say that we require a variable to be maintained
within 1% of the range of values of the reference signal (taking
note of your previous correction of my statement about "temperature
error"). Or we could specify a specific amount of error. We would do
this on the basis that smaller errors would have no
consequences we consider important. When I'm steering a car, I could
say that a position error of one centimeter is essentially the same
to me as no error at all. So if a system can't keep the error that
small I would say it isn't performing its function very well, and
might try to improve it. But if it keeps the error smaller than 1
cm, I don't care how much smaller the error is; it's still "no
error" to me.

I fully agree with your "It helps if you specify the purpose for
which a decision is to be made." What, in particular, would the
purpose be at the highest level of the hierarchy of, say, humans? As
long as we are not specific about this, we will be faced with the
difficulty of only being able to study the "purpose" of the lower
(lowest?) levels. But since these are only "tools" designed to
realize the top-most purpose, we'll be studying tools without knowing
what they are good for.

The remainder of your remark above is not in line with what I observe
about control systems, whether artificial or natural. Although it
would be nice to work from a priori specifications, what makes a
control system successful IN PRACTICE is that it is better than a
competing control system -- not necessarily in accuracy, but possibly
in size, reliability, or power consumption. I see this process in
industry; I also see this mechanism in the evolution of species.

In particular, take your remark that "if it [a control system] keeps
the error smaller than 1 cm, I don't care how much smaller the error
is; it's still 'no error' to me." That might be true, but the more
successful control system would now probably be the one that realizes
the same accuracy with less power consumption. In general, a control
system has not just one purpose, but a great many, most of which
are, moreover, very difficult to pre-specify. Usually the success of
a design is apparent only afterwards, after it has been thoroughly
tested in the practice of daily life over a sufficiently long period.

Remember that the aileron "control" does not control the position of
the aileron. It has to be operated by some other system.

It is statements like these that make the meaning of the word
"control" (or any of its replacements) very untransparant. It
confuses levels. The aileron control system can be considered a
control system per se and any engineer will regard it as such. It is
also, of course, a component of a larger control system; as long as a
pilot controls the controls, that is.

In all these cases, a control system is employed where we would
actually prefer a "stimulus-response" or "input-output" system, if
such were only possible. There is nothing advantageous to a
feedback control system per se.

But in each case you are talking about a _component_ of a negative
feedback control system, not the system actually doing the feedback
control. If the overall system isn't retrofactive, then there's no
-- excuse me -- control.

Playing with words? A control system does not control unless it is
controlled? Where does it stop? What is the highest level? I'm afraid
you might be caught in unending recursions.

It seems to me that you frequently express a concern about being
"certain" that something is "correct." Isn't that a rather vain
hope?

Yes, sure. Yet that is what science is to me: a search for ever more
certain knowledge. In other words: for ever better models that allow
ever better control. And that most certainly is not a vain hope, as
the history of science shows.

Greetings,

Hans