Emergent control

[From Bill Powers (930619.1800 MDT)]

Hans Blom (930619) --

I detect the opening of a small window between our worlds. It
opened like this:

So what do *I* do to control the controller? I put the sensor
at a location where what the heating system can do approximates
as closely as possible what *I* want done. There are two
levels: the heating system controls, and I control using the
heating system's controller. Could that be a hierarchical
control system?

This illustration of hierarchical control has been used a number
of times by PCTers, but not quite in the manner you describe it.
Under normal circumstances, a householder has no choice but to
leave the sensor where it was installed. But there is a way to
achieve hierarchical control even if the sensor is located so it
does a poor job of controlling air temperature in the place where
the householder wishes to sit. The knob on the thermostat, which
adjusts the reference level, can be turned up or down until the
air at some location other than the sensor is at the desired
temperature. As long as heat disturbances don't change, the
thermostat, by keeping its own sensor at the new temperature,
will, as a byproduct, keep the householder comfortable at a
different temperature. The householder is a higher-level control
system using another control system, through adjusting its
reference signal, to control a variable other than the one the
lower control system itself is controlling. In this case the
difference isn't great -- it's just a matter of where temperature
is being sensed -- but the principle remains the same.
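This two-level arrangement can be sketched as a pair of nested loops. The sketch below is purely illustrative: the gains, the 10-degree outdoor temperature, and the 3-degree gradient between sensor and chair are invented numbers, not measurements.

```python
# Two-level control: the thermostat controls the temperature at its own
# sensor; the householder controls the temperature at the chair by slowly
# turning the thermostat's reference knob. All constants are illustrative.

def simulate(steps=5000):
    sensor_temp = 15.0       # temperature at the thermostat's sensor
    gradient = -3.0          # the chair runs 3 degrees cooler than the sensor
    thermostat_ref = 20.0    # the knob setting the householder can adjust
    chair_ref = 21.0         # what the householder actually wants to feel

    for _ in range(steps):
        # Lower loop: the thermostat acts on the error at its own sensor.
        heat = max(0.0, 2.0 * (thermostat_ref - sensor_temp))
        sensor_temp += 0.05 * (heat - 0.5 * (sensor_temp - 10.0))

        # Higher loop: the householder perceives the chair temperature and
        # adjusts the lower system's reference signal, not its parameters.
        chair_temp = sensor_temp + gradient
        thermostat_ref += 0.01 * (chair_ref - chair_temp)

    return chair_temp, thermostat_ref

chair, knob = simulate()
print(round(chair, 1), round(knob, 1))  # chair settles at 21.0, knob near 27.5
```

Notice that the higher loop ends up setting the knob well above 21 degrees: it does not care what reference the thermostat holds, only that its own perception (the chair temperature) matches its own reference.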

Of course the air temperature the householder experiences, being
far from the thermostat's sensor, is easy to disturb and indeed
can be disturbed without the thermostat's knowing it and
correcting the error. So the householder would be best advised to
sit near the control unit to avoid having to get up frequently
and alter the reference setting of the thermostat. Or, of course,
as you suggest, rip the control unit off the wall, solder
extensions onto all its wires, and carry it around (as organisms,
in fact, do with their own private portable temperature control systems).

Much of the rest of what you say shows that you are actually
considering the same kinds of problems with which PCT must some
day become involved -- and you're hinting that when that day
comes, a lot of the work will already have been done!

The goal of robust control is to design systems that continue
to control well in an environment which may change greatly or
with sensors and/or actuators whose characteristics may vary
randomly or systematically, as in aging of catalysts or decay
of magnetic properties and so on. For example, it is one of the
limitations of servomechanisms that they break down when their
sensors become very noisy. There are solutions for that problem
using other approaches.

In order for "robust control" as you define it to be achieved by
an autonomous system (i.e., without an engineer standing by with
test instruments and a screwdriver), something in the system
itself must be able to perceive the quality of control. If a
control system threatens to become unstable, for example,
something must be monitoring a function of the various system
variables so as to report the degree of instability that is
present. The desired level may be zero, or it may be a one-way
specification: no more than some maximum amount of instability.
The resulting error signal has to operate an effector system
which then alters the parameters of the control system in the
direction that decreases the perceived instability toward the
reference setting.

The organization of the stabilizing system is thus just like the
basic diagram of a control system in PCT, except for one thing:
it does not act by adjusting the reference signals for lower
systems, but by adjusting the parameters of their components. It
does not perceive a variable that is a report on a higher-level
aspect of the environment, but a variable that represents the way
the control system is controlling.
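A toy version of such a parameter-adjusting system, with invented constants: a lower loop's actuator slowly "ages," while a monitoring system perceives the quality of control (a smoothed error magnitude) and raises the loop gain whenever that perception exceeds its tolerance. The point to notice is that the higher system acts on a parameter, not on the lower system's reference signal.

```python
# A lower-level loop whose actuator slowly weakens, plus a system that
# perceives the quality of control (smoothed error magnitude) and corrects
# it by adjusting a parameter: the loop gain. Constants are illustrative.

def run(adapt=True, steps=4000):
    x = 0.0                 # controlled variable
    ref = 1.0               # lower system's reference
    gain = 9.0              # output gain: the parameter to be adjusted
    strength = 1.0          # actuator effectiveness; decays with "age"
    err_s = 0.0             # smoothed error magnitude: perceived quality
    tol = 0.12              # tolerated level of error

    for _ in range(steps):
        err = ref - x
        x += 0.02 * (strength * gain * err - x)
        strength *= 0.9996                      # slow aging of the actuator
        err_s += 0.02 * (abs(err) - err_s)
        if adapt and err_s > tol:
            gain *= 1.001                       # restore control quality

    return abs(ref - x)

print(run(adapt=False) > run(adapt=True))  # True: the adaptive loop stays tight
```

With adaptation switched off, the aging actuator lets the error grow several-fold; with it on, the error hovers near the tolerance even though the actuator has lost most of its strength.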

I recognized many years ago that this sort of hierarchical
control relationship had to exist in the human system. However, I
also recognized that modeling it analytically was a very much
more complex undertaking than I would be able to carry out. The
only step taken in that direction (which Clark and I mentioned in
our joint paper in 1960) was the introduction of a reorganizing
system, a system which altered parameters at random (anticipating
the "genetic algorithm" approach now in vogue). This idea is
still necessary, for if there are organized ways of adjusting the
parameters of control, the systems that accomplish it must
themselves have arisen somehow.

Control systems with random outputs can actually achieve control
far more efficiently than one might suspect, by altering the
spacing between random changes caused by the output. They explore
the total space of possibilities, and so are not constrained to
produce any particular type of control system.
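The "E. coli" style of random reorganization can be sketched as follows. This is an illustrative toy, not a model of any organism: the parameters drift in a random direction, and a new random direction is chosen whenever the error has failed to decrease since the last step, so improving directions persist and worsening ones are abandoned at once.

```python
import random

# Random reorganization: outputs change in a random direction, but the
# interval between random re-directions lengthens while error is falling
# and shrinks to a single step when it rises. Purely illustrative.

def reorganize(steps=20000, seed=1):
    rng = random.Random(seed)
    x, y = 5.0, -5.0                  # parameters being reorganized
    dx, dy = rng.uniform(-1, 1), rng.uniform(-1, 1)
    prev = x * x + y * y              # "intrinsic error" to be reduced

    for _ in range(steps):
        err = x * x + y * y
        if err >= prev:               # error not improving: tumble at once
            dx, dy = rng.uniform(-1, 1), rng.uniform(-1, 1)
        prev = err
        x += 0.01 * dx                # otherwise keep drifting the same way
        y += 0.01 * dy

    return (x * x + y * y) ** 0.5     # final distance from zero error

print(reorganize() < 0.5)  # the biased random walk homes in on zero error
```

No gradient is ever computed and no particular organization is presupposed; the bias toward improvement comes entirely from the timing of the random changes.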

But the general topic of control through parameter adjustment,
which I believe I mentioned as a research topic even in BCP, is
not covered by reorganization. Indeed, the acquisition of systems
that control sensed system performance through variation of
parameters has the effect, once such a system is working, of
greatly reducing the need for random reorganization. The control
systems can remain functional even though changes in the
environment and their own characteristics go far beyond what a
system with a fixed design can accomplish. As you say, such
systems are much more robust than are systems with fixed
characteristics. So you have my apology for saying you don't know
what robust means. I underestimated you. You clearly know more
about that subject than I do.

But don't dismiss mere "servomechanisms" too lightly. What I
realized long ago was that even if I couldn't model how control
systems alter their parameters, I could still study human control
systems when they are operating with relatively stable
characteristics, in an environment to which they are adapted. In
other words, I could ask the question, "How is competent adult
human behavior organized when its organization is not being
changed?" This is what HPCT is about.

The basic HPCT model, or a PCT model of a specific control
system, assumes a system with constant characteristics. As Tom
Bourbon has shown with data from tracking experiments, the model
parameters that produce the best fit with behavior continue to
predict behavior very accurately after a lapse of one year. At
our meeting this summer, he will unveil some predictions made 5
years ago, and put them to the same test. We confidently expect
very little change in the parameters, and an excellent prediction
(3 to 5 percent RMS mismatch between tracings of the real actions
and the model's actions). So at least with respect to the
particular behaviors we have been able to explore experimentally,
the assumption that human control systems retain constant
characteristics over some period of time does not seem
unreasonable. In fact, it seems correct.
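One plausible reading of the mismatch measure is the RMS difference between the two traces expressed as a percentage of the real trace's RMS amplitude; whether this is the exact normalization used in the tracking studies is an assumption here, and the published papers should be consulted for the precise definition.

```python
# RMS mismatch between a real action trace and a model's trace, expressed
# as a percentage of the real trace's RMS amplitude. The normalization is
# an assumption, not taken from the tracking papers themselves.

def rms_mismatch_percent(real, model):
    n = len(real)
    rms_diff = (sum((r - m) ** 2 for r, m in zip(real, model)) / n) ** 0.5
    rms_real = (sum(r * r for r in real) / n) ** 0.5
    return 100.0 * rms_diff / rms_real

# Hypothetical traces, just to show the computation.
real = [1.0, 2.0, 3.0, 2.0, 1.0]
model = [1.05, 1.95, 3.05, 2.05, 0.95]
print(round(rms_mismatch_percent(real, model), 2))
```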

However, it's clear that as we learn how to model higher and
higher level control behaviors, that assumption may break down.
Then we will have no choice but to include adaptive control in
the HPCT model. I don't want to do this until we have a specific
problem with modeling that can't be handled with a system having
fixed characteristics. But that day will surely come.

When I speak of the excellence and tightness of human control
systems, I'm not trying to account for how they got that way.
Neither am I trying to explain how they change to maintain that
excellence when long-term changes in the environment (or the
organism itself) obsolete a formerly competent design. My central
goal is simply to get behavioral scientists to start thinking of
behavior as a control process, and abandon all the previous bad
guesses about its underlying organization. The hierarchical
control model with fixed characteristics and control strictly
through adjustment of lower reference signals is, I believe,
enough to do the job. The sorts of problems you imply with your
robust control are important only after the basic model has
been accepted. In fact, when people ask me where I think PCT will
go in the future, adaptive control is one of the subjects I
mention. I also mention that this is a difficult problem and that
it will probably be solved by our descendants (it won't, at any
rate, be solved by me).


Now to your supposed counterexample concerning power gain:

Now imagine an almost identical situation, where it is the
electrical power engineer's task to track the current and
voltage of those energy-rich 380,000-volt overhead power lines
on his oscilloscope. What he needs is a tremendous power LOSS.

In the first place, I'm not sure what you mean by "track." If you
just mean "observe," then yes, there has to be a power loss. The
sensor used for measuring the line voltage reports the voltage
without drawing any significant current, and the current that is
drawn (times the voltage drop) is reduced to the level of optical
power when the image of the meter face enters the engineer's
eyes. [Incidentally, measuring voltage is not measuring power.]

However, the power gain I am talking about is power gain measured
inside a controlling system. To apply this concept, we would have
to have the engineer ADJUSTING the voltage on the 380,000-volt line.
In that case, the engineer would have to have a control knob that
allowed the levels of force generated by his muscles to alter the
line voltage regardless of the load on it. Coming in to the
engineer's senses is a minute scrap of energy that represents the
line voltage. Coming out of the engineer is a force that
dissipates many orders of magnitude more (muscle) energy than
comes into the engineer, and this force results in still greater
power amplification in the device that alters the voltage put out
by the generator.

The extra power gain in the environment, however, is offset by
the power loss in the link from the high-voltage line to the
meter and then to the senses of the engineer. There is probably a
net power loss in the environment. In the engineer, however,
there is an enormous power gain. If we trace gains and losses all
the way around the loop, starting anywhere, we will find that the
loop power gain is probably on the order of 40,000 (the amplitude
loop gain in typical visually-guided motor control processes
measures out very roughly at about 200). That assumes that the
engineer can control the line voltage as accurately as he would
be able to make a cursor track a moving target given a similar
manual control knob.

So your counterexample isn't a counterexample, but an example of
what I was talking about.

You can, no doubt, think of many more examples in which energy
or power must be scaled DOWN rather than up.

Yes, but scaled down where in the loop? All that really counts
for control is LOOP power gain, not gains or losses in any one
part of the loop. As a control engineer, you surely understand this.

... you also seem to exclude servomechanisms that have no power
gain such as micromanipulators that atomic physicists or
surgeons might use to operate on a scale much finer than that
of the human hand.

A mechanical micromanipulator is not a control system, although
an electromechanical one may be. The electromechanical one has a
very high loop power gain if it's a servomechanism. What its
actuator loses in amplitude the control system gains back through
the sensor detecting the tiny movements, and more power gain is
added inside the controlling system. Without considerable power
gain, you can't have accurate servocontrol.

Any micromanipulator, even a mechanical one, PLUS an atomic
physicist or a surgeon, IS a control system. The amplitude gain
that is lost in the reduction of the surgeon's output is regained
when he looks through the microscope at the highly magnified
movements of the tiny tool. The relationship between hand
movements and image movements is about the same as when
positioning something directly while looking at the result
directly. There is a large power gain going from the optical
image to the muscle outputs.

Amplitude loop gain is actually more informative than power gain,
because to compute power gain you have to take impedance
transformations into account. A frictionless mechanical
micromanipulator would actually have a power gain of unity: you
trade distance moved for force amplification. But then you have
to consider internal losses in the muscles, and it all gets
excessively complex. It's easier to look at amplitude ratios,
knowing that power gain is simply the square of the amplitude
ratio when you measure under the same impedance conditions (as
you do by definition when measuring loop gain).
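Worked out with the round figures used above (the 200 and 40,000 are the text's own illustrative estimates, not measurements):

```python
# Under matched impedance conditions, power gain is the square of the
# amplitude ratio: an amplitude loop gain of roughly 200 implies a power
# loop gain of roughly 40,000.

amplitude_loop_gain = 200
power_loop_gain = amplitude_loop_gain ** 2
print(power_loop_gain)  # 40000
```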

In my previous contributions, I have referred to these areas of
control theory as well, not knowing that you wanted to limit
things to servomechanism theory only. That may be the basis
of our mutual misunderstanding about what constitutes control theory.

The basic model for behavior has to be, I think, the
servomechanism, control-of-input, model. The reason is simple.
Organisms are not like commercial control systems; they are not
organized to be used by someone else. The only evolutionary
reason for behavior to occur at all is to produce some effect on
the organism itself, or to prevent something from happening to
it. Open-loop behavior might be practical in the world of devices
that are employed by human users, because the human user can
monitor the result and alter the commands to or the adjustment of
the device to correct any errors. But a behavior that simply
affects the outside world with no consequences that ever return
to the organism could have no evolutionary basis; as Martin
Taylor remarked, that would be a waste of resources. The whole
point of behavior is to control the effects the environment has
on the organism, keeping those effects in states or conditions or
amounts that the organism itself specifies internally. In fact,
this ability of control systems to control what happens to them
is a strong selection factor favoring the development of closed-
loop control rather than open-loop response.

That is the sort of control that adaptive systems in the human
organism must maintain, however they do it.

Finally, I think we have agreed that flockness is not under
closed-loop control, but is only under control in the emergent
sense. I still have great difficulty in allowing for the latter
sense of "control": it is hard for me to conceive of a kind of
control in which the controller never knows about the output and
doesn't care what happens to it. My suspicion is that all so-
called open-loop controllers are embedded in a closed control
loop, and that if they're not, they can't actually control.

Bill P.