control quality

[Hans Blom, 960129e]

(Bill Powers (960125.1430 MST))

The first lesson to learn is that control systems are not perfect.
It is true that 17 degrees is not 20 degrees, but it is 85% of 20
degrees.

Bill, this is something that you cannot do. Translate the temperature
to a different scale, say Fahrenheit or Kelvin, and you get different
percentages. So measured in Kelvin, control quality would be fantastic!
That is clearly not reasonable.
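
To make the point concrete, here is a minimal numerical sketch (in
Python; the 17 and 20 degrees Celsius are Bill's figures, the unit
conversions are the standard ones):

    # "Percentage of the reference attained" for the same pair of
    # temperatures, expressed on three scales. The ratio depends only on
    # where each scale puts its zero, so it says nothing about control.

    def percent_of_reference(actual, reference):
        return 100.0 * actual / reference

    t_actual_c, t_ref_c = 17.0, 20.0

    scales = {
        "Celsius":    (t_actual_c, t_ref_c),
        "Fahrenheit": (t_actual_c * 9 / 5 + 32, t_ref_c * 9 / 5 + 32),
        "Kelvin":     (t_actual_c + 273.15, t_ref_c + 273.15),
    }

    for name, (actual, reference) in scales.items():
        print(f"{name:10s}: {percent_of_reference(actual, reference):5.1f} %")

    # Celsius   :  85.0 %
    # Fahrenheit:  92.1 %
    # Kelvin    :  99.0 %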

There is no absolute for the quality of control. Two control systems
can be compared as to quality, whatever measure one defines; we've
done so in the past. Or a control system can be compared to itself
after it has been modified somehow. But to say that a control system
by itself is "good" or "poor" lacks any meaning. It is the comparison
that provides the meaning.

I noticed with some surprise that no one else seems to have caught
this...

Greetings,

Hans

[Hans Blom, 950518]

(Rick Marken (950516.0800))

Rick, you make remarks like

Actually, as Bill Powers (950516.0530 MDT) just showed, this little PCT
controller hardly loses control at all, even with your rather extreme
disturbances ...
...
... Hans' own very complicated "model-based" controller ... was
unable to maintain control in the face of ANY disturbance at all.

and from (Rick Marken (950516.2045))

... the perceptual control model maintains control even under the most
adverse disturbance conditions.

Maybe something good can come out of this discussion after all: how do we
measure the quality of control, what yardstick do we apply? What are our
norms? Without a yardstick, statements like "hardly loses control at
all", "unable to maintain control" and "maintains control even under the
most adverse disturbance conditions" have little or no meaning.

In the engineering literature, the quality of control is usually measured
in terms of

      SUM {r (i) - x (i)}^2, over some (finite or infinite) range of i

where r is the reference level (setpoint, the trajectory to be followed,
the optimal value for x, or whatever terminology you might like to use);
x is the thing to be controlled at the value r, and i counts the samples.
Sometimes x can only be measured contaminated by noise; if not, x is the
observation. All this for a one-dimensional controller. Replace the sum
by an integral in non-discrete systems.
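
For the one-dimensional discrete case, that index is trivial to compute.
A minimal sketch (in Python; the reference and the two runs below are
invented purely for illustration):

    # Quality-of-control index: sum of squared deviations of the
    # controlled variable x from its reference r over the sampled range.

    def control_quality(r, x):
        """Return SUM {r(i) - x(i)}^2 over all samples i (lower is better)."""
        assert len(r) == len(x)
        return sum((ri - xi) ** 2 for ri, xi in zip(r, x))

    r = [20.0] * 5                           # constant reference level
    x_good = [19.8, 20.1, 19.9, 20.0, 20.2]  # small deviations
    x_poor = [17.0, 18.5, 22.0, 16.5, 23.0]  # large deviations

    print(control_quality(r, x_good))        # approximately 0.1
    print(control_quality(r, x_poor))        # 36.5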

The goal in engineering is, of course, to design a controller that
minimizes the sum above and hence maximizes the quality. When the design
is finished, the effort that the controller must expend in its actions
is known, and this guides the selection of the hardware to build the
system from. If the hardware becomes too large and expensive, design
might start with a different criterion, such as

      SUM {r (i) - x (i)}^2 + alpha * u^2

where u stands for the controller's actions and u^2 for the energy
required for control, with alpha a weighting factor. This criterion
states that we consider the quality of control inferior when more
powerful actions/actuators are used, other things being equal. It is a
remarkable finding that, in most practical situations, less powerful
actuators do not degrade the quality of control very much, given an
appropriate control law. The criterion is economic in that it allows us
to arrive at
a best compromise between energy expended and tracking accuracy. Enlarge
alpha, and you use less effort. Using too little effort would of course
compromise tracking accuracy too much. By the way, this reminds me of an
earlier discussion about energy expenditure as a controlled variable.
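
The economic criterion fits the same mold. A minimal sketch (in Python;
alpha and the action sequence u are again invented for illustration):

    # Economic quality-of-control index: tracking error plus a penalty
    # on the energy of the control actions u, weighted by alpha.

    def weighted_quality(r, x, u, alpha):
        """Return SUM {r(i) - x(i)}^2 + alpha * u(i)^2 (lower is better)."""
        return sum((ri - xi) ** 2 + alpha * ui ** 2
                   for ri, xi, ui in zip(r, x, u))

    r = [20.0] * 5
    x = [19.8, 20.1, 19.9, 20.0, 20.2]
    u = [3.0, -1.0, 0.5, 0.0, -2.0]

    # Enlarging alpha shifts the best compromise toward using less effort.
    for alpha in (0.0, 0.01, 0.1):
        print(alpha, weighted_quality(r, x, u, alpha))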

If we can agree on some yardstick (substitute your own, if you want to),
we can start to talk about how good a controller is. Note that the above
yardstick punishes large errors severely and deemphasizes small errors,
something which seems intuitively desirable.

By the way, Hans, I have developed a simple PCT type control model that
meets and exceeds your specifications for a controller that has
"approximately zero response time" while it "tracks a repeating
waveform".

Great! Show me! Does it work with all kinds of "arbitrary" disturbances?

Hans, you've been hanging around csg-l for several years now? I am still
wondering what in the world you are trying to teach us.

Rick, I am not trying to teach you anything. I consider myself a student,
practicing and arguing to gain understanding. But I may be an obnoxious
student: I do not accept a teacher's word for it unless it "feels" right.
Consider me experimenting on the world and acting upon it in order to
establish how it reacts. Building my own "world-model". Something like
trying to discover the laws of nature. Something like doing science.
Something like applying The Test.

                                                     Do you agree
that all control systems -- living and non-living -- are typically
organized around the control of their own perceptions?

This phrasing doesn't "feel" right. I do not control all of my
perceptions, but I do think that I use all of my perceptions in order to
control. As stated, it is too fuzzy. Perceptions, both of the world outside
and of the world inside me (my instincts, drives, feelings and emotions)
are the only things that can show me what is going on, and what the
effects of my actions are. They are primary. It is the "organized around"
that I am grappling with.

... Are you just trying to expand the application of the control model
to cases where the control system is deprived of perceptual input?

That is an important part of it. Perception-less control is a model for
planning, for instance: you set out a trajectory into the future that you
want to follow, but you do not yet have feedback on how you are doing.

But no, not just that.

... What is your point?

I'm not sure whether I have one, or whether I can formulate one. Although
I constantly try to apply The Test to myself, I haven't discovered my
goals yet. I am not even sure whether they can be discovered in
principle...

... Hans' own very complicated "model-based" controller ... was
unable to maintain control in the face of ANY disturbance at all.

This seems to be one of your famously extreme statements. I cannot
take it seriously. Have you experimented with the demo and discovered how
well it performs with "random" (white) noise? The controller cannot
handle disturbances that it cannot model or has not (yet) modeled, that
is correct. The demo demonstrates a very different concept of control
from what you are used to. But it DOES control, within its limited domain
of applicability: even a square wave trajectory/reference level is
followed with an almost ideal response. That this is true only in a
relatively mild and predictable "world" does not detract from that fact.

(Martin Taylor 950516 11:00)

Bill Powers (950516.0530 MDT) and Hans Blom (950516)

The models you are throwing back and forth do not seem to be realistic
models of continuous systems.

Certainly. In my opinion, however, the discussion centered around the
theme of when control is "good", "not good", "lost", etc. That is, I think,
a very worthwhile discussion in and by itself. Bill's system (Bill Powers
(950515.1615 MDT)):

p  := qi
e  := r - p
o  := o + (1000*e - o)/1001
qi := d + o

is not very realistic in that his "world" is too simple: p = qi = o + d

The "world" is modeled as having a transfer function of 1: p = o in the
absence of noise. No delays, no bandwidth limitations, no inertia, no
phase shifts, no problems: the simplest world of all. And I am not even
sure that Bill is aware of the fact that he introduces a "world-model" in
these equations!
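
For what it is worth, running those four assignments literally shows how
little error this all-too-simple world leaves. A minimal sketch in
Python (the constant reference of 20 and the step disturbance are my own
invented test values):

    # Bill's loop, as written: p := qi; e := r - p;
    # o := o + (1000*e - o)/1001; qi := d + o.
    # The "world" is simply p = qi = o + d: a transfer function of 1.

    r = 20.0           # reference level (invented test value)
    o, qi = 0.0, 0.0   # output and input quantity, starting at rest

    for i in range(2000):
        d = 0.0 if i < 1000 else 5.0       # step disturbance halfway through
        p = qi                             # perception = input quantity
        e = r - p                          # error
        o = o + (1000.0 * e - o) / 1001.0  # leaky-integrator output
        qi = d + o                         # world: qi = o + d, no lag at all
        if i in (999, 1999):
            print(f"i={i}: p={p:.3f}  e={e:.4f}")

    # The residual error settles near (r - d)/1001: a tiny fraction of the
    # reference, precisely because this world has no delay, lag or inertia.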

The remainder of your discussion treats side issues, I think, although
some of them are quite important. Some remarks:

In a model that simulates a continuous waveform, you cannot represent a
step function legitimately.

Why should a model only simulate continuous waveforms? Our models were
discrete, and they can adequately handle even square waves, as my demo
shows.

Hans's disturbances are therefore, for the most part, illegitimate
waveforms to apply to Bill's model.

Why "illegitimate"? Difficult, that is for sure, but only because Bill
chooses to have an integrator in his output function. There is nothing in
principle that I can think of that limits Bill's options in this way.

(Bill Powers (950516.1530 MDT))

Now that you've had a chance to consider the modified model, let's talk
about physical time and scaling.

No, let's talk about something else first: a yardstick to measure the
quality of control.

OK, I recognize that much of your discussion centers on that issue. You
propose a quality criterion something like

      SUM {r (i) - x (i)}^2, umin < u < umax
or
          {r (i) - x (i)}^2
      SUM -----------------, umin < u < umax
           {xmax - xmin}^2

where the sample interval between successive i's should be short enough
not to miss information (the sampling theorem). Otherwise all sorts of
nasty things occur, as you rightfully describe. This is indeed usually a
hidden assumption; theoretically the requirement of a short enough sample
interval is so obvious that no one much talks about it -- yet in practice
it is often forgotten.
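
What "missing information" means is easy to demonstrate. A minimal
sketch (in Python; the frequencies are invented): a 9 Hz sine sampled at
only 10 samples per second gives exactly the same sample values as a
1 Hz sine of opposite sign, so the two are indistinguishable at the
sample moments.

    # Aliasing: with too long a sample interval, a fast waveform becomes
    # indistinguishable from a slow one at the sample moments.
    import math

    fs = 10.0          # samples per second -- too slow for a 9 Hz signal
    n_samples = 20

    for n in range(n_samples):
        t = n / fs
        fast = math.sin(2 * math.pi * 9.0 * t)   # 9 Hz signal
        slow = -math.sin(2 * math.pi * 1.0 * t)  # 1 Hz signal, sign flipped
        assert abs(fast - slow) < 1e-9           # identical at every sample

    print("9 Hz and (negated) 1 Hz agree at all", n_samples, "sample moments")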

Yet, the requirement exists only for continuous systems, NOT for discrete
ones. In discrete systems, the equations describe only what occurs AT the
sample moments, NOT what happens in between. Indeed, when translating a
continuous problem into a set of discrete equations, this is frequently
forgotten also. Thus a discrete system design may show all kinds of
unwanted behaviors in between sample moments, particularly high-frequency
oscillations, when it is used to control a continuous real-world system.
An adequate translation back and forth is far from trivial.

There are two considerations in determining the magnitude scaling used
for a model of a control system. One is the maximum output that the real
system can produce with its output equipment, and the other is the
maximum value that the perceptual representation of a controlled
variable can attain. The latter is set by the saturation level of the
perceptual signal.

The first consideration is very important; in my experience, the second
hardly ever is. I know of hardly any industrial or organismic sensors
that saturate.

Neither of these considerations is likely to appear in a conceptual
model, but they are necessary if we are to have any method for
determining what is a "large" disturbance and a "large" error.

In engineering, output limitations are frequently specified as design
criteria. And because they are, and because by definition successful
controllers control successfully, perceptions will have a limited range
and hence will not saturate sensors. Again, this is theoretically
possible, but in my experience it does not happen in practice. The same
goes for saturation of the reference level.

(Rick Marken (950516.2045))

The ability of a control system to control depends on how much output it
can generate per unit error (gain), the upper limit of this output
(strength) and speed (integral and transport lag).

Here you touch on the quality criterion again. Your statement seems to
imply that control is better the higher the controller's gain and the
faster its speed. Does that imply that we should give a controller
infinite gain and remove integrators from its output function? No,
obviously not (always). The characteristics of the controller must depend
on the characteristics of its world.
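
A toy example of that dependence (a minimal sketch in Python; the
one-step transport lag in the "world", the gains and the reference are
all invented for the illustration): with a lag in the loop, raising the
gain of a purely proportional controller improves control only up to a
point, after which control is lost entirely.

    # The same proportional controller in a world with one sample of
    # transport lag: gain below 1 converges, gain above 1 oscillates and
    # diverges. More gain is not automatically better control.

    def final_error(gain, steps=30, r=10.0, d=2.0):
        qi, o_prev = 0.0, 0.0
        for _ in range(steps):
            qi = d + o_prev      # world: the output arrives one step late
            e = r - qi           # error seen by the controller
            o_prev = gain * e    # proportional output, applied next step
        return abs(e)

    for gain in (0.5, 0.9, 1.5):
        print(f"gain {gain}: |error| after 30 steps = {final_error(gain):.3g}")

    # Without the lag, arbitrarily high gain would be harmless here; it is
    # the world's delay that sets the usable limit on the controller's gain.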

That brings forth another meta-consideration that I've had for a long
time: the equation of "reference level" and "goal" in many of the
discussions here, where something is said like "a controller's goal IS
its reference level/setpoint".

I think that that is not accurate. A reference level is just that, a
level, a possibly varying value. It itself is not the goal. The goal of a
controller is to USE its internal mechanism in order to BRING ITS
PERCEPTIONS TOWARD AND MAINTAIN THEM AT the reference level. An ACTIVITY
rather than a VALUE. Or, in terms of the quality index that I discussed
above, not its value but the continuous ACTIVITY to minimize its value.

Comments anyone?

Greetings,

Hans

[Martin Taylor 950519 10:30]

Hans Blom, 950518

That brings forth another meta-consideration that I've had for a long
time: the equation of "reference level" and "goal" in many of the
discussions here, where something is said like "a controller's goal IS
its reference level/setpoint".

I think that that is not accurate. A reference level is just that, a
level, a possibly varying value. It itself is not the goal. The goal of a
controller is to USE its internal mechanism in order to BRING ITS
PERCEPTIONS TOWARD AND MAINTAIN THEM AT the reference level.

Perhaps it's one of those issues of word usage, but I would say that
your "goal of the controller" is actually a goal of the designer of the
controller. For me, a goal corresponds to something perceptible. I have
a goal to make a million dollars, and I can perceive that my actual number
of dollars is receding from this goal. But I don't see myself as having
_A_ goal to carry out the activities required to get a million dollars.
To progress toward that goal requires that I have other goals, such as
seeing myself at work, seeing paychecks... All perceptible. All
corresponding to reference signals that in themselves may or may not be
perceptible.

The designer of a control system has a goal to see the control system
"using its internal mechanism" effectively, but the control system just
acts. So I think it correct to identify the "goal" with the "value of
the reference signal" in a control system (whether that be a simple scalar
control system or one with a higher-dimensional set of signals).

···

-------------

In a model that simulates a continuous waveform, you cannot represent a
step function legitimately.

Why should a model only simulate continuous waveforms? Our models were
discrete, and they can adequately handle even square waves, as my demo
shows.

Because implicit in all the modelling is the notion that the models simplify
control that occurs in real live systems that have to work in a continuously
changing world.

Hans's disturbances are therefore, for the most part, illegitimate
waveforms to apply to Bill's model.

Why "illegitimate"?

There's no objection to analyzing and experimenting with discrete systems
on their own terms. My objections were to uncritical application of the
results to the continuous world. I felt that some of your difficult
disturbances would probably not result in model behaviour that could be
generalized to the behaviour of the implied continuous system being modelled.
That's why I called them "illegitimate."

Martin