# Thoughts on PCT and Modern Control Theory

Here are some thoughts about "modern control theory" drawn from a
reading of the back pages of _Modern Control Engineering_ [Ogata, K.
(1970), Prentice-Hall], the most recent source in my library. I realize
that this book is out of date and that I do not follow the mathematical
methods in it, but one works with what one has at hand both mentally and
physically.

---------------------------
The goal of "modern" control theory, as I understand it, is to devise a
driving signal (a vector) r, working through matrix computations to
produce a driving function, another vector, u(t). The driving function
u(t) is such that the variables comprising a plant will be brought to an
optimal state within some criterion of time or accuracy, or some other
global measure of quality of performance. The function u(t) may entail
multiple variables, even hundreds.

In order for the driving signals _r_ that result in u(t) to be
determined, the inverse of the plant function must be observable and
computable. If it is not, then some method of approximating the inverse
within some range must be found. Ogata suggests the application of small
test signals to the plant, which can be used to measure its static and
dynamic characteristics. Iterative procedures can then lead to closer
and closer approximations to the necessary driving function.
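As a toy illustration of this iterative idea (the plant function below is hypothetical, not one of Ogata's examples): probe the plant with a small test signal to measure its local gain, then step repeatedly toward the input that yields the desired output.

```python
# Sketch: approximating a plant inverse by probing with small test signals.
# The plant f is a stand-in for an unknown physical process (hypothetical).

def plant(u):
    return 2.0 * u + 0.1 * u ** 3  # unknown to the controller

def invert_plant(target, u=0.0, probe=1e-4, iters=50):
    """Newton-style iteration using a finite-difference gain estimate."""
    for _ in range(iters):
        gain = (plant(u + probe) - plant(u)) / probe  # measured local gain
        u -= (plant(u) - target) / gain               # step toward the target
    return u

u = invert_plant(5.0)
# plant(u) is now very close to 5.0: successive approximations to the
# driving value, with no closed-form inverse ever written down.
```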

The basic concept of control in this field is therefore not built on the
principle of negative feedback, but on a principle of lineal causation.
Furthermore, the plant is not conceived of as a modular system in which
the overall control problem can be broken down into smaller problems,
but as a whole which is optimized in one huge calculation. Either the
entire plant is controlled, or control fails.

The function u(t) is derived from the requirement that the vector
describing the state of the plant approach some desired state, which in
turn is defined in terms of some optimality criterion. The optimality
criteria, being global, are not specifications for any particular states
of any plant variables, but describe some overall state of the plant
such as optimal energy usage or (I suppose) some weighted sum of errors.
These criteria are thought of as the "goals" of the control process.
They are specified not by the control system, but by someone or
something operating from outside the control system.
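A minimal sketch of such a global criterion, with made-up variables and weights: a single scalar "quality of performance" number computed from many plant variables at once.

```python
# Sketch of a global optimality criterion: one scalar computed from the
# whole plant state (the weights and values here are hypothetical).

def cost(state, desired, weights):
    """Weighted sum of squared errors over all plant variables."""
    return sum(w * (x - d) ** 2 for x, d, w in zip(state, desired, weights))

J = cost(state=[1.0, 4.0, 2.5],
         desired=[1.0, 5.0, 2.0],
         weights=[10.0, 1.0, 4.0])
# J is the only "goal" the optimizer sees; no individual variable has
# a specification of its own.
```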

Notice that the idea of a reference signal is not central to control in
this way of thinking. The only thing corresponding to a reference signal
is the vector r which produces u(t), the driving function (via a
function which is the inverse of the plant function). Since this signal
that produces the driving function is derived from the global optimality
criteria, it is subordinate to them, becoming whatever is necessary to
achieve optimality. Thus the optimality criteria correspond to what we
call "intrinsic reference signals" in PCT. These criteria are given from
outside the control process; in the case of organisms, they are possibly
a product of evolution. The actual state of the optimizing system would
be derived from sensors detecting the state of the plant, passed through
computing functions that calculate the variables in terms of which
optimality is to be judged. The error signals that result (absolute or
squared) then are the basis for calculating changes in u(t). The only
variable that is controlled in a negative feedback way is the optimality
measure.

The language in which modern control theory is expressed is matrix
calculus (in discrete form). This language is suitable for handling
large complex systems in a compact way. My main problem with this
language is that it completely hides all the details of what is going on
in the system; one simply learns the rules of matrix manipulations, and
then applies them blindly, with no conception of what is actually
happening to individual variables in the system. For a person like me,
who wants to connect every variable and operation in a mathematical
representation to the physical system being represented, this is
intolerable: I get no sense of understanding how the system works. In
trying to follow Ogata's introductory materials on matrix manipulations,
I realized why it is that I have always dug in my heels and refused to
learn this language. I feel that I'm learning cookbook rules and losing
all contact with reality. This is probably a very peculiar attitude, but
perhaps that attitude is part of optimizing my own control systems to
compensate for mental deficiencies. There doesn't seem to be much I can
do about it.

The main thing I want to say about modern control theory is that it is
not actually incompatible with PCT or HPCT. The
concept of global optimality criteria is quite similar to the idea of
intrinsic reference signals, and the idea that departures from
optimality lead to changes in the organization of control is also
similar. In my conception of reorganization, I do not speak of "optimal"
values of intrinsic variables, but only of reference values. Whether
those reference values are optimal in any sense is not important; all
that matters is that they are inherited. If the related variables are
maintained near their reference states, the system is functioning in a
way we presume to be sufficient for survival of the individual, for a
while at least. Whether this is "optimal" in any sense is irrelevant in
a behavioral theory.

The main difference between HPCT and modern control theory is in how the
control system is assumed to be organized internally, below the level of
the "master driving signal" r. From the standpoint of the matrix
approach, HPCT proposes a way of partitioning the calculations so that
intermediate variables can be identified with experience and with
specific operations in the nervous system and body. It also proposes
that the inverses which are needed are generated not open-loop but by
intermediate closed feedback loops; a feedback loop is an excellent and
simple way of creating the same effect as taking an inverse, but without
actually deriving any inverses.
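This implicit-inverse idea can be sketched in a few lines (the plant f below is hypothetical): an integrating negative feedback loop drives the perceived output to the reference, so the input it settles on is effectively the inverse of the plant applied to the reference, though no inverse is ever derived.

```python
# Sketch: a negative feedback loop "computes" a plant inverse implicitly.
# f is a hypothetical monotonic plant; the loop never derives f-inverse.

def f(u):
    return u ** 3 + u  # some monotonic plant function

def feedback_inverse(r, gain=0.1, steps=5000):
    u = 0.0
    for _ in range(steps):
        error = r - f(u)   # compare reference with perceived output
        u += gain * error  # integrate the error (negative feedback)
    return u

u = feedback_inverse(10.0)
# f(u) is now essentially 10, so u plays the role of f-inverse(10),
# obtained without any explicit inversion.
```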

In the HPCT model, the "plant" that is the external world is not
controlled in one huge chunk, but through layers of processes that work
from the simple to the complex.

The most proximal variables, those easiest to control, are put under
local feedback control at the first level, the level where muscle forces
and lengths are generated. These loops are independently stabilized,
regardless of events that are more remote from the organism. The effect
of these control systems is to create a relationship between driving
signals from higher systems and the resulting perceptual signals from
the sensors. The relationship is far simpler than it would be without
the local feedback loop; the apparent dynamics of limb movements, for
example, are reduced (nearly) to those of a simple proportional movement
with viscous drag. Without the local feedback, the higher systems would
be faced with a sensory response that lags, oscillates, and interacts
with other outputs at the same level. The local feedback makes otherwise
interactive subsystems essentially independent of each other and reduces
the order of the dynamical equations.
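A minimal simulation with invented constants shows the effect: without local rate feedback, an undamped limb-like plant overshoots far past its target and oscillates; with it, the response approaches the target like a simple proportional movement with viscous drag.

```python
# Sketch: local velocity feedback turns an oscillatory limb-like plant
# into a well-damped one (all constants hypothetical).

def simulate(rate_gain, steps=4000, dt=0.01):
    pos, vel, target = 0.0, 0.0, 1.0
    peak = 0.0
    for _ in range(steps):
        # spring-like drive toward the target, plus local rate feedback
        force = 10.0 * (target - pos) - rate_gain * vel
        vel += force * dt        # unit mass, no natural damping
        pos += vel * dt
        peak = max(peak, pos)
    return peak

# With rate_gain=0.0 the peak position overshoots well past the target;
# with rate_gain=8.0 the approach is smooth, with essentially no overshoot.
```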

The first layer of control systems thus simplifies the apparent response
of the plant, the environment, to driving signals from higher in the
brain. At the same time, it automatically opposes the effects of
disturbances that act immediately on the force, velocity, and position
outputs of the first layer of systems. Such disturbances, therefore, do
not have to be handled by any higher systems unless they become
unusually large. So the world that is experienced by the higher systems
becomes both simpler and less subject to perturbation than it would be
without the first layer of feedback control.

A second layer of control further simplifies the world and handles more
kinds of disturbances (which enter through sensory signals that are not
involved in the first level of control). And so it goes, layer by layer,
each layer perceiving and controlling a world derived from the
simplified world of the level below it, further simplifying the world
presented to any higher systems and shielding the higher systems from
more kinds of disturbances. Of course to us who are built this way, the
higher perceptions seem more abstract, and because we can manipulate
them in many ways, more complex. But in fact it is the lower-level
signals that are truly complex, representing the world with a degree of
detail which would swamp the computing facilities of the higher levels
if presented directly to them.
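The layering can be sketched as a two-level cascade (constants hypothetical): the outer loop only sets a velocity reference for the inner loop, and the inner loop alone deals with forces. Neither level needs to know the other's dynamics.

```python
# Sketch of two control layers: an outer loop sets a velocity reference
# for an inner loop, which alone touches the "muscle" force (hypothetical).

def run(steps=6000, dt=0.01):
    pos, vel = 0.0, 0.0
    pos_ref = 1.0
    for _ in range(steps):
        vel_ref = 2.0 * (pos_ref - pos)    # outer layer: position control
        force = 20.0 * (vel_ref - vel)     # inner layer: velocity control
        vel += force * dt
        pos += vel * dt
    return pos

# run() settles near pos_ref; the outer layer experiences a simplified
# world in which asking for a velocity simply produces that velocity.
```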

Modern control theory, of course, says nothing about these intermediate
layers of feedback control. It doesn't matter to this theory if the
inverses required are calculated directly, or by the implicit methods of
negative feedback. If we had equations to represent the organization of
each system at each level in the HPCT model, the modern control theory
approach would be to combine them into a single enormous matrix, and
then to try to compute the output vector u(t) in one unimaginably
complex computation.

In principle such a computation might exist, but in practice it could
probably not be carried out by any material system. The hierarchical PCT
approach offers a way in which this calculation could in fact be done by
a brain. Moreover, it offers a way in which simpler organisms could do
part of the calculation (omitting higher levels), and still produce
control processes that resemble those of human beings at the lower
levels of organization. Even a cockroach's leg control systems employ
velocity and position feedback, quite similar to the same processes in
human limbs, although somewhat differently implemented.

Although I understand little of matrix mathematics, I have seen
suggestions that large problems in matrix manipulation can be greatly
simplified by suitable partitioning of the matrices, especially if the
matrices have favorable properties. In effect, the overall matrix is
broken down into a set of submatrices, each of which is much easier to
handle and demands much less computing power. What I am suggesting is
that HPCT offers a way of partitioning the overall matrix describing
control by a human brain, a way that turns an impossibly complex overall
problem into a collection of relatively simpler individual problems.
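As a toy version of such partitioning: if the overall matrix happens to be block-diagonal (no coupling between blocks), each small block can be solved on its own, and stacking the block solutions solves the whole system. All numbers below are invented.

```python
# Sketch: a block-diagonal "plant matrix" lets one big problem be solved
# as independent small ones (toy numbers; no coupling between blocks).

def solve2(a, b, c, d, r1, r2):
    """Solve the 2x2 system [[a, b], [c, d]] x = [r1, r2] by Cramer's rule."""
    det = a * d - b * c
    return ((r1 * d - r2 * b) / det, (a * r2 - c * r1) / det)

# Two decoupled subsystems, i.e. the diagonal blocks of a 4x4 matrix:
x1 = solve2(2.0, 1.0, 1.0, 3.0, 5.0, 10.0)   # first block
x2 = solve2(4.0, 0.0, 1.0, 2.0, 8.0, 6.0)    # second block
# Stacking (x1, x2) is the solution of the full 4x4 system, obtained
# without ever manipulating the whole matrix at once.
```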

In addition, HPCT offers a way to identify various aspects of the whole
system with classes of subjective experience, with control tasks of
different kinds, with areas of brain function, and with characteristics
that differentiate complex organisms from simple ones. And HPCT also
offers a way of effectively obtaining the inverses of various parts of
the external plant in a way that does not require actually computing
those inverses.

I hope this constitutes progress.
-----------------------------------------------------------------------
Best to all,

Bill P.

[Hans Blom, 960725b]

("William T. Powers" <POWERS_W@FORTLEWIS.EDU>)

Here are some thoughts about "modern control theory" drawn from a
reading of the back pages of _Modern Control Engineering_ [Ogata, K.
(1970), Prentice-Hall], the most recent source in my library. I realize
that this book is out of date and that I do not follow the mathematical
methods in it, but one works with what one has at hand both mentally and
physically.

The views that the book presents are not, as I understand from your
post, out of date at all. Let me give you some feedback ;-). Your
understanding of the book is fine, but you may have missed some of
the finer or deeper implications. Or maybe you have not. But let me
point out some of them anyway.

The goal of "modern" control theory, as I understand it, is to devise a
driving signal (a vector) r, working through matrix computations to
produce a driving function, another vector, u(t). The driving function
u(t) is such that the variables comprising a plant will be brought to an
optimal state within some criterion of time or accuracy, or some other
global measure of quality of performance. The function u(t) may entail
multiple variables, even hundreds.

Critical is: ONE global measure. Thus, "modern" control theory (MCT)
is top-down: by explicitly describing the top-level goal, all lower
level goals are (implicitly) specified as well. In contrast, HPCT
looks more like a bottom-up approach. Even if it is not, your major
concern is with the lower levels of the hierarchy. MCT says that you
cannot design the lower levels if you have not specified what they
are to do for the higher levels.

Also: this global measure is something to be MINIMIZED (or maximized,
but that is the same thing with a different sign), not something to
be brought to a specific value. Thus, a control system is much like a
marble in a bowl which "seeks" its lowest position.
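In code, this marble-in-a-bowl picture is just descent on a scalar cost (the bowl J below is invented): the system seeks the bottom, not any prespecified value.

```python
# Sketch: the "marble in a bowl" as gradient descent on a scalar cost
# (the bowl shape J is hypothetical).

def J(u):
    return (u - 3.0) ** 2 + 1.0  # a bowl with its bottom at u = 3

def grad_J(u):
    return 2.0 * (u - 3.0)

u = 0.0
for _ in range(200):
    u -= 0.1 * grad_J(u)  # roll a little way downhill each step

# u has "sought" the bottom of the bowl: a minimum of J was found,
# not a value of J that anyone specified in advance.
```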

Due to the ever present "noise" or "disturbance", the controller will
never be "ready". Especially if it is adaptive, it will retune itself
constantly in order to remain more stably at the bottom of the pit.

Noise or disturbance is not just negative. It allows the controller
to "explore" its pit, helping it to find the bottom. In fact, if
there is too little "natural" noise, the controller might want to
generate some of its own: small random disturbances of its actions.
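A crude sketch of such self-generated exploration, with invented constants: perturb the action at random and keep only the changes that lower the cost. No gradient is ever computed; the noise itself finds the way down.

```python
import random

# Sketch: self-generated "noise" lets a controller find the bottom of its
# cost pit without knowing the gradient (a crude random-descent scheme).

def J(u):
    return (u - 3.0) ** 2  # hypothetical cost pit with its bottom at u = 3

random.seed(0)
u = 0.0
for _ in range(2000):
    trial = u + random.gauss(0.0, 0.1)  # small random disturbance of the action
    if J(trial) < J(u):                 # keep the change only if it helped
        u = trial

# u has drifted close to the bottom of the pit, guided purely by noise.
```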

The basic concept of control in this field is therefore not built on the
principle of negative feedback, but on a principle of lineal causation.

Both. Lineal causation in order to discover the "laws of nature".
Feedback in order to _use_ the knowledge contained in the laws thus
discovered to best effect. It is the _combination_ that makes it so
powerful: because of feedback, the model needs to be only a fair
approximation. Because a fair approximation is available, control
can be very good. This emergent property may be quite unexpected.

Furthermore, the plant is not conceived of as a modular system in which
the overall control problem can be broken down into smaller problems,
but as a whole which is optimized in one huge calculation. Either the
entire plant is controlled, or control fails.

Right. Multiple, maybe many, simultaneous sub-goals, all interlinked,
all serving the one supreme goal.

The function u(t) is derived from the requirement that the vector
describing the state of the plant approach some desired state, which in
turn is defined in terms of some optimality criterion. The optimality
criteria, being global, are not specifications for any particular states
of any plant variables, but describe some overall state of the plant
such as optimal energy usage or (I suppose) some weighted sum of errors.

Right. The designer has to specify the "importance" of the individual
sub-goals, or how much weight each one has.

These criteria are thought of as the "goals" of the control process.
They are specified not by the control system, but by someone or
something operating from outside the control system.

Right. Just like in organisms, something has prespecified which is
more important in the "cost function": to bring offspring into the
world, or to care for the offspring once it is there. Different
organisms have different prespecifications.

Notice that the idea of a reference signal is not central to control in
this way of thinking.

Right. Although it is equally possible to say that reference signals
are computed by higher levels for lower levels to follow. But the
_concept_ is indeed not a central one.

The only thing corresponding to a reference signal
is the vector r which produces u(t), the driving function (via a
function which is the inverse of the plant function). Since this signal
that produces the driving function is derived from the global optimality
criteria, it is subordinate to them, becoming whatever is necessary to
achieve optimality. Thus the optimality criteria correspond to what we
call "intrinsic reference signals" in PCT. These criteria are given from
outside the control process; in the case of organisms, they are possibly
a product of evolution.

There is (and can be) only one topmost optimality criterion. If there
were more, one would again have to give them weights in order to pre-
specify their importance.

The actual state of the optimizing system would
be derived from sensors detecting the state of the plant, passed through
computing functions that calculate the variables in terms of which
optimality is to be judged.

The concept "state" is the central notion in MCT. Briefly, the "state"
of a system is a memory vector which "remembers" all that is important
of the past and useful for (control in) the future. Thus it provides
for data compaction or redundancy elimination. All information about
the past that might be important for present and future control but
is not captured by the state is called the "model error".

A consequence of the notion "state" is that the past is unimportant.
The controller need not carry it around with it. Everything important
about the past is available NOW. The term "state" is also called
"sufficient statistics". If we have discovered, for instance, that a
sequence of numbers has (had) a Gaussian distribution, all that we
will have to remember in order to predict future numbers are a mean
and a variance. All else can be forgotten.
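The Gaussian example can be sketched with a running update (Welford's method): a three-number state (count, mean, sum of squared deviations) replaces the entire past sequence, and every old sample can be forgotten.

```python
# Sketch: a "state" as sufficient statistics. For Gaussian data, a running
# count, mean, and squared-deviation sum (Welford's method) carry all the
# information needed for prediction; the raw past is discarded.

def update(state, x):
    n, mean, m2 = state
    n += 1
    delta = x - mean
    mean += delta / n
    m2 += delta * (x - mean)
    return (n, mean, m2)

state = (0, 0.0, 0.0)
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    state = update(state, x)   # each sample is folded in, then forgotten

n, mean, m2 = state
variance = m2 / n
# mean and variance are the whole "memory": sufficient statistics.
```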

This has to do with Rick's question of how we can "know" the future,
e.g. that something will remain constant. We aren't gods, after all.
Yet, we constantly assume that things -- like the speed of light or
the layout of my house -- will remain constant. Models are not about
TRUTH, they are about TRUST. Theology, by the way, too.

If the state vector is not well chosen, important information may not
be discovered, resulting in a bad controller.

In general, the state vector can be chosen in a great many different
ways, just like the basis vectors of Euclidean space may be chosen in
any directions, as long as the three axes are perpendicular. Thus, it
does not matter how the "basis concepts" of the controller are
chosen, as long as they cover all important dimensions of the
controller's "world". They need not be orthogonal, although that
helps analysis (by a human) and computational accuracy (in updating
the model).
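A toy demonstration of this basis freedom (matrices invented): transform the state by any nonsingular T, transform the dynamics accordingly, and the new description reproduces exactly the same behavior.

```python
# Sketch: two different "basis choices" for the same state describe the
# same behavior. x evolves as x <- A x; z = T x evolves as z <- T A T^-1 z.

def mul(M, v):
    return [sum(m, ) for m in []]  # placeholder, replaced below

def mul(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0.9, 0.1],
     [0.0, 0.8]]
T = [[1.0, 1.0],      # an arbitrarily chosen nonsingular change of basis
     [0.0, 1.0]]
T_inv = [[1.0, -1.0],
         [0.0, 1.0]]

A_z = matmul(matmul(T, A), T_inv)  # the same dynamics in the new basis

x = [1.0, 2.0]
z = mul(T, x)
x, z = mul(A, x), mul(A_z, z)      # advance one step in each description
# z and T x agree (to rounding): the basis choice changed nothing physical.
```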

The error signals that result (absolute or
squared) then are the basis for calculating changes in u(t). The only
variable that is controlled in a negative feedback way is the optimality
measure.

The controller constantly tries to minimize it.

The language in which modern control theory is expressed is matrix
calculus (in discrete form). This language is suitable for handling
large complex systems in a compact way. My main problem with this
language is that it completely hides all the details of what is going on
in the system; one simply learns the rules of matrix manipulations, and
then applies them blindly, with no conception of what is actually
happening to individual variables in the system.

That is not true at all. There are a number of tools, sometimes more
or less explicitly present in a controller, that tell it -- even in
real time -- which sensor information is most important to reliably
establish the current state, which actuators are most important to
bring the goal nearer, and how accurate the knowledge is. In a complex
system with redundancy in sensors and actuators and computational
errors, such a scheme is remarkably robust.

Moreover, a blind choice of the components of the state vector would
be plain dumb if one wants to analyze what the system does. One
choice might be position, velocity and acceleration (which makes
sense if a point body's position is to be controlled), another choice
might be some random nonsingular combination of these. That usually
does not make sense. If the choice is judicious, the system's
functioning may be quite transparent.

For a person like me,
who wants to connect every variable and operation in a mathematical
representation to the physical system being represented, this is
intolerable: I get no sense of understanding how the system works.

What about if the system has 100,000 sensors and 10,000 actuators? A
matrix/vector description remains almost as simple as the scalar
description of a system with 1 input and 1 output. I would not be
able to understand such a complex system if NOT in vector/matrix
terms...

In
trying to follow Ogata's introductory materials on matrix manipulations,
I realized why it is that I have always dug in my heels and refused to
learn this language. I feel that I'm learning cookbook rules and losing
all contact with reality.

Yes, practice is required to come to grips with the formalism. Hands-
on experience is different from theoretical knowledge. It's a good
start to use a cookbook when you cook for the first times. Once you
know how to, you don't require it anymore and you can start to become
creative.

This is probably a very peculiar attitude, but
perhaps that attitude is part of optimizing my own control systems to
compensate for mental deficiencies. There doesn't seem to be much I can
do about it.

It is a normal attitude, I think: one cannot do everything. Our time
is finite. I stopped after vectors, so I'm not familiar with tensors.
That's not something that I'm ashamed of, although I know that now
the finer details of quantum physics are out of my reach. Somewhere
the buck stops...

The main thing I want to say about modern control theory is that it is
not actually incompatible with PCT or HPCT.

If it were incompatible, then HPCT would be wrong and I wouldn't be
here ;-).

In the HPCT model, the "plant" that is the external world is not
controlled in one huge chunk, but through layers of processes that work
from the simple to the complex.

MCT shows that in most cases the processes in these layers must be
interconnected in order to achieve control of a reasonable quality.
Except in special cases, where certain orthogonality requirements are
fulfilled. HPCT frequently neglects this aspect and talks about SISO
(single-input, single-output) systems as if these can explain
everything. Thus, for instance, the discussions about "disturbances"
as only deteriorating the quality of control and disregarding their
effect on other simultaneous control loops and, for instance, in
learning a better model. I don't wish a personal crisis to anyone,
but people who have experienced one often emerge a lot wiser...

The most proximal variables, those easiest to control, are put under
local feedback control at the first level, the level where muscle forces
and lengths are generated. These loops are independently stabilized,
regardless of events that are more remote from the organism.

That is a very good idea in some cases, especially when speed of
response is crucial. They make very rapid _hard-wired_ anticipation
possible, rather than slower software-like prediction by the model.

So the world that is experienced by the higher systems
becomes both simpler and less subject to perturbation than it would be
without the first layer of feedback control.

That may not be the reason. The state vector approach offers a
similar simplification. But that simplification is global, not layer
by layer.

A second layer of control further simplifies the world and handles more
kinds of disturbances (which enter through sensory signals that are not
involved in the first level of control). And so it goes, layer by layer,
each layer perceiving and controlling a world derived from the
simplified world of the level below it, further simplifying the world
presented to any higher systems and shielding the higher systems from
more kinds of disturbances.

Until at the top everything is simplified to one variable? Where do
you stop simplifying? How to _know_ when to stop? What to simplify
and what not?

If we had equations to represent the organization of
each system at each level in the HPCT model, the modern control theory
approach would be to combine them into a single enormous matrix, and
then to try to compute the output vector u(t) in one unimaginably
complex computation.

Yes, there's an important limitation! This would require some
separation into smaller matrices which are less heavily interconnected.
That's a PRACTICAL approach to make the theoretically BEST approach
fit within a skull.

Although I understand little of matrix mathematics, I have seen
suggestions that large problems in matrix manipulation can be greatly
simplified by suitable partitioning of the matrices, especially if the
matrices have favorable properties. In effect, the overall matrix is
broken down into a set of submatrices, each of which is much easier to
handle and demands much less computing power.

The computing power is not the problem, because everything can be
done in parallel. The space required for wiring is the problem: it is
simply impossible to connect every neuron to every other neuron.

What I am suggesting is
that HPCT offers a way of partitioning the overall matrix describing
control by a human brain, a way that turns an impossibly complex overall
problem into a collection of relatively simpler individual problems.

In addition, HPCT offers a way to identify various aspects of the whole
system with classes of subjective experience, with control tasks of
different kinds, with areas of brain function, and with characteristics
that differentiate complex organisms from simple ones. And HPCT also
offers a way of effectively obtaining the inverses of various parts of
the external plant in a way that does not require actually computing
those inverses.

I hope this constitutes progress.

I'm happy with this post of yours, since it establishes a common
ground from where we can proceed together without too many
misunderstandings.

Greetings,

Hans