manipulating wants

[From Bill Powers (960418.0955 MDT)]

Hans Blom, 960418 --

If, as you propose, the highest set of reference signals is fixed,
then at the highest level the environment can't influence the
reference signals ...

     And neither can the organism...

The organism can still reorganize and change its highest-level goals.
And anyway, aren't you changing the subject? Even an inherited reference
level gives the organism autonomy relative to the associated controlled
perception -- the environment can't influence it.

     I propose the following (thought) experiment. There is a
     hierarchical control system. I can manipulate its world, i.e. what
     it perceives or can perceive, by introducing "disturbances" to
     (what it perceives of or how it perceives) the world. It is my
     guess (I could state this more strongly :wink:) that I can
     manipulate the disturbances (in a simulation; not in the "real"
     world!) in such a way that I can determine the values of all
     (lower-level) goals/references of that control system.

You didn't read my post very carefully. I was speaking exclusively of a
non-living environment: that is, a non-purposive environment. In order
for you to do what you propose, you have to be a full-fledged living
control system. You have to know what reference signals you want to
create in the other organism, omnisciently perceive the current states
of those reference signals, and produce disturbances that bring the
perceived perceptual signals closer to those you want to see. The non-
living environment does not have the ability to do this. In fact, YOU
don't have the ability to do this, but I'm being nice by allowing you to
pretend that you have a way of perceiving the reference signals inside
the other system.

You are not free to pick any changes in the other system that strike
your fancy. You have to choose changes in such a way that the new set of
reference signals you have created does not (via the associated control
systems) bring the perceptual world at that level to a state that causes
error in any higher-level system. This is the point I was trying to make
(which you seem to be ignoring in favor of your thought experiment).

In short, you must choose your desired set of lower-level reference
signals from those combinations that leave higher-level perceptions
undisturbed. As long as that condition is maintained, you are free to
apply disturbances that alter the lower-level reference signals. If you
violate that condition, then you lose control, because the higher
systems will add their contributions to the same reference signals you
are trying to alter, and prevent the forbidden condition from occurring.

It is possible -- or at least conceivable -- that a control hierarchy
could be so complete that given the reference conditions defined at the
highest level, there is one and only one set of reference signals in
_all the lower systems_ that will allow the set of all highest-level
reference conditions to be satisfied. In that case, there is no
arbitrary manipulation of disturbances that can make any lower-level
reference signal change. Any disturbance tending to alter a reference
signal in the middle of the hierarchy will result in an adjustment by
higher level systems that leaves that reference signal unchanged.

I don't really think that such a "complete" hierarchy exists, however.

     "But the world doesn't do that!", you might exclaim. Doesn't it?
     How would you know? If Gaia is alive, as some think, couldn't Gaia
     be the manipulator?

We can dismiss the Gaia hypothesis for the same reasons we rule out all
supernatural explanations of natural phenomena in a scientific
discourse: the explanation is too easy. Whatever you want to explain,
you can say "Gaia has the capacity to do that" or "God, being
omnipotent, can do whatever will create that result." This is really
avoiding the subject, because what we want to know is HOW the result
could be achieved. To say that the means is beyond our comprehension is
to say nothing at all. Even worse, it is to claim that you comprehend
what you have just said is humanly incomprehensible.

     And on a smaller scale, don't I "manipulate" part of the world --
     and thus part of the perceptions -- of the people I come into
     contact with?

How could you know that? You can't experience other people's
perceptions. What you know is that you apply disturbances to certain
perceptions of your own, those that you see as part of the common
environment. But if the other person is controlling something affected
by your actions, the control action will _prevent_ the actual perception
in the other person from being manipulated.

     I am aware of how heretical these statements must sound to you. Yet
     I would like you to consider this perspective, if only for 5
     minutes. If you can grasp what I say here (and let a simulation
     convince you), you won't see an utter contradiction between
     stimulus-response theory and PCT anymore, but a reconciliation
     between the two: External "stimuli" can change what the organism
     wants, PCT describes how the organism acts upon its wants...

I have never claimed that external stimuli can't change what an organism
wants, at a level lower than the highest level (unless the hierarchy is
complete in the sense mentioned above). In fact, I was the one who
pointed out, years and years ago, that by judicious application of
disturbances you can control the output of any control system, given
that the reference signal for that system is constant. What I am saying
is that external stimuli can't _arbitrarily_ change what an organism
wants, without arousing opposition to the effects of those stimuli. Only
a well-defined set of changes is permitted: those that cause no errors
at higher levels. When changes are attempted that alter the perceptions
of a higher system, the reference signal for the disturbed system will
change and you no longer have the required condition of a constant
reference signal. You can no longer predict the output of the system
whose perceptions you are disturbing.
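The constant-reference case can be written down directly for a proportional loop (the gain here is an illustrative assumption, not a value from the post): with r held constant, the equilibrium output is o = g*(r - d)/(1 + g), a near-perfect mirror of the disturbance.

```python
# With the reference held constant, the output of a high-gain
# proportional loop tracks -(d - r): whoever chooses d chooses o.

def output(d, r=0.0, g=1000.0):
    # equilibrium of o = g*(r - p) with p = o + d
    return g * (r - d) / (1.0 + g)

mirror = [output(d) for d in (1.0, -2.0, 7.5)]
# each entry is very nearly the negative of the applied disturbance
```

This is the sense in which judicious disturbances "control the output": the mirroring holds only so long as r stays put.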

As to simulations, I would certainly have to believe a demonstration of
your point if it worked out as you say. But if we got down to a real
case in this way, I would demand that the circumstances also be
realistic: namely, that you know of the operation of the simulation only
through its interface with the world outside it. If you are allowed to
perceive directly all variables and functions in the system, and to act
on them directly, then of course you can make the system do anything you
want. But in the real case we're talking about, all you can know of the
inner workings of the system must be deduced from interacting with it at
the interface between the system and its environment.

Here is a simple hierarchical system. At the top level there are two
fixed reference signals, r21 and r22. The perceptual signal p21
represents the sum of two lower-level perceptions, p11 + p12. The
perceptual signal p22 represents the difference between the same two
perceptions, p11 - p12.

The outputs of the higher-level systems contribute to the states of the
reference signals for two lower systems, one controlling p11
representing environmental variable v1, and the other controlling p12
representing variable v2.

                      r21                      r22
                       |                        |
             p21 ---> C21 ---> o21    p22 ---> C22 ---> o22
              |                        |
            Fi21: p21 = p11 + p12    Fi22: p22 = p11 - p12

             r11 = -o21 - o22         r12 = -o21 + o22
              |                        |
             p11 ---> C11 ---> o11    p12 ---> C12 ---> o12
              |                        |
interface     |                        |
- - - - - - - | - - - - - - - - - - - -|- - - - - - - - -
             v1 <--- o11 + d1         v2 <--- o12 + d2

[interconnections indicated by labels; signs indicated by + and -:
o21 enters both lower references with a minus sign, o22 enters r11
with a minus sign and r12 with a plus sign]

The external observer can observe the values of v1, d1, o11, v2, d2, and
o12. The external observer can affect this system only by manipulating
d1 and d2, which represent all independent influences on v1 and v2.

The control systems maintain v1+v2 at the reference level set by r21,
and maintain v1-v2 at the reference level set by r22. The question is,
how can the external observer, by manipulating d1 and d2, arbitrarily
determine the values of r11 and r12? This is what you are claiming that
you can do.

I trust that we can agree that the limits on d1 and d2 are such that the
disturbances do not overwhelm the ability of the control systems to
operate normally.
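The setup can be sketched as a small simulation. This is a minimal sketch, not the original model: the gains, the slowing factor on each output, and the sign conventions are my own assumptions, chosen so that every loop gives stable negative feedback.

```python
# Two-level hierarchy: C21/C22 control p11+p12 and p11-p12 by setting
# the lower references r11, r12; C11/C12 control p11, p12 by acting on
# v1, v2, which are also influenced by the disturbances d1, d2.

def simulate(d1, d2, r21=10.0, r22=2.0, gain=10.0, slow=0.05, steps=4000):
    o21 = o22 = o11 = o12 = 0.0
    for _ in range(steps):
        p11 = o11 + d1          # v1 = o11 + d1, perceived as p11
        p12 = o12 + d2          # v2 = o12 + d2, perceived as p12
        p21 = p11 + p12         # higher perceptions: sum and difference
        p22 = p11 - p12
        r11 = o21 + o22         # higher outputs set the lower "wants"
        r12 = o21 - o22         # (signs picked for negative feedback)
        o21 += slow * (gain * (r21 - p21) - o21)
        o22 += slow * (gain * (r22 - p22) - o22)
        o11 += slow * (gain * (r11 - p11) - o11)
        o12 += slow * (gain * (r12 - p12) - o12)
    return {"p21": p21, "p22": p22, "r11": r11, "r12": r12}

base = simulate(0.0, 0.0)
push = simulate(4.0, -4.0)
# p21 and p22 stay near r21 and r22 in both runs, while the disturbances
# shift the lower-level references r11 and r12 by a modest amount
```

Note that the shift in r11 and r12 is small relative to the applied disturbances; the attenuation grows with the lower loop gain.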

I hope you have not forgotten that in the PCT hierarchy, ALL
interactions with the system take place through the lowest level.

Your turn.

-----------------------------------------------------------------------
Best,

Bill P.

[Hans Blom, 960419]

(Bill Powers (960418.0955 MDT))

     Even an inherited reference level gives the organism autonomy
     relative to the associated controlled perception -- the environment
     can't influence it.

Isn't that stretching the meaning of the word "autonomy"? A great
many people think that they are the "victim" of their inherited
reference levels rather than their proud owners. I am well aware that
psychotherapy emphasizes the autonomy of the client (and I do so in
many of my encounters with others), but let's not equate scientific
discussions in PCT with therapy, shall we?

     We can dismiss the Gaia hypothesis for the same reasons we rule out
     all supernatural explanations of natural phenomena in a scientific
     discourse: the explanation is too easy.

Not quite, I think. We have the problem of time scales. For instance,
in a controller that has an integrator in its output path (as many
PCT models do), behavior/output does not change in the presence of
high frequency disturbances due to the low pass filter characteristic
of the system. Thus, if one is constrained (due to finite life times
of both human individual and human race) to perform The Test with
high frequency signals only, one might erroneously conclude that no
control goes on, whereas in reality control is fine but only
extremely slow. Thus based on The Test only, one might conclude "no
control"; based on meta-knowledge (that of time scales, which we know
about through e.g. simulations) we might conclude "control might be
possible". To this extent the Gaia hypothesis, although unprovable
due to experimental limitations, _might_ be correct...

     I have never claimed that external stimuli can't change what an
     organism wants, at a level lower than the highest level ... In
     fact, I was the one who pointed out, years and years ago, that by
     judicious application of disturbances you can control the output of
     any control system, given that the reference signal for that system
     is constant.

But that is different! You said that one can change BEHAVIOR. I say
that one can arbitrarily change WANTS. In my opinion, that is quite
in contrast with the standard PCT exclamation "you can do what you
want!". So what, if the environment tells you what to want?

     What I am saying is that external stimuli can't _arbitrarily_
     change what an organism wants, without arousing opposition to the
     effects of those stimuli.

OK, let's see. Let us avoid imprecise words and turn to hard math.
You pose the following model:

     Here is a simple hierarchical system. At the top level there are
     two fixed reference signals, r21 and r22. The perceptual signal
     p21 represents the sum of two lower-level perceptions, p11 + p12.
     The perceptual signal p22 represents the difference between the
     same two perceptions, p11 - p12.

     The outputs of the higher-level systems contribute to the states of
     the reference signals for two lower systems, one controlling p11
     representing environmental variable v1, and the other controlling
     p12 representing variable v2.

<drawing skipped>

The above problem can be translated into the following set of
equations:

o21 = fo21 (r21 - p21) p21 = fi21 (p11 + p12)
o22 = fo22 (r22 - p22) p22 = fi22 (p11 - p12)
o11 = fo11 (r11 - p11) r11 = - o21 - o22
o12 = fo12 (r12 - p12) r12 = - o21 + o22
p11 = o11 + d1 p12 = o12 + d2

where we have the following extra knowledge:

externally observed: o11, o12
externally manipulated: d1, d2
to be set: r11, r12
constants: r21, r22

To solve this set of equations, eliminate 'intervening' variables and
find expressions for r11 and r12 in terms of known and/or manipulable
variables/constants. After some calculations we get (if I did not
make any substitution errors ;-):

r11 = - fo21 (r21 - fi21 (fo11 (r11 - d1) / (1 + fo11) + d1
      + fo12 (r12 - d2) / (1 + fo12) + d2))
      - fo22 (r22 - fi22 (fo11 (r11 - d1) / (1 + fo11) + d1
      - fo12 (r12 - d2) / (1 + fo12) - d2))

r12 = - fo21 (r21 - fi21 (fo11 (r11 - d1) / (1 + fo11) + d1
      + fo12 (r12 - d2) / (1 + fo12) + d2))
      + fo22 (r22 - fi22 (fo11 (r11 - d1) / (1 + fo11) + d1
      - fo12 (r12 - d2) / (1 + fo12) - d2))

Introducing some substitutions of constants, we get:

r11 = a * r11 + b * r12 + c * r21 + d * r22 + e * d1 + f * d2
r12 = j * r11 + k * r12 + l * r21 + m * r22 + n * d1 + p * d2

Using linear algebra (inversion of the matrix ((1-a, -b), (-j, 1-k)))
this reduces to:

r11 = p + q * d1 + r * d2
r12 = s + t * d1 + u * d2

where I leave the calculation of the constants p through u up to you.
Note that if that matrix is not invertible, this scheme breaks down,
but so does control.

These formulas show, I hope to your satisfaction, that by suitably
choosing values for d1 and d2 we can reach all possible combinations
of values for r11 and r12. This follows from a simple dimensional
analysis. Note that I need not know any of the constants internal to
the controller in order to derive this result. Not knowing those
constants, I will of course not know either WHICH PARTICULAR VALUES
of r11 and r12 will result from a certain combination of values of d1
and d2. But I _do_ know that I can cause any combination of r11 and
r12 to be realized through my manipulation of d1 and d2.
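The reduction above can be sketched numerically. In this sketch a toy affine map with made-up coefficients stands in for the hierarchy's compiled-out constants; the point is that three probe runs at the interface suffice to identify the map without knowing anything internal, after which it can be inverted to hit any target (r11, r12).

```python
# Hidden affine map r1x = const + (matrix)*(d1, d2); the coefficients
# below are arbitrary stand-ins for the unknown loop constants.
def hierarchy(d1, d2):
    return (6.26 - 0.095 * d1 + 0.001 * d2,    # r11 = p + q*d1 + r*d2
            4.17 + 0.002 * d1 + 0.094 * d2)    # r12 = s + t*d1 + u*d2

# probe with three disturbance settings to identify the map
base = hierarchy(0.0, 0.0)
e1 = hierarchy(1.0, 0.0)
e2 = hierarchy(0.0, 1.0)
q, t = e1[0] - base[0], e1[1] - base[1]        # first matrix column
r, u = e2[0] - base[0], e2[1] - base[1]        # second matrix column

# invert the 2x2 matrix to find the d1, d2 that realize a target
target = (6.0, 4.5)
det = q * u - r * t
dd1 = ( u * (target[0] - base[0]) - r * (target[1] - base[1])) / det
dd2 = (-t * (target[0] - base[0]) + q * (target[1] - base[1])) / det
achieved = hierarchy(dd1, dd2)
# achieved matches target, since the map really is affine
```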

That is what I wanted to demonstrate. We could go further than that,
however, if you wanted to (but I think you don't...). Using system
identification techniques, we could try to find the constants that
govern the behavior of the controller. I have shown already, in my
model-based controller, how a "world" is modelled. But a "world" is
just a system, and so is a controller; the techniques are identical.
Model building and parameter estimation techniques would do this job
of identifying the internal structure of a control system by modeling
the relationship between applied "inputs" (in your example: d1 and
d2) and the observed actions (in your example: o11 and o12), i.e.
from equations like

o11 = p' + q' * d1 + r' * d2
o12 = s' + t' * d1 + u' * d2

whose derivation I leave up to you.

Greetings,

Hans Blom

================================================================
Eindhoven University of Technology Eindhoven, the Netherlands
Dept. of Electrical Engineering Medical Engineering Group
email: j.a.blom@ele.tue.nl

Great man achieves harmony by maintaining differences; small man
achieves harmony by maintaining the commonality. Confucius

[Hans Blom, 960419b]

(Bill Powers (960404.0930 MST))

     But, by the way, don't control systems minimize error? Can't that
     be called optimization?

     No. Control systems may behave in such a way that error is
     (approximately) minimized, but few of them work by minimizing error.

Depends upon the level of discourse. Let me give an example. Suppose
you want to design a control system that minimizes the sum (or time
integral) of squares of errors (perception minus reference).
Introducing some constraints (use only comparators, gains and
integrators in your design), some "optimal" design will result. Such
a design may be identical to an "intuitive" design that is based on a
less formal method, e.g.

     They work by producing output that is proportional to the magnitude
     of error, and aimed to oppose the direction of error.

Using a less formal design method one simply does not know what the
"design criterion" is, whereas the formal method makes it explicit.
Can one then not say of such an informally designed controller that
it best realizes (within the bounds of the constraints) some
optimization criterion?

     The usual concept of a minimizing or maximizing system assumes that
     directional information is not available: if the variable is not at
     its minimum or maximum, the error does not indicate which way to
     move to correct the error.

This is incorrect. The reason why a sum-of-squares criterion is
almost universally chosen as the optimality criterion is the fact
that its derivative is a linear function of the error. This linear
function preserves directional information. Compare the optimality
criterion's shape (a multi-dimensional parabola) with a hill (or a
hole). Its derivative then gives the slope of this hill. If you want
to maximize your criterion, you pick a positive slope and you go up.
Continue indefinitely and you'll find the (or a) top.
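The hill-climbing picture is easy to verify numerically. A minimal sketch, assuming a one-dimensional squared-error criterion; the reference value and step size are arbitrary choices:

```python
# J(x) = (x - ref)**2 has derivative dJ/dx = 2*(x - ref): linear in
# the error, so its sign always says which way to move.

def descend(x, ref, rate=0.1, steps=100):
    for _ in range(steps):
        x -= rate * 2.0 * (x - ref)   # step against the gradient
    return x

x_final = descend(x=0.0, ref=5.0)     # converges to the minimum at ref
```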

     Your question brings me back again to a problem that keeps coming
     up, and which I seem to be very poor at talking about. As
     observers, we can say that a control system minimizes error,
     because that is a valid description of the outcome of its behavior.
     But this does not mean that inside the control system there is
     something concerned with finding a minimum. What we see as
     minimizing is a _side-effect_ of the actual mode of operation.

So I look at this differently. What for you is a side-effect is for
me the crucial central criterion that is the basis of the design
process that results in a unique design. One could say that the
optimization criterion has been "compiled out" by the design process,
so that it is not recognizable anymore in the final product, just
like the Pascal source code is not recognizable anymore in the
compiled exe file. Reverse engineering the exe file back into Pascal
code is hardly possible anymore. Likewise, analyzing a control system
to discover its optimality criterion may be (almost) impossible. Yet
it can be proven that (almost all, with unimportant exceptions)
control systems are based upon a well-defined optimization criterion,
even if they weren't designed from one to start with. In particular,
each of the PCT controllers whose diagram or description I have
encountered has such a well-defined optimization criterion (a sum of
squared errors), even if you aren't aware of that...

(Bill Powers (960405.0600 MST))

     The whole concept of maximizing is dubious even as a description of
     what people or organisms are observed to do -- even when they
     actually are doing it. In cases where maximization does appear to
     be occurring, we see the result as pathological: maximizing power,
     wealth, food intake, number of cars owned, number of sexual
     performances, muscle development, beauty, driving speed, and so
     forth. What is the "maximum" amount of anything? Infinite! An
     organism that had no concept of "enough" would have its existence
     threatened by a surplus of resources just as much as by a shortage.

You go overboard here. Maximization or minimization, as in
optimization, invariably takes place on a convex surface. It is ERROR
minimization. Think of it as hill-climbing where a hill-top exists.

Your discussion is well-meant, and I agree with almost everything you
say. But it rests on disregarding the (technical) context of the
notion of extremization. For example:

     The idea of control is that the organism aims to produce a specific
     amount of something for itself -- not the maximum possible amount.

The purpose of a control system is to minimize the discrepancy
between what it wants and what it perceives it can do to achieve that
want. The want itself is usually finite. If an organism wants an
infinity of something, it is of course doomed to failure, as you so
aptly describe.

     ... a great deal of thought has been put into methods for
     maximization, on the assumption (without much thought) that this
     is a good idea.

This "good idea" is still the basis of most (all?) of the formal
design methods of "modern" control theory, where a (weighted) sum of
squares of the errors of a multidimensional control system is the
criterion that guides the design process. Where the design goes wrong
is if you forget to specify wants and their importances that turn out
to be crucial in practice.

Greetings,

Hans Blom


[From Bill Powers (960419.1000 MDT)]

Hans Blom, 960419 --

Before I start quibbling with you, let me say how I admire your facility
with mathematical manipulations! To do what you did in this post so
neatly would have taken me days and days, and I probably would have made
uncountable errors. I hope you don't mind your abilities being taken
advantage of; in return, I am quite willing to trust your mathematics.

Even an inherited reference level gives the organism autonomy
relative to the associated controlled perception -- the environment
can't influence it.

     Isn't that stretching the meaning of the word "autonomy"?

On the contrary, it's the only meaning of the word that I have been able
to find a defense for. The same question arises relative to "free will."
On that subject, I concluded long ago that our basic freedom is only the
freedom to be human -- that is, to achieve the conditions set by our
inheritance that determine what we need in order to be what we are.
Those conditions -- which I call intrinsic reference levels -- remain
largely undefined, although we know what a few of them are (they relate
to the reasons that we breathe, eat, try to stay warm, and so forth --
although they may also include the reasons for which we like to do
mathematics, appreciate art, etc.).

As to time-scales and superordinate control systems such as Gaia, my
basic objection is to the "could be" aspect of this argument. Anything
"could be." There could be a 3-cm orange cube on the floor of a crater
on the far side of the moon. If that were true, the implications would
be immense. But I think we should forego spending a lot of time on those
implications until we find such a cube.

------------------------------
     But that is different! You said that one can change BEHAVIOR. I say
     that one can arbitrarily change WANTS. In my opinion, that is quite
     in contrast with the standard PCT exclamation "you can do what you
     want!". So what, if the environment tells you what to want?

A want _is_ a behavior -- it is the behavior of a higher-level control
system. When you disturb the input of a higher-level system, it adjusts
its output -- the want of a lower-level system -- to counteract the
effect of the disturbance on the perception of the higher-level system.
The perception of the higher system remains essentially undisturbed;
this stability is achieved by changing the want of the lower system.

I don't know where you heard that "standard PCT exclamation." I never
exclaimed it, that I can remember. I think you're arguing with someone
else -- maybe you were traumatized by a hippy when you were small. I
think that the truth of the matter is that you DO do what you want
(somewhat expanded: you do act to make your perceptions match your
want), but the question we are addressing here is where that want comes
from. The hippy, who doesn't know, thinks that wants are arbitrary and
that they can be established capriciously. During natural or substance-
induced reorganization that may be true in a sense, but at the end of
reorganization there must be a whole coherent system of wants, and wants
at a given level must be adjusted so that when they are met, the
resulting perceptions satisfy the requirements of higher-level wants,
and ultimately the requirements of the wants that are built into us as
human beings. A crude way of putting this is that you are free (when
reorganizing) to want not to eat, but you are not free to want (a) not
to eat and also (b) to live more than about 40 more days.
----------------------------------
OK, let's have a closer look at your mathematics. Bear with me; I'm
feeling my way here.

I picked (or implied, as you inferred) a linear system with single-
valued perceptual functions (you would have had considerably more
difficulty if I had specified that the input and output functions were
general). You may recall that I specified that the solution should allow
the control system to go on operating normally. Let's see what that
critical specification entails.

Consider just the two lower-level loops.

For system 11 we have

p11 = o11 + d1
e11 = r11 - p11
o11 = fo11(e11).

Solving these equations for p11 we have

p11 = fo11(r11 - p11) + d1

The other equation is

p12 = fo12(r12 - p12) + d2

Separating variables and isolating p1x, we have

        fo1x * r1x        dx
p1x =  ------------  +  ----------
         1 + fo1x        1 + fo1x

Notice that the effect of dx on p1x is divided by (1 + fo1x). The gain
factor in the output function can be large. The larger it is, the closer
we approach the condition

p1x = r1x,

with the effect of dx on p1x approaching zero.
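A quick numeric check of this expression, assuming a purely proportional output function of gain g (so p = (g*r + d)/(1 + g)):

```python
# Closed-loop perception of a proportional loop: the disturbance's
# contribution is attenuated by the factor 1 + g.

def perception(g, r, d):
    return (g * r + d) / (1.0 + g)

low  = perception(10.0, 5.0, 3.0)   # modest gain: d visibly moves p
high = perception(1e6,  5.0, 3.0)   # large gain: p is essentially r
```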

Without going through your detailed analysis (which will give the same
result, I believe), we can see that even if the output gain in the lower
two systems is large, it will be possible (as you say) to find a d1 and
d2 which will change the perceptual signals p1x enough to make the top-
level system produce any setting of r11 and r12 that is desired.
However, this implies that the disturbances will be large enough to
counteract the opposing effects of the two lower control actions. If the
loop gain is high, this means correspondingly large disturbances. Just
to make the point, suppose the output gains Fo1x were one million. This
means that the disturbances would have to be a million times as large,
to produce a given effect on p1x, as they would have to be without the
feedback action of the lower systems. Of course it is the effect on p1x
that induces the highest system to alter the "wants" r11 and r12.

In your mathematical analysis, no account is taken of any actual
numbers, the magnitudes of the variables that might be involved in any
practical case. You have shown, correctly, that there exist values of d1
and d2 that will produce any specified values of r11 and r12. But the
question that is raised by my specifying "normal operation" is whether
these values of d1 and d2 lie within the range that the control systems
are capable of handling.

There is a relationship between the loop gains of the control systems
and the magnitudes of d1 and d2 that are required to achieve any given
values of r11 and r12. The greater the loop gains of the control
systems, the larger the disturbances must be to achieve the same values
of r11 and r12. The largest disturbance that a control system can handle
is the one that demands the greatest output that the control system can
produce. If it turns out that the disturbance needed to bring r11 and
r12 to the specified values would demand far greater output than the
control system can generate, then the system would (a) fail to oppose
the disturbance in the normal way, and (b) lose control.

In your purely mathematical analysis, there is no mention of a "normal
range of control" or of any limits on the values of any variables. This
is also true of PCT analyses, but since we deal mostly with simulations,
the limits are inherent in the conditions of the simulation -- we simply
don't apply disturbances that are huge.

Forgive me for struggling a bit and repeating myself here, Hans. I can
see clearly where I am going, but having some trouble figuring out how
to get there. Maybe you can help me find a quicker approach.

The point I am trying to make is that normally, the perceptual variables
inside a control system are well-defended against normal disturbances.
But because outputs are limited, we have to judge the magnitude of a
"normal" disturbance in terms of the maximum possible output available
to oppose it. When disturbances become too large, the output hits a
limit, and from then on the disturbances can freely alter the perceptual
variables. The system loses control. And the system equations change.

We can specify the "normal" range of disturbance magnitudes in terms of
the maximum output the control system can produce. When disturbances
become larger than this value, the output can no longer increase to
continue opposing their effects, and two things happen: (1) control is
lost, and (2) the original equations cease to apply. Beyond this point,
we are seeing a control system being overwhelmed by superior physical
force.
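The loss-of-control boundary can be sketched with a single loop whose output saturates; the gain, output limit, and slowing factor below are illustrative assumptions, not values from the discussion.

```python
# One proportional loop with output clipped at +/- o_max. Within the
# normal range the disturbance is opposed; beyond it the output pins
# at the limit and the disturbance passes straight through to p.

def run(d, r=0.0, gain=100.0, o_max=10.0, slow=0.01, steps=1000):
    o = 0.0
    for _ in range(steps):
        p = o + d
        o += slow * (gain * (r - p) - o)
        o = max(-o_max, min(o_max, o))    # output limit
    return p

small = run(d=5.0)    # needed output ~ -5: within limits, p stays near r
big   = run(d=25.0)   # needed output ~ -25: output pins at -10, so
                      # p settles near d - o_max = 15, far from r
```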

If we now consider only disturbances within the normal range, defined as
above, the picture changes considerably. The question becomes, "How far
from their _undisturbed_ values can r11 and r12 be changed by
manipulating d1 and d2?" The answer will depend on what the output
limits are, and on the loop gains. For a given limit on output (and
hence on the maximum "normal" disturbance), the range of achievable
effects on r11 and r12 will decrease as the loop gain increases. You can
now freely adjust r11 and r12 by manipulating d1 and d2 -- but only
within the range just calculated. If you choose as target values an r11
and r12 outside this range, the disturbances required to achieve that
condition will cause the outputs to hit their limits, and the system of
equations will have to be changed.

I suppose we could now go back and make this same argument much more
quickly, using your full set of equations. All that is required is to
specify positive and negative limits on o11 and o12 (and perhaps o2x as
well). If we solve for the amount of change of r11 and r12 that is
caused by introducing the maximum allowable disturbances (the amounts
that cause either o11 or o12 to reach a limit), we can calculate the
relative change in r11 and r12 that is achievable by manipulating normal
disturbances. Our ability to affect r11 and r12 by manipulating d1 and
d2 is limited to that range -- if we agree that normal control is to be
maintained by the system.

This took me a lot of words to say -- I hope that my point survived.

Let's postpone maximizing and minimizing until this thread is tied off.
-----------------------------------------------------------------------
Best,

Bill P.