Controlling uncertainty

[From Rick Marken (940526.1030)]

Bill Powers (940525.1630 MDT) and (940526.0700 MDT) --

Some excellent points, the heart of all of them summed up, I believe, in
this nice comment:

    I think there's some misunderstanding about "preaching nonviolent
    control." There aren't two modes of control, violent and nonviolent,
    which are alternatives. There is only control, the process of acting
    to keep the perceived world as close as possible to a match with the
    states one desires to perceive.

Bill Cunningham (940526.0800) --

    I find it astonishing that you don't or cannot control for reduced
    uncertainty. I find myself quite conscious of doing so and find it hard to
    imagine that others do not -- especially a scientist.

Bill Powers has been trying to clarify this too. If he can't, I don't see how
I can, but, what the heck, I'll keep trying.

When you say that you control for "reduced uncertainty" I hear you saying
that "reduced uncertainty" is a controlled variable -- analogous to the
electrical measure of temperature controlled by a thermostat. It is hard for
me to conceive of "reduced uncertainty" or even just "uncertainty" as a
perceptual variable that can be controlled. Well, as I (and Bill) said there
are perceptions that I suppose one could call "uncertainty" -- perceptions of
apparent puzzlement in others or the perception of one's own sense of
puzzlement about what to do in some situation. These are perceptual
_variables_; I can imagine perceiving different degrees of the apparent
uncertainty in others, and of feeling different levels of uncertainty about
what to do in certain situations. These are not perceptions that I have (or
control) very often -- but that's just me. I'm usually controlling where I
eat, where I go home to, who I sleep with when I get there -- stuff like
that.

I suppose I might have been experiencing some feeling of "uncertainty" when I
asked about whether anyone felt offended by my saying they were controlling.
Maybe I was trying to control this feeling (among the many other perceptions
that I'm sure I was controlling); but it really didn't seem like it; I didn't
perceive an increased level of a feeling of "certainty" or a decreased level
of a feeling of "uncertainty" after I got the replies, for example. At
least I didn't notice it. The main variables I'm sure I was actually
controlling were the questions I asked, the steps I went through to post the
posts and so forth.

I don't think a scientist necessarily controls for feeling a certain level of
(un)certainty. I think much of what a scientist does involves no controlling
at all; controlling is involved in setting up the conditions under which an
observation is to be made; those conditions are determined by theories and
models created in imagination. Once everything is set up, you see if what you
perceive (without your having control of the perception) matches what you
expected to perceive (based on your imagination -- the model). The scientist
is doing all this low level controlling (and non-controlling) to control
some higher level perception -- such as the perception of a principle (like
the PCT principle that o = -kd). The particular perception (subject's handle
movement is inverse of disturbance) can be seen as consistent with (or
inconsistent with, if the results don't come out as expected) that principle.
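
For concreteness, here is a minimal tracking-loop sketch (Python; the gain and
disturbance values are made up, and the constant k is 1 here because the
disturbance and the output act on the cursor with equal weight -- this is not
any particular published demo). When control is good, the output comes out
nearly equal and opposite to the disturbance, which is the o = -kd principle:

# Minimal tracking-loop sketch (illustrative only, not a published PCT model).
# A high-gain integrating controller keeps a cursor (the controlled
# perception) near a reference of zero while a smooth disturbance pushes on
# it. If control is good, output o ends up nearly equal and opposite to
# disturbance d, i.e. o is approximately -d.
import math

dt, gain = 0.01, 50.0          # time step and output gain (assumed values)
o = 0.0                        # system output (e.g., handle position)
outputs, disturbances = [], []

for step in range(5000):
    t = step * dt
    d = math.sin(0.5 * t) + 0.3 * math.sin(1.7 * t)   # smooth disturbance
    p = o + d                                          # perceived cursor position
    e = 0.0 - p                                        # reference is zero
    o += gain * e * dt                                 # integrating output function
    outputs.append(o)
    disturbances.append(d)

# The correlation between output and disturbance should come out close to -1.
n = len(outputs)
mo, md = sum(outputs) / n, sum(disturbances) / n
cov = sum((a - mo) * (b - md) for a, b in zip(outputs, disturbances)) / n
so = math.sqrt(sum((a - mo) ** 2 for a in outputs) / n)
sd = math.sqrt(sum((b - md) ** 2 for b in disturbances) / n)
print("correlation(o, d) =", round(cov / (so * sd), 3))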

    Your hypothesis that I was controlling for IT having relevance to PCT was
    false, as stated. It correctly identified my interest, but not purpose --

But my hypothesis concerned both your interest (the perceptual variable) and
your purpose (the reference setting for that variable -- high). Apparently I
was not right about either -- but I was close. It was your perception of the
layered protocol (rather than the IT) model's relevance to PCT that you seem
to have been controlling for.

    which was to enter into layered protocol, at the conclusion of which my
    perception of your understanding of IT would have changed.

I'm afraid I don't understand this still. Did you want to talk about the
layered protocol model? If so, that's a great idea. Or is a layered protocol
a phenomenon that we can somehow participate in, one that is explained by
the layered protocol model?

    I had hoped to use your enquiry to lead to a point; but since you don't
    control for reduced uncertainty, that thread is a dead end

Again, I don't understand this.

    It may also cut the thread I was starting when I asked on what basis would
    you change your control (for reduced uncertainty) to other matters. If not
    for control for uncertainty, on what basis do you change variables for which
    you do control?

I don't know on what basis I change the variables I control, but they are
probably always different; for example, I might switch from controlling the
variables involved in typing to the variables involved in "having a meeting"
because I am controlling for perceiving myself as doing my job (at the level
I want to perceive it).

    So on what basis are some controlled and others not? What would provide the
    basis for change?

Well, I suggested one; the need to change the means of control in order to
control higher order perceptual variables.

I would be interested, however, in seeing a working model of a hierarchical
control system that selects the variables controlled on the basis of higher
level control of uncertainty. In fact, just a simple working "control of
uncertainty" control system might help me understand your idea that people
"control uncertainty".

Best

Rick

[From Bill Powers (940531.1800 MDT)]

Bill Cunningham (940530.1530) --

The most interesting aspect of your Paul Revere example is that
nowhere in it is it necessary to speak of controlling for low
uncertainty. What Revere is perceiving is, as you say, 0 lights, 1
light, or 2 lights. This is translated into higher-level verbal
perceptions: no attack, attack by land, or attack by sea. The
reference level specifies a logical relationship between the place
where an attack occurs and the place where the troops are to be
deployed.

So initially there is no light, and Revere perceives "no attack." No
attack implies no error and no error requires no action. No
uncertainty in that, unless Revere is wondering whether the person
who is supposed to show the light got waylaid, fell ill, ran out of
lantern fuel, went to the wrong church-tower, or any of the other
million and one things that could gang agley. But if Revere does not
worry about these explanations for the lack of a light, then he is
certain that there is no attack yet and the Minutemen are not
deployed (as they should not be).

Now one light appears. This constitutes an error, which Revere hopes
to eliminate by deploying troops to the right place. But now
additional information is required: one light or two lights. There
is one light; Revere perceives that there is an attack by land,
which allows selecting one of the two courses of action implied by
appearance of any number of lights. Again, he might wonder whether a
tree branch is blocking the second light, if the second lantern
broke, if he is standing too far away or the lanterns have been
placed too close together for him to tell the difference between one
light and two, and so forth. But again assuming that he does not
think of these possibilities, he is certain of three things: an
attack is occurring, it is occurring by land, and the Minutemen are
not deployed.

In the case of two lights, he might wonder if he is seeing a
reflection of a single light in the pane of an open window, or if
someone else turned on a second light just as the first light came
on, but most likely he is now certain that an attack is under way,
it is coming by sea, and the Minutemen are not deployed.

In each case, he then takes action to control the relationship
between the meaning of the signal as he remembers it (wait, was it
one if by land, or two if by land?) and the action of alerting the
troops as to where to go. "One light AND send the troops to the
seaward route" would result in an error, since the perceptual
function is "no light OR (one light XOR sea route)" -- I think --
and the reference signal is TRUE.
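
For anyone who wants to check the logic, a few lines of code enumerating the
cases (implementing the formula exactly as written above, nothing more) show
that the function is TRUE when there is no light, or when the troops' route
matches the signal, and FALSE otherwise:

# Truth-table check of the proposed logical perceptual function
#   "no light OR (one light XOR sea route)"
# against a reference of TRUE. The function is FALSE (an error) exactly when
# the route mismatches the signal, and never when no light is showing.
def perceptual_function(lights, sea_route):
    no_light = (lights == 0)
    one_light = (lights == 1)
    return no_light or (one_light != sea_route)   # "!=" acts as XOR on booleans

for lights in (0, 1, 2):
    for sea_route in (False, True):
        state = perceptual_function(lights, sea_route)
        print(lights, "light(s), sea route =", sea_route, "->",
              "TRUE (no error)" if state else "FALSE (error)")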

The only reason for bringing up the subject of uncertainty is to
consider the possibility that Revere is wondering whether he is
perceiving or remembering correctly. If he doesn't wonder about
that, there is no uncertainty. He simply acts to control a logical
perception made up of perceptions of lights and perceptions of
troops going the right way. He keeps this logical perception
continuously TRUE.

You say

    Controlling for reduction of uncertainty, Revere positions
    himself to observe whether 0 or 1 or 2 lights are displayed.

But he does not have to be controlling for reduction of uncertainty
in order to carry out this task. "Controlling for uncertainty" is
your way of CHARACTERIZING his attitude toward what is going on.
Even if there is some uncertainty in Revere, that is not what he is
controlling for -- even if the uncertainty is reduced. What you are
doing is like saying that a person who buys a new Buick is
controlling for raising the percentage of Buick owners in his
neighborhood. You can show with incontrovertible mathematics that
this is indeed a consequence of his buying a Buick, but he is most
probably not controlling for that side-effect at all.

You say "formal calculation of the uncertainty requires
knowledge of the decision space of each recipient."

But that is precisely what we cannot know, as I have tried to bring
out in my discussion of what Paul Revere might actually be uncertain
about, if he thinks about it enough. And besides, _our_ calculation
of the uncertainty is like our calculating the effect on the
percentage of Buick owners: it is irrelevant to what is going on in
the person. The only way in which we could say that a person is
controlling for low uncertainty would be to show that that person is
_perceiving_ uncertainty, which would require that person first to
perceive the size of the decision space and all the associated
probabilities, and second to perform the appropriate calculations.
Only then would there be a perceptual signal representing
uncertainty, and only if such a signal existed could we say that
there is or may be a control system controlling uncertainty relative
to a reference level of zero.

Suppose that there appeared in the church-tower not one light, not
two lights, but three lights. The third possibility was not in
Revere's scheme, his "decision space." Yet how long would it take
Revere to realize that the British had split their forces and were
attacking both by land and by sea? And how much longer would it take
him to realize that by telling the troops to split into two groups,
he could assure that the desired delay could still be brought about
because now the smaller groups would each be dealing with a smaller
number of British?

Or suppose that Revere saw one light, then two lights, then one
light, then two lights.... Being a hero, he would quickly realize
that an attack is under way but the signaller was unable to
determine whether it was by land or by sea. And he would probably
take the same action to cause a delay of the attack, even though not
as much as would be possible if the Minutemen were not split up.

In order to set up this example in a way appropriate to analysis by
information theory, it's necessary to think open-loop. One
perceptual situation leads to one action, a different perceptual
situation leads to a different action. But the PCT analysis says
that the action is intended to control the perception, not the
perception to control the action. This means looking beyond the S-R
appearance to see what the controlled perceptions might be, which is
what I have been doing above.

Once we have found a plausible way to set up this situation as a
true control process, we can consider more possibilities that did
not occur during construction of the formulation that fits the
approach of information theory. For example, what would Revere do if
he saw two lights, and simultaneously heard that the troops had
given up waiting for him and had decided to gamble on committing
themselves to the sea route?

As the proposed control system is designed, he would do nothing. He
perceives that there are two lights AND the troops are deployed to
the sea route, which makes the perceptual signal TRUE, matching the
reference condition, and leads to no error. No error, no action.
Instead of careening around Middlesex, he could go home and have a
beer.

Under the other way of visualizing the scenario in which his ride
and his message are simply triggered off by the occurrence of one or
two lights, he would go riding around Middlesex shouting that the
British were coming by sea, with nobody left to hear him who might
have benefited from the message.

And if he heard that the troops were already heading for the land
route, he would still carry out the plan, telling an audience of
noncombatants that the British were coming by sea.

Listen to your own words:

    So long as he perceives 0 lights, he remains uncertain and in
    the dark. Once he perceives 1 or 2 lights, he controls for
    riding and spreading the alarm.

You have made no connection between the fact that he may be feeling
uncertain and "in the dark," and what he is controlling. His state
of uncertainty, if there is any, is a consequence of what he does
under any input condition, not vice versa.

--------------------------------------------------------
I do not in any way doubt your statements about decision spaces, the
implied world of probabilities, or the process of making decisions
based on probabilistic knowledge (like triage). It is possible to
estimate decision spaces, to assign realistic probabilities to the
alternatives, to calculate the best course of action based on the
outcome of the calculations, and to produce the decided-upon action.

But I think that the only people who behave this way are those who
have been taught to behave this way or have invented it for
themselves -- who have learned to see the world in terms of
probabilities and disjoint alternatives, who think in terms of
making either-or decisions, and who believe that once the best
course of action has been selected, it should simply be carried out.
It is quite possible to set up guidelines, to provide computing
services to do the necessary calculations for people who don't know
how to do them themselves, and to teach people how to use the
results in deciding on preselected courses of action.

It's even possible that this sort of approach, once learned, can
lead to better results than acting at random, in the long run. A
little better, and once in a while even a lot better.

But none of that has anything to do with a theory of mind and
behavior. It has no more to do with the actual processes in a brain
than does the ability to design electronic circuits, or make better
arrowheads. Doing these things -- calculating the best decision,
designing a circuit, making an arrowhead -- requires the use of
mental capacities which are common to all people, but these things
are just PRODUCTS of those capacities of the brain, as music is the
PRODUCT of someone playing an instrument. You are talking about
things that a brain can learn to do, but not about how the brain
works. A real model of the brain would explain how a person can
learn to calculate probabilities, and to reason out logical
decisions.

It would also explain how a person can learn to calculate or
estimate probabilities incorrectly, and evaluate the decision space
incorrectly, and choose a course of action that will make matters
worse instead of better, while still believing that the result is
good. The same faculties of the mind are used to do these things
wrong as to do them right. The particular calculations a person
carries out, the particular way a person implements actions, are
irrelevant; they are just examples of the same brain working with
the same basic capabilities.

This is why in my proposed model of the hierarchy I speak only of
general capabilities, such as the capacity to perceive and control
categories, sequences, logical functions, principles, and system
concepts. The theory doesn't care WHICH categories, sequences,
logical computations, etc. are perceived and controlled. The
particular instances that we see individuals controlling are all
learned, they are ways in which the brain can come to be used at
each of these levels. But those instances tell us nothing about the
brain if we get hung up on any particular kind of behavior at a
particular level. If a person can calculate probabilities, this
tells me that he can calculate, which is the basic capacity I am
interested in. I don't care WHAT he calculates; that's a matter of
happenstance and won't be exactly the same in any two people.

So it could be that in your work with decision-making under adverse
circumstances, you are working out methods and principles that can
be valuable. But you are INVENTING and DESIGNING these methods and
principles; they are not an inborn aspect of the brain. They are
products of a brain.
--------------------------------------------------------------
Best,

Bill P.

[From Bill Powers (960624.1530 MDT)]

Martin Taylor 960621 16:45 --

     When I innocently tried to use information theory in analyzing
     control those years ago, I ran into this immediate (I might almost
     say "reflex" if I didn't know better :slight_smile: opposition to the idea, an
     antagonism I simply didn't understand, since I have never treated
     information as relating to cause-effect systems. Neither did Claude
     Shannon in the theory, though he did apply it only to cause-effect
     systems. The theory is based around uncertainty at one point about
     what is/was happening at another point. It's, in a way, a theory of
     measurement. Information is the reduction of uncertainty.

     Looked at this way, to me the application to control was obvious.
     In words, the "perception" is a continuous version of Shannon's
     "observation". By making the perception/observation, one is less
     uncertain about the current state of the thing observed (the CEV)
     than one would otherwise have been. With this reduced uncertainty,
     one is better placed to push or pull on the CEV to bring it to
     where one wants it to be than if one didn't observe. This seems to
     me to be incontrovertible, and it says that PCT has to work better
     than a planning-outflow model.

I think most of the problem I have with your information-theoretic ideas
arises when you try to put them into informal language. Perhaps you
think you are making them clearer by doing this, but for me you simply
raise modeling issues that are red flags. Consider this, from the above:

     By making the perception/observation, one is less uncertain about
     the current state of the thing observed (the CEV) than one would
     otherwise have been.

This says explicitly that the goal of making an observation is to become
less uncertain about the state of the thing observed. To me, this
indicates that you're proposing a control system, one with a goal of
experiencing low uncertainty, and using the means called "observing" in
order to do it. That is a specific proposal about a specific control
system, as I read it. This system would extract from the input some
measure of uncertainty, compare it with a reference-level for
uncertainty, and on the basis of the error, alter the method or amount
of observing.
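
To make the proposal concrete, here is a toy sketch of the kind of
special-purpose system just described (purely hypothetical; all the numbers
are made up). Its perceptual signal is an uncertainty estimate, its reference
is a desired uncertainty level, and its output adjusts the amount of
observing:

# Hypothetical "control of uncertainty" system: the perceptual signal is an
# uncertainty estimate, the reference is a desired uncertainty, and the
# output is the amount of observing (number of noisy samples averaged).
import random

reference_uncertainty = 0.05   # desired uncertainty (made-up units)
samples_per_look = 1           # amount of observing: the system's output
gain = 20.0

for iteration in range(8):
    # Observe: noisy samples of a quantity whose true value is 10.0.
    looks = [10.0 + random.gauss(0.0, 1.0) for _ in range(samples_per_look)]
    estimate = sum(looks) / len(looks)    # what some OTHER system would control
    # Perceive uncertainty: for unit-variance noise the variance of the mean
    # is 1/n, so the system takes 1/n as its uncertainty signal.
    perceived_uncertainty = 1.0 / len(looks)
    # Compare and act: the larger the error, the more observing next time.
    error = perceived_uncertainty - reference_uncertainty
    samples_per_look = max(1, samples_per_look + int(round(gain * error)))
    print(iteration, "samples per look:", samples_per_look,
          "uncertainty signal:", round(perceived_uncertainty, 3),
          "estimate:", round(estimate, 2))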

Note that in extracting a measure of uncertainty from the input, this
system is specifically not extracting any other measure -- such as
position, force, velocity, configuration, relationship, and so on. Its
perceptual signal consists, as usual, of a scalar quantity, one
representing the degree of uncertainty, and that is all. Some other
system would have to be extracting signals representing other quantities
in order to control them. If another system were perceiving relative
position, for example, its perceptual signal would vary with relative
position, not with uncertainty in the position. It would be concerned
with making relative position match a reference position, not with
controlling uncertainty toward some goal-state.

So you can see that by referring to somebody or something that would be
better off from reducing uncertainty, you are conveying to me a control
process that aims at being "better off" in this regard, and quite
probably conveying something that you don't mean to say. Furthermore, I
infer from your words that there must be an ability to perceive
uncertainty as an explicit perceptual variable, and as many other
abilities as there are operations required to calculate uncertainty, the
reduction in uncertainty (a rate of change), and the information content
or flow that is involved. What I see is a very complex special-purpose
control system designed to deal with uncertainty.

I don't dispute that such a system might exist. When someone says "Don't
tell me what I'm getting for my birthday, I want it to be a surprise,"
it's likely that there is some subsystem in there that can perceive a
state of subjective uncertainty, and has a non-zero reference level for
it. But that says nothing about all the other control systems that are
operating, does it?

-----------------------------------------------------------------------
Best,

Bill P.

[From Bruce Abbott (960624.1900 EST)]

Bill Powers (960624.1530 MDT) --
    Martin Taylor 960621 16:45

    By making the perception/observation, one is less uncertain about
    the current state of the thing observed (the CEV) than one would
    otherwise have been.

    This says explicitly that the goal of making an observation is to become
    less uncertain about the state of the thing observed. To me, this
    indicates that you're proposing a control system, one with a goal of
    experiencing low uncertainty, and using the means called "observing" in
    order to do it. That is a specific proposal about a specific control
    system, as I read it. This system would extract from the input some
    measure of uncertainty, compare it with a reference-level for
    uncertainty, and on the basis of the error, alter the method or amount
    of observing.

As I understand information theory (and I'm no expert, but have read some
elementary explications of it), it merely provides a way to quantify the
ability of a system ("observer") to predict the value of the signal at some
time t. For example, if a signal can take only one of two values, then
without any further knowledge of the signal, I know that at time t it may
have either one of those values (and no other), but I do not know which
value it will be. The total uncertainty is one bit (I think; it's been a
while since I last had any contact with this). Give me a perfect predictor
of the state of that signal, and I can reduce that uncertainty to zero. The
reduction of uncertainty (from 1 bit to 0 bits) is a measure of the
information provided by that predictor. Information is thus a relative
term, since the reduction provided depends on the amount of uncertainty
present in the signal to begin with.
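
The arithmetic is easy to check (a sketch of the standard Shannon
calculation, assuming the two signal values are equally likely):

# Shannon uncertainty (entropy) in bits: H = -sum(p * log2(p)).
# A two-valued signal with equal probabilities carries 1 bit of uncertainty;
# a perfect predictor reduces it to 0 bits, so it provides 1 bit of information.
import math

def entropy_bits(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

before = entropy_bits([0.5, 0.5])   # no knowledge: either value equally likely
after = entropy_bits([1.0])         # perfect predictor: the value is certain
print("uncertainty before:", before, "bits")
print("uncertainty after: ", after, "bits")
print("information provided by the predictor:", before - after, "bits")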

A perfect control system with a steady reference value (or one known to the
observer) would reduce any uncertainty as to the state of the controlled
variable (CV) to zero, regardless of any disturbances acting on the system.
In fact, one could say that it is the job of a control system to remove any
information about disturbances from the CV. Of course, real control systems
are not perfect as there is always residual variation in the CV not
predictable from variation in the reference level. But one could quantify
how well the control system is doing its job by computing the reduction in
uncertainty it produces as to the state of the CV. Whether this measure of
performance is better in some way than other possible measures, or better
for some purposes, is a question others may be better qualified to answer
than I am.
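
As a toy illustration of that metric (a made-up loop and disturbance; no
claim about any real system), one can run the same disturbance through a
control loop with the output gain turned on and with it set to zero, and
express the difference in spread of the CV in bits:

# Uncertainty reduction as a performance metric: compare the spread of the CV
# with the loop active and with its gain at zero. Using the Gaussian
# approximation H = 0.5 * log2(2*pi*e*variance), the reduction in bits is
# log2(sd_uncontrolled / sd_controlled).
import math, random

def run(gain, steps=20000, dt=0.01):
    o, d, cv_values = 0.0, 0.0, []
    for _ in range(steps):
        d += random.gauss(0.0, 0.05)       # random-walk disturbance
        cv = o + d                          # controlled variable
        o += gain * (0.0 - cv) * dt         # integrating controller, reference 0
        cv_values.append(cv)
    mean = sum(cv_values) / len(cv_values)
    return math.sqrt(sum((x - mean) ** 2 for x in cv_values) / len(cv_values))

sd_off = run(gain=0.0)     # no control: the CV wanders with the disturbance
sd_on = run(gain=100.0)    # control active
print("SD of CV without control:", round(sd_off, 3))
print("SD of CV with control:   ", round(sd_on, 3))
print("uncertainty reduced by about", round(math.log2(sd_off / sd_on), 1), "bits")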

So for me information theory simply provides one metric for analyzing
control-system performance. Information is not something the control system
"uses," it is a description of how well the control system reduces
unpredictable (for the observer) variation in the CV. And what is
"unpredictable" for the observer depends on what knowledge the observer has
of the disturbance to the CV (and note that by "observer" I do not
necessarily refer to a human being; the observer could be a simple
mechanical or electronic device, for example). Hans Blom's adaptive
controller, for example, may "learn" that the variation is cyclical, a sine
wave, let us say. Such "knowledge" allows the controller to offset the
effect of the disturbance on the CV even when the perception of the CV is
temporarily interrupted. The knowledge about disturbance behavior developed
by the adaptive controller allows the controller to reduce uncertainty so
long as the knowledge remains accurate (no drift in the period of the cycle
or other unpredicted disturbances). It could even offer better control than
the standard control system (under these same conditions) by allowing the
system to begin responding to periodic disturbances even before they
actually occur and thus compensating for system response lag.
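
A deliberately crude sketch may help picture that last point (this is NOT
Hans Blom's actual adaptive controller; it simply assumes the cyclical
disturbance has already been identified exactly, and shows a controller
bridging an interruption of perception with that knowledge):

# Crude sketch of anticipating a known cyclical disturbance during a
# perceptual blackout (not Hans Blom's model-based controller). While the CV
# can be perceived, control is ordinary closed-loop; during the blackout the
# controller cancels the PREDICTED disturbance instead.
import math

dt, gain = 0.01, 50.0
o = 0.0
worst_blackout_error = 0.0

for step in range(3000):
    t = step * dt
    d = math.sin(t)                       # the actual disturbance
    cv = o + d                            # controlled variable
    blackout = 10.0 <= t < 20.0           # perception interrupted in this window
    if not blackout:
        o += gain * (0.0 - cv) * dt       # ordinary closed-loop control
    else:
        o = -math.sin(t)                  # feedforward from the "learned" model
        worst_blackout_error = max(worst_blackout_error, abs(cv))

print("largest CV deviation during the blackout:", round(worst_blackout_error, 3))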

Yet none of this requires that one adopt the information metric to describe
how these systems (adaptive or not) behave. The system itself does not have
to compute uncertainty, or uncertainty reduction, does not have to be
"aware" of the information content of its signals, in order to do its job,
not unless the system has been designed to do so. Perhaps the argument
between Bill and Martin would go away if both parties would accept
uncertainty and information simply as measures of the limits imposed on a
signal by the communication channel (the way in which the signal is encoded
and transmitted) and the ability of the control system to reduce variation
in the CV induced by the disturbance (and perhaps by the side-effects of its
own output). Information theory may simply provide another, valid way to
describe how control systems work and to quantify how well they do their
jobs. Whether the view it provides of control system operation is useful,
or more useful than other ways of describing such systems, is an empirical
question whose answer may depend on what you are trying to understand about
the system.

Regards,

Bruce

[Martin Taylor 960625 15:15]

Bill Powers (960624.1530 MDT)

    Perhaps you
    think you are making them clearer by doing this, but for me you simply
    raise modeling issues that are red flags. Consider this, from the above:

    By making the perception/observation, one is less uncertain about
    the current state of the thing observed (the CEV) than one would
    otherwise have been.

    This says explicitly that the goal of making an observation is to become
    less uncertain about the state of the thing observed. To me, this
    indicates that you're proposing a control system, one with a goal of
    experiencing low uncertainty, and using the means called "observing" in
    order to do it.

I didn't say "one perceives oneself to be less uncertain" or "one is
controlling a perceptual function whose output is uncertainty". I said
"one _is_ less uncertain." Now it is possible that some other control
system might have a perceptual function that allowed it to evaluate this
uncertainty, but I was not talking about such an other control system.

The rest of your posting seems to deal only with this other control system,
introduced by you, apparently as a straw man you can knock down easily.

    you can see that by referring to somebody or something that would be
    better off from reducing uncertainty, you are conveying to me a control
    process that aims at being "better off" in this regard, and quite
    probably conveying something that you don't mean to say.

This last clause appears to be true.

However, taking off from this point, let us observe that a controlled
perception is indeed more stable than the "same" perception would be if
the output gain were set to zero, leaving the perception uncontrolled.

Nobody observes this stability, nobody controls for "better stability"
(unless you spread your range of interest to include reorganization and
evolution), but nevertheless, control brings stability. Now, going to
any of a myriad of writings of yours, I could take your comment on what I
said and change only one word to make it a comment on your own writings:

    I infer from your words that there must be an ability to perceive
    [stability] as an explicit perceptual variable, and as many other
    abilities as there are operations required to calculate [stability],...

    What I see is a very complex special-purpose
    control system designed to deal with [stability].

Now, would those comments be a fair expression of what someone reading
your writings would be expected to say? I think not.

"Stability" has a lot in common with "uncertainty". They aren't the same,
but they share a lot of conceptual common ground. Something that is more
stable is often less uncertain to a wide range of possible measuring
instruments. Something that is less uncertain may well be more predictable
to the measurer of the uncertainty, and something that is more stable is
definitely more predictable than something that is less stable.

Both stability and uncertainty deal with more than a single measured value.
Stability reflects a whole set of measured values of something. Uncertainty
reflects a measured value together with a set of values to which the measuring
instrument assigns some prior probabilities. Both are abstractions, not
known to, or used by, the control system whose stability/uncertainty
parameters are being assessed.

So, lower your red flag and return to the free market of ideas.

Martin