[From Bill Powers (940531.1800 MDT)]
Bill Cunningham (940530.1530) --
The most interesting aspect of your Paul Revere example is that
nowhere in it is it necessary to speak of controlling for low
uncertainty. What Revere is perceiving is, as you say, 0 lights, 1
light, or 2 lights. This is translated into higher-level verbal
perceptions: no attack, attack by land, or attack by sea. The
reference level specifies a logical relationship between the place
where an attack occurs and the place where the troops are to be
deployed.
So initially there is no light, and Revere perceives "no attack." No
attack implies no error and no error requires no action. No
uncertainty in that, unless Revere is wondering whether the person
who is supposed to show the light got waylaid, fell ill, ran out of
lantern fuel, went to the wrong church-tower, or any of the other
million and one things that could gang agley. But if Revere does not
worry about these explanations for the lack of a light, then he is
certain that there is no attack yet and the Minutemen are not
deployed (as they should not be).
Now one light appears. This constitutes an error, which Revere hopes
to eliminate by deploying troops to the right place. But now
additional information is required: one light or two lights. There
is one light; Revere perceives that there is an attack by land,
which allows selecting one of the two courses of action implied by
the appearance of any lights. Again, he might wonder whether a
tree branch is blocking the second light, if the second lantern
broke, if he is standing too far away or the lanterns have been
placed too close together for him to tell the difference between one
light and two, and so forth. But again assuming that he does not
think of these possibilities, he is certain of three things: an
attack is occurring, it is occurring by land, and the Minutemen are
not deployed.
In the case of two lights, he might wonder if he is seeing a
reflection of a single light in the pane of an open window, or if
someone else turned on a second light just as the first light came
on, but most likely he is now certain that an attack is under way,
it is coming by sea, and the Minutemen are not deployed.
In each case, he then takes action to control the relationship
between the meaning of the signal as he remembers it (wait, was it
one if by land, or two if by land?) and the action of alerting the
troops as to where to go. "One light AND send the troops to the
seaward route" would result in an error, since the perceptual
function is "no light OR (one light XOR sea route)" -- I think --
and the reference signal is TRUE.
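For concreteness, here is a minimal sketch of that logical perceptual
function and its TRUE reference condition. The Python rendering and
the names (perceptual_function, troops_on_sea_route) are my own
illustration, not part of the original exchange:

    # Sketch of the logical perceptual function described above.
    # The perception is TRUE exactly when the lights and the deployment
    # are consistent with "one if by land, two if by sea" (or no attack yet).
    def perceptual_function(lights, troops_on_sea_route):
        # lights: 0, 1, or 2; troops_on_sea_route: True or False
        no_light = (lights == 0)
        one_light = (lights == 1)
        # "no light OR (one light XOR sea route)"
        return no_light or (one_light != troops_on_sea_route)

    REFERENCE = True  # keep the logical perception continuously TRUE

    # "One light AND send the troops to the seaward route" gives an error:
    assert perceptual_function(1, True) != REFERENCE
    # The consistent pairings give no error:
    assert perceptual_function(1, False) == REFERENCE  # land attack, land route
    assert perceptual_function(2, True) == REFERENCE   # sea attack, sea route
    assert perceptual_function(0, False) == REFERENCE  # no attack, no deployment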
The only reason for bringing up the subject of uncertainty is to
consider the possibility that Revere is wondering whether he is
perceiving or remembering correctly. If he doesn't wonder about
that, there is no uncertainty. He simply acts to control a logical
perception made up of perceptions of lights and perceptions of
troops going the right way. He keeps this logical perception
continuously TRUE.
You say
Controlling for reduction of uncertainty, Revere positions
himself to observe whether 0 or 1 or 2 lights are displayed.
But he does not have to be controlling for reduction of uncertainty
in order to carry out this task. "Controlling for uncertainty" is
your way of CHARACTERIZING his attitude toward what is going on.
Even if there is some uncertainty in Revere, that is not what he is
controlling for -- even if the uncertainty is reduced. What you are
doing is like saying that a person who buys a new Buick is
controlling for raising the percentage of Buick owners in his
neighborhood. You can show with incontrovertible mathematics that
this is indeed a consequence of his buying a Buick, but he is most
probably not controlling for that side-effect at all.
You say "formal calculation of the uncertainty requires
knowledge of the decision space of each recipient."
But that is precisely what we cannot know, as I have tried to bring
out in my discussion of what Paul Revere might actually be uncertain
about, if he thinks about it enough. And besides, _our_ calculation
of the uncertainty is like our calculating the effect on the
percentage of Buick owners: it is irrelevant to what is going on in
the person. The only way in which we could say that a person is
controlling for low uncertainty would be to show that that person is
_perceiving_ uncertainty, which would require that person first to
perceive the size of the decision space and all the associated
probabilities, and second to perform the appropriate calculations.
Only then would there be a perceptual signal representing
uncertainty, and only if such a signal existed could we say that
there is or may be a control system controlling uncertainty relative
to a reference level of zero.
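As a concrete illustration of what "the appropriate calculations"
would have to be, here is a minimal sketch of a Shannon-uncertainty
computation over an assumed decision space. The probabilities are
invented for the example and are not taken from Cunningham's post:

    # Shannon uncertainty over an assumed decision space (invented numbers).
    import math

    decision_space = {"no attack": 0.5, "by land": 0.25, "by sea": 0.25}

    uncertainty_bits = -sum(p * math.log2(p)
                            for p in decision_space.values() if p > 0)
    print(uncertainty_bits)  # 1.5 bits, for these assumed probabilities

Only if something like this were actually being computed and
represented as a perceptual signal could one speak of controlling it
relative to a reference level of zero.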
Suppose that there appeared in the church-tower not one light, not
two lights, but three lights. The third possibility was not in
Revere's scheme, his "decision space." Yet how long would it take
Revere to realize that the British had split their forces and were
attacking both by land and by sea? And how much longer would it take
him to realize that by telling the troops to split into two groups,
he could ensure that the desired delay could still be brought about
because now the smaller groups would each be dealing with a smaller
number of British?
Or suppose that Revere saw one light, then two lights, then one
light, then two lights.... Being a hero, he would quickly realize
that an attack is under way but the signaller was unable to
determine whether it was by land or by sea. And he would probably
take the same action to cause a delay of the attack, even though the
delay would not be as great as it would be if the Minutemen were not
split up.
In order to set up this example in a way appropriate to analysis by
information theory, it's necessary to think open-loop. One
perceptual situation leads to one action, a different perceptual
situation leads to a different action. But the PCT analysis says
that the action is intended to control the perception, not the
perception to control the action. This means looking beyond the S-R
appearance to see what the controlled perceptions might be, which is
what I have been doing above.
Once we have found a plausible way to set up this situation as a
true control process, we can consider further possibilities that did
not arise in constructing the formulation that fits the
information-theory approach. For example, what would Revere do if
he saw two lights, and simultaneously heard that the troops had
given up waiting for him and had decided to gamble on committing
themselves to the sea route?
As the proposed control system is designed, he would do nothing. He
perceives that there are two lights AND the troops are deployed to
the sea route, which makes the perceptual signal TRUE, matching the
reference condition, and leads to no error. No error, no action.
Instead of careening around Middlesex, he could go home and have a
beer.
Under the other way of visualizing the scenario, in which his ride
and his message are simply triggered off by the occurrence of one or
two lights, he would go riding around Middlesex shouting that the
British were coming by sea, with nobody left to hear him who might
have benefited from the message.
And if he heard that the troops were already heading for the land
route, he would still carry out the plan, telling an audience of
noncombatants that the British were coming by sea.
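The contrast between the two readings can be put in the same sketch
notation as before (again, my own illustration with made-up names,
not anything from the original post):

    # Same invented perceptual function as in the earlier sketch.
    def perceptual_function(lights, troops_on_sea_route):
        return lights == 0 or ((lights == 1) != troops_on_sea_route)

    REFERENCE = True

    # Closed-loop reading: act only while the perception departs from the
    # reference. Two lights with the troops already on the sea route
    # means no error, hence no ride.
    def control_loop_action(lights, troops_on_sea_route):
        error = perceptual_function(lights, troops_on_sea_route) != REFERENCE
        return "ride and redirect the troops" if error else "no action"

    # Open-loop (S-R) reading: the lights simply trigger the ride,
    # whether or not anyone who needs the message is left to hear it.
    def trigger_action(lights):
        return "ride and spread the alarm" if lights else "no action"

    print(control_loop_action(2, True))   # "no action" -- home for a beer
    print(control_loop_action(2, False))  # error -> rides to redirect the troops
    print(trigger_action(2))              # rides in either case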
Listen to your own words:
So long as he perceives 0 lights, he remains uncertain and in
the dark. Once he perceives 1 or 2 lights, he controls for
riding and spreading the alarm.
You have made no connection between the fact that he may be feeling
uncertain and "in the dark," and what he is controlling. His state
of uncertainty, if there is any, is a consequence of what he does
under any input condition, not vice versa.
--------------------------------------------------------
I do not in any way doubt your statements about decision spaces, the
implied world of probabilities, or the process of making decisions
based on probabilistic knowledge (like triage). It is possible to
estimate decision spaces, to assign realistic probabilities to the
alternatives, to calculate the best course of action based on the
outcome of the calculations, and to produce the decided-upon action.
But I think that the only people who behave this way are those who
have been taught to behave this way or have invented it for
themselves -- who have learned to see the world in terms of
probabilities and disjoint alternatives, who think in terms of
making either-or decisions, and who believe that once the best
course of action has been selected, it should simply be carried out.
It is quite possible to set up guidelines, to provide computing
services to do the necessary calculations for people who don't know
how to do them themselves, and to teach people how to use the
results in deciding on preselected courses of action.
It's even possible that this sort of approach, once learned, can
lead to better results than acting at random, in the long run. A
little better, and once in a while even a lot better.
But none of that has anything to do with a theory of mind and
behavior. It has no more to do with the actual processes in a brain
than does the ability to design electronic circuits, or make better
arrowheads. Doing these things -- calculating the best decision,
designing a circuit, making an arrowhead -- requires the use of
mental capacities which are common to all people, but these things
are just PRODUCTS of those capacities of the brain, as music is the
PRODUCT of someone playing an instrument. You are talking about
things that a brain can learn to do, but not about how the brain
works. A real model of the brain would explain how a person can
learn to calculate probabilities, and to reason out logical
decisions.
It would also explain how a person can learn to calculate or
estimate probabilities incorrectly, and evaluate the decision space
incorrectly, and choose a course of action that will make matters
worse instead of better, while still believing that the result is
good. The same faculties of the mind are used to do these things
wrong as to do them right. The particular calculations a person
carries out, the particular way a person implements actions, are
irrelevant; they are just examples of the same brain working with
the same basic capabilities.
This is why in my proposed model of the hierarchy I speak only of
general capabilities, such as the capacity to perceive and control
categories, sequences, logical functions, principles, and system
concepts. The theory doesn't care WHICH categories, sequences,
logical computations, etc. are perceived and controlled. The
particular instances that we see individuals controlling are all
learned; they are ways in which the brain can come to be used at
each of these levels. But those instances tell us nothing about the
brain if we get hung up on any particular kind of behavior at a
particular level. If a person can calculate probabilities, this
tells me that he can calculate, which is the basic capacity I am
interested in. I don't care WHAT he calculates; that's a matter of
happenstance and won't be exactly the same in any two people.
So it could be that in your work with decision-making under adverse
circumstances, you are working out methods and principles that can
be valuable. But you are INVENTING and DESIGNING these methods and
principles; they are not an inborn aspect of the brain. They are
products of a brain.
--------------------------------------------------------------
Best,
Bill P.