[From Bill Powers (940601.1220 MDT)]
Bill Cunningham (940601) --
Now I am sure we are arguing about different things, although in
some places there is a fuzzy interface between our points of view.
What I am after is a model of the way the brain is organized, in
terms of fundamental classes of functions. Because of this, specific
behaviors are of little theoretical interest in themselves. At the
lower levels, you and I would have little trouble finding agreement,
because it is clear that an example of control is only an example of
a class of control. Controlling the posture of the body is simply an
example of configuration control, so the way a particular person
holds his or her body only illustrates this class of controlled
variables. Controlling the middle note of a chord to change it from
major to minor is another example of configuration control -- in
people who have learned the Western musical scales. What is
universal is the control of configurations, not the control of
particular configurations.
Agreement becomes more difficult at the higher levels.
... but the central point I wanted to portray was shifting
control for X to control for Y. At the higher levels and for
the problems I have the most interest, the result of action is
usually not immediate and the report of results is certainly
not immediate. So, one sets forth to do some things that will
change a future perception of the world. That involves use of
the imagination loop to predict that the future perception will
agree more closely with what is controlled for. Once the
projected future is satisfactory, the candidate actions are
initiated.
This is a particular program that might well run in some person's
higher-level systems. It is distinctly program-like in character.
One person might think of a goal as a "future state of the world,"
and reason logically about possible methods for reaching that goal,
choosing among (symbolically represented) courses of action, testing
them (logically) in imagination, and finally settling on the course
that seems most likely (subjectively or after formal calculations)
to achieve the desired state. Such a way of behaving would be a
valid example of program-level control, or something up there in
that area.
An equally valid example would be in another person who also reasons
about the world logically, but picks the first possible mode of
action that might work, without testing or even considering any
alternatives. Still another example would be a person who reasons
with words, deciding, for example, that if you eat "nutritional"
foods, you will stay well. And still another person might simply
follow rules of conduct that were adopted directly from others
without thought: if someone hits you, hit him back.
My point is that people follow all sorts of rules, from the formally
mathematical to the intuitive and merely habitual. You can argue
that a person following some other set of rules with which you find
fault is "illogical," but the fact is that people do use illogical
rules as well as logical ones. All that people have in common at
these levels is THE CAPACITY TO THINK AND ACT IN TERMS OF RULES. A
proper theory of behavior must explain how ALL rules are followed,
whether we think they are wise ones or not.
You are proposing a particular example of rule-following, or
logical/computational control. I see nothing to keep a person from
becoming organized this way, but I also see nothing basic in this
particular organization over any other. You do mention one general
consideration: the ability of a control system at one level to
adjust reference signals for lower systems. This includes, of
course, the possibility of setting some reference signals to zero
and raising the level of other reference signals from zero, which is
equivalent to changing which lower systems are being used for the
higher-level control. However, you seem to be considering only on-
off control, implying that the only computational ability being used
is like Boolean logic. There are many other settings for lower-level
reference signals besides "on" and "off"; more sophisticated control
would be quantitative, with lower-level reference signals being
adjusted continuously according to the results of computations with
continuous variables.
You are proposing a way of using the computational or rule-following
level that relies on certain types of computations:
The simple comparators could be replaced by what I'll call
Kanerva boxes, the output of which is the Hamming distance
between the perceptual input and the associated template. That
is a scalar measure of mismatch, and thus uncertainty. Unmatched
coordinate pairs denote the specific uncertainty that needs
resolution.
Assuming you could reduce these general propositions to something a
person could learn to do without pencil and paper, this strategy
might well lead to a useful mode of control. Implementing the
various processes to which you allude would certainly require a
computational level of perception, comparison, and action. But you
would find this type of behavior only in people who had learned to
implement these rules by simulating them in their heads at the logic
level. It is not a fundamental kind of behavior; it's just one way
this level of perception and control could be used.
Maybe it's a good way; maybe we should all use it. But that is
irrelevant to the question of identifying the fundamental classes of
perception and control that make up the human hierarchy. The only
reason we might be interested in seeing such a mode of computational
control in action is to help us clarify just what capacities for
computation we should include in this level. Assuming that a person
could learn to compute Hamming distances in his head, what sorts of
computations are required? But we would have to consider other kinds
of organization at this level as well, because the general
capacities can be found only through seeing them employed in many
specific ways.
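Just to make the computational burden concrete, here is a minimal
sketch of such a "Kanerva box" comparator -- my own illustration, not
anything taken from Kanerva or from Bill C.; the bit-vector encoding
and the variable names are assumptions made purely for the example.
Its output is the Hamming distance between a binary perceptual input
and a stored template:

    # Illustrative sketch only: a comparator whose output is the
    # Hamming distance between a binary perceptual vector and a
    # stored template. The encoding into bits is assumed.

    def hamming_distance(perception, template):
        """Count the coordinate pairs that fail to match."""
        if len(perception) != len(template):
            raise ValueError("vectors must be the same length")
        return sum(p != t for p, t in zip(perception, template))

    template   = [1, 0, 1, 1, 0, 0, 1, 0]   # stored pattern
    perception = [1, 0, 0, 1, 0, 1, 1, 0]   # current input
    mismatch = hamming_distance(perception, template)   # -> 2

The scalar "mismatch" is the only thing such a box reports; whatever
is then done with it is a further, learned piece of organization.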
What Revere is perceiving is, as you say, 0
lights, 1 light, or 2 lights. This is translated into
higher-level verbal perceptions: no attack, attack by land, or
attack by sea.
Thereby resolving ambiguity about the possible states of the
world.
You're just telling me that you can see it that way if you want to.
I have no objection, but I see no compelling reason to refer to an
ambiguity that was not necessarily felt by anyone. The control-
system design I offered for this situation would work perfectly well
in simulation without anyone's ever mentioning uncertainty or
ambiguity, and more importantly, without including in the simulation
any "uncertainty" or "ambiguity" variable. I see probability
calculations as being similar to the calculations being made by a
bettor at the horse races. These calculations may well influence the
bettor's behavior, but they do not influence the way the horses run.
Your ability to predict Paul Revere's behavior may well involve your
uncertainty about how many lights are going to show, but that does
not mean that Paul's behavior involves the calculation of
probabilities BY HIM. Paul Revere is in a comfortably deterministic
situation. No lights and no troops employed --> do nothing, and so
forth for all the cases. There are no decisions to make. All that
Paul has to do is keep the appropriate function of light-perceptions
and perceptions of Minuteman deployment TRUE. That is how the PCT
model would handle this situation -- not by calculating
uncertainties or proposing that inputs cause outputs according to
rules.
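For what it's worth, here is a bare-bones sketch of that
deterministic arrangement, of the kind the simulation I mentioned
would use (the code is only my illustration; the three cases and the
names in it are assumed for the example). The controlled perception
is the truth of "deployment matches lights," with a reference value
of TRUE and no uncertainty variable anywhere in the loop:

    # Illustrative sketch: keeping a logical perception TRUE.
    RESPONSES = {0: "do nothing",
                 1: "deploy for attack by land",
                 2: "deploy for attack by sea"}

    def perceived_match(lights, deployment):
        # Perception: is the current deployment right for the lights?
        return deployment == RESPONSES[lights]

    def control(lights, deployment):
        reference = True                    # keep the match TRUE
        if perceived_match(lights, deployment) != reference:
            deployment = RESPONSES[lights]  # act to correct the error
        return deployment

    state = "do nothing"
    for lights in (0, 2, 1):
        state = control(lights, state)      # tracks the lights

Each new value of "lights" simply produces the action that restores
the controlled perception to TRUE; no probabilities are computed.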
And to do that he had to deliver the appropriate message, which
was uncertain until the moment received (perceived and decoded,
if you like).
Here is a message: you forgot to zip your fly. Tell me, what was
your state of uncertainty about that message before you got it?
I think you are forcing a way of speaking onto situations where it is
inappropriate. Sometimes we are uncertain about the content
of a message, but that is only when we knew we were going to get it
or had some concept of what its content might be. And even then, not
knowing does not necessarily lead to a state of uncertainty. When I
get a telegram I don't immediately start wondering who died, but I
had an aunt who did wonder, every time, even when the message turned
out to be "CONGRATULATIONS ON 80th BIRTHDAY." It's just a matter of
how you've learned to act, nothing fundamental.
What you are doing is like saying that a person who buys a new
Buick is controlling for raising the percentage of Buick owners
in his neighborhood.
False simile. What I am doing is like saying that if a person
controls for raising the percentage of Buick owners in his
neighborhood, buying one will achieve what is said to be
desired.
That is exactly the point that Rick and I are trying to make. You
can't control something if there is no perceptual signal
representing its state. To control for uncertainty, a person must
have a perceptual signal indicating the magnitude of uncertainty
that is present. And to have such a perceptual signal, the person is
required
first to perceive the size of the decision space and all
the associated probabilities, and second to perform the
appropriate calculations.
You did not reply to that comment. Instead you veered off onto
another subject:
In your post of 940527.1400, you suggest "anxiety" as the thing
controlled for. That might be a better term, or perhaps the
connection we seek. But if "anxiety" is okay for you, please
tell me how you are going to show that as a controlled
variable. Polygraph?
I'd ask the person, but that's a different question. I was pointing
out that if uncertainty is to be a controlled variable, there must
be a perceptual signal representing the perceived degree of
uncertainty. When you propose that people control for a low level of
uncertainty, you are necessarily (under PCT) proposing that they
contain perceptual signals representing degree of uncertainty, and
therefore have input functions that are doing the calculations
necessary to provide such a signal. All control is control of
perceptual signals, in PCT.
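Just to show what such an input function would have to be doing, here
is a sketch of one possible version -- mine, with Shannon entropy
chosen arbitrarily as the measure of uncertainty; nothing in this
discussion fixes the actual formula. It turns estimated probabilities
over a decision space into a single "uncertainty" perceptual signal
that could then be compared with a reference level of zero:

    # Illustrative sketch: the input function that would have to
    # exist if "uncertainty" were a controlled perception. Shannon
    # entropy is only one possible (assumed) measure.
    import math

    def uncertainty_signal(probabilities):
        """Perceptual signal: entropy, in bits, of the decision space."""
        if not math.isclose(sum(probabilities), 1.0):
            raise ValueError("probabilities must sum to 1")
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    # Three equally likely outcomes: maximal uncertainty for that space.
    signal = uncertainty_signal([1/3, 1/3, 1/3])   # about 1.585 bits
    error  = signal - 0.0                          # reference of zero

Whether any real nervous system contains something like this is
exactly what would have to be shown.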
I went on:
Only then would there be a perceptual signal representing
uncertainty, and only if such a signal existed could we say
that there is or may be a control system controlling
uncertainty relative to a reference level of zero.
And you replied:
You are changing the game, perhaps prematurely. The original
question was whether such a system could exist, not whether one can
prove a particular system does control for reduced
uncertainty. Proof of the latter would certainly constitute
proof of the former. Absence of proof of the latter does not
disprove the former.
I am not changing the game, you are. The PCT game says that if x is
controlled, x must be a perception. I pointed out that if a
controlled x is "uncertainty," then "uncertainty" must be a
perception. I pointed out the implication that the related input
function must therefore be doing the probabilistic calculations that
generate a number standing for uncertainty. How is that changing the
game?
As to my comment that it is possible to estimate decision spaces and
calculate probabilities, you missed my point altogether and started
talking about which people might be more likely to gravitate toward
positions in which they did such things, and other irrelevancies. My
point was that using these methods is a matter of personal taste and
training, not anything basic to human organization.
And you totally misread my intent in saying this:
Where I know you are dead wrong (by observing counter-examples)
is the universal assertions that (a) "who believe that once the
best course of action has been selected, it should simply be
carried out;" and (b) "It is quite possible to set up
guidelines.......and teach people how to use the results..."
I said only that there are people who believe such things, and that
it is possible to set up guidelines and teach people to use the
results. I didn't say it worked well, I just said it's possible to
do these things. My point was that how we approach problem-solving
or triage or preparing for contingencies is a matter of how we have
learned to use the basic capacity for computation and rule-
following. I happen to believe that most of the methods that have
been invented are pretty useless in real situations, and you seem to
agree. But that is beside the point. The point is that we can learn
ANY method, whether it's a good one or a bad one. It's the capacity
to learn methods that matters in a model of behavioral organization,
not the particular methods that are learned.
The principles, I think, would belong to a process model,
whereas the methods would be learnable tools for exploiting the
model. But contriving an adequate process model would be a
great leap forward, even if it is a model of what a few select
people have learned to do with their brain. Given the power
of PCT, how can it contribute to a solution? In particular,
does PCT explain how control is shifted from one variable to
another, and on what basis?
PCT explains THAT control is shifted from one variable to another by
raising and lowering the values of reference signals sent to
existing lower-level systems. Shifting from one variable to another
is just a special limiting binary case of adjusting the relative
magnitudes of reference signals. We adjust our driving speed to
conditions, but not just between "fast" and "stop." We modify our
demands in a negotiation, but not just between "give me what I want"
and "go to hell." Higher systems adjust many lower-level reference
signals at once, and each lower-level reference signal is the sum of
many higher-level outputs, positive and negative. In the HPCT model
there is no simple relationship between a reference setting at one
level and error signals at higher levels -- except when we're trying
to describe the underlying principle in terms of one system at each
level.
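A bare-bones sketch of that arrangement may help (all the weights and
dimensions below are invented for illustration): each lower-level
reference signal is a signed sum of several higher-level outputs, and
"shifting control" is just the limiting case in which one of those
sums is driven to zero while another is raised from zero:

    # Illustrative sketch: lower-level references as signed sums of
    # higher-level outputs. All numbers are invented.
    higher_outputs = [0.8, -0.2, 0.5]   # outputs of three higher systems

    # weights[i][j]: contribution of higher system i to lower reference j
    weights = [[ 1.0, 0.0, 0.3],
               [ 0.5, 1.0, 0.0],
               [-0.4, 0.2, 1.0]]

    lower_references = [
        sum(out * weights[i][j] for i, out in enumerate(higher_outputs))
        for j in range(3)
    ]
    # Change higher_outputs a little and every lower reference shifts
    # in proportion; on-off "switching" is only the binary extreme.

Quantitative, continuous adjustment of this kind is the general case;
Boolean switching between lower systems is a degenerate version of it.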
As to HOW this shifting is done, that is a matter for modeling to
propose and experiment to test. There may be as many answers as
there are people. On a more engineering level, perhaps we can find
simple mechanisms that would explain how, for example, a logical
error can be turned into a reference signal for the degree of
pressure on an accelerator pedal. That's a matter of searching for
designs and seeing which ones work like real people and which ones
don't.
BTW, PCT is a model. A very good model, but a model
nonetheless. Kanerva's algorithm is based on neurophysiology
and a model of pattern matching in the brain. Models are products
of brains.
Too true. But the basic question I'm trying to keep before us is
WHAT ARE WE TRYING TO MODEL? You seem to be trying to model
particular ways of doing things; I am trying to model the basic
capacities needed to do any of those things, either the ones you
propose or any others.
Kanerva may base his algorithm on a model of pattern matching in the
brain, but that is not what neurophysiology tells us. If you assume
that pattern-matching takes place (which I definitely do not
assume), then you will interpret what little information we have
about neural functions in the light of your concept of pattern-
matching, and will no doubt come up with something. But suppose that
perception does not work by pattern-matching? Then what good is a
neurophysiological model of pattern matching?
--------------------------------------------------------------
Best,
Bill P.