Finding the will

[Martin Taylor 2000.06.18 23.32]

[From Bruce Gregory (2000.0618.1758)]

Martin Taylor 2000.06.17 09:04

A long time ago, when I was first learning PCT, Rick encouraged me
when I "correctly" identified "purpose" with a reference for
something to occur.

I think this identification is unfortunate. One doesn't normally think of
having a "purpose" or a "goal" to maintain one's body temperature at
98.6 F. Clearly it is reasonable to talk about a control
system in the hierarchy with this reference level, however.

Whether the purpose is conscious or not does not affect whether the
purpose of the acts of a control system is to bring the control
system's perception to its reference value. I guess one has to
distinguish the "purpose" of the constructor of the control system
from the purpose of the control system. Maybe that's where your
demurral comes from?

"Will" seemed to me to go beyond that, when I said: [Martin Taylor
2000.06.02]

The concept of "will" is a bit strange here, since it is recursive
within PCT. The "will" to change something is a reflection of a
difference between a perception and a reference level for that
perception. This implies that the thing to be changed is perceived,
and that for some purpose higher in the hierarchy the state of that
perception matters.

The implication of this is a bit different from what Rick's partial
quote suggests. To "will" something to occur implies a perception
that the thing willed is not now occurring. What is perceived is a
difference between a reference and a perception (i.e. the existence
of an error). One does not normally "will" an existing state of
affairs. One does not normally perceive the value of an error signal,
but to perceive oneself to be "willing" something necessarily asserts
that one perceives the existence of the error.

Again, I think this terminology is unfortunate since you can always
eliminate the need to perceive the existence of an error by postulating a
higher order control system.

Think of it the other way round. Of course error doesn't have to be
perceived for it to be corrected. In "classic" HPCT, error is never
perceived. But we are talking about a situation in which error _is_
perceived. You don't "will" something unless you perceive error in
the PCT sense.

The thought "this isn't right" may have
nothing to do with error in the HPCT sense in which the term is used.

In "classic" HPCT the only signals that contribute to perceptual
signals are sensory signals or lower-level perceptions (and
imaginations). But in HPCT more generally, input to a perceptual
function can come from anywhere. The question one has to ask is
whether deviating from classic HPCT buys anything in describing a
phenomenon. This is a case where I think it does. Subjectively, one
clearly perceives that there is a state of affairs that needs
changing, which is, by definition, a perception of the existence of
error.

No, the problem here seems to be the ambiguous use of the term "perceive".
One does not perceive that something needs changing, one perceives a state
of affairs.

However, we are talking about a situation in which one _does_
perceive that something needs changing. That--the need for change--is
the state of affairs that is perceived. And it is the perception of
the "need for change" (i.e. error) that makes the question of "will"
interesting.

The issue is not whether one can control without perceiving error,
but whether one can _consciously_ perceive error without "perceiving"
it in the PCT sense. If that is your claim, there's a philosophical
mountain to climb, and I don't own crampons. However, without
claiming that it is possible consciously to perceive something that
cannot be a perception in a control system, you can't get away from
the notion that the error in a control system is at least sometimes
perceptible, at least in the form of simultaneously perceiving an
existing state and its difference from a desired (reference) state.

Martin

[From Bill Powers (2000.06.18.1146 MDT)]

Bruce Gregory (2000.0618.1758)--
Martin Taylor (2000.06.17 09:04) --

Bruce:

I think this identification is unfortunate. One doesn't normally think of
having a "purpose" or a "goal" to maintain one's body temperature at
98.6 F. Clearly it is reasonable to talk about a control
system in the hierarchy with this reference level, however.

One also doesn't think of standing upright as being a purpose or goal while
one is doing it -- but there is a control process going on while one does
this, so there must be a reference signal, perception, error signal, and so
on.

The terms purpose and goal are nontechnical terms, representing unorganized
attempts to describe how we somehow know or intend what is to happen before
it happens, or keep it going "on purpose", in a way that is not exactly
what we mean by "prediction." If you intend, or if your purpose is, to wash
your hands, you're not just predicting that you're going to wash your
hands; you somehow "know" you're going to do it, in a sense that seems
stronger than mere prediction. If you're washing your hands already and the
purpose is turned off, you will instantly cease washing your hands.
Something unusual or unexpected, and more than trivial, would have to
happen to prevent the hand-washing. Ordinary disturbances, such as a lack
of water when you turn the faucet handle, will not keep you from washing
something disgusting off your hands.

PCT solves these old problems first by identifying "what is to happen" as a
_perception_ rather than an objective event, even in cases where most
people would automatically assume an objective event. If you're aware that
something is happening, the fundamental assumption of PCT is that the
something is a perception. This isn't to say it's an hallucination: the
experience could, for all we know, have some sort of counterpart in
reality. We just have no way of proving to ourselves that it does, with the
same certainty that we can know that a perception, an experience, is
happening.

Once we accept all that we experience as consisting of perceptions, which
is to say afferent perceptual signals in our brains, and once we have
grasped how a control loop works, we can easily see that the only part of
the control loop that fits what we call intention, volition (willing), or
purpose is the reference signal. The error signal doesn't fit, Martin,
because it is at least as much caused by external events, disturbances, as
by changes in a reference signal.
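
A minimal sketch of a single loop in Python (illustrative gains and a simple
slowed-integrator output, nothing more) makes the point concrete: the
reference signal specifies what is to happen, while the error signal is
driven at least as much by the disturbance as by the reference.

def run_loop(r, disturbances, gain=5.0, slowing=0.1):
    """One elementary control loop; the error stays inside the comparator."""
    p = 0.0   # perceptual signal
    o = 0.0   # output quantity
    trace = []
    for d in disturbances:
        e = r - p                  # error signal, computed in the comparator
        o += slowing * gain * e    # slowed integration of error into output
        p = o + d                  # controlled quantity = output effect + disturbance
        trace.append((p, e, o))
    return trace

# The perception ends near the reference despite a steady disturbance, so the
# error is small even though it was the disturbance, not the reference, that changed.
print(round(run_loop(98.6, [-3.0] * 200)[-1][0], 2))   # ~98.6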

If we can identify the cause of an action of our own (as we perceive it), I
think we tend to think of the action as _not_ willed. We call actions like
jerking a hand back when we touch something very hot "involuntary," where
"voluntary" and "willed" mean the same thing. Of course we can fight the
involuntary action, as when we pick up something that is not only
uncomfortably hot, but valuable. We can "will" ourselves to put it down
carefully even though holding onto it hurts. This willing, in that case,
seems opposite to the tendency to act involuntarily: it comes from within
rather than from outside. The conflict is between an external cause and an
internal one. That is what brings the internal cause to attention. We can
see and feel what tends to cause the involuntary action -- but what causes
the opposition to it?

I'm speaking here more or less as if trying to figure out what is going on
without any theory. We do, however, have a theory that explains these
phenomena. The voluntary aspect of our actions arises because reference
signals exist. Of course so do the involuntary aspects, but the involuntary
actions usually seem to arise because of built-in reference values: we are
wired up to act as if we desire sensations like burning to have zero
magnitude. So the difference between voluntary and involuntary acts seems
to be in the source of the reference signals. If the reference signals are
built-in, not adjustable by higher systems, we call the actions that arise
from them (such as breathing) involuntary, even if we can temporarily
oppose them by an "act of will." If the reference signals come from higher
in the hierarchy, we call the actions that arise from changing the
reference signals voluntary, and those that arise from disturbances not so
much automatic or involuntary as optional. If a dealer quotes you a price
on a car that is higher than the price in the advertisement, you may react
angrily to this disturbance, but that reaction is optional: you could also
change your mind about wanting that car, in which case there would be no
effective disturbance.

Martin:

To "will" something to occur implies a perception
that the thing willed is not now occurring.

I don't think this is true of all cases of willing. If it isn't, we can't
base any general analysis on this assumption. We have to try to distinguish
between what we know theoretically must be occurring and what, without
benefit of theory, seems subjectively to be occurring. It doesn't matter
what willing "implies," because implication is always within the framework
of some theory. The basic question we're trying to answer is _what
phenomena are denoted_ by terms like willing. The theory then is supposed
to offer an explanation of the phenomena.

I can breathe in a regular rhythm yet feel that the breathing is willed
rather than automatic. We can often tell when someone is pretending to be
asleep by noting that the rhythm of breathing, while slow and regular, is
not the same as the rhythm when the person is really asleep. I just had a
pulmonary test which required, at one stage, relaxing and "breathing
normally." Breathing normally on purpose is almost a contradiction in
terms. How can I know how I breathe when I am not conscious of breathing?
I'm fine, by the way, except for some permanent loss of function from all
those years of smoking.

So I can't accept that willing corresponds to error signals. Remember, to
know that a perception is not the same as a reference signal would require
that something sense both the reference signal and the perceptual signal
and compare them. We know of such a thing, the comparator, but there is also
the possibility that it could be a perceptual input function, receiving an
imagined version of a reference signal and a real version of a perceptual
signal, and computing the difference between them -- a relationship. I
would prefer to interpret all cases of apparent perception of an error
signal this way; it's worth the very slight complication to avoid greater
complication of a very nice general proposition: that all perception
consists of afferent signals. One doesn't give up such an elegant
generalization easily.
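
As a sketch of that alternative (the numbers are only illustrative), a
relationship-type input function takes an imagined copy of the reference and
the real perceptual signal and reports their difference, so the "perceived
error" is itself an ordinary afferent perception:

def relationship_pif(imagined_reference, perception):
    """Perceive how far the perception is from what is intended, using only
    afferent (perceptual and imagined) signals; no error signal is tapped."""
    return imagined_reference - perception

# The synthesized value tracks the lower system's real error only roughly,
# since the imagined copy need not equal the actual reference signal.
print(relationship_pif(imagined_reference=10.0, perception=7.5))   # 2.5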

There's also another reason for not accepting perception of error signals
easily. Have you ever tried to construct a working hierarchical model in
which an error signal from one control system becomes part of the inputs to
a higher level control system? I have, and I couldn't make it work
properly. That doesn't mean nobody could, but I certainly failed to get a
stable system, and until I see how such a system could work, I'm not going
to believe in it. Part of the problem, as far as I've been able to
understand it, is the existence of parasitic feedback loops at the higher
level. The lower-order error signal is a function, in part, of the
reference signal, and the reference signal is a function of the
higher-level error signal, which is a function of the higher-level
perception. So the perceived error signal from below is a function of the
perceiving system's own error signal, which closes a loop that is local to
the higher system. This is probably why I couldn't get stability, or even
understand what was happening when I tried to simulate such a system.
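
To make that wiring visible, here is a toy two-level sketch; the gains,
slowing factors, and the choice of simply adding the lower error into the
higher perception are arbitrary assumptions, and nothing in it settles the
stability question one way or the other:

def simulate(steps=500, feed_error_up=True, gain_hi=2.0, gain_lo=5.0, slow=0.1):
    p_lo = o_lo = o_hi = p_hi = e_lo = 0.0
    d = 1.0       # constant disturbance at the lower level
    r_hi = 3.0    # fixed higher-level reference
    for _ in range(steps):
        r_lo = o_hi                 # higher output sets the lower reference
        e_lo = r_lo - p_lo          # lower-level error signal
        o_lo += slow * gain_lo * e_lo
        p_lo = o_lo + d             # lower-level controlled perception
        # Higher-level perception: the lower perception alone, or that
        # perception plus the lower error signal fed upward.
        p_hi = p_lo + (e_lo if feed_error_up else 0.0)
        e_hi = r_hi - p_hi
        o_hi += slow * gain_hi * e_hi
    return round(p_hi, 3), round(e_lo, 3)

# With the error fed upward, e_lo depends on r_lo = o_hi, and o_hi depends on
# p_hi, which includes e_lo -- the local loop described above.
print(simulate(feed_error_up=False))
print(simulate(feed_error_up=True))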

What is perceived is a
difference between a reference and a perception (i.e. the existence
of an error).

Note that this same effect can be achieved completely in terms of perceptual
signals if the reference signal is perceived via the imagination
connection. Of course the "synthesized error signal" may or may not be the
same as the actual error signal in the lower system -- it's probably not
the same. But it could be roughly the same.

Bruce:

No, the problem here seems to be the ambiguous use of the term "perceive".
One does not perceive that something needs changing, one perceives a state
of affairs.

Precisely. Nothing "needs changing" objectively.

Another ambiguity that is interfering here is the difference between plain
perception ("existence of a signal in a perceptual pathway") and _conscious
perception_, which brings in awareness. Control systems need perceptions to
operate at all, but the perceptions do not have to reach awareness. Many
control systems are working at all times. Some operate at the same level
where awareness happens to be currently engaged, but outside its scope
(you're listening to a concerto and ignoring the sound of your breathing).
Some operate at levels higher than the momentary locus of awareness (you're
aware of controlling the car's path, but at the moment are not aware of the
reason for which you're driving north on I-5). And of course very many
control systems are operating at levels lower than the current level of
awareness. There must be perceptual signals in all operating control
systems, yet we are aware of only some of them. So the existence of a
perceptual signal does not, in general, mean that we are aware of it. The
most telling instances of this involve _becoming_ aware of a perceptual
signal that must have existed all along: supply your own examples.

Bruce G.:

If you have a reference state for that perception that differs
from the input, the hierarchy may act to eliminate error (assuming a lack
of conflict). You may have thoughts about those actions ("I really have to
do something about the mess in my office") but those thoughts are not an
essential part of the control process. That is, you can model the control
process without paying any attention to the thoughts.

Let's not have any orphan phenomena if we can help it. Is thinking simply
an epiphenomenon with no functional significance in the hierarchy? I think
quite the opposite -- in fact, I have devoted several proposed levels in
the hierarchy to processes involved in thinking (as I see it), and together
with the imagination connection these processes put thinking in charge of
most of the hierarchy. It may or may not be conscious thinking. I see
thinking as imposing logical and rule-driven structure on the operation of
the system. "I really have to clean up this mess some day" is a nice piece
of logic which simultaneously specifies an action and postpones it to the
indefinite future, so nothing is actually required to be done. But many
other thoughts ("If I take his knight, I'll have his queen and rook in a
fork") have determining effects on how lower systems operate.

Best,

Bill P.

[From Bruce Gregory (2000.0619.1100)]

Martin Taylor 2000.06.18 23.32

Whether the purpose is conscious or not does not affect whether the
purpose of the acts of a control system is to bring the control
system's perception to its reference value. I guess one has to
distinguish the "purpose" of the constructor of the control system
from the purpose of the control system. Maybe that's where your
demurral comes from?

Perhaps.

Think of it the other way round. Of course error doesn't have to be
perceived for it to be corrected. In "classic" HPCT, error is never
perceived. But we are talking about a situation in which error _is_
perceived. You don't "will" something unless you perceive error in
the PCT sense.

Again I'd like to distinguish subjective experience from the model. The
model does not require the perception of error. As you point out, we can
"explain" subjective experience in terms of the perception of error. I
don't want to minimize the significance of subjective experience. I simply
want to point out that the model works just fine without the need to invoke
subjective experience. We do not need to worry about the subject's thoughts
in order to model her tracking performance.

However, we are talking about a situation in which one _does_
perceive that something needs changing. That--the need for change--is
the state of affairs that is perceived. And it is the perception of
the "need for change" (i.e. error) that makes the question of "will"
interesting.

O.K. You find it more interesting than I do. However, notice that the HPCT
model works perfectly well without invoking "will" or "the need for
change." I can invoke both in a simple tracking experiment, but I believe
we agree that the simpler PCT model works extremely well.

The issue is not whether one can control without perceiving error,
but whether one can _consciously_ perceive error without "perceiving"
it in the PCT sense. If that is your claim, there's a philosophical
mountain to climb, and I don't own crampons. However, without
claiming that it is possible consciously to perceive something that
cannot be a perception in a control system, you can't get away from
the notion that the error in a control system is at least sometimes
perceptible, at least in the form of simultaneously perceiving an
existing state and its difference from a desired (reference) state.

I think I can. And no, I am making no claim about "conscious" perception in
the absence of PCT perception. I am simply trying to distinguish between
perceptions we must model and those we have successfully avoided having to
model. We can model the actions without modeling the thoughts is all that I
am trying to say. I can talk about the thermostat's ability to "perceive"
the difference between its goal and the temperature it perceives, but I
have no need to model this "consciousness". Interestingly enough, the same
is true of humans in tracking experiments. It may just be that the "voice
over" of consciousness is simply a running commentary on the actions of the
HPCT system. But as you say, that's a philosophical mountain we don't need
to climb.

BG

[From Bruce Gregory (2000.0619.1128)]

Bill Powers (2000.06.18.1146 MDT)

Bruce:
>I think this identification is unfortunate. One doesn't normally think of
>having a "purpose" or a "goal" to maintain one's body temperature at
>98.6 F. Clearly it is reasonable to talk about a control
>system in the hierarchy with this reference level, however.

One also doesn't think of standing upright as being a purpose or goal while
one is doing it -- but there is a control process going on while one does
this, so there must be a reference signal, perception, error signal, and so
on.

Indeed.

If the reference signals come from higher
in the hierarchy, we call the actions that arise from changing the
reference signals voluntary, and those that arise from disturbances not so
much automatic or involuntary as optional. If a dealer quotes you a price
on a car that is higher than the price in the advertisement, you may react
angrily to this disturbance, but that reaction is optional: you could also
change your mind about wanting that car, in which case there would be no
effective disturbance.

Perhaps in the same way that the fox changed its mind about wanting the
grapes. Changing one's mind sounds like reorganization to me and we
normally don't think of reorganization as an act of volition.

There's also another reason for not accepting perception of error signals
easily. Have you ever tried to construct a working hierarchical model in
which an error signal from one control system becomes part of the inputs to
a higher level control system? I have, and I couldn't make it work
properly. That doesn't mean nobody could, but I certainly failed to get a
stable system, and until I see how such a system could work, I'm not going
to believe in it. Part of the problem, as far as I've been able to
understand it, is the existence of parasitic feedback loops at the higher
level. The lower-order error signal is a function, in part, of the
reference signal, and the reference signal is a function of the
higher-level error signal, which is a function of the higher-level
perception. So the perceived error signal from below is a function of the
perceiving system's own error signal, which closes a loop that is local to
the higher system. This is probably why I couldn't get stability, or even
understand what was happening when I tried to simulate such a system.

Thanks for reporting on this effort, it's always nice to learn what a model
does and does not do.

Let's not have any orphan phenomena if we can help it. Is thinking simply
an epiphenomenon with no functional significance in the hierarchy? I think
quite the opposite -- in fact, I have devoted several proposed levels in
the hierarchy to processes involved in thinking (as I see it), and together
with the imagination connection these processes put thinking in charge of
most of the hierarchy. It may or may not be conscious thinking. I see
thinking as imposing logical and rule-driven structure on the operation of
the system. "I really have to clean up this mess some day" is a nice piece
of logic which simultaneously specifies an action and postpones it to the
indefinite future, so nothing is actually required to be done. But many
other thoughts ("If I take his knight, I'll have his queen and rook in a
fork") have determining effects on how lower systems operate.

Of course there is no need to be conscious to function at these higher
levels. Deep Blue executed programs very much like the one you
described--and acted on the output without needing to be conscious of what
it was doing. I think you were on target when you put forth the idea that the
Observer simply observes. Everything else (as far as consciousness is
concerned) may be nothing more than a running commentary on what's going on.

BG

[From Bill Powers (2000.06.19.1231 MDT)]

Bruce Gregory (2000.0619.1128)--

I think you were on target when you put forth the idea that the
Observer simply observes. Everything else (as far as consciousness is
concerned) may be nothing more than a running commentary on what's going on.

But I still say we shouldn't have orphan phenomena. What is a running
commentary _for_? It must have some role to play in the functioning of the
whole system.

I ran across one possible valuable function of the running commentary. Back
in the 1950s I built a device that implemented a learning task that someone
at the Mental Health Research Institute at Ann Arbor invented, with a few
added twists. 30 or so years later, Dick Robertson programmed the same task
on an Apple (or was it a Commodore?) computer, which shows how long ago
this was.

The apparatus had four lights and four keys. The lights came on in a
specific sequence, and could be turned off by pressing the right keys. When
a new light came on, the player had to find the right key (always the same
one in a given run) to turn it off, and the computer counted up the
machine's score (rapidly) during the delay. The object was to make the
machine's score zero, which could be done only by pressing the right key
after the previous light went out and _before_ the next one came on. When
the right key was pressed prior to the light coming on, the score counted
downward. So one stage in the learning process was to learn the sequence in
which the lights would come on and the keys had to be pressed.
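
A rough sketch of those scoring rules, with the counting rate, number of
light-events, and the two player strategies all invented for illustration
(the description above gives only the qualitative rules):

import random

def play(knows_sequence, n_lights=4, n_events=12, rate=5):
    lights = list(range(n_lights))
    key_for = dict(zip(lights, random.sample(lights, n_lights)))  # scrambled keys
    order = [lights[i % n_lights] for i in range(n_events)]       # repeating sequence
    score, learned = 0, {}
    for light in order:
        if knows_sequence and light in learned:
            score -= rate       # right key pressed before the light: counts down
            continue
        for guess in random.sample(lights, n_lights):             # hunt for the key
            if guess == key_for[light]:
                learned[light] = guess
                break
            score += rate       # machine's score counts up during the delay
    return score

# A player who has learned the sequence drives the score back down; a player
# hunting key by key on every light lets the machine's score climb.
print(play(knows_sequence=False), play(knows_sequence=True))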

I found that even knowing how the apparatus worked, it still took me a
number of trials to learn a new sequence, especially since the relationship
of keys to lights was scrambled between experimental runs, and I had to
learn not only which light would come on next, but which key would turn it
off. Quite by accident, I discovered that if I _said to myself_, out loud
or in imagination, the name of the next key that had to be struck, calling
them A, B, C, and D, I could learn the sequence in at most four
light-events. For example, I would try key A, key B, and key C, upon which
the light would go off. So I said to myself, "C". Then the next light came
on, and I tried keys D, A -- and the light went off. So I said to myself,
"C,A". After the next trial I might say "C,A,D", and finally "C, A, D, B".
From then on, I would just wait for the next light, and press the next key
according to the sequence labeled with the symbols C, A, D, and B. After
that it was a simple matter to anticipate the next light to come on, and
win the game after the fourth successful keypress (or, if I bothered to use
logic, after the third).

Learning the sequence _without_ saying or thinking a symbolic sequence took
much longer. Interestingly, simply refraining from saying or thinking the
sequence was enough to slow the learning of a new sequence.

I would love to see how someone else fares in this experiment. So far I'm
the only one I know who has tried it, and as we all know, an N of 1 is not
very impressive.

Anyhow, there is a possible role for the "running commentary." It may help
us keep track of sequential relationships, and who knows what else? If
we're specifically controlling for a sequence, we may do so best by
assigning symbols to each step and using the sequence of symbols to help
keep track of the sequence of other (nonverbal) perceptions.

Best,

Bill P.

[From Bruce Gregory (2000.0619.1543)]

Bill Powers (2000.06.19.1231 MDT)

Bruce Gregory (2000.0619.1128)--

>I think you were on target when you put forth the idea that the
>Observer simply observes. Everything else (as far as consciousness is
>concerned) may be nothing more than a running commentary on what's going on.

But I still say we shouldn't have orphan phenomena. What is a running
commentary _for_? It must have some role to play in the functioning of the
whole system.

My guess is that it may be social. From the time we are young we are asked
to explain our behavior. ("Why on earth did you do that?") From an HPCT
perspective any answer we give is likely to be folk psychology. (We are
unlikely to respond that the act in question was the unintended side effect
of controlling some perception.) We can say, "I didn't mean to do that,"
but this response may not be well received. The process by which we judge
ourselves and our motives may be "internalized" as a running commentary. In
fact, this is probably "the voice of conscience". Since we need something
like MOL to uncover the "real" reasons for our behavior, the running
commentary is likely to be irrelevant or at best severely limited in value.

Anyhow, there is a possible role for the "running commentary." It may help
us keep track of sequential relationships, and who knows what else? If
we're specifically controlling for a sequence, we may do so best by
assigning symbols to each step and using the sequence of symbols to help
keep track of the sequence of other (nonverbal) perceptions.

That sounds plausible. Rehearsal is an important component for remembering
sequences. Yet another example is GUMP, the pre-landing check list
(Gas--fullest tank, boost pump on; Undercarriage--down and locked;
Mixture--rich; and Propeller--climb pitch.)

BG

[From Bill Powers (2000.06.19.1637 MDT)]

Bruce Gregory (2000.0619.1543)--

Me:

But I still say we shouldn't have orphan phenomena. What is a running
commentary _for_? It must have some role to play in the functioning of the
whole system.

Bruce:

My guess is that it may be social. From the time we are young we are asked
to explain our behavior. ("Why on earth did you do that?") From an HPCT
perspective any answer we give is likely to be folk psychology. (We are
unlikely to respond that the act in question was the unintended side effect
of controlling some perception.) We can say, "I didn't mean to do that,"
but this response may not be well received. The process by which we judge
ourselves and our motives may be "internalized" as a running commentary. In
fact, this is probably "the voice of conscience". Since we need something
like MOL to uncover the "real" reasons for our behavior, the running
commentary is likely to be irrelevant or at best severely limited in value.

This mode of discussion, it seems to me, takes the main phenomenon for
granted and emphasizes a side-effect. Whether a verbal commentary has
social applications or not, it is still a phenomenon and needs to be
explained in its own right. Why does anyone ask for an "explanation" of
behavior, whether it is someone else's or one's own? After finding or being
given an explanation, what do we have that we didn't have before?

The MOL finds "real" reasons for behavior, as you suggest, and at some
point in the process, those "real" reasons consist of verbal sayings, rules
to be followed, reasoning, logic, and so on - all the stuff of
explanations. One of the barriers to progress up the levels is a highly
developed set of reasoning skills at this verbal-symbolic level. When a
person operates skillfully at this level, it can be very hard to jog that
person up a level -- in effect, to get the person to look at all his
reasons and logic as just a bunch of words. The more thoroughly identified
with processes at this level a person's awareness is (and I'm sorry that I
can't say what I mean any more clearly than that), the harder it is to find
a higher-level point of view and move the center of awareness above this
verbal-reasoning level. Eastern philosopher/therapists refer to the
activities of the verbal level as "monkey-chatter," which is their way of
trying to get the Observer to step back and stop taking that level so
seriously. Of course the monkey-chatter has important uses and can't be
left out of the hierarchy, but temporarily downplaying it can open the way
to higher levels of Observation.

Rehearsal is an important component for remembering
sequences. Yet another example is GUMP, the pre-landing check list
(Gas--fullest tank, boost pump on; Undercarriage--down and locked;
Mixture--rich; and Propeller--climb pitch.)

Yes, and that's sort of like the Method of Loci for memorizing things. We
use easily manipulable perceptions as tags for addressing more complex
memories. I don't really know why this works so well, but there's clearly
a strong hint about what symbols are for, when someone comes along who can
recognize the hint.

Best,

Bill P.

[From Bruce Gregory (2000.0620.1153)]

Bill Powers (2000.06.19.1637 MDT)

This mode of discussion, it seems to me, takes the main phenomenon for
granted and emphasizes a side-effect. Whether a verbal commentary has
social applications or not, it is still a phenomenon and needs to be
explained in its own right. Why does anyone ask for an "explanation" of
behavior, whether it is someone else's or one's own? After finding or being
given an explanation, what do we have that we didn't have before?

If you are interested in speculation, you might take a look at The Mating
Mind: How Sexual Choice Shaped the Evolution of Human Nature, by Geoffrey
F. Miller.

The MOL finds "real" reasons for behavior, as you suggest, and at some
point in the process, those "real" reasons consist of verbal sayings, rules
to be followed, reasoning, logic, and so on - all the stuff of
explanations. One of the barriers to progress up the levels is a highly
developed set of reasoning skills at this verbal-symbolic level. When a
person operates skillfully at this level, it can be very hard to jog that
person up a level -- in effect, to get the person to look at all his
reasons and logic as just a bunch of words. The more thoroughly identified
with processes at this level a person's awareness is (and I'm sorry that I
can't say what I mean any more clearly than that), the harder it is to find
a higher-level point of view and move the center of awareness above this
verbal-reasoning level.

You are as clear as anyone I've encountered. Giving up the belief that you
_are_ your thoughts turns out to be incredibly difficult. Apparently it is
belief that keeps many variables near their intrinsic reference levels.

BG

[From Bill Powers (2000.06.20.1532 MDT)]

Bruce Nevin (2000.06.20.1843 EDT, 1143 UT Scotland)--

What are you doing in Scotland?

Could you filter a copy of the higher-level error, somewhat as in the
filtering that is done for adaptive control?

Deep waters, Bruce. Best to leave such guesses on the shelf until you're
ready to model what you're talking about (I'm not).

Best,

Bill P.

[From Bill Powers (2000.06.20.1534 MDT)]

Bruce Gregory (2000.0620.1153)--

Giving up the belief that you
_are_ your thoughts turns out to be incredibly difficult. Apparently it is
belief that keeps many variables near their intrinsic reference levels.

Perhaps the problem is to realize that you, the Observer, are not the Self
that exists at the logical/verbal level. As long as you perceive valuable
characteristics like vocabulary, reasoning, problem-solving skills, ability
to follow complex rules, and so forth as aspects of your Self (rather than
tools at your disposal), it will be hard to acknowledge that ultimate
control does not reside in that Self. That Self is fully sentient up to the
level of reason, and protects itself against disturbances.

Be kind to it, and it might relax enough to let you get on with the Noble
Eightfold Way (or is that Elevenfold?).

Best,

Bill P.

[From Bruce Nevin (2000.06.20.1843 EDT, 1143 UT Scotland)]

Bill Powers (2000.06.18.1146 MDT)--

At 01:26 AM 06/19/2000 -0600, Bill Powers wrote:

Part of the problem, as far as I've been able to
understand it, is the existence of parasitic feedback loops at the higher
level. The lower-order error signal is a function, in part, of the
reference signal, and the reference signal is a function of the
higher-level error signal, which is a function of the higher-level
perception. So the perceived error signal from below is a function of the
perceiving system's own error signal, which closes a loop that is local to
the higher system. This is probably why I couldn't get stability, or even
understand what was happening when I tried to simulate such a system.

Could you filter a copy of the higher-level error, somewhat as in the
filtering that is done for adaptive control?

         Bruce Nevin

[From Bruce Nevin (2000.06.19.1856 EDT, 1156 UT Scotland)]

Bill Powers (2000.06.20.1855 EDT)--

At 01:02 PM 06/19/2000 -0600, Bill Powers wrote:

there is a possible role for the "running commentary." It may help
us keep track of sequential relationships, and who knows what else? If
we're specifically controlling for a sequence, we may do so best by
assigning symbols to each step and using the sequence of symbols to help
keep track of the sequence of other (nonverbal) perceptions.

Perhaps this is the role of "giving oneself instructions" -- something
discussed on the net with Chuck Tucker and Clark McPhail 5-10 years ago.

         Bruce Nevin

[From Bruce Gregory (2000.0620.2017)]

Bill Powers (2000.06.20.1534 MDT)

Bruce Gregory (2000.0620.1153)--

>Giving up the belief that you
>_are_ your thoughts turns out to be incredibly difficult. Apparently it is
>belief that keeps many variables near their intrinsic reference levels.

Perhaps the problem is to realize that you, the Observer, are not the Self
that exists at the logical/verbal level. As long as you perceive valuable
characteristics like vocabulary, reasoning, problem-solving skills, ability
to follow complex rules, and so forth as aspects of your Self (rather than
tools at your disposal), it will be hard to acknowledge that ultimate
control does not reside in that Self. That Self is fully sentient up to the
level of reason, and protects itself against disturbances.

Be kind to it, and it might relax enough to let you get on with the Noble
Eightfold Way (or is that Elevenfold?).

Beautifully said. Thank you.

BG

"Either the wallpaper goes, or I do."
--Oscar Wilde, last words

[From Bruce Nevin (2000.06.21.0205 EDT, 0705 UT Scotland)]

[From Bill Powers (2000.06.20.1532 MDT)]

Bruce Nevin (2000.06.20.1843 EDT, 1143 UT Scotland)--

What are you doing in Scotland?

Cisco acquired a company in Cumbernauld, north of Glasgow. I’m helping
some of their people get up to speed. (And Cisco is not slow.) That’s why
I’m going to miss all but Friday night and the very first part of
Saturday at the conference. I’ll arrive at Logan about 5:30 Friday,
tutelary deities of air travel willing, get myself to BU, find out where
you are and check in. Saturday I have to leave middayish to get myself to
Danvers, north of Boston, for my niece’s wedding.

Could you filter a copy of the higher-level error, somewhat as in the
filtering that is done for adaptive control?

Deep waters, Bruce. Best to leave such guesses on the shelf until you’re
ready to model what you’re talking about (I’m not).

Touché. But the question seemed
worth asking. Maybe I was quite wrong in my interpretation of your
earlier effort as you described it (or rather as you mentioned it, since
you did little more than that), but that seemed not so much to be
modelling certain behavior as adding a feature to an existing model to
see what would happen, and it seemed to me that you found it frustrating
that it failed to do anything that you could reasonably interpret, so I
supposed you had continuing interest in the problem.

    Bruce

[From Bill Powers (2000.06.21.1029 MDT)]

Bruce Nevin (2000.06.21.0205 EDT, 0705 UT Scotland)--

... it seemed to me that you found it
frustrating that it failed to do anything that you could reasonably
interpret, so I supposed you had continuing interest in the problem.

The problem was created by the postulate that a higher control system in
the hierarchy could sense the error signal in a lower-order system. Since
the same effect can be achieved without sensing the error signal (compare a
perception with the imagined reference signal using a relationship-type
perceptual input function), there doesn't seem to be anything to gain by
pursuing this further.

I haven't forgotten the postulate, but I'm not working on it at the moment.

Best,

Bill P.

[From Bruce Nevin (2000.06.23.0831 EDT, 1331 UT Scotland)]

Bill Powers (2000.06.21.1029 MDT)--

The problem was created by the postulate that a higher control system in
the hierarchy could sense the error signal in a lower-order system. Since
the same effect can be achieved without sensing the error signal (compare a
perception with the imagined reference signal using a relationship-type
perceptual input function), there doesn't seem to be anything to gain by
pursuing this further.

Yes, I see. Given a choice of evolving a complicated filtering system and
evolving/developing a connection into yet another relationship-control
system, the latter is vastly more likely. How likely is it that a
complicated filter function would evolve, when the immediate consequence of
inputting an error signal as a perceptual input is destabilization!
Homunculizing the reorganization system: "Oops! that didn't work, did it
<disconnect>." The fact that it is a "meta-" relationship by way of a
copied reference signal can have potentially complicated psychological (and
philosophical) consequences, but the means to implement it needn't be
complicated at all.
