"Strength" of neural signals (was Re: Query re: Intro to Modern Psychology)

[From Bill Powers (970106.0600 MST)]

John Anderson (970105.2210) --

But changing loop gain doesn't have the same effect as changing the
strength of a reference signal, does it? It's my understanding that
changing the loop gain only affects how quickly and effectively a
disturbance is countered. A control loop with negatively infinite gain
will maintain the controlled perception exactly at the reference level
(see the discussion on pages 50-51 of IMP). A lower (less negative) loop
gain would hold the perception less exactly to the reference level, but
the reference level and the controlled perception would still have the
same "strength". A _stronger_ reference signal, on the other hand,
would require the perception to become "stronger", whatever that means,
with the loop gain still only affecting how closely the controlled
perception matched the reference. Am I right?

This leads me to wonder (with Bruce Gregory) what exactly is meant by
the "strength" of a reference (or perceptual, or error) signal?

The term "strength" is not a technical term in PCT. A signal can have a
variable magnitude, but since it is one-dimensional in PCT there is no other
way it can vary.

Loop gain does affect how quickly a disturbance is countered. One factor in
the loop gain, the sensitivity of the output function, also determines how
much action will result from a given amount of error. It is this sensitivity
to error which gives the impression of "strength" to a person's goals.

Suppose you and I both have a reference principle that says "a person should
tell the truth." Let's say, too, that we both perceive truthfulness the same
way, and that we have set neither the highest nor the lowest degree of
truthfulness as a goal. We will both say that a person who tells the truth
about important matters is truthful enough to satisfy us.

But suppose that we differ in what we do when a person is less truthful than
we would desire. Let's talk about persons A and B here. Person A says "Well,
the kid really shouldn't tell outright lies about important things; I'll
have a talk with her in the morning and explain that she's stepped over the
line." Person B says "Nonsense. She's gone too far and how she has to face
the consequences. I want her in her RIGHT NOW and I'm going to lay down the

What's the difference here? A and B both agree that there is an error between
the girl's truthfulness and the degree of truthfulness they want to see.
They want to see the same degree of truthfulness. But person A is prepared
to take a relatively mild kind of action, tomorrow morning, while person B
wants to take strong action right now.

I think that most people would say that person B's goal is "stronger" than
person A's. B's convictions about truth-telling are more "strongly held"
than A's. This impression is created, however, not by the nature or setting
of the goal, but by the strength of the _action_ that B is seen to take when
there is a deviation of perception from the goal-state. B reacts to smaller
discrepancies than A, or given the same discrepancy, reacts more forcefully.
In controlling for truthfulness, person B has a higher output sensitivity
than person A has.
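The A/B difference can be sketched as two simple proportional controllers that share one reference signal but differ in output gain. This is an illustrative sketch only; the function name and the numbers are invented for the example and are not part of the PCT model's specification:

```python
# Sketch: two control systems with the same reference signal but
# different output sensitivity (gain). All names and values here are
# illustrative, invented for this example.

def control_step(perception, reference, gain):
    """Return the action a simple proportional controller takes."""
    error = reference - perception
    return gain * error

REFERENCE = 10.0    # A and B want the same degree of truthfulness
PERCEPTION = 7.0    # the perceived degree falls short of the reference

action_A = control_step(PERCEPTION, REFERENCE, gain=0.5)  # mild response
action_B = control_step(PERCEPTION, REFERENCE, gain=5.0)  # forceful response

print(action_A)  # 1.5
print(action_B)  # 15.0
```

Same reference, same error; only the gain on the output function differs, and that alone produces the action B takes that looks like a "stronger" goal.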

So I would say that attributing strength to the goal itself is simply a
mistake that arises from not understanding how control works (a not uncommon
problem). The goal of having a certain amount of salt in the soup simply
defines a specific degree of the taste of saltiness. Such a goal can't be
stronger or weaker; it's just a specification for a certain amount of a
given perception. The strength or weakness is not revealed until the
experienced taste differs from the intended taste. Then we see that one person
is very fussy about getting exactly the right degree of saltiness, while the
other will tolerate rather large deviations before taking any strong action.
If the first person is the cook, there is no problem, but if the second one
is, there's going to be trouble, even though the two agree on the right
degree of saltiness.


The PCT model treats the brain basically as an analog computer. In an analog
computer, a signal inside the system represents a variable outside the
system (in our physical models). At the lower six levels, through the
relationship level, the magnitude of a signal indicates the magnitude of the
external variable, like a meter reading. If the external variable is
something like a joint angle, then a small magnitude of signal indicates a
joint angle near one extreme, and a large magnitude indicates a joint angle
near the other extreme. A reference signal for a joint-angle control system
indicates, by its magnitude, what the joint angle is to be, between these
extremes. You can't have a "stronger" reference signal for a given joint
angle, because the signal can vary only in magnitude, and a specific
magnitude indicates a specific joint angle. There is no additional degree of
freedom left over to indicate how "strongly" this reference state is
asserted. Instead, what we see as "strength" of the reference signal is
simply how powerfully the system will react when a disturbance or a mistake
induces an error. That's determined by the amplification factor in the
output function of the joint-angle control system. The higher that factor,
the stiffer will be the joint against disturbances.
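The joint-angle case can be sketched as a small integrating control loop. All of the numbers below are invented for illustration: the reference magnitude specifies where the joint should be, while the amplification factor in the output function determines how stiffly the loop resists a constant push on the joint.

```python
# Sketch of a joint-angle control loop (all values invented for
# illustration). The reference magnitude says WHERE the joint should
# be; the output gain determines how stiff the joint is.

def settle(reference, disturbance, gain, steps=500, dt=0.01):
    """Run a simple integrating control loop to its steady state."""
    angle = 0.0                       # perceived joint angle
    for _ in range(steps):
        error = reference - angle     # comparator
        output = gain * error         # output function (amplifier)
        angle += dt * (output + disturbance)
    return angle

ref, push = 30.0, -20.0               # same reference signal for both loops
soft = settle(ref, push, gain=2.0)    # residual error of 10 degrees
stiff = settle(ref, push, gain=50.0)  # residual error of only 0.4 degrees
print(round(soft, 1), round(stiff, 1))
```

Both loops aim at the same angle; the higher-gain loop simply holds it more stiffly against the disturbance. The residual error at steady state is the disturbance magnitude divided by the gain, which is why only the higher gain looks "strong."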

At the seventh level, according to the definitions I've proposed, the analog
world of the lower orders is perceived in terms of categories. A range of
different shapes of different sizes, colors, brightnesses, and so on results
in the same perception: "a dog." This is where the analog world gets chopped
into discrete entities, so that a given perception is seen as "a dog" or "a
cat" but never as both at once. Martin Taylor has proposed that "lateral
inhibition" occurs at this level to create hysteresis; we go on perceiving
"dog" as the shape changes until there is a sudden flip-flop switch to
"cat", and when the shape changes back, the reverse flip doesn't take place
until the transition has gone much further toward "dog" again.
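Taylor's lateral-inhibition mechanism isn't spelled out here, but the hysteresis it produces can be sketched as a two-threshold switch. The thresholds and values below are invented for the sketch, not taken from his proposal:

```python
# A minimal sketch of category hysteresis (thresholds invented for
# illustration, not from Taylor's proposal): the category flips from
# "dog" to "cat" only when an analog shape value crosses a high
# threshold, and flips back only when it recrosses a lower one.

def categorize(shape_values, lo=0.3, hi=0.7):
    """Label each analog shape value, with hysteresis between flips."""
    category = "dog"
    labels = []
    for s in shape_values:
        if category == "dog" and s > hi:
            category = "cat"
        elif category == "cat" and s < lo:
            category = "dog"
        labels.append(category)
    return labels

# Morph the shape toward "cat" and back again.
sweep = [0.1, 0.5, 0.8, 0.5, 0.2]
print(categorize(sweep))
```

At 0.5 on the way up the shape is still "a dog"; at the same 0.5 on the way back it is still "a cat" — the flip in each direction waits for its own threshold, so the perception is never both at once.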

However, while we're perceiving a given category, we also perceive the
_degree_ to which a given perceptual situation exemplifies that category. In
other words, the categorization doesn't completely eliminate the analog
nature of the signals. There are degrees of dogness, and we can set
reference signals not only for the category, but for the degree of fit. As
the shape changes, we can say "That's getting to be a pretty poor example of
a dog, but it's still not a cat. It might be a really lousy elephant."

This leads to the situation where persons A and B can have the same
perception of a dog, and set the same reference condition for degree of fit,
yet react differently to deviations of a perceived shape from the nominal
category "dog." Those with very high output sensitivity become judges at dog
shows. They have "strong opinions" about how much deviation is allowed from
"Weimeraner" or "Siluki" or "English Bull" before an objection is raised --
or, as before, how strongly they will act when there is a given discrepancy
("not too bad" versus "you're disqualified!"). Of course perceptual
discrimination also plays a part. The amount of change of perceptual signal
per unit change of the inputs is also part of loop gain.

The PCT model allows us to speak of experience and action in more detail
than the usual commonsense view recognizes. So we can be much more specific
about what we mean by commonsense terms like "strength."


Bill P.