[From Bill Powers (2009.11.27.0831 MDT)]
Bruce Abbott (2009.11.24.1000 EST) –
BP earlier: But a good model
doesn’t reproduce the data just by writing an
equation that reproduces it. It proposes a mechanism the operation of
which reproduces the data, which is a very different matter.
BA: Solomon and Corbit do propose a mechanism, although much of it is
unspecified, hidden inside those boxes in their diagram.
BP: The point is not what behavior they give rise to, but how they do it.
We can draw a model consisting of one box called “Track
Function”, an arrow going in labeled “target position”, an
arrow coming out of it labeled “cursor position” and a graph
off to one side showing the cursor position closely tracking the target
position. But how this tracking is produced is the crucial point. One
model simply proposes that the target position is sensed and converted
into arm movements that make the mouse move the same way the target
moves. Engineering psychologists drew their diagrams showing the target
position and cursor position (on a display screen) entering a summing
point (still on the screen), and a “tracking error” coming out
of the summing point and entering the “human operator.” The
arrow coming out of the human operator became the cursor position, so the
human operator was a simple transfer function converting the
error-stimulus into the output-response, as most scientists thought it
worked back then. The summing point was shown as being in the display,
outside the human operator.
So just showing input-output relationships of the whole system is not
enough to make it a PCT model, or even close to one.
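The tracking arrangement described above can be put in a few lines of code. This is my own toy illustration, not anything from the original exchange: the target and cursor meet at a summing point "on the screen," the tracking error enters the human operator, and the operator's output becomes the cursor position. Modeling the operator as a simple integrating transfer function with gain k is purely an assumption for the sketch.

```python
def simulate_tracking(target, k=0.2):
    """Toy engineering-psychology tracking diagram: error in, cursor out."""
    cursor = 0.0
    trace = []
    for t in target:
        error = t - cursor      # summing point: target minus cursor, on the display
        cursor += k * error     # operator as an integrating transfer function
        trace.append(cursor)
    return trace

target = [0.0] * 5 + [10.0] * 45   # target steps from 0 to 10
trace = simulate_tracking(target)
# the cursor converges toward the target, so the input-output record
# looks like good tracking, whatever is assumed to be inside the box
```

Note that this reproduces the input-output relationship without saying anything about what the operator's nervous system is actually doing, which is exactly the limitation being pointed out.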
BA: Inside those boxes are
labels hinting at what those hidden mechanisms are supposed to give rise
to, without saying how. In Solomon and Corbit’s model, those are treated
as “black boxes,” somewhat like the input and output function
boxes in the control-system diagram.
BP: At some point we all have to resort to black boxes. In PCT they are
the input function, comparator, output function, and feedback function.
But in the PCT model, these little black boxes are connected in a circle.
In the Solomon and Corbit model an input causes an output through a
model consisting of two different ways of responding to the input, with
one response subtracted from the other to produce the net output
response. The output response doesn’t affect the input, so this is not a
feedback system, much less a control system. The most we could say
would be that the S&C model, taken as a whole, is an input function
showing (partly) how emotions, or maybe just the feeling components, are
perceived. What is not shown is how the actions of the control system,
via the somatic branch, are causing those feelings, while the behavioral
branch carries out the motor behavior that is drawing on the resources
supplied by the changed physiological state (in the case of an emotion
calling for increased motor activity).
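For concreteness, here is a hedged sketch of the open-loop arrangement just described: one input drives two responses, a fast primary process and a slower opposing one, and the net output is their difference. Nothing feeds back from the output to the input. The time constants and gain are illustrative assumptions of mine, not Solomon and Corbit's parameters.

```python
def opponent_response(stimulus, tau_a=2.0, tau_b=10.0, b_gain=0.7):
    """Open-loop opponent-process sketch: net output = fast response
    minus a slower, opposing response. No feedback to the input."""
    a = b = 0.0
    net = []
    for s in stimulus:
        a += (s - a) / tau_a        # fast primary response to the input
        b += (s - b) / tau_b        # slow opposing response to the same input
        net.append(a - b_gain * b)  # net output: the two subtracted
    return net

stim = [1.0] * 60 + [0.0] * 60      # stimulus on, then off
curve = opponent_response(stim)
# while the stimulus is on, the net response peaks and declines to a
# plateau; when it turns off, the slow process outlasts the fast one
# and the net response swings to the opposite sign before fading
```

The point stands out in the code: the output variable never appears on the right-hand side of the input computations, so this is an input-output device, not a feedback system, much less a control system.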
BP earlier: The code I posted
was not intended to be a control system and it
isn’t one. It’s just another way of generating that curve, and there
are others. There’s no theory behind it, just as there is no theory
behind the opponent-factor way of generating that curve. It’s just a
way to do it.
BA: The “good model” to which I referred is a PCT-based model
that I imagine
would account for the phenomena of concern to Solomon and Corbit (along
with a host of other related phenomena). Your post reads as though you
believe that I am attempting to defend Solomon and Corbit’s model, or even
the little curve-generating “model” you provided as an example of
how easy it is
to generate Solomon and Corbit’s curves. In fact I have been arguing for
developing the PCT-based model as an alternative to Solomon and Corbit’s
proposal. I don’t have the time for it now, but to me it looks like a
worthwhile future project.
BP: I agree. I have sketched what mine would be; the “opponent
process” aspect would show up in the highest level of behavioral
control system involved, in cases where there is an undershoot following
the end of an energetic but transient action. Not all emotional episodes
end in an “opposite” emotion, though some do. I can imagine
that in control systems having a small amount of instability, there could
be several damped oscillations after a brief episode. Also, if a control
system became continuously unstable and started spontaneous oscillations,
the emotional state could also oscillate continuously between exaggerated
states of preparedness and retreat or inactivity – depression. A control
system in a continuing monopolar positive feedback situation could pin
the activity and the emotional state at either extreme.
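The undershoot-and-damped-oscillation idea sketched above can be shown with a small simulation. This is a speculative illustration of mine, not Powers's model: a control loop whose output has some momentum (a stand-in for a small amount of instability) overshoots, and after a brief energetic episode ends it undershoots and rings back to rest, giving an "opposite" after-reaction with no second opposed system anywhere in the loop.

```python
def run_loop(reference, gain=0.5, momentum=0.7):
    """Under-damped control loop: output affects the perceived input."""
    perception = 0.0
    velocity = 0.0
    trace = []
    for r in reference:
        error = r - perception
        velocity = momentum * velocity + gain * error  # sluggish output function
        perception += velocity                         # feedback to the input
        trace.append(perception)
    return trace

ref = [0.0] * 5 + [10.0] * 20 + [0.0] * 40   # brief energetic episode
trace = run_loop(ref)
# during the episode the perception overshoots the reference; when the
# reference returns to zero, it undershoots below zero and oscillates
# back, a damped "opponent" after-reaction from a single system
```

Raising `momentum` toward 1 makes the ringing last longer, and past the stability boundary the oscillation becomes continuous, which corresponds to the continuously unstable case mentioned above.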
BA: By the way, some readers may
wonder why I say “the PCT-based model” rather
than “the PCT model.” PCT gives a general picture of the architecture
(multiple layers of control systems organized into a hierarchy, a
reorganizing system, and so on), but it does not delineate what specific
control systems are present nor how they may be organized.
BP: Good point worth mentioning. Sometimes when I say “I haven’t
modeled that behavior yet” people look puzzled and ask if I don’t
already have a model of behavior. Isn’t PCT a model of behavior? Your
point is that it’s not a working model until a lot of specific
details have been filled in as appropriate to a particular behavior.
BA: Specifically, I’m
thinking about the possibility that “emotions” are
systems that in some cases may act to oppose each other in the way
described by Solomon and Corbit. But without a model and some data to
test it against, I’m just speculating, of course.
BP: I don’t think any “opposition” will be necessary, in the
sense of two different emotional states at the same time. One goal
(“attack”) can be replaced by another (“flee”),
particularly if the physiological states involved are close to being
identical. There can be conflict between goals, or a switch from one goal
to another that can be thought of as “opposite” to it, but I
think there’s only one physiological state at a time. Opposition implies
something about the direction of action, and that belongs to the
cognitive aspect of emotion rather than the physiological. Physiology
doesn’t know anything about directions. That’s the business of the
hierarchy of perception and control.
BP earlier: How does the idea that there
are separate emotion-systems developed over
evolutionary time explain these phenomena?
BA: It doesn’t. How a given brain-system develops is a separate question
from how that system is structured.
BP: The development wasn’t what I was asking about, though I didn’t make
that clear. It’s the idea of a separate emotion system that I was thinking of.
My model proposes that emotion is the way we experience all active
goal-seeking behavior. The feeling states simply accompany that behavior.
Some behavioral systems are inherited, but the feeling-states that go
with them are the same as they are for learned behavior. The amygdala and
hypothalamus are lower level systems in the somatic branch, and are
involved in all behavior whether inherited or learned.
BA: PCT proposes that the
structure develops in
response to intrinsic error, through the random variation and selective
retention of the reorganizing system. The evolutionary view proposes that
the basic structure develops as a result of random variation and
retention of structures that improve reproductive fitness. Such a view does
not exclude the possible influence of learning within the systems so
created. Neither proposal explains the phenomena in question, because both
address a different question: how did these systems develop?
BP earlier: How does any existing theory explain how a person can get
angry with a computer when it refuses to make a program run?
BA: There are current “theories” (I use that term loosely here)
that explain it. For example, Plutchik says that anger arises when one
perceives that one is being blocked from achieving a goal that one is
trying to achieve. Sounds
like the operation of a control system to me.
BP: Do you mean that Plutchik’s description sounds as if he is identifying
this as the operation of a control system, or (as I would say) that he
missed seeing that it was a control system, whereas you are pointing out
that it suggests a control system in operation?
Being blocked from achieving a goal one is trying to achieve normally
results in an increase of effort, which would generate arousal of the
physiological systems. But to what does Plutchik ascribe the cause of the
feelings?
Why should being blocked from achieving a goal cause any feelings? It’s
just a fact, isn’t it? It seems so obvious to me that “trying”
to achieve something calls for cranking the physiology up to support the
action, and that explains why especially strong feelings arise in that
case. But why do somewhat less strong feelings also arise when we’re not
being blocked? And why does a cognitive assessment of a situation and a
decision to act result in feelings? My model offers an explanation, but
the standard model, it seems to me, is vague on this problem. Either it’s
vague, or it assumes that the feelings come first, produced by some
emotion-system as a warning.
BP earlier: But most important
of all, how does any other theory of emotion
integrate into an overall theory of how behavior works? As far as I
know mine is the only one that does, and it is certainly the only one
that is integrated with PCT.
BA: You have yet to describe how the PCT model of emotions explains
“positive” emotions. I’m not suggesting here that it can’t, but
it does seem
to me to be a more difficult case to handle. Love, for example.
BP: The feelings are not hard to account for, are they? Others have
suggested that positive feelings can also be identified as a negative
rate of change of unpleasant feelings – a bad feeling going away.
What is hard to define are words like “pleasure” and
“displeasure.” A perceptual signal has no value attached to it;
it’s just a row of blips, a factual report of the magnitude of some
variable. Any value is given to it by the control system that receives it.
If there is a positive reference signal for the perception and the
perception is less than the reference, we feel an urge to experience more
of it, and I suggest that is the kind of perception we call pleasant. We
seek more of the perceptual signal because it is pleasant, and it is
pleasant because we seek more of it. What we mean by pleasant is
something we want more of, or a large amount of. Conversely, if we want
less of a variable or none of it and it is decreasing, the decrease is
perceived as pleasant because we feel an urge to experience less of the
variable, and that is what is happening. Conversely, a variable that is
higher than its reference level is unpleasant, and if it is getting still
higher, it is even more unpleasant. So we can interpret emotions in terms
of what the reference level is for the related controlled variable and in
terms of the condition and rate of change of the controlled variable. We
try to change variables in the direction of their reference levels, and
the feeling of wanting to control in the appropriate way is the measure
of the quality or value of the perception.
This theory also explains why it is that there can be such a thing as
too much of a pleasant signal. It is a pleasure to come out of the
cold weather into a warm house. But if the thermostat is set too high,
that is soon felt as an unpleasantly hot house – a temperature above the
reference level. Practically any sensation becomes unpleasant when it
exceeds the reference level for its preferred value, no matter how
acceptable or pleasurable in lower amounts. Control systems don’t just
try to get perceptions into an “acceptable range.” They try to
keep the variable at one specific level, and either an excess or a
deficiency is unpleasant to some degree. An interesting corollary of all
this is that some error signals are interpreted, confusingly, as pleasant,
in that we want to increase the perception that is lower than its
reference level.
In any case, I think we can deduce the value of a controlled variable
from the kind of feedback effects of behavior that we observe. If
behavior tends to increase the magnitude of the variable, it is perceived
as pleasant; if the effect is to decrease the variable, it is perceived
as unpleasant.
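The valence rule developed in the preceding paragraphs can be collapsed into a toy function. This is my own rendering of the idea, not Powers's code, and it simplifies deliberately: the sign of the error (reference minus perception) says which direction we want the variable to move, and movement in that direction reads as pleasant, movement away from it (or no relief at all) as unpleasant.

```python
def valence(reference, perception, rate_of_change):
    """Toy valence rule: a perception moving toward its reference level
    is pleasant; moving away, or stuck in error, is unpleasant."""
    error = reference - perception
    if error == 0:
        return "neutral"
    # error and rate with the same sign mean the variable is moving
    # in the direction we want it to move
    return "pleasant" if error * rate_of_change > 0 else "unpleasant"

valence(20.0, 15.0, +0.5)   # too cold, warming up: pleasant
valence(20.0, 25.0, +0.5)   # too hot, getting hotter: unpleasant
valence(20.0, 25.0, -0.5)   # too hot, cooling down: pleasant
```

The warm-house example above falls out directly: with the same rate of warming, valence flips the moment the perception crosses the reference level, since the error changes sign while the rate does not.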
BP earlier: If you want to know
how the phenomena mentioned by Solomon and Corbit are accounted for by
PCT, you will have to set up a model that has
emotions. I highly recommend that you and any other interested
programmers do this. . . .
BA: Well, at last. That’s all I’ve been arguing for.
BP: That’s what I’ve been doing all along, though I haven’t got very far
with it. The ideas in this post are just a few pieces of a theory that
are lying around on shelves waiting to be assembled.