More on opponent processes

[From Bill Powers (2009.11.23.1440 MDT)]

Attached is the code for a program that generates the same output
waveform that the opponent-process model generates. Rename it to
"opponent.exe" to run it. Click to start it.

The code that does it represents a forward amplifier with a somewhat
smoothed response, and a feedback connection through a leaky
integrator. That code, as a Delphi Unit, is also attached. This is
not a control system, either; it just shows that there is another
way, among dozens, of recreating the observed affect curves. The A
process and B process can't be defended by pointing to the output
curve they generate.
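
A minimal sketch of the arrangement just described (this is not the
attached Delphi unit; the language, parameter names, and values here are
illustrative assumptions only):

    # Forward amplifier with a smoothed (lagged) response, plus a feedback
    # connection through a leaky integrator. A step input produces the
    # opponent-process shape: an initial peak that adapts toward a plateau,
    # then a rebound of opposite sign when the input is removed.
    dt = 0.01          # integration step, seconds
    gain = 5.0         # forward amplifier gain
    smooth_tc = 0.2    # time constant of the forward smoothing lag
    leak_tc = 2.0      # time constant of the leaky integrator in the feedback path

    output = 0.0       # state of the smoothed forward amplifier
    feedback = 0.0     # state of the leaky integrator
    trace = []

    for n in range(int(10.0 / dt)):
        t = n * dt
        stimulus = 1.0 if 1.0 <= t < 5.0 else 0.0        # input on from t=1 to t=5

        drive = gain * (stimulus - feedback)             # forward amplification
        output += (drive - output) * dt / smooth_tc      # smoothed response
        feedback += (output - feedback) * dt / leak_tc   # leaky integration of output
        trace.append((t, output))

    for t, y in trace[::100]:                            # sample once per second
        print(f"{t:5.2f}  {y:8.3f}")

A step input switched on at t = 1 and off at t = 5 gives the rise, the
adaptation, and the negative rebound, with no A-process or B-process
anywhere in the code.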

Best,

Bill P.

Opponent.txt (365 KB)

OpponentUn.pas (1.15 KB)

[From Bruce Abbott (2009.11.23.2147 EST)]

Bill Powers (2009.11.23.1440 MDT) --

BP: Attached is the code for a program that generates the same output
waveform that the opponent-process model generates. Rename it to
"opponent.exe" to run it. Click to start it.

Nice work.

BP: The code that does it represents a forward amplifier with a somewhat
smoothed response, and a feedback connection through a leaky integrator.
That code, as a Delphi Unit, is also attached. This is not a control system,
either; it just shows that there is another way, among dozens, of recreating
the observed affect curves. The A process and B process can't be defended by
pointing to the output curve they generate.

No, but as a starting point, a good model will reproduce the phenomena it is
designed to account for. In addition, it should generate testable
predictions which differ from those of alternative models that also
reproduce the same phenomena. In this case, the Test for the controlled
variable would distinguish between the control-system and non-control-system
models.
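
In toy form, the logic of that Test looks like this (both models below are
hypothetical illustrations, not either of the models under discussion): a
disturbance applied directly to the putative controlled variable is opposed
by the control-system model but simply passes through the open-loop model.

    dt, gain = 0.01, 50.0

    def final_value(control: bool, disturbance: float) -> float:
        reference, output, cv = 1.0, 0.0, 0.0
        for _ in range(int(5.0 / dt)):
            cv = output + disturbance                    # the putative controlled variable
            if control:
                output += gain * (reference - cv) * dt   # closed loop: oppose any error
            else:
                output += (reference - output) * 0.1     # open loop: approach a planned value, blind to cv
        return cv

    for d in (0.0, 0.5):
        print(f"disturbance {d}:  control model -> {final_value(True, d):.2f}"
              f"   open-loop model -> {final_value(False, d):.2f}")

Undisturbed, the two end at the same value; with the disturbance applied,
only the control-system model keeps the variable at its reference.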

Bruce A.

[From Bill Powers (2009.11.24.0003 MDT)]

Bruce Abbott (2009.11.23.2147 EST) --

BP earlier: The A process and B process can't be defended by pointing to the output curve they generate.

BA: No, but as a starting point, a good model will reproduce the phenomena it is
designed to account for. In addition, it should generate testable
predictions which differ from those of alternative models that also
reproduce the same phenomena. In this case, the Test for the controlled
variable would distinguish between the control-system and non-control-system
models.

But a good model doesn't reproduce the data just by writing an equation that reproduces it. It proposes a mechanism the operation of which reproduces the data, which is a very different matter.

The code I posted was not intended to be a control system and it isn't one. It's just another way of generating that curve, and there are others. There's no theory behind it, just as there is no theory behind the opponent-factor way of generating that curve. It's just a way to do it.

Let me turn the argument around. I have a theory of emotion that is based on control theory, and in fact on the whole PCT model, as well as on facts from physiology and body chemistry -- the same facts all other theories have to deal with. This theory shows why it is that emotions have aspects that resemble the experiences we get from doing various kinds of control processes, why emotions are so much more intense when the action connected with them is prevented from correcting an error, why the physiological states accompanying emotions are so similar across certain experiences to which we give different emotion names, and why many of those physiological states arise even when we're just exercising or doing something that requires strenuous action, and why we don't identify the feelings as emotional (though sometimes some people -- say, football players -- confuse the feelings that arise from strenuous action with emotions and become angry with opponents or even inanimate objects). How does the opponent-process model explain those phenomena? How does the idea that there are separate emotion-systems developed over evolutionary time explain these phenomena? How does any existing theory explain how a person can get angry with a computer when it refuses to make a program run correctly?

But most important of all, how does any other theory of emotion integrate into an overall theory of how behavior works? As far as I know mine is the only one that does, and it is certainly the only one that is integrated with PCT.

If you want to know how the phenomena mentioned by Solomon and Corbit are accounted for by PCT, you will have to set up a model that has emotions. I highly recommend that you and any other interested programmers do this. If I do it I will just have to defend it, but if you do it you will get the same intuitive grasp of how it works that I have in my head but can't communicate to anyone who hasn't experienced this model working, as I have in imagination. I suggest setting up some dummy physiological processes that are affected by strenuous actions, and physiological control systems that respond to depletion of essential somethings by starting up the machinery that maintains them against disturbances. I'm quite sure you will see the physiological states following curves like those that Solomon and Corbit describe, when a higher-order behavioral control system is significantly disturbed. The more like the real physiological systems you make the modeled systems, the more the patterns of emotions will resemble those that really happen. You can even categorize the emotion on the basis of the variable being controlled by the behavioral system, and mark its course through time by assuming that there are perceptual signals corresponding to the deviations of the physiological states from their neutral magnitudes.

I am sure that if you make this model as realistic as you can, it will reproduce all the major phenomena of emotion, without needing any specialized emotion systems at all beyond those that are operated by the behavioral systems to back up large actions in a preparatory way. It will predict the phenomena I mention above that I say no other existing theory can explain. The only drawback to doing all this is that it will destroy a picture of emotions that has grown up over the history of psychology because it seemed plausible and is now believed by a lot of people just because they learned it in school. That doesn't sound like a serious loss, though of course I have no investment in it, while others do.
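
A toy rendering of this suggestion follows. Everything in it -- the
variables, the couplings, the numbers -- is invented for illustration, not
taken from any worked-out physiological model: a behavioral control system
holds its controlled variable against a large, sustained disturbance;
keeping up the opposing output is the "strenuous action" that drains a
reserve; and a second control system raises "arousal" to restore the
reserve, standing in for the felt component of the emotion.

    dt = 0.01
    output, reserve, arousal = 0.0, 1.0, 0.0

    for n in range(int(30.0 / dt)):
        t = n * dt
        disturbance = 5.0 if 5.0 <= t < 15.0 else 0.0

        # behavioral loop: integrating output opposes the disturbance
        cv = output + disturbance
        output += 10.0 * (0.0 - cv) * dt        # reference for cv is 0

        # sustained opposing output is strenuous and drains the reserve;
        # arousal (mobilization) replenishes it
        effort = abs(output)
        reserve += (0.5 * arousal - 0.05 * effort) * dt

        # physiological loop: arousal integrates the reserve error, with a leak
        arousal += (2.0 * (1.0 - reserve) - 1.0 * arousal) * dt

        if n % 200 == 0:
            print(f"t={t:5.1f}  cv={cv:6.2f}  reserve={reserve:5.2f}  arousal={arousal:5.2f}")

While the disturbance lasts, arousal climbs and the reserve sags; when the
disturbance ends, arousal falls back and (with these numbers) briefly dips
below baseline before settling -- curves of roughly the shape Solomon and
Corbit describe, produced by nothing but ordinary negative-feedback loops.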

=============================================================================
I have to get ready to go to Durango for Thanksgiving and the following weekend: leave Wednesday, return Sunday. I may get on the internet during that time but may not. My daughter Barbara is putting on the feast and her two sons will be there, and Allie and her daughter also, which will be very pleasant, but being in Durango is generally a pretty sad experience for me (Henry, that's where my wife died in 2004 after we had lived there for 15 years). I'll try to stay current with the conversation via my gmail account which is signed on to CSGnet, if I have any trouble connecting using my laptop.

Back to bed -- I couldn't stay asleep for thinking about this.

Best,

Bill

[From Bruce Abbott (2009.11.24.1000 EST)]

BP: Bill Powers (2009.11.24.0003 MDT) --

BA: Bruce Abbott (2009.11.23.2147 EST)

BP earlier: The A process and B process can't be defended by pointing to
the output curve they generate.

BA: No, but as a starting point, a good model will reproduce the phenomena
it is designed to account for. In addition, it should generate testable
predictions which differ from those of alternative models that also
reproduce the same phenomena. In this case, the Test for the controlled
variable would distinguish between the control-system and
non-control-system models.

BP: But a good model doesn't reproduce the data just by writing an
equation that reproduces it. It proposes a mechanism the operation of
which reproduces the data, which is a very different matter.

Solomon and Corbit do propose a mechanism, although much of it is
unspecified, hidden inside those boxes in their diagram. Inside those boxes
are labels hinting at what those hidden mechanisms are supposed to give rise
to, without saying how. In Solomon and Corbit's model, those are treated as
"black boxes," somewhat like the input and output function boxes in the
control-system diagram.

BP: The code I posted was not intended to be a control system and it
isn't one. It's just another way of generating that curve, and there
are others. There's no theory behind it, just as there is no theory
behind the opponent-factor way of generating that curve. It's just a
way to do it.

The "good model" to which I referred is a PCT-based model that I imagine
would account for the phenomena of concern to Solomon and Corbit (along with
a host of other related phenomena). Your post reads as though you believe
that I am attempting to defend Solomon and Corbit's model, or even the
little curve-generating "model" you provided as an example of how easy it is
to generate Solomon and Corbit's curves. In fact I have been arguing for
developing the PCT-based model as an alternative to Solomon and Corbit's
proposal. I don't have the time for it now, but to me it looks like a
worthwhile future project.

By the way, some readers may wonder why I say "the PCT-based model" rather
than "the PCT model." PCT gives a general picture of the overall system
(multiple layers of control systems organized into a hierarchy, reorganizing
system, and so on), but it does not delineate what specific control systems
are present nor how they may be organized. If you want to explain a given
behavioral phenomenon, you have to develop a specific model that is
consistent with the general PCT framework but incorporates elements relevant
to the problem. For example, to account for feeding behavior, one must
identify what variables are under control that relate to feeding, what
outputs their control systems produce, and how the various systems are tied
together to produce observed cycles of feeding and the effects of various
disturbances. There is no one "PCT model" of this; different versions can be
developed and tested against the data. To qualify as a PCT model, however,
any given proposal must be consistent with the overall PCT framework.

BP: Let me turn the argument around. I have a theory of emotion that is
based on control theory, and in fact on the whole PCT model, as well
as on facts from physiology and body chemistry -- the same facts all
other theories have to deal with. This theory shows why it is that
emotions have aspects that resemble the experiences we get from doing
various kinds of control processes, why emotions are so much more
intense when the action connected with them is prevented from
correcting an error, why the physiological states accompanying
emotions are so similar across certain experiences to which we give
different emotion names, and why many of those physiological states
arise even when we're just exercising or doing something that
requires strenuous action, and don't identify the feelings as
emotional (though sometimes some people -- say, football players --
confuse the feelings that arise from strenuous action with emotions
and become angry with opponents or even inanimate objects).

BP: How does the opponent-process model explain those phenomena?

It wasn't designed to. But it does propose some elements that might be
considered for incorporation into a PCT-based model. Specifically, I'm
thinking about the possibility that "emotions" are unidirectional control
systems that in some cases may act to oppose each other in the way suggested
by Solomon and Corbit. But without a model and some data to test it against,
I'm just speculating of course.

BP: How does the idea that there are separate emotion-systems developed over
evolutionary time explain these phenomena?

It doesn't. How a given brain-system develops is a separate question from
how that system is structured. PCT proposes that the structure develops in
response to intrinsic error, through the random variation and selective
retention of the reorganizing system. The evolutionary view proposes that
the basic structure develops as a result of random variation and selective
retention of structures that improve reproductive fitness. Such a view does
not exclude the possible influence of learning within the systems so
created. Neither proposal explains the phenomena in question, because they
address a different question: how did these systems develop?

BP: How does any existing theory explain how a person can get angry with a
computer when it refuses to make a
program run correctly?

There are current "theories" (I use that term loosely here) that explain it.
For example, Plutchik says that anger arises when one perceives that one is
being blocked from achieving a goal that one is trying to achieve. Sounds
like the operation of a control system to me.

BP: But most important of all, how does any other theory of emotion
integrate into an overall theory of how behavior works? As far as I
know mine is the only one that does, and it is certainly the only one
that is integrated with PCT.

You have yet to describe how the PCT model of emotions explains the
"positive" emotions. I'm not suggesting here that it can't, but it does seem
to me to be a more difficult case to handle. Love, for example.

BP: If you want to know how the phenomena mentioned by Solomon and Corbit
are accounted for by PCT, you will have to set up a model that has
emotions. I highly recommend that you and any other interested
programmers do this. . . .

Well, at last. That's all I've been arguing for.

Bruce A.

[From Bill Powers (2009.11.27.0831 MDT)]

Bruce Abbott (2009.11.24.1000 EST) –

BP earlier: But a good model doesn’t reproduce the data just by writing
an equation that reproduces it. It proposes a mechanism the operation of
which reproduces the data, which is a very different matter.

BA: Solomon and Corbit do propose a mechanism, although much of it is
unspecified, hidden inside those boxes in their diagram.

BP: The point is not what behavior they give rise to, but how they do it.
We can draw a model consisting of one box called “Track
Function”, an arrow going in labeled “target position”, an
arrow coming out of it labeled “cursor position” and a graph
off to one side showing the cursor position closely tracking the target
position. But how this tracking is produced is the crucial point. One
model simply proposes that the target position is sensed and converted
into arm movements that make the mouse move the same way the target
moves. Engineering psychologists drew their diagrams showing the target
position and cursor position (on a display screen) entering a summing
point (still on the screen), and a “tracking error” coming out
of the summing point and entering the “human operator.” The
arrow coming out of the human operator became the cursor position, so the
human operator was a simple transfer function converting the
error-stimulus into the output-response, as most scientists thought it
worked back then. The summing point was shown as being in the
environment.

So just showing input-output relationships of the whole system is not
enough to make it a PCT model, or even close to one.

BA: Inside those boxes are
labels hinting at what those hidden mechanisms are supposed to give rise
to, without saying how. In Solomon and Corbit’s model, those are treated
as “black boxes,” somewhat like the input and output function
boxes in the control-system diagram.

BP: At some point we all have to resort to black boxes. In PCT they are
the input function, comparator, output function, and feedback function.
But in the PCT model, these little black boxes are connected in a circle.
In the Solomon and Corbit model an input causes an output through a
model consisting of two different ways of responding to the input, with
one response subtracted from the other to produce the net output
response. The output response doesn’t affect the input, so this is not a
feedback system, much less a control system. The most we could say
would be that the S&C model, taken as a whole, is an input function
showing (partly) how emotions, or maybe just the feeling components, are
perceived. What is not shown is how the actions of the control system,
via the somatic branch, are causing those feelings, while the behavioral
branch carries out the motor behavior that is drawing on the resources
supplied by the changed physiological state (in the case of an emotion
calling for increased motor activity).
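
The arrangement described in this paragraph reduces to a few lines. The
sketch below is only an illustration of that structure (the time constants
and other values are assumptions, not Solomon and Corbit’s published
parameters): two responses to the same input, the slower one subtracted
from the faster one, and the net output never fed back to the input.

    dt = 0.01
    a_tc, b_tc = 0.1, 2.0       # the b-process responds much more sluggishly
    a = b = 0.0

    for n in range(int(10.0 / dt)):
        t = n * dt
        stimulus = 1.0 if 1.0 <= t < 5.0 else 0.0
        a += (stimulus - a) * dt / a_tc      # fast response to the input
        b += (stimulus - b) * dt / b_tc      # slow response to the same input
        net = a - b                          # net affect: peak, decline, rebound
        if n % 50 == 0:
            print(f"{t:5.2f}  {net:7.3f}")

Nothing in the loop senses or corrects the net output, which is the point
being made here: input-output only, no control.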

BP earlier: The code I posted was not intended to be a control system and
it isn’t one. It’s just another way of generating that curve, and there
are others. There’s no theory behind it, just as there is no theory
behind the opponent-factor way of generating that curve. It’s just a
way to do it.

BA: The “good model” to which I referred is a PCT-based model that I
imagine would account for the phenomena of concern to Solomon and Corbit
(along with a host of other related phenomena). Your post reads as though
you believe that I am attempting to defend Solomon and Corbit’s model, or
even the little curve-generating “model” you provided as an example of
how easy it is to generate Solomon and Corbit’s curves. In fact I have
been arguing for developing the PCT-based model as an alternative to
Solomon and Corbit’s proposal. I don’t have the time for it now, but to
me it looks like a worthwhile future project.

I agree. I have sketched what mine would be; the “opponent
process” aspect would show up in the highest level of behavioral
control system involved, in cases where there is an undershoot following
the end of an energetic but transient action. Not all emotional episodes
end in an “opposite” emotion, though some do. I can imagine
that in control systems having a small amount of instability, there could
be several damped oscillations after a brief episode. Also, if a control
system became continuously unstable and started spontaneous oscillations,
the emotional state could also oscillate continuously between exaggerated
states of preparedness and retreat or inactivity – depression. A control
system in a continuing monopolar positive feedback situation could pin
the activity and the emotional state at either extreme.
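
That undershoot-and-ring pattern is easy to see in a loop with an extra
lag in its output. The sketch below (purely illustrative parameters) is
slightly underdamped: when a brief disturbance arrives, and again when it
is removed, the controlled variable overshoots and oscillates a few times
before settling.

    dt = 0.01
    gain, lag_tc = 8.0, 0.5        # loop gain and output lag (illustrative)
    output = rate = 0.0

    for n in range(int(12.0 / dt)):
        t = n * dt
        disturbance = 3.0 if 2.0 <= t < 4.0 else 0.0
        cv = output + disturbance
        error = 0.0 - cv                              # reference for cv is 0
        rate += (gain * error - rate) * dt / lag_tc   # lagged output slowing
        output += rate * dt
        if n % 25 == 0:
            print(f"{t:5.2f}  {cv:7.3f}")

Lower the damping (for instance by raising the gain) and the ringing
persists longer, approaching the continuous oscillation described above.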

BA: By the way, some readers may wonder why I say “the PCT-based model”
rather than “the PCT model.” PCT gives a general picture of the overall
system (multiple layers of control systems organized into a hierarchy,
reorganizing system, and so on), but it does not delineate what specific
control systems are present nor how they may be organized.

BP: Good point worth mentioning. Sometimes when I say “I haven’t
modeled that behavior yet” people look puzzled and ask if I don’t
already have a model of behavior. Isn’t PCT a model of behavior? Your
point is that it’s not a working model until a lot of specific
details have been filled in as appropriate to a particular
situation.

BA: Specifically, I’m thinking about the possibility that “emotions” are
unidirectional control systems that in some cases may act to oppose each
other in the way suggested by Solomon and Corbit. But without a model and
some data to test it against, I’m just speculating of course.

BP: I don’t think any “opposition” will be necessary, in the
sense of two different emotional states at the same time. One goal
(“attack”) can be replaced by another (“flee”),
particularly if the physiological states involved are close to being
identical. There can be conflict between goals, or a switch from one goal
to another that can be thought of as “opposite” to it, but I
think there’s only one physiological state at a time. Opposition implies
something about the direction of action, and that belongs to the
cognitive aspect of emotion rather than the physiological. Physiology
doesn’t know anything about directions. That’s the business of the
hierarchy of perception and control.

BP earlier: How does the idea that there are separate emotion-systems
developed over evolutionary time explain these phenomena?

BA: It doesn’t. How a given brain-system develops is a separate question
from how that system is structured.

The development wasn’t what I was asking about, though I didn’t make that
clear. It’s the idea of a separate emotion system that I was thinking of.
My model proposes that emotion is the way we experience all active
goal-seeking behavior. The feeling states simply accompany that behavior.
Some behavioral systems are inherited, but the feeling-states that go
with them are the same as they are for learned behavior. The amygdala and
hypothalamus are lower level systems in the somatic branch, and are
involved in all behavior whether inherited or learned.

BA: PCT proposes that the structure develops in response to intrinsic
error, through the random variation and selective retention of the
reorganizing system. The evolutionary view proposes that the basic
structure develops as a result of random variation and selective
retention of structures that improve reproductive fitness. Such a view
does not exclude the possible influence of learning within the systems
so created. Neither proposal explains the phenomena in question, because
they address a different question: how did these systems develop?

BP earlier: How does any existing theory explain how a person can get
angry with a computer when it refuses to make a program run
correctly?

BA: There are current “theories” (I use that term loosely here) that
explain it. For example, Plutchik says that anger arises when one
perceives that one is being blocked from achieving a goal that one is
trying to achieve. Sounds like the operation of a control system to me.

BP: Do you mean that Plutchik’s description sounds as if he is identifying
this as the operation of a control system, or (as I would say) that he
missed seeing that it was a control system, whereas you are pointing out
that it suggests a control system in operation?

Being blocked from achieving a goal one is trying to achieve normally
results in an increase of effort, which would generate arousal of the
physiological systems. But to what does Plutchik ascribe the cause of the
feeling states?

Why should being blocked from achieving a goal cause any feelings? It’s
just a fact, isn’t it? It seems so obvious to me that “trying”
to achieve something calls for cranking the physiology up to support the
action, and that explains why especially strong feelings arise in that
case. But why do somewhat less strong feelings also arise when we’re not
being blocked? And why does a cognitive assessment of a situation and a
decision to act result in feelings? My model offers an explanation, but
the standard model, it seems to me, is vague on this problem. Either it’s
vague, or it assumes that the feelings come first, produced by some
emotion-system as a warning.

BP earlier: But most important of all, how does any other theory of
emotion integrate into an overall theory of how behavior works? As far
as I know mine is the only one that does, and it is certainly the only
one that is integrated with PCT.

BA: You have yet to describe how the PCT model of emotions explains the
“positive” emotions. I’m not suggesting here that it can’t, but it does
seem to me to be a more difficult case to handle. Love, for example.

The feelings are not hard to account for, are they? Others have suggested
that positive feelings can also be identified as a negative rate of
change of unpleasant feelings – a bad feeling going away.
What is hard to define are words like “pleasure” and
“displeasure.” A perceptual signal has no value attached to it;
it’s just a row of blips, a factual report of the magnitude of some
variable. Any value is given to it by the control system that receives
it.
If there is a positive reference signal for the perception and the
perception is less than the reference, we feel an urge to experience more
of it, and I suggest that is the kind of perception we call pleasant. We
seek more of the perceptual signal because it is pleasant, and it is
pleasant because we seek more of it. What we mean by pleasant is
something we want more of, or a large amount of. Similarly, if we want
less of a variable or none of it and it is decreasing, the decrease is
perceived as pleasant because we feel an urge to experience less of the
variable and that is what is happening. Conversely, a variable that is
higher than its reference level is unpleasant, and if it is getting still
higher, it is even more unpleasant. So we can interpret emotions in terms
of what the reference level is for the related controlled variable and in
terms of the condition and rate of change of the controlled variable. We
try to change variables in the direction of their reference levels, and
the feeling of wanting to control in the appropriate way is the measure
of the quality or value of the perception.
This theory also explains why it is that there can be such a thing as
too much of a pleasant signal. It is a pleasure to come out of the
cold weather into a warm house. But if the thermostat is set too high,
that is soon felt as an unpleasantly hot house – a temperature above the
reference level. Practically any sensation becomes unpleasant when it
exceeds the reference level for its preferred value, no matter how
acceptable or pleasurable in lower amounts. Control systems don’t just
try to get perceptions into an “acceptable range.” They try to
keep the variable at one specific level, and either an excess or a
deficiency is unpleasant to some degree. An interesting corollary of all
this is that some error signals are interpreted as confusingly pleasant,
in that we want to increase the perception that is lower than its
reference.

In any case, I think we can deduce the value of a controlled variable
from the kind of feedback effects of behavior that we observe. If
behavior tends to increase the magnitude of the variable, it is perceived
as pleasant; if the effect is to decrease the variable, it is
unpleasant.

BP earlier: If you want to know how the phenomena mentioned by Solomon
and Corbit are accounted for by PCT, you will have to set up a model
that has emotions. I highly recommend that you and any other interested
programmers do this. . . .

BA: Well, at last. That’s all I’ve been arguing for.

BP: That’s what I’ve been doing all along, though I haven’t got very far
with it. The ideas in this post are just a few pieces of a theory that
are lying around on shelves waiting to be assembled.

Best,

Bill P.