[From Bill Powers (961017.0150 MDT)]
Bruce Abbott, Martin Taylor (various posts) --
I've been having the uncomfortable idea that I've forgotten something or am
contradicting myself on the subject of emotion. I woke up quite agreeing with
Bruce, at this hour of the night, that there is overwhelming evidence that
emotions often clearly occur by themselves, just as if they were inherited.
One problem with descending into a tranquil old age is that I don't really
have any important big emotions very often other than brief episodes of
moderate annoyance and a few leaky tears about happy endings to stories, and
so I forget many of the things I have thought about them (and how I got
there) when I was closer to them. A few dusty old ideas have shown up, and I
thought I should talk about them.
1. Error signals and feelings.
When I replied to Martin the other day, I was too concerned with the
question of emotion occurring in degrees, and neglected to pick up his cue
about the relation of emotion to reorganization. Martin is right, of course,
in saying that reorganization is likely associated with emotion and that
emotion probably indicates some degree of intrinsic error. And Bruce is
right in saying that emotions betray the existence of a pre-existing
organization which is concerned with threats to well-being. With regard to
Bruce's argument, I got hung up on the subject of how an inherited
emotion-system, mechanically, could hook into the learned hierarchy in which
there is no way to predict what particular signals will mean, and I
overlooked the obvious solution to the problem, which when I mention it will
probably have Bruce (and maybe Martin) smacking his forehead, too.
First, I should emphasize that we're talking here mainly about the "bad"
emotions. It's harder to grasp what the "good" emotions are about, although
we can make some sense of them in terms of bad feelings going away, or in
terms of ordinary feelings that accompany successful action. But I'll just
focus on the "bad" ones here.
I'll stick with the idea that in all ordinary behavior, there are feelings
that accompany preparation for action -- error signals in the learned
hierarchy that drive action. Mary said some things that made this clearer in
my mind, however, notably that having these feelings doesn't necessarily
imply that any reorganization is going on. If it did, we'd start falling
apart every time we worked up a sweat. The somatic systems are an intrinsic
part of all behavior, and we sense them as they respond to the demands made
on them by the behavioral systems. We feel different when we are sitting and
reading from the way we feel when we are pushing a lawnmower or whacking
away with a hammer or an axe, or running a race. Could we agree to call
those visceral somatic sensations that accompany actions "feelings?" If we
do that, we can reserve the term "emotion" to mean something else. "Arousal"
might be another term for what I mean by feeling, except I don't like the
connotation of external causation and the term is too general.
The important thing about "feelings" is that they do not indicate that
anything has gone wrong. Quite the opposite: they indicate that everything
is working right. That's another thing that Mary said. If you went out to
mow the lawn and after a few minutes weren't breathing harder, heart pounding
faster, perspiring a bit, and feeling generally jazzed up, you
wouldn't get many rows done. That's how things work; when the brain
initiates a task that requires physical action, it sends signals to the
somatic systems that raise their levels of activity in the ways required to
support the action. To some degree, the somatic systems automatically
respond to increased demands on their various products. That sort of
connection is as built-in as the connection of the spinal cord to the muscles.
But that's not the emotion system. The emotion system, at least with regard
to bad emotions, responds not when things are going right but when they're
going wrong. That's where the association with reorganization comes in; we
also reorganize when things are going wrong. This makes it look very much as
if the "emotion system" is what I have called the "reorganizing system."
2. Emotion and reorganization
Now the big question becomes "How do we, or something in us, know when
something is going wrong?" The very concept of "wrong" implies an
evaluation, a comparison of what IS going on with what SHOULD BE going on.
It implies a reference standard against which the current state of affairs
is judged as OK or NOT OK.
But it's asking a lot of an inherited system that it know WHY something is
going wrong -- that it be acquainted with the details of the current
environment, make predictions about them, evaluate them in terms of criteria
of good or bad. Processes like that sound suspiciously like the things our
learned hierarchy is supposed to be for. If the emotion system already can
make use of sensory perceptions and the higher-level interpretations that
are put on them by the brain, and can act to correct errors, why do we need
ANOTHER learned hierarchy in the brain? The emotion or reorganizing system
already understands the world and can deal with its effects all by itself.
The learned hierarchy is superfluous.
Well, that's rhetoric, but here's some reasoning.
It's asking a lot of a built-in system that it know about things going wrong
in the external world. Even when that seems to be the case, there may be a
considerably simpler explanation than invoking normal perceptual
interpretations. So what CAN we say that a built-in system can reasonably
and most easily know about? The answer is obvious: it can know about the
state of the system, or important parts of the system. Organisms of a given
species are pretty similar at the somatic level, all having the same organs
and chemistry. So an inherited system could, without our stretching the
imagination, have sensors that detect the states of certain essential
variables (W. Ross Ashby's term) which are tied to critical functions of the
somatic systems (and perhaps other things, but we'll get to that later).
So we have an evaluation ("wrong") that implies a reference signal, a
comparator, an error signal, and an output function, or a set of them. We
have a set of control systems concerned with maintaining certain essential
variables in particular states.
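That set of components can be sketched in a few lines of code. This is only an illustration of the loop structure, with made-up numbers and names, not a claim about any actual neural implementation:

```python
# Minimal sketch of one intrinsic control system: a reference signal, a
# comparator producing an error signal, and an output function acting on
# an essential variable in proportion to the error.  Gain and values are
# illustrative assumptions.

def comparator(reference, perception):
    """Error signal: the difference the system acts to remove."""
    return reference - perception

def control(reference=37.0, essential=30.0, gain=0.2, steps=50):
    """Run a simple proportional control loop; return the essential
    variable after the loop has acted on it."""
    for _ in range(steps):
        error = comparator(reference, essential)
        essential += gain * error   # output function corrects the error
    return essential
```

After a few dozen iterations the essential variable sits at the reference; the comparator never needs to know what the variable means, only the size of the difference.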
Since there's quite a range over which the somatic systems can function
during normal operation, the control systems will probably be one-way
systems: systems that detect when an essential variable is above some
acceptable limit, or below some acceptable threshold. That's exactly what
Ashby proposed, and built a model of with his "Homeostat."
All that's left is to figure out what the output function is, and does. In
Ashby's Homeostat, it rotated a bunch of stepper switches that altered the
circuitry of his device more or less at random, ceasing when the essential
variables, which were the positions of four galvanometer needles, were once
again within an acceptable range. Ashby invented the "systematic adaptation
through random reorganization" idea, which I lifted intact for my own
reorganizing-system model.
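Ashby's scheme can be caricatured in code. Here a single parameter and uniform random steps stand in for his stepper switches (everything numerical is an illustrative assumption): change at random while the essential variable is out of range, and cease the moment it is back in.

```python
import random

def homeostat_style(seed=1, low=-0.5, high=0.5, max_trials=100_000):
    """Random reorganization: tweak a parameter blindly until the
    essential variable (here simply equal to the parameter) is back
    within its acceptable range; then stop changing anything."""
    rng = random.Random(seed)
    param = 1.0                  # starts outside the acceptable range
    trials = 0
    while not (low <= param <= high) and trials < max_trials:
        param += rng.uniform(-1.0, 1.0)  # like rotating a stepper switch
        trials += 1
    return param, trials

param, trials = homeostat_style()
```

The search is "systematic" only in the sense that it stops on success; the individual changes carry no knowledge of which direction is better.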
In a human being, the output function has to be a bit more complex than
that. I've never tried to find a more efficient system than random changes,
being lazy and anyway not very good at designing complex mathematical
analyses. But Bruce Abbott has brought in the subject of emotions in
connection with things going wrong, and with them the subject of _built-in
emotional reactions_. Obviously, if bad emotions are a sign that something
is wrong, they are also a sign that the reorganizing system contains error
signals and is producing a set of outputs. So how could those outputs be
connected to produce specific behaviors? They could be connected to the
reference-signal inputs of any behavioral control systems that exist at the
time. And there we have one main feature of Bruce's emotion system as he has
described it so far.
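The hookup is almost trivial to sketch (the quantities and gains are invented for illustration): the reorganizing system's output is just a signal, and here it is simply wired to the reference input of an ordinary behavioral control loop.

```python
# Intrinsic error routed to a reference input.  Numbers are made up;
# the point is only the wiring.

def intrinsic_error(nutrition, lower=50.0):
    """One-way monitor of an essential variable (the hunger side only)."""
    return max(0.0, lower - nutrition)

def behavioral_loop(reference, perception=0.0, gain=0.5, steps=40):
    """An ordinary control loop.  It neither knows nor cares that its
    reference signal happens to come from the reorganizing system."""
    for _ in range(steps):
        perception += gain * (reference - perception)
    return perception

# big intrinsic error -> large reference -> vigorous action (crying)
loud = behavioral_loop(reference=intrinsic_error(nutrition=20.0))
# no intrinsic error -> zero reference -> no action at all
quiet = behavioral_loop(reference=intrinsic_error(nutrition=80.0))
```

What gives the signal its meaning is only where it enters: the reference input of a working control system.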
It makes great sense that when there are bad emotions, or as I call them,
intrinsic error signals, some behavior should ensue which at least tends to
do something that results in removing the error signals -- which entails, of
course, bringing the essential variables back within their normal ranges of
operation.
Let's think about human babies. When a baby is born, there are some
low-level kinesthetic control systems in working order or close to it. They
have reference inputs, but there are no or few higher-level systems there to
provide any non-zero reference signals. The baby can resist disturbances of
its limbs, but it can't initiate any voluntary actions to speak of yet
because there's no higher level of control to demand them. According to the
Plooijs, that situation will last about one week. There can, however, be
inputs from the emotion or reorganizing system. There's no law that says the
output signals from the reorganizing system have to have ONLY reorganizing
effects. They are signals like any other neural signals, so they can just as
well serve as reference signals. What gives a reference signal meaning is
nothing about the signal; it is the fact that it enters the reference input
of a working control system.
So what does a newborn baby do when its reorganizing system detects errors?
It cries. It bellows. That's a built-in hookup; the output signals of the
reorganizing system, or emotion system, are genetically routed to the
reference inputs that cause the muscles of respiration and the vocal cords
to become tense simultaneously during the exhale part of the cycle of
breathing. Also many of the other muscles in the body go into tetanus,
creating rigidity and even oscillations, tremors. We can almost SEE where
those intrinsic error signals are going. They're going just about everywhere
they _can_ go at that stage of development. The baby turns bright red.
This, as it turns out, is a very effective way of correcting intrinsic
error, because the adults watching the performance can't stand seeing it,
and try everything until it stops. The obvious thing to do is feed the baby.
The baby is feeling hunger for the first time; something that monitors its
internal nutritional state has been awakened by a big error, and fortunately
it is connected to do something about it. That's what evolution has done for
us, on both ends of the relationship of child to adult. The baby has an
inherited reference signal for the lower threshold of its own nutritional
state; the adult has an inherited reference level of ZERO for hearing babies
cry. That particular sound, the raspy screechy loud unpleasant blast, stirs
a gut feeling that the adult will do anything to turn off. A mother bird
must feel something similar about that awful cheeping sound that can only be
stopped by stuffing worms into it.
Birds, apparently, are born knowing what to do about that awful cheeping
sound, but human beings aren't. I've heard stories about incredibly ignorant
teenage mothers rushing their babies to the emergency room because they
won't stop crying. The mother knows that this crying has to be stopped, but
has to be told, "Feed it, dear, it's hungry." Or "Change its diapers more
often." Or "Try not to pin the baby into its diaper through its skin." Some mothers,
driven frantic by that terrible sound and being bereft of cultural
knowledge, kill their babies to make it stop. That works, too.
It occurs to me that there's a big contrast between a baby horse and a baby
person, aside from the fact that the baby horse can stand up and nurse right
away. The baby horse doesn't cry. While reading emotions on a horse face is
difficult, baby horses look rather impassive. Why? Because they don't need
to cry. They can just stand up and find the food. They don't need anyone
else to help them. Give them a warm body and the smell of milk, and they'll
take care of the rest themselves. But that's also why they can't learn much
in comparison to a human being.
OK, now let's turn to what happens after the early days. Bruce Abbott wants
to say that the emotion system starts making use of the hierarchy to control
for ever more complex "threats." But I think this is at least partly a step
down the wrong path. Remember that the whole point of the reorganizing
system, which I am now identifying with Bruce's emotion system, is to
produce outputs that remove the intrinsic error. If there is no intrinsic
error, the reorganizing system will go back to sleep.
So what can keep the reorganizing system asleep? The learned hierarchy of
control systems. The whole point of building the learned hierarchy is to
PREVENT INTRINSIC ERROR. And with my identification, that means that its
point, its principal effect, is to PREVENT BAD EMOTIONS.
Here's one of those dusty old ideas that came back to me tonight. Suppose
you've built up a hierarchy of control systems which keeps almost all
intrinsic errors within their reference limits very successfully in almost
all situations. You will sail serenely through life, enjoying the good
feelings that mean you're acting successfully, sound body and sound mind in
a happy world. But what if you encounter a situation for which you're totally
unprepared? Suppose everything you try, every learned skill you apply from
the physical to the intellectual, fails? Suppose you're just overwhelmed by
disturbances you've never had to deal with before, or with a situation where
nothing you know how to do has any effect on the error signals. What do you do?
That same old hookup that got you fed, diapered, cuddled, petted, and dried
when you were a baby _is still there_. If the learned systems let the
intrinsic error signals get large enough, that hookup turns on just like
before and you do the same things you did before, and wait for someone to
take pity and fix the error for you. Your great big beautiful hierarchy has
let you down, and you are again a helpless baby. A psychoanalyst would look
down his nose at you and say that you have regressed to an infantile state.
He's right. It hurts real bad, and that's what you've done. But of course
you haven't gone anywhere, in space or time. It's just that the failure of
the hierarchy has allowed intrinsic errors to get big enough to turn on a
very old crude control system.
That general idea is intriguing. Suppose that the control systems we acquire
early in life are not updated or replaced or dismantled, but are simply kept
from seeing any error by more competent systems that we acquire later. The
more competent systems keep the same error too small for the old crude
control system to act upon. So we swan around showing off our new skillful
control system until something happens that just doesn't fit the capacities
of the new system, and suddenly the error gets large. What happens? The old
system, which is still there and has just been waiting for any error big
enough to see, lurches into action, and we do something clumsy and
relatively ineffective, but familiar. I consider that a very classy story.
Now here's the thing that's going to make Bruce slap his head. We don't
control for threats; we control for the EFFECTS of threats. If our learned
hierarchies using learned perceptions can recognize a situation that might
allow a threat to have some effect, and can act to prevent the effect, then
there is never any reason to feel an emotion. But if the threat, or our
preparations to meet it, results in _intrinsic error_, the threat does have
an effect, and that effect generates an emotion as the reorganizing system,
or emotion system, acts to remove it.
So we don't need the emotion system to have inputs from the perceptual
signals _anywhere_ in the hierarchy. The hierarchy itself was constructed
(among other purposes) to control all the variables that pertain to threats,
and keep them from going into states that would allow any actual threatened
effect to occur. The hierarchy simply keeps reorganizing until that is true,
just like Ashby's homeostat. That is just too simple and elegant to be
wrong. That is the least complex, least expensive way that evolution could
have done it.
Now what about intrinsic error signals of other kinds? Martin Taylor and I
have more or less converged to an agreement that the somatically originating
intrinsic error signals are not specific enough to assure reorganization of
high-level control systems where it is needed, and not where it is not
needed. What is needed is _local_ reorganization that is specific to the
control system that is having trouble (or the region of the brain where a
control system is needed).
My general principle remains: we have to consider what kind of thing an
inherited reorganizing system could sense that wouldn't require too much
knowledge about the brain or the world. We can't inherit knowledge about the
present world; only about the average world over the past 100,000 years or
so. So being afraid of automobiles definitely can't be inherited.
In fact I don't think we could rely on an inherited system to know what any
signal in the learned hierarchy means -- except one: the error signal being
emitted by a comparator. In my brief studies of neuroanatomy, I found what
seemed to be indications of comparators, and at least in the brainstem
comparators seemed to be located together, in motor nuclei. They are places
where downgoing "command" signals meet feedback signals carried by
collaterals from the sensory neurons, with opposite signs, always. This
means that there is a reliable place where the signals that are present
indicate error in a control system.
The interesting thing about error signals is that they all have the same
meaning: a difference between a perceptual signal and a reference signal. It
doesn't matter what those signals are about; all that matters is that they
are not the same. So if there is a chronically large error signal, that
signifies only one thing: a control system in trouble. A large error signal
in the learned hierarchy is an intrinsic error signal, too. The implied
reference level for error signals, from the standpoint of a local
reorganizing system, is zero.
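A local reorganizing signal of that kind is easy to sketch (the decay constant and threshold below are invented): it watches nothing but one error signal, with an implied reference of zero, and fires only when the error stays large.

```python
# Local reorganization trigger: a leaky average of squared error for one
# control system.  Brief errors decay away; chronic error crosses the
# threshold and calls for reorganization of this system only.

def make_error_monitor(decay=0.95, threshold=4.0):
    """Return a monitor that fires when |error| stays large."""
    avg = 0.0
    def monitor(error):
        nonlocal avg
        avg = decay * avg + (1.0 - decay) * error ** 2
        return avg > threshold   # True -> reorganize locally
    return monitor

monitor = make_error_monitor()
# brief, ordinary errors do not trigger reorganization...
assert not any(monitor(e) for e in [3.0, -2.0, 1.0, 0.0, 0.0])
# ...but a chronically large error does
assert any(monitor(5.0) for _ in range(100))
```

Note that the monitor never sees a perceptual signal, only the error; it cannot know, and does not need to know, what the troubled system is controlling.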
If comparators are physically located in motor nuclei, which are output
functions, then a local reorganizing system would work primarily on the
output function. That doesn't leave us a way to reorganize input functions,
which can be physically quite far away, but we can't have everything at
once. Somebody will think of something.
This kind of reorganization would NOT involve emotions, unless as a
side-effect of the fact that the errors in one control system are abnormally
large for a long time. Any involvement of emotions would be indirect and
remote. The implied learning process would go on silently, with no
indication except for a gradual random walk toward better control and the
consequences thereof. My so-called artificial cerebellum model, by the way,
shows the feasibility of reorganizing an output function strictly on the
basis of the error signal that enters it. This model makes the reorganizing
process an inherent aspect of the output function itself.
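This is not the artificial cerebellum itself, but its flavor can be shown with a simplified stand-in (the hill-climbing rule and all numbers are my own illustrative assumptions): an output function that retunes its own gain using nothing but the error signal passing through it.

```python
import random

def mean_abs_error(gain, reference=10.0, steps=30):
    """Mean |error| of a simple proportional loop run with this gain."""
    perception, total = 0.0, 0.0
    for _ in range(steps):
        error = reference - perception
        total += abs(error)
        perception += gain * error
    return total / steps

def self_tuning_output(seed=0, trials=200):
    """Reorganize the output function locally: take a random step in
    gain, keep it if the running error shrinks, discard it otherwise."""
    rng = random.Random(seed)
    gain = 0.01                      # start as a sluggish controller
    best = mean_abs_error(gain)
    for _ in range(trials):
        candidate = gain + rng.uniform(-0.05, 0.05)
        if 0.0 < candidate < 1.0:
            score = mean_abs_error(candidate)
            if score < best:         # error went down: keep the change
                gain, best = candidate, score
    return gain, best

gain, err = self_tuning_output()
```

Nothing in the tuning rule knows what the loop is controlling; the error entering the output function is the only information used, which is the point of locating reorganization in the output function itself.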
I don't see any easy corresponding process for perceptual reorganization. I
guess that question will just have to hang.
Well, Bruce, what do you think of this suggestion? It goes along with your
model at least part of the way, although it doesn't let the emotion system
monitor any perceptions in the learned hierarchy. I don't think it needs to;
the logic of this proposal takes care of anything that such monitoring would
have accomplished. I hope you understand and agree with my strong principle
about what an inherited system can be allowed to sense; it has to be
something that is reliably there to be sensed, regardless of experience with
the external world, which is different for every individual.
Best to all,