[From Bill Powers (961014.0940 MDT)]
Bruce Abbott (961014.0930 EST)--
What I have proposed is that these systems are pre-organized to control
certain variables of paramount significance in the life of the organism.
I have always considered that kind of postulate a last resort, to be used
only when no alternative can be seen. Before concluding that ANY control
systems are built-in, I would spend considerable time and effort to see if
there isn't a way to account for the same phenomenon in terms of adaptive or
learning processes during maturation. Of course there are plenty of cases
where I simply have to give in -- it's damned hard to explain how a colt can
struggle to its feet and start nursing without assuming a pretty complex
built-in control system complete with perceptual systems and reference
signals, even given the motor practice that can take place during gestation.
In the case of human beings, my primary interest, it seems that very little
in the way of complex neural control is built in. What takes the place of
inherited control systems, in the human being, is a capacity to reorganize
with great efficiency and in an enormous number of ways. The colt may be
able to stand and walk an hour after birth, but the colt will never be able
to run, hop, skip, jump, and dance in the variety of ways that a human being
can. Outside the limits of a few natural gaits, horses become clumsy and
unskillful, with no sense of rhythm and no ability to produce other
movements with grace or competence (in comparison with humans). All the
nonhuman animals are like this: we are impressed with them as long as they
are behaving within the limits of their inherited systems, but as soon as
they are required to do anything outside their natural repertoires, they
look pretty clumsy.
Initially in the human infant the abilities of these systems to control are
not all that good, because many of the lower-level structures by means of
which control could be achieved are immature, even to the point of being
unusable for certain ends (e.g., the baby can't get up and run away when
frightened). Yet the system is there, and it is controlling its variable as
best it can. Furthermore, the types of input that will serve as
disturbances to those systems are at this early stage of development also
rather poorly developed and lacking in the appropriate knowledge base by
which they become able to appraise the situation in such terms as, e.g.,
familiar person versus stranger.
I can't buy this: that the variable to be controlled is being perceived from
the start, but just not being controlled very well. I don't think that
babies have any higher-level perceptions at all. I don't think they perceive
in terms of familiar persons or strangers in the beginning; this distinction
grows only slowly, and anyway I think the distinction is probably mis-named.
There may be some differences in interaction with people in these two adult
classifications, but the reason the baby learns to react differently
probably has nothing to do with these labels.
I think you should re-read what you said above about disturbances.
Disturbances affect the local environment that is being sensed; for any one
control system, they are known only in terms of unintended deviations of
existing perceptions from their reference levels. The effect of disturbances
on inputs doesn't "develop", nor is it necessary to sense the disturbances
themselves in order to learn to control against their effects. Perhaps you
just didn't express your meaning clearly.
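To make the point concrete, here is a minimal simulation sketch of a single control loop (the loop itself is standard PCT; the particular numbers and names are only illustrative, not anything taken from your post or mine). The disturbance acts directly on the environment, but the system computes nothing from it; it acts only on the difference between its perception and its reference, and the effect of the disturbance is opposed anyway:

    # Single control loop: the controller never "sees" the disturbance,
    # only the deviation of its perception from its reference signal.
    def run_loop(disturbance, reference=10.0, gain=50.0, slowing=0.01, steps=500):
        output = 0.0
        perception = 0.0
        for t in range(steps):
            qi = output + disturbance(t)    # controlled quantity = effect of action + disturbance
            perception = qi                 # unity perceptual input function
            error = reference - perception  # the only information the system uses
            output += slowing * (gain * error - output)  # leaky-integrator output function
        return perception

    # Whatever constant disturbance is applied, the perception ends up close
    # to the reference level of 10; the system "knows" the disturbance only
    # through the error it creates.
    print(run_loop(lambda t: 0.0))
    print(run_loop(lambda t: 5.0))
    print(run_loop(lambda t: -3.0))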
I see absolutely no reason why
improvements in the capabilities of either the motor or perceptual systems
through which an innate system must operate -- whether due to simple
maturation, to experience, or to an interaction of the two -- somehow
demand separate perceptual and motor systems for emotional and nonemotional
systems.
Well, I don't either, and in my model there is no difference. We learn to
perceive what we later come to call "danger," and this learned perception is
the only perception of danger that is needed in my model. It does not have
to be built in from the start; "danger" is simply a general word for
perceptions that we have learned to avoid. Since the particular perceptions
we learn to avoid are given that status through experience with them, there
can be no built-in system that knows in advance that it should have an
emotional reaction to the 20,128th neural signal from the left in an array
of cortical neurons. The perception of a general class of experiences that
is given the name "danger" is an outgrowth of long experience, an
abstraction of which evolutionary processes can know nothing. Anyway, I
think the word "danger" and other similar words simply reflect our cultural
misunderstanding of perception, so we attribute our own efforts to avoid
things to the external world, much as we attribute our own sense of effort
in lifting an object to the object's "weight."
On the contrary, whether I decide to run like crazy because,
Spock-like, I calmly and rationally decide that the automobile bearing down
on me will kill me and decide that I don't want this to happen, or because
the same evaluation disturbs a pre-organized fear system whose outputs
include altering certain lower-level references so that I start to run like
crazy, I make use of the same cognitive and motor structures -- only the
control elements orchestrating the whole performance differ.
Spock, as you will recall, tried to give the appearance of cold emotionless
logic, but neither he nor his father Sarek was actually able to behave
without emotion. The "pure Vulcan" is a myth, and the writers eventually
worked that out. If you don't rev up the somatic systems, you simply can't
undertake any strenuous activity. The most you can do is suppress the
outward evidences that others (and you) might interpret as signs of emotion,
but as Sarek eventually discovered with Jean-Luc Picard standing by, the
penalty for doing that is insanity and death.
Since you can't have "running like crazy" without generating the
physiological state required to support it, there is no need for different
control elements; the situation you're imagining never happens, and can't
happen. "Fear" is just a word, without much meaning or explanatory power. It
isn't a thing: it's a word for how we feel when we need strongly to get away
from something, and especially when we can't. You're invoking a separate
intervening variable here, which my system doesn't need. In my model, we can
explain the various emotions without making entities of them or giving them
any special powers.
... see my previous post, in which I
explicitly describe how interaction with the world is required to develop
such perceptions. I'm led to believe that you still haven't really read
that post; you've just picked sentences out of it you think you can raise an
argument against.
I have read your post; you simply haven't compared your words in one place
with your words in another. If interaction with the world is required to
develop a perception that the emotional system "monitors", then that
emotional system is not innate; it comes into being through experience with
the world. But if that is true, why do we need any separate emotional
system? Why can't the ordinary hierarchy of learned control systems perform
exactly the same function? I have shown how the basic learned hierarchy can
connect to exactly the same somatic adjustments that you mention, in the
same way it connects to the muscles. The only difference between our
conceptions is that you seem to propose the possibility of control behavior
without emotion, which I deny. I have simply identified the nature of
emotion differently -- but in a way that I think fits all the facts.
In the concept of emotion you offer, there is no emotion at all, only strong
muscular action accompanied by automatic physiological adjustments that
occur in response to (or concomitantly with) vigorous activity.
Why does this strike you as an inadequate account? Is there some mysterious
quality of emotion that has been left out? I have not left emotion out; in
fact, my system makes emotion a much more integral part of behavior than
yours does. What is missing from my account, other than a feeling that
emotion ought to be more important or special than I make it out to be? In
my model, the response of the somatic systems is not "automatic" -- it is
produced by the same hierarchical systems that produce actions, in very much
the same way, by varying reference signals. Emotions are an integral part of
intentional, purposeful, behavior.
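If it helps to see the shape of what I mean, here is a bare sketch of that branching (the names and numbers are purely illustrative, not a claim about actual neural quantities): one error signal in the learned hierarchy is distributed as reference signals both to motor systems and to somatic systems, so the preparation we later feel as emotion is produced in the same way the action is.

    # One higher-level error signal sets reference signals for BOTH the
    # motor systems and the somatic (biochemical/autonomic) systems.
    def higher_level_step(reference, perception, gain=2.0):
        error = gain * (reference - perception)
        motor_reference = error          # e.g., reference for running speed
        somatic_reference = abs(error)   # e.g., reference for heart rate, adrenaline
        return motor_reference, somatic_reference

    # A large discrepancy (the car bearing down while I want a large distance
    # from it) yields a large motor reference and a large somatic reference
    # at the same time; a small discrepancy yields little of either.
    print(higher_level_step(reference=100.0, perception=5.0))
    print(higher_level_step(reference=100.0, perception=99.0))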
What emotion systems? Your scheme doesn't have any.
But it does: they are, I repeat, an integral part of the whole system, not a
separate part of it. And in my system, they're not things, but aspects of
the functioning of a single system.
What I call "threat" is a disturbance to a dedicated, pre-organized system
monitoring the level of safety;
Don't you see the enormous complexity in what you propose? Somehow a
genetically structured system is capable of detecting an abstract quality of
a relationship to the world, a quality of which it knows without having had
any experience with the present world, and without benefit of a
fully-developed brain that can grasp such abstractions as "safety." You are
describing an _appearance_ in terms of a whole adult human brain that thinks
in such ways. It's _as though_ there were some system concerned with safety,
just as it's _as though_ organisms were concerned with survival. But this
appearance has to be created through more detailed processes, none of which
can be concerned with an abstraction.
... threats are disturbances that push the
perceived level of safety in the unsafe direction, creating error in the
system. The experience of the emotion arises from the actions aroused by
this error, both from bodily sensations arising from sympathetic and
hormonal activation and (more importantly) from the consciously-felt strong
impulse to withdraw, to flee.
Now our tracks converge again. The only difference is that you treat
"safety" and "threats" in the abstract, and give a built-in system the
capacity to recognize them. My system deals with them as particulars;
particular disturbances, particular reference levels for particular
perceptions, and particular errors that drive both action and somatic
preparation. If we made a list of all the particular cases of this kind, we
could step back and generalize to "threats to safety," but it is not
necessary OR sufficient to do this. A "threat to safety" does not require
some general kind of action that will oppose some general kind of threat; it
always requires a _specific_ action to restore a _specific_ perception to
its reference level. If I said to you, "Look out, there's a threat to your
safety," what would your next move be? The chances of its being appropriate
are zero. But if you see a large object approaching you on a collision
course, you know all you need to know in order to step out of its path.
Knowing in addition that there was a threat to your safety would add nothing
at all to your success in _achieving_ the general condition we label "safety."
An electrode implanted in the brain that
merely set your reference for running to maximum would only produce the
cognitive sensations of running (and perhaps of wanting to run, no matter
what the state of physiological arousal); it would not produce fear.
That's a made-up "example" which simply follows from your assumptions. As I
recall from Penfield's experiments, stimulation which produces physical
sensations of fear also produces an intense desire to get away. You can't
separate the two sides of emotions that way. In fact, if you stimulate just
the right part of the system, you can produce paradoxical effects, which are
_recognized_ as paradoxical. An injection of epinephrine gives some people a
strong feeling of fear -- which they find very puzzling since they are not
actually afraid OF anything. We're so used to experiencing the motor and
biochemical aspects of emotion _at the same time_ that to experience the one
without the other is very confusing.
How do you recognize your own face? Your wife's voice on the phone? The
style of writing that says this piece was written by Bruce Abbott? My use
of the term "perceptual filters" was not intended to suggest that
perceptions are pre-existing entities that are "trying to get through," any
more than the audio signal impressed on the carrier of your FM radio
station's signal is a preexisting entity that is "trying to get through"
your radio.
OK. Your words were a red flag for me, since most of the people who speak of
perceptual filters seem to believe exactly what you say you don't believe. I
think it was Robert Galambos who said that the function of perceptual
filters was to keep too many impulses from bombarding the cortex. You have
to be careful whose verbal company you keep.
Speaking of recognizing faces, I have a running fight with Mary that's been
going on for decades. I will see somebody on the news or in real life,
and say "Hey, he looks just like X", where X is some movie star or other
famous person. Mary will look at me with pity and say "No, he doesn't."
There is nothing, so far as I am aware, to prevent a reference signal input
to some lower-level system from being simultaneously connected to more than
one higher-level system, nor from the lower-level system being indirectly
activated by disturbances to its controlled variable produced by, e.g.,
motor activity in another system. The adjustments of which you speak can be
produced both by coordinated alteration of motor and somatic reference
levels and by the side-effects of motor activity disturbing the somatic
system.
There's an old principle of parsimony that says if you have one adequate
explanation for a phenomenon, you don't need a second one. In my model of
emotions, only one hierarchy of control is needed, whereas you are proposing
two to accomplish the same result.
I've known that the brain is an integrated system since long before I
learned about PCT. When you described the bottom-level control systems for
muscle length and tension, you forgot to mention that bit about the whole
brain being a system and that therefore you can't localize a function to any
specific part. Instead, you suggested that its major components are to be
found in the spinal cord and cerebellum.
The only firm proposal I've made concerns the spinal cord, where the
circuits for the first level of motor control have been completely traced,
both anatomically and functionally. My "artificial cerebellum" is only a
suggestion, and is not based on functional circuit analysis, although the
anatomic arrangements around the Purkinje cells are suggestive. Nothing else
I have said about localization of function in the brain is to be taken as
anything more than a suggestion for research.
If you place stimulating electrodes
any lower, all you get is coordinated action (e.g., controlled running)
without the other observable components of emotional activation. Stimulate
higher up, and you get a rat that _looks_ emotionally aroused ...
The controlled running that takes place must run out of steam pretty
quickly, if what you say is right. It would have to operate with the
existing levels of adrenaline, the amount of glucose currently circulating,
the breathing rate as it was before the running, the same heart rate as
before, and the distribution of blood supply that existed before.
As to a rat that "looks emotionally aroused," I would put that pretty low
down the list of evidence as to what is actually going on in the rat. Maybe
when the electrode is higher up, it hurts. Maybe you've found where the
downgoing reference signals bifurcate into the behavioral and somatic
branches. You could interpret such a sketchy experiment about any way that
fits your previous ideas.
Do the same in a human; only when
stimulating in certain areas of the diencephalon do you get subjective
reports of fear, anger, and so on.
ONLY then?!!! I think you mean that IF you stimulate certain areas of the
diencephalon, you get reports of fear, anger, and so on. Penfield reported
similar experiences from stimulation of the temporal cortex. Did your
experimenters systematically test all neurons at all levels in the brain? I
doubt it. You don't go poking electrodes around in live human brains like that.
Who knows whether you're simulating a perceptual signal normally supplied by
a lower system or a reference signal that is normally supplied by a higher
system? Who knows whether the report of an emotion is being made by a person
who is able to distinguish the cognitive component from the somatic
component? And as to the "only", I've already described another way in which
sensations can be aroused which some people label as fear. Flimsy evidence.
O.K., so we judge [babies' emotional states] differently. That doesn't
make your judgment any better than mine.
Of course not, but it does make it an adult judgment. Adults commonly
project their own adult sentiments onto babies, attributing complex
attitudes like jealousy, disgust, or even slyness to them.
Why do you think I place quotes around the word "plan"?
I don't know. Why do you use the word at all, if your intent is to
communicate that you DON'T consider the genetic code to be a plan?
I know how development of the brain takes place, at least in broad
outline, and the result is a set of recognizable structures all wired up in
pretty much the same fashion from person to person, although there are
occasional wiring errors and some range of genetically (and environmentally)
determined variation.
I think you greatly underestimate the variation in the "wiring." A few years
ago, Gary Cziko posted some citations of detailed neuroanatomical
investigations concerned with this very matter, and a few others contributed
more information. There are areas in the human motor cortex where the
mapping to extremities simply doesn't stay within the traditional boundaries
(the famous and highly imaginative "homunculus"), and where functions from
one "area" are found in others instead. Even in insects, where one would
expect the most pre-programming and uniformity (the case Gary brought to our
attention), comparisons of well-known ganglia (or something) among half a
dozen individuals of the same species and age showed no striking
resemblances in the gross anatomy and the connections. The human mind loves
to generalize, and it often does so spuriously.
There are roughly 5000 neurons per cubic millimeter in the human brain, and
far more connections than that. Even drastic variations in physical
connections or functional relationships would be totally undetectable by any
means we have available. The fact that we can see gross variations shows
that the differences among individuals are humungous. Your statement that
the structures are wired up pretty much the same is a conclusion from a
theoretical assumption, not an observation.
There is no plan (no quotes), but there is
a developmental sequence which under ordinary conditions (many of them
brought quickly under _control_ within the developing organism) leads to the
generation of very specific structures organized in specific ways, so much so
that it is possible to construct a detailed neuroanatomy of "the" human
brain that will be essentially correct in its major "design" (note the
quotes) as a map of anyone's normal human brain, including yours.
You're telling me what everyone ASSUMES, but I don't think it's true. The
gross similarities are truly gross, and are based mostly on visual
appearances in brain preparations. The differences among brains that matter
most are on a scale of microns, not centimeters, and they depend even more
strongly on the values of physical parameters which are not even observable
by visual inspection. Present-day investigations of neural function are
extremely crude; until very recently they didn't even involve measuring
transfer functions. Most of the impulse-chasing has ended up showing us
only what is connected to what in a few individuals, mistakenly assumed to
be representative. Suppose someone studies connections in the visual
pathways of "the octopus," meaning the one that was studied. Can you imagine
how hard it would be to verify that you are looking at the "same place" in
the brain of _another_ octopus? How easy it would be to start exploring at a
place 100 or 1000 neurones away from the same (functional) place in a
different brain, assuming that there IS a "same place"? I'm afraid that a
lot of what is said about brains is simple puffery: overinterpretation of
equivocal findings.
Best,
Bill P.