[From Bill Powers (961016.0800 MDT)]
> Did you know that the pattern of facial muscle-contraction that appears
> under emotional control (e.g., smiling when happy) differs from that which
> is produced when you try to simulate the expression? Did you know that
> certain brain lesions will abolish the latter (voluntary control over the
> facial muscles) but leave the emotional control untouched and functional?
> If this is true (and I haven't seen the support for it yet), this would seem
> very difficult to reconcile with HPCT, but fits perfectly with my suggestion.
> Speaking of this observation, I note that Bill Powers has been rather mute
> about it. Perhaps he will yet grant us the favor of a comment.
I was sort of hoping to hear an HPCT explanation of it from you. It's not
hard to come up with, although like any guess it's hypothetical. Remember
that this is a _hierarchical_ model, with many levels above the level of
facial configuration control, and many systems at each level (as many as you
need). In my emotion model, there must be some level where the downgoing
reference signals branch into the behavioral and the somatic branches (or
perhaps this happens at several lower levels). "Voluntary" control is
generally associated with higher systems, although all control is in one
sense voluntary. With that kit of parts, can't you put together a plausible
explanation for the effects of "certain brain lesions" by postulating which
paths in the model they interrupt and which they leave intact?
The hierarchical model suggests that the opposite is also possible; that a
lesion could abolish "emotional" control of somatic concomitants of facial
expression while leaving "voluntary" (high-level) control intact. It also
suggests that both could be abolished (deadpan expression syndrome).
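To make the guess concrete, here is a toy sketch (in Python; every name and
number is invented for illustration, and "perfect control" is assumed, so the
expression simply tracks the net reference signal that arrives):

class Path:
    """A descending path carrying a reference signal; a lesion severs it."""
    def __init__(self):
        self.intact = True
    def carry(self, reference):
        return reference if self.intact else 0.0  # a lesioned path carries nothing

class FacialConfigControl:
    """Lower-level system driving the facial muscles toward its net reference."""
    def __init__(self):
        self.voluntary_path = Path()   # branch from higher "voluntary" systems
        self.emotional_path = Path()   # branch from the emotion-related systems
    def expression(self, voluntary_ref, emotional_ref):
        # The two descending branches sum at the comparator input.
        return (self.voluntary_path.carry(voluntary_ref)
                + self.emotional_path.carry(emotional_ref))

face = FacialConfigControl()
print(face.expression(1.0, 0.0))     # 1.0: a posed smile works
face.voluntary_path.intact = False   # the lesion: sever the voluntary branch
print(face.expression(1.0, 0.0))     # 0.0: posed smile abolished
print(face.expression(0.0, 1.0))     # 1.0: emotional smile still functional

Severing emotional_path instead gives the converse dissociation, and severing
both gives the deadpan case.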
[backtracking]
> The (as you're calling it) threat-control system does not do any of these
> things; it gets its "threat" signal alteration from a set of perceptual
> systems, some primitive and pre-organized, some more sophisticated and
> dependent on learning, some present at birth, some developing through
> maturation and experience. You are assuming, I think, that each control
> system must possess its own, private little perceptual input function; I
> don't.
Physically, I think the perceptual functions at a given level are probably
located together in "sensory nuclei" or similar structures; functionally,
they have to be treated as separate, so we can have specific dimensions of
control with reference signals independently adjustable for each dimension,
as we seem to observe. See my Byte articles, where I drew the neural diagram
in these two different ways. The physical proximity of the input functions
allows for direct interactions between them (aside from being anatomically
correct). However, if you consider the set of all inputs to a nucleus and
the set of all outputs from it, each output can be expressed as its own
function of all the inputs, thus creating an equivalent set of independent
perceptual functions, one per control system. The lumped representation and
this one are mathematically equivalent, but I find it easier to think about
control in the equivalent form of the model.
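With linear input functions (a simplification; nothing requires linearity) the
equivalence is easy to demonstrate. A sketch in Python, with made-up numbers:

inputs = [0.3, -1.2, 0.8, 0.0, 2.1]   # all signals entering the nucleus
W = [[0.5, 0.1, -0.3, 0.0, 0.2],      # one row of weights per output signal
     [-0.1, 0.4, 0.0, 0.7, -0.5],
     [0.2, -0.2, 0.6, 0.1, 0.3]]

def lumped(W, x):
    # the nucleus computed as one block: every output from all inputs
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def private_input_function(row):
    # the equivalent form: one perceptual input function per control system,
    # each its own function of the same full set of inputs
    return lambda x: sum(w * xi for w, xi in zip(row, x))

per_system = [private_input_function(row) for row in W]
assert lumped(W, inputs) == [f(inputs) for f in per_system]

Either way the same signals come out; only the bookkeeping differs.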
You say that the system which responds to threat (as you put it)

> gets its "threat" signal alteration from a set of perceptual
> systems, some primitive and pre-organized, some more sophisticated and
> dependent on learning, some present at birth, some developing through
> maturation and experience.
My question was: how does it recognize which signals carry threat information
and which don't? I think we have established (with thanks to Bill Benzon for
more corroboration) that the learned systems are not hard-wired. An
inherited system can't rely on the signals in any part of the brain to have
a particular significance, particularly when you consider that a shift of a
fraction of a millimeter can take you from a system handling thumb position
to a system handling pain signals from the thumb, or from a perceptual
signal to a reference signal.
You're assuming that certain signals with certain meanings get into the
emotion control system because you need them to be there and to be
recognized, in order to make the emotion control system work as you want it
to. But why do you want it to work that way? Are you working out of some
principle that says that emotions HAVE to have a separate origin and
separate control over behavior? It's clearly too early in the development of
your model for that arrangement to come as a conclusion to your reasoning;
it's already being accepted as the goal which the model has to attain, or as
a premise which is taken for granted. Is there some underlying reason for
preferring this model, regardless of the difficulties in working it out?
-----------------------
You cited and agreed with Bill Benzon:
> In any event, I don't see that the HPCT model is itself derived
> in a strong way from neural data. Yes, in places it is. But the upper
> levels of the stack are pure invention and the notion that there is only
> one stack seems more related to a general and understandable desire for
> parsimony than to observations about real brains.
NO model of the higher functions of the brain is based on neurological data.
There isn't any data obtainable from measuring signals in the brain that
will tell you what those signals mean. What most researchers seem to forget
is that they approach the brain by using their own brains; they look for
signals that correlate with their own categories of experience. So what you
find depends on what categories you approach the problem with. Look at the
discussion with Peter Cariani a few months ago. The simple difference
between perceiving impulses in terms of the time interval between them and
the frequency of their occurrence makes an enormous difference in how you
characterize the significance of these signals. And of course the only way
to assign meanings to such signals is to see what YOU are perceiving when,
presumably, the subject organism is perceiving the same thing. This makes the
whole business of interpreting electrical signals from the brain a totally
subjective matter.
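To see how much hangs on that choice, consider a hypothetical spike train (the
times are invented, in milliseconds, purely for illustration):

spikes_ms = [0, 11, 19, 32, 40]   # hypothetical spike arrival times, msec

# the same data read two ways:
intervals = [t2 - t1 for t1, t2 in zip(spikes_ms, spikes_ms[1:])]      # [11, 8, 13, 8]
rate_hz = 1000.0 * (len(spikes_ms) - 1) / (spikes_ms[-1] - spikes_ms[0])  # 100.0

Read one way, the signal carries a timing pattern; read the other way, it
carries a single magnitude with the pattern discarded.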
My model is derived from "neural data" in quite a different way. To see what
I mean, all you have to do is grant my fundamental postulate: what we
experience consists of neural signals. When I observe that all
configurations seem to depend on two or more perceptions that I call
"sensations," no one of which is itself a configuration, I am pointing out
that the neural signals which represent configurations must be functions of
the neural signals which represent sensations. When I observe that every
relationship seems to be composed of two or more perceptual elements that
are not themselves relationships, I am saying that the neural signals we
perceive as relationships are functions of other neural signals that are not
relationship perceptions. When I point out that all events seem to be
composed of transitions, configurations, sensations, and intensities which
are not themselves events ... well, you get the idea.
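The claim is purely one of dependency: functional composition. A skeleton of
it in Python (the particular functions are placeholders I have invented; only
the shape of the dependencies matters):

def sensation(intensity_signals):
    # a sensation is some function of two or more intensity signals
    return sum(intensity_signals)

def configuration(sensation_signals):
    # a configuration is some function of two or more sensation signals,
    # none of which is itself a configuration
    return max(sensation_signals) - min(sensation_signals)

def relationship(lower_signals):
    # a relationship is some function of lower-level perceptual signals
    # that are not themselves relationship perceptions
    a, b = lower_signals
    return a - b

intensities = [0.2, 0.7, 0.1]
s1 = sensation(intensities[:2])
s2 = sensation(intensities[1:])
c = configuration([s1, s2])
r = relationship([c, s1])

Nothing here says what the functions are; the observation only constrains what
they take as arguments.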
This is every bit as valid as proposing that certain areas of the brain
support "diagonalization" and such. Being much simpler and much more
directly verifiable by others, it may even be more valid. If this set of
categories, based on a long, slow, and careful consideration of my own
experiences, were used as the measuring stick for determining what
electrical measures of neural signals mean, I would hope that some pretty
high correlations would show up. Anyway, that's really the only way we have
to identify brain functions, isn't it? To find signals in the brain that
correspond to the experiences we ourselves have?
My expectation would be that if we were to look for brain signals that
correspond to such categories as Bill Benzon mentions, we would find them
all in pretty much one place, the place where we do logical and rational
thinking in terms of categories. By this I mean not that the events or
processes to which these category labels are attached would be found in one
place; just that if we could explore Bill Benzon's brain as he tells us
about each category, we would find that these are all activities of pretty
much the same kind using the same brain functions: verbal description and
categorization. This way of approaching the problem is quite different from
simply exploring experiences of all kinds from simple to complex and looking
for dependencies. I won't say my way yields more truth, but I think it's
probably more reproducible. It doesn't take me very long to explain to
someone what I mean by an "event," and so far everyone who has looked into
the matter agrees with me that events are composed of the lower elements I
have proposed. And so on. It would be nice, of course, to have some
correlations between electrically-measured signals in the brain and the
occurrence of experiences like those I have identified, but my opportunities
for getting that kind of data have been limited. Others who explore the
brain have come up with similar categories, piecemeal, but I haven't seen
much of that literature. And anyway, the neural signal measurements mean
nothing without some direct experiences against which to compare them.
The problem, of course, is that most people take such categories of
experience as given features of the outside reality, and don't see that they
are really perceptions. Maybe that's why they look so hard for mysterious
complicated functions of the brain: they don't realize that in simply
observing the world, they are looking right at the perceptual signals they seek.
Best,
Bill P.