voluntary vs emotional control

[From Bill Powers (961016.0800 MDT)]

Did you know that the pattern of facial muscle-contraction that appears
under emotional control (e.g., smiling when happy) differs from that which
is produced when you try to simulate the expression? Did you know that
certain brain lesions will abolish the latter (voluntary control over the
facial muscles) but leave the emotional control untouched and functional?
If this is true (and I haven't seen the support for it yet), this would seem
very difficult to reconcile with HPCT, but fits perfectly with my suggestion.

Speaking of this observation, I note that Bill Powers has been rather mute
about it. Perhaps he will yet grant us the favor of a comment.

I was sort of hoping to hear an HPCT explanation of it from you. It's not
hard to come up with, although like any guess it's hypothetical. Remember
that this is a _hierarchical_ model, with many levels above the level of
facial configuration control, and many systems at each level (as many as you
need). In my emotion model, there must be some level where the downgoing
reference signals branch into the behavioral and the somatic branches (or
perhaps this happens at several lower levels). "Voluntary" control is
generally associated with higher systems, although all control is in one
sense voluntary. With that kit of parts, can't you put together a plausible
explanation for the effects of "certain brain lesions" by postulating which
paths in the model they interrupt and which they leave intact?

The hierarchical model suggests that the opposite is also possible; that a
lesion could abolish "emotional" control of somatic concomitants of facial
expression while leaving "voluntary" (high-level) control intact. It also
suggests that both could be abolished (deadpan expression syndrome).
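
To make that concrete, here is a minimal sketch of my own (not part of the
model proper), assuming a toy hierarchy in which descending reference
signals for facial configuration arrive over two paths, a "voluntary"
high-level path and an "emotional" somatic-branch path, and a lesion simply
severs a path:

# Toy lesion analysis: two descending paths converge on one
# facial-configuration reference signal; a lesion zeroes out its path.
def facial_reference(voluntary_ref, emotional_ref,
                     voluntary_path_intact=True, emotional_path_intact=True):
    v = voluntary_ref if voluntary_path_intact else 0.0
    e = emotional_ref if emotional_path_intact else 0.0
    return v + e   # the lower-level system controls toward this reference

print(facial_reference(1.0, 1.0))                               # both intact
print(facial_reference(1.0, 1.0, voluntary_path_intact=False))  # emotional smile only
print(facial_reference(1.0, 1.0, emotional_path_intact=False))  # posed smile only
print(facial_reference(1.0, 1.0, False, False))                 # deadpan syndrome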

[backtracking]

The (as you're calling it) threat-control system does not do any of these
things; it gets its "threat" signal alteration from a set of perceptual
systems, some primitive and pre-organized, some more sophisticated and
dependent on learning, some present at birth, some developing through
maturation and experience. You are assuming, I think, that each control
system must possess its own, private little perceptual input function; I
don't.

Physically, I think the perceptual functions at a given level are probably
located together in "sensory nuclei" or similar structures; functionally,
they have to be treated as separate, so we can have specific dimensions of
control with reference signals independently adjustable for each dimension,
as we seem to observe. See my Byte articles, where I drew the neural diagram
in these two different ways. The physical proximity of the input functions
allows for direct interactions between them (aside from being anatomically
correct). However, if you consider the set of all inputs to a nucleus and
the set of all outputs from it, each output can be expressed as its own
function of all the inputs, thus creating an equivalent set of independent
perceptual functions, one per control system. The lumped representation and
this one are mathematically equivalent, but I find it easier to think about
control in the equivalent form of the model.
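
In the linear case the equivalence is easy to exhibit. A sketch of my own,
with made-up numbers:

import numpy as np

rng = np.random.default_rng(0)
inputs = rng.normal(size=5)       # all signals entering the "nucleus"
W = rng.normal(size=(3, 5))       # lumped input-to-output weights

# Lumped representation: one structure computes every output at once.
lumped_outputs = W @ inputs

# Equivalent representation: one independent perceptual function per
# control system, each its own function of ALL the inputs.
per_system_outputs = np.array([W[i] @ inputs for i in range(3)])

assert np.allclose(lumped_outputs, per_system_outputs)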

You say that the system which responds to threat (as you put it)

gets its "threat" signal alteration from a set of perceptual
systems, some primitive and pre-organized, some more sophisticated and
dependent on learning, some present at birth, some developing through
maturation and experience.

My question was how does it recognize which signals carry threat information
and which don't? I think we have established (with thanks to Bill Benzon for
more corroboration) that the learned systems are not hard-wired. An
inherited system can't rely on the signals in any part of the brain to have
a particular significance, particularly when you consider that a shift of a
fraction of a millimeter can take you from a system handling thumb position
to a system handling pain signals from the thumb, or from a perceptual
signal to a reference signal.

You're assuming that certain signals with certain meanings get into the
emotion control system because you need them to be there and to be
recognized, in order to make the emotion control system work as you want it
to. But why do you want it to work that way? Are you working out of some
principle that says that emotions HAVE to have a separate origin and
separate control over behavior? It's clearly too early in the development of
your model for that arrangement to come as a conclusion to your reasoning;
it's already being accepted as the goal which the model has to attain, or as
a premise which is taken for granted. Is there some underlying reason for
preferring this model, regardless of the difficulties in working it out?


-----------------------
You cited and agreed with Bill Benzon:

In any event, I don't see that the HPCT model is itself derived
in a strong way from neural data. Yes, in places it is. But the upper
levels of the stack are pure invention and the notion that there is only
one stack seems more related to a general and understandable desire for
parsimony than to observations about real brains.

NO model of the higher functions of the brain is based on neurological data.
There isn't any data obtainable from measuring signals in the brain that
will tell you what those signals mean. What most researchers seem to forget
is that they approach the brain by using their own brains; they look for
signals that correlate with their own categories of experience. So what you
find depends on what categories you approach the problem with. Look at the
discussion with Peter Cariani a few months ago. The simple difference
between perceiving impulses in terms of the time interval between them and
the frequency of their occurrence makes an enormous difference in how you
characterize the significance of these signals. And of course the only way
to assign meanings to such signals is to see what YOU are perceiving when,
presumably, the subject organism is perceiving the same thing. This makes the
whole business of interpreting electrical signals from the brain a totally
subjective matter.

My model is derived from "neural data" in quite a different way. To see what
I mean, all you have to do is grant my fundamental postulate: what we
experience consists of neural signals. When I observe that all
configurations seem to depend on two or more perceptions that I call
"sensations," no one of which is itself a configuration, I am pointing out
that the neural signals which represent configurations must be functions of
the neural signals which represent sensations. When I observe that every
relationship seems to be composed of two or more perceptual elements that
are not themselves relationships, I am saying that the neural signals we
perceive as relationships are functions of other neural signals that are not
relationship perceptions. When I point out that all events seem to be
composed of transitions, configurations, sensations, and intensities which
are not themselves events ... well, you get the idea.
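
Stated as composition of functions, the observation amounts to this (a
schematic of my own; the particular functions are placeholders, not
proposals):

import numpy as np

def intensities(raw):              # lowest level: magnitudes of stimulation
    return np.abs(raw)

def sensations(i):                 # each a function of intensities
    return np.tanh(i)

def configurations(s):             # each a function of two or more sensations
    return np.array([s[0] * s[1], s[1] + s[2]])

def relationship(c):               # a function of non-relationship elements
    return c[0] - c[1]

raw = np.array([0.2, -1.3, 0.7])
print(relationship(configurations(sensations(intensities(raw)))))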

This is every bit as valid as proposing that certain areas of the brain
support "diagonalization" and such. Being much simpler and much more
directly verifiable by others, it may even be more valid. If this set of
categories, based on a long, slow, and careful consideration of my own
experiences, were used as the measuring stick for determining what
electrical measures of neural signals mean, I would hope that some pretty
high correlations would show up. Anyway, that's really the only way we have
to identify brain functions, isn't it? To find signals in the brain that
correspond to the experiences we ourselves have?

My expectation would be that if we were to look for brain signals that
correspond to such categories as Bill Benzon mentions, we would find them
all in pretty much one place, the place where we do logical and rational
thinking in terms of categories. By this I mean not that the events or
processes to which these category labels are attached would be found in one
place; just that if we could explore Bill Benzon's brain as he tells us
about each category, we would find that these are all activities of pretty
much the same kind using the same brain functions: verbal description and
categorization. This way of approaching the problem is quite different from
simply exploring experiences of all kinds from simple to complex and looking
for dependencies. I won't say my way yields more truth, but I think it's
probably more reproducible. It doesn't take me very long to explain to
someone what I mean by an "event," and so far everyone who has looked into
the matter agrees with me that events are composed of the lower elements I
have proposed. And so on. It would be nice, of course, to have some
correlations between electrically-measured signals in the brain and the
occurrence of experiences like those I have identified, but my opportunities
for getting that kind of data have been limited. Others who explore the
brain have come up with similar categories, piecemeal, but I haven't seen
much of that literature. And anyway, the neural signal measurements mean
nothing without some direct experiences against which to compare them.

The problem, of course, is that most people take such categories of
experience as given features of the outside reality, and don't see that they
are really perceptions. Maybe that's why they look so hard for mysterious
complicated functions of the brain: they don't realize that in simply
observing the world, they are looking right at the perceptual signals they seek.

Best,

Bill P.

[From Bruce Gregory (961016.1145 EDT)]

Bill Powers (961016.0800 MDT)

The problem, of course, is that most people take such categories of
experience as given features of the outside reality, and don't see that they
are really perceptions. Maybe that's why they look so hard for mysterious
complicated functions of the brain: they don't realize that in simply
observing the world, they are looking right at the perceptual signals they seek.

"It is only shallow people who do not judge by appearances. The
true mystery of the world is the visible, not the invisible."

Oscar Wilde, The Picture of Dorian Gray

Bruce

Bill Powers (961016.0800 MDT) sez:

The hierarchical model suggests that the opposite is also possible; that a
lesion could abolish "emotional" control of somatic concomitants of facial
expression while leaving "voluntary" (high-level) control intact. It also
suggests that both could be abolished (deadpan expression syndrome).

I think that both of these have been reported.

NO model of the higher functions of the brain is based on neurological data.

You can say that again.

But the neural literature is much richer than it was 15 years ago (when
Hays and I wrote our paper) or than it was when B:TCP was written. I
haven't kept up in detail; but I do read Science and Scientific American
(which, by the way, is not what it used to be) and I recently picked up
Stephen M. Kosslyn & Olivier Koenig's "Wet Mind: The New Cognitive Science"
(Free Press 1995), which is a textbook presentation that I have found quite
interesting. They haven't a clue about PCT, but if you want to reinterpret
an interesting pile of observation and experiment, take a look at this
book.

There isn't any data obtainable from measuring signals in the brain that
will tell you what those signals mean. What most researchers seem to forget
is that they approach the brain by using their own brains; they look for
signals that correlate with their own categories of experience.

My favorite example of this has to do with those cells in the primary
visual cortex which have traditionally been called edge-detectors and such
(by e.g. Hubel & Wiesel). This leads to a conception of visual recognition
which says we start at the bottom with simple things, like edges, angles,
and ends in various orientations, and construct things like circles and
squares and then noses and eyes and then way at the top of this particular
hierarchy (not to be confused with the very different HPCT hierarchy) we
find a grandmother detector and a Starship Enterprise detector, etc.

This is all very plausible on a simple level. We've all done
connect-the-dots, so we can see how images can be built from lines. And, with that, we
have no trouble seeing that that's what artists do. Ergo, that's what the
visual brain must be doing; we just have to sort out the details.

Meanwhile, Karl Pribram & Co. come along and argue that the visual brain is
storing Fourier transforms of input images. In that view, those edges and
angles become high-frequency components of the image. But what, pray tell,
are these folks talking about? What's spatial frequency? What's the
Fourier transform of an image? There is no simple intuitive way to get a
handle on this. You either have to learn the mathematics (preferred) or, as
I did, you spend hours upon hours blurring your vision, reading all sorts
of articles, drawing all sorts of diagrams, etc. and eventually getting a
feel for it. Once you've done all this, seeing those neural units as
registering high spatial frequency components is as easy as seeing them as
edge detectors. Now, deciding which of those (or some other
interpretation) is the case, that's another matter.
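
The dual reading is easy to demonstrate. A sketch of mine (parameters are
arbitrary) of a single Gabor-like receptive field that answers equally well
to the name "edge detector" and the name "spatial-frequency unit":

import numpy as np

x = np.linspace(-1, 1, 64)
X, Y = np.meshgrid(x, x)
# An oriented, localized receptive field: Gaussian envelope times a sinusoid.
gabor = np.exp(-(X**2 + Y**2) / 0.1) * np.sin(2 * np.pi * 4 * X)

edge = (X > 0).astype(float)           # a vertical luminance edge
grating = np.sin(2 * np.pi * 4 * X)    # a grating at the filter's frequency

print("response to edge:   ", np.sum(gabor * edge))
print("response to grating:", np.sum(gabor * grating))
# Both stimuli drive the unit strongly; which label you use is interpretation.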

But I am pleased that folks at Caltech have a holographic memory on an
optical bench, have managed to hook it up to a mobile robot, and that robot
has been able to move around in the lab with about 50 images stored in that
memory.

Beyond this, consider all of what's often been called primary cortex. How
do we relate this to the HPCT model? We could say that these regions are
where you find the lower end of the stack. After all, returning to vision,
things like edges or high-frequency components seem pretty low in the
stack. They sure aren't programs or higher. Well, Hays and I tried it that
way and had trouble believing it. We ended up thinking of the entire
neocortex as consisting of those memory boxes, from sensation on up as high
as...well, that's subject to debate. So, how can it be plausible to think
of either edge-detection units or high-spatial-frequency units as memory
units? It's not like remembering to pick up a loaf of bread on the way
home from the office or remembering your first date, etc.

But "remember" in that sense is just a commonsense term. Those memory boxes
exist in a technical theory and Hays and I interpreted them to contain
patterns (memories) corresponding to prior states of the units they are
attached to. If the units are regulating sensations or configurations,
then the memory boxes contain patterns from various sensations or
configurations. If the units are regulating programs, then the memory
boxes contain patterns characterizing programs. In this way it became
plausible to think of the whole neocortex as memory units. In the visual
system, the primary cortex is a bunch of memory units recording prior
states of, e.g., the lateral geniculate nucleus, etc.
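
As a cartoon of what such a memory box might do (my rendering, not Benzon &
Hays's actual formalism): it records prior states of the unit it is attached
to and, given a current state, reports the closest stored pattern.

import numpy as np

class MemoryBox:
    def __init__(self):
        self.patterns = []

    def record(self, state):           # store a prior state of the unit
        self.patterns.append(np.asarray(state, dtype=float))

    def recall(self, state):           # return the best-matching stored pattern
        state = np.asarray(state, dtype=float)
        return min(self.patterns, key=lambda p: np.linalg.norm(p - state))

box = MemoryBox()                      # e.g., attached to LGN-level signals
box.record([1.0, 0.0, 0.5])
box.record([0.0, 1.0, 0.2])
print(box.recall([0.9, 0.1, 0.4]))     # -> the first stored pattern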

My expectation would be that if we were to look for brain signals that
correspond to such categories as Bill Benzon mentions, we would find them
all in pretty much one place, the place where we do logical and rational
thinking in terms of categories. By this I mean not that the events or
processes to which these category labels are attached would be found in one
place; just that if we could explore Bill Benzon's brain as he tells us
about each category, we would find that these are all activities of pretty
much the same kind using the same brain functions: verbal description and
categorization.

Just what ARE you talking about? Are you imagining what's going on in my
brain as I think about what brains are up to? Even if your imaginings are
correct, what bearing does that have on the kind of theories I make?

I also spend hours upon hours hitting a marimba and introspecting about
what happens when I decide that, for example, the 5th stroke in such and
such a pattern should fall on that key rather than this one. So, what do I
have to do to reorganize (gee, where'd I get the word, whose theory does it
come from?) my motor programs to make that change, and then what do I have
to do so that I can readily switch back and forth between the old and the
new pattern? And what does it mean to think about the music itself instead
of about executing this or that pattern? It's very interesting and one day
I may write some of this up, or, much better, I may find some $$$ to take
this sort of thing into the laboratory (and, e.g. monitor brain activity
while some musician does these sorts of things).

But what does any of this have to do with the kinds of explanations I
propose? It sounds like some arcane kind of ad hominem argument.

* * * * * *

As for diagonalization, if Hays and I are on the right track (which is not
at all obvious), then it's not exclusively a high-level sort of thing. It's
also involved in constructing stable sensations and configurations.

Which leads me to something that's been on my mind for years. HPCT seems
to have as its central problem that of visually guided movement. There
we've got a stack with a channel of visual input and a channel of motoric
output. I want a model where the inputs and outputs are sensory. (Vision
gets tricky because eye-movement is important in mammalian vision, though
it probably plays little or no role in the "bug-detectors" found in frogs'
brains).

Scientific American for June 1994 (pp. 60 - 65) had an article on adaptive
optics which is relevant. Astronomers have this problem: the atmosphere is
turbulent and that turbulence messes up the optical signals they're
analyzing. Here's a basic description (from p. 60):

"The first step is to determine how much each component of the wavefront is
out of phase with the others. One way to that end is to divide the
telescope's mirror into a number of zones and then measure the tilt of the
wave-front in each zone. After processing by high-speed electronic
circuits, this information is used to control actuators that determine the
position of individual areas of the mirror's surface. The mirror is
thereby deformed in such a way that any wave component arriving later than
another actually travels a shorter distance to the focal point. This
process of measurement and adjustment--a classic feedback setup--happens
several hundred times a second. When the adaptive optics is working
properly, all the components should arrive at the focal point in phase, to
create a perfectly sharp image."
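
The "classic feedback setup" in that passage reduces to a per-zone control
loop: measure each zone's residual tilt, nudge its actuator to cancel it,
repeat a few hundred times a second. A toy version of mine (the gain and
turbulence model are invented for illustration):

import numpy as np

rng = np.random.default_rng(1)
zones = 8
actuator = np.zeros(zones)            # mirror deformation, one value per zone
gain = 0.5

for step in range(200):               # a few hundred corrections per second
    turbulence = (np.sin(0.01 * step + np.arange(zones))
                  + 0.05 * rng.normal(size=zones))
    error = turbulence - actuator     # measured residual wavefront tilt
    actuator += gain * error          # deform the mirror to cancel it

print("residual error per zone:", np.round(turbulence - actuator, 3))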

I have a vague recollection of having read about a slightly different
scheme whereby you bounce a laser beam off the moon and focus your
telescope on that reflected image, deforming your mirror until you get a
sharp circle. Then you go hunting for what you really want to image.

In either case, what interests me is the deformed surface of the mirror.
The pattern of deformation is, in effect, an image of the pattern of
atmospheric turbulence in front of the telescope. The people using such
optics, of course, have no direct interest in that pattern; they want to
get rid of it. But I'm thinking about some kind of servomechanical
perceptual device. So, from my point of view, this optical system is a
device for imaging turbulence patterns.

Now, how do you take this as a useful metaphor for something the brain
might be doing in seeing, as someone suggested, a refrigerator?

later,

Bill B


********************************************************
William L. Benzon 518.272.4733
161 2nd Street bbenzon@global2000.net
Troy, NY 12180
USA
********************************************************
What color would you be if you didn't know what you was?
That's what color I am.
********************************************************

[Martin Taylor 961017 12:20]

Bill Benzon -- still without header ID, but apparently Wed, 16 Oct 1996 13:32:17 -0400

My favorite example of this has to do with those cells in the primary
visual cortex which have traditionally been called edge-detectors and such
(by e.g. Hubel & Wiesel)....

Meanwhile, Karl Pribram & Co. come along and argue that the visual brain is
storing Fourier transforms of input images. ...

...seeing those neural units as
registering high spatial frequency components is as easy as seeing them as
edge detectors. Now, deciding which of those (or some other
interpretation) is the case, that's another matter.

There's no conflict. Edge detectors and Fourier transforms can be done
by exactly the same mechanism in a simulation neural net, and may well
both be done, by the same mechanism or even possibly the same neurons
in a real neural network. You don't have to decide which interpretation
to use, except to see which is being done (if either) in a particular
situation. (Obviously, neither Pribram nor you nor I are referring to
pure Fourier transforms).

But "remember" in that sense is just a commonsense term. Those memory boxes
exist in a technical theory and Hays and I interpreted them to contain
patterns (memories) corresponding to prior states of the units they are
attached to.

That would be the result of (generalized) Hebbian learning, wouldn't it?

If the units are regulating sensations or configurations,
then the memory boxes contain patterns from various sensations or
configurations. If the units are regulating programs, then the memory
boxes contain patterns characterizing programs. In this way it became
plausible to think of the whole neocortex as memory units.

That's the HPCT view, too, provided you do NOT take "memory unit" as being
some kind of stored representation that can be switched for another stored
representation and used at the same place a moment later. The HPCT "memory
unit" is the state of the perceptual input function for each control
system (stored memory representations are also admitted, but that's a
different story). The PIFs change through reorganization, which incorporates
generalized Hebbian learning.

As for diagonalization, if Hays and I are on the right track (which is not
at all obvious), then it's not exclusively a high-level sort of thing. It's
also involved in constructing stable sensations and configurations.

Not having yet read your paper (though I have found your Web site), I have
to assume that "diagonalization" is related to discovering the Principal
Components representation of relationships. In other words, the
orthogonalization of (in HPCT) the controlled perceptions. Powers has
argued (convincingly, to my mind) that reorganization is almost sure
to have this result (almost, in the statisticians' sense). But if this
is so, then there is no separate stage of diagonalization, but
diagonalization is something that happens at every level of the HPCT
hierarchy. (Incidentally, if you can find it, you might enjoy my paper
that argues this to be a primary consequence and benefit of Hebbian-type
learning in a network with lateral inhibition: S. African J. Psych., 3,
1973, 23-44. If I am right, it happens even in the retina.)
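
The connection can be sketched with Sanger's generalized Hebbian rule (my
illustration, not taken from the cited paper): run on correlated inputs, it
converges toward the principal components, i.e., an orthogonalized
(diagonalized) representation.

import numpy as np

rng = np.random.default_rng(2)
# Correlated two-dimensional inputs.
data = rng.normal(size=(5000, 2)) @ np.array([[2.0, 0.0], [1.5, 0.5]])

W = rng.normal(scale=0.1, size=(2, 2))   # two Hebbian output units
eta = 1e-3
for _ in range(3):                       # a few passes for convergence
    for x in data:
        y = W @ x
        # Sanger's rule: Hebbian term minus feedback from earlier units.
        W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

outputs = data @ W.T
print(np.round(np.cov(outputs.T), 2))    # off-diagonal terms shrink toward 0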

Martin

Martin Taylor (961017 12:20) sez:

There's no conflict. Edge detectors and Fourier transforms can be done
by exactly the same mechanism in a simulation neural net, and may well
both be done, by the same mechanism or even possibly the same neurons
in a real neural network. You don't have to decide which interpretation
to use, except to see which is being done (if either) in a particular
situation. (Obviously, neither Pribram nor you nor I are referring to
pure Fourier transforms).

But it makes a difference when you start thinking about object recognition.
The folks who think in terms of edges etc. tend to think you do some kind
of syntactic concatenation of primitive edges to get X, Y, and Z and then do
some other kind of syntactic concatenation of X, Y, and Z to get, e.g., Zero
Mostel. But if you think in terms of Fourier transforms you get to Zero
Mostel in one operation with no intermediate steps (which is what happens
on optical bench realizations).
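
The "one operation" is matched filtering: correlate the whole image against
a stored template in the frequency domain, with no intermediate feature
stages. A toy version of mine, with random patterns standing in for faces:

import numpy as np

rng = np.random.default_rng(3)
template = rng.normal(size=(32, 32))      # stored pattern ("Zero Mostel")
scene = rng.normal(size=(32, 32))         # cluttered input
scene += np.roll(np.roll(template, 5, axis=0), 7, axis=1)   # embed it, shifted

# Cross-correlation via a single frequency-domain multiply.
corr = np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(template))).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
print("template found at shift:", peak)   # -> (5, 7)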

Consider what I call the "strange friend" phenomenon (from Benzon & Hays on
Natural Intelligence):

"The phenomenon occurs when someone you know acquires or shaves off a
beard, changes hair-style, etc. You recognize the person but you also sense
that something has changed. You inspect the person and eventually you
figure it out--or perhaps you do not, perhaps you have to be told what has
happened. What is going on?

        Between contextual clues, such as name, location (your friend's
living room), voice, etc., and wide band-width information, the general
shape of the head (Harmon, 1973), you have no trouble recognizing the
person as your friend. But the change is significant enough that the
current image of your friend no longer is a satisfactorily close match to
your global schema for him or her; so you inspect the face until you have
determined the change. This inspection, however, requires the use of a
propositional representation. On the basis of eye movement data, Noton &
Stark (1971) have proposed such a representation, which they call a feature
ring. The object is represented as a ring of features (see Fig. 11). A
different feature is brought into focus with each fixation point, while the
direction of the oculomotor path between two features encodes the
propositional relation between them. We propose one change in this
analysis. A feature, we suggest, is a relatively narrow band-width
component in a local area of the object (cf. Pribram, 1981). Hence, the
feature ring is part of a mechanism relating local narrow band-width
Gestalts to a global wide band-width Gestalt. Positive identification
requires that the two Gestalts, composite narrow band-width and global wide
band-width, be cross-validated. If they fail to match, one or the other is
suspect. Returning to the strange friend, the inspection of local areas of
the friend's face allows one to find the local area, or areas, which
register the poorest match with the internal schema, those areas where the
positive feedback selection process fails to stabilize. These areas
pinpoint the change."
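
My guess at the feature ring as a data structure (the features and saccade
labels below are invented for illustration): a cycle of local
narrow-bandwidth features, with the oculomotor path between consecutive
fixations encoding the relation between them; inspection walks the ring and
flags local areas that fail to match.

from dataclasses import dataclass

@dataclass
class RingEntry:
    feature: str           # local narrow band-width component
    saccade_to_next: str   # eye-movement direction encoding the relation

face_ring = [
    RingEntry("left eye", "rightward"),
    RingEntry("right eye", "down-left"),
    RingEntry("mouth", "upward"),        # back to the start: it's a ring
]

def inspect(ring, current_image_features):
    """Return the local areas whose stored feature fails to match."""
    return [e.feature for e in ring if e.feature not in current_image_features]

# The strange friend: global recognition succeeded, one local area fails.
print(inspect(face_ring, {"left eye", "right eye"}))   # -> ['mouth']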

That would be the result of (generalized) Hebbian learning, wouldn't it?

I think so, yes. (I'm not up on the technical details of Hebbian learning.)

If the units are regulating sensations or configurations,
then the memory boxes contain patterns from various sensations or
configurations. If the units are regulating programs, then the memory
boxes contain patterns characterizing programs. In this way it became
plausible to think of the whole neocortex as memory units.

That's the HPCT view, too, provided you do NOT take "memory unit" as being
some kind of stored representation that can be switched for another stored
representation and used at the same place a moment later. The HPCT "memory
unit" is the state of the perceptual input function for each control
system (stored memory representations are also admitted, but that's a
different story). The PIFs change through reorganization, which incorporates
generalized Hebbian learning.

???

At some level in the system we're going to recognize, e.g., faces. Just what
does it mean to recognize a face? One possibility is that the input matches
some (trace of) a face one has seen before. Where are these face patterns
stored? Are they all in some one hunk of neural tissue which is linked to
the visual input channel at the configuration level or are they scattered
about? [Now, I'd guess that the most elementary pattern for even a single
face is going to involve several different cortical areas, but that's
another story.]

Here's what I think is going on. We have the visual stack, which has both
afferent and efferent connections (I'm quite serious about what Rick termed
"output controlled perception" or some such thing). At each level there is
some transformation of input. Starting with the sensation level inputs
also go to a memory unit. If the input matches some stored pattern, the
memory unit then transmits that stored pattern to the appropriate level
perceptual system; this happens at each level. And each unit in the stack
is sending signals to the unit below as well. Let's say the configuration
unit has identified the input as Margie. The configuration unit transmits
the Margie pattern to the various sensation units as a reference
signal. In effect it's saying, gimme anything that looks like Margie. As
long as they can produce Margie signals, OK. But if Margie disappears . . .

As the input changes, adjustments are made to what the memory units are
signalling to the input stack. As long as the memory units can
collectively produce signals which match the signals in the input stack,
the system is satisfactorily accounting for (perceiving, recognizing) its
input. As a side effect, as long as things run smoothly, you get alpha
wave generation in the neocortex (for more detail, consult the paper).
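
A toy rendering of that loop (mine; the patterns and tolerance are
stand-ins): a memory unit proposes stored patterns, and whichever proposal
keeps matching the incoming signals counts as a satisfactory account.

import numpy as np

stored = {"Margie": np.array([1.0, 0.2, 0.8]),
          "Fred":   np.array([0.1, 0.9, 0.3])}

def recognize(input_signal, tolerance=0.5):
    for name, pattern in stored.items():
        # The memory unit transmits the stored pattern downward as a
        # reference; a sustained match accounts for the input.
        if np.linalg.norm(input_signal - pattern) < tolerance:
            return name
    return None   # nothing stored accounts for the input: keep hunting

print(recognize(np.array([0.9, 0.3, 0.7])))   # -> 'Margie'
print(recognize(np.array([0.5, 0.5, 0.5])))   # -> None (the blurry film)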

Another anecdote:

Years ago I was at a party. Someone said "let's watch a movie." So, a
portable screen was set up and an 8mm projector as well. Shortly
thereafter I saw a pattern of black, white, and grey blurs moving about on
the screen. No one had announced what the film was to me, nor were there
any titles. It started right off with the movie. After a distinct period
of time (10, 15, 30, 60 seconds, I don't really recall) the blur went away
and I saw moving images clear as day. As the images were pornographic I
won't bother to describe them.

I had never seen a pornographic film and, as no one had said we're going to
see porn, and the film had no title, I had no expectation (there's that
feedforward stuff again) about the images I was seeing. My perceptual
system had to construct an account of those blurs with no semantic cues
and little prior experience (remember, I'd never seen this kind of thing,
the visual system was thus weak on stored patterns). So it took a while to
do that.

I figure that what was going on is that the various visual memory areas
(which might well number above 40 or 50) were swapping stored patterns in
and out like crazy, hunting for some overall combination which would
account for the input. Once such a combination was found (and perhaps with
a touch of reorganization here and there), the image just popped right out
at me.

As for diagonalization, if Hays and I are on the right track (which is not
at all obvious), then it's not exclusively a high-level sort of thing. It's
also involved in constructing stable sensations and configurations.

Not having yet read your paper (though I have found your Web site), I have
to assume that "diagonalization" is related to discovering the Principal
Components representation of relationships. In other words, the
orthogonalization of (in HPCT) the controlled perceptions. Powers has
argued (convincingly, to my mind) that reorganization is almost sure
to have this result (almost, in the statisticians' sense). But if this
is so, then there is no separate stage of diagonalization, but
diagonalization is something that happens at every level of the HPCT
hierarchy. (Incidentally, if you can find it, you might enjoy my paper
that argues this to be a primary consequence and benefit of Hebbian-type
learning in a network with lateral inhibition: S. African J. Psych., 3,
1973, 23-44. If I am right, it happens even in the retina.)

I can live with diagonalization working at each level; but the physical
levels in my system are a bit different from (though not necessarily
inconsistent with) the levels of HPCT. In fact, my model requires it. And,
if you examine the table which summarizes the whole thing, you'll see that
diagonalization, as a principle of intelligence, is associated with
reorganization, as a mode of neural action. So I'm really quite pleased
that BP has an argument that reorganization yields diagonalization.


********************************************************
William L. Benzon 518.272.4733
161 2nd Street bbenzon@global2000.net
Troy, NY 12180
USA
********************************************************
What color would you be if you didn't know what you was?
That's what color I am.
********************************************************