[From Bill Powers (961018.1030 MDT)]
Bill Benzon --
I'm not trying to get rid of you, Bill. Nor am I saying that you haven't
studied PCT. The problem here, I think, is that you have been building up a
system concept that basically has little to do with PCT, and have been
interpreting neural data and language data from a very different standpoint
-- within a very different model. This makes it hard to assimilate the whole
structure of PCT: what we in PCT see (perhaps mistakenly) as no serious
problem, you see as an enormous one, because our answer doesn't fit your
model. You have been able to make only "limited use" of PCT because in many
regards the models are incompatible. My model and yours slice the pie of
experience and data along different planes, creating different entities with
different interactions. Consider this bit from one of your posts this morning:
> And what I think is that, if you want to extend HPCT to cover natural
> language in a non-trivial way (i.e. something considerably deeper than "it's
> up there on level N") you need to work on an explicit theory of those
> perceptual functions. HPCT as it stands is well suited to talking about
> how lungs, lips, and tongue operate to produce local atmospheric
> disturbances which the auditory system perceives as words. But the word
> "dog" needs to be linked to the concept of dog, and the concept of dog
> needs to be linked to a whole raft of other concepts: paw, tail, eye, fur,
> brown, big, run, eat, food, Collie, mammal, etc.
You have something in mind that you call a "concept." What is a concept? I'm
sure you know, but I don't. In PCT there are "system concepts", but not
concepts in the sense that you use, where "dog" can be a concept, or "fur".
In my scheme, what you're talking about would be a set of category
perceptions related through linguistic conventions and logical functions,
and, in some way I couldn't possibly explain, built into structures of
symbols at the levels I associate with categories, sequences, and programs:
up there where the structure of language exists. And you seem to be talking
about memory associations, a subject I have done essentially nothing with.
From my perspective you seem to be talking more about activities taking
place in some of the higher levels of perception, not about the nature of
those levels of perceptual functions. You're saying what they do, not what
they are.
I'm not saying I'm right and you're wrong; I'm just trying to point out what
happens when you approach the same phenomena from incompatible points of
view, and with a different kind of structure in mind. The significance you
see in a given neural pathway will be different from the significance I
would see in the same pathway, especially considering how little is known
about what those pathways or the areas that generate or receive them do.
Neurology is a lot like a projective test; you recognize what you believe
must be there.
Consider the reciprocal connections that pass between layers of the cortex.
You see these as involving output signals in the perceptual functions, and
others have proposed similar interpretations. I don't have any basic problem
with that, since I've never tried to guess at the internal workings of the
higher perceptual systems. But considering the tangles of connections in
these layers of the brain, how can anyone tell that a given efferent signal
is not simply carrying signals from the output function of a higher system
to the reference input of a lower system? Nobody's even LOOKED at it
experimentally from that point of view. Various layers of the brain have
been conventionally divided into "sensory" and "motor" areas, but at the
higher levels there's no simple or direct relationship between output
signals and the signals that end up in the lower motor nuclei. The pathways
carry no labels saying what they do. You have to put the labels on, and you
do that according to the model that you're trying to fit to the observations.
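To make that alternative reading concrete, here is a minimal sketch of the arrangement PCT would look for: a two-level hierarchy in which the higher system's output signal is not a motor command at all, but the reference input of a lower system. Everything specific here (the one-dimensional "plant," the loop gains, the variable names) is my own illustrative assumption, not a claim about actual cortical wiring.

```python
# A minimal two-level control hierarchy in the spirit of HPCT.
# The higher loop controls perceived position by varying the
# reference signal of the lower loop, which controls perceived
# velocity. Gains and dynamics are arbitrary illustrative choices.

def control(perception, reference, gain):
    """Elementary control unit: output is amplified error."""
    return gain * (reference - perception)

position = 0.0
velocity = 0.0
position_ref = 10.0   # reference for the higher system
dt = 0.01

for _ in range(5000):
    # Higher system: its "efferent" output goes nowhere near a
    # muscle; it becomes the reference input of the lower system.
    velocity_ref = control(position, position_ref, gain=1.0)
    # Lower system: its output drives the plant (here, acceleration).
    acceleration = control(velocity, velocity_ref, gain=5.0)
    velocity += acceleration * dt
    position += velocity * dt

print(round(position, 2))   # settles near the reference, 10.0
```

Traced from outside, the signal from the higher loop to the lower one looks like any other descending pathway; only the model tells you to read it as a reference signal rather than a command.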
Yes, when it comes to language I do say "it's up there at level N" at least
with regard to its more abstract aspects. I'm happy to leave the sorting-out
to linguists, except where, like any enthusiastic amateur, I just have to tell
someone about my wonderful insight. Basically I know that only a serious and
systematic study of language will lead to believable answers.
But what I do want to keep saying is "Hey, you linguists, you're USING
perceptual capabilities that you're not putting into your model." And I want
to keep saying "Don't forget that when you're talking about language
_production_, the only way you or anyone else can know about it is to
perceive it: you're really talking about controlling language
_perceptions_." If you see a linkage between language terms, it's a
_perceived_ linkage, and belongs on the perceptual side of the system. If
you see a rule being applied, it's a _perceived_ rule, and if the rule is
broken, you can know about that only by comparing the rule you _do_ see with
the rule that you think you _should_ see. Rules are not just rules; they're
perceived rules.
So the PCT approach to linguistic problems is to see them as control
problems, and because they are control problems, primarily as perceptual
problems. It's not to examine word associations or rules just to see what
they are. It's to reveal that we have a problem in modeling just what an
"association" is, and just what a "rule" is. Seeing that there's a problem
doesn't give us the answer, but it tells us what we should be looking for.
You say that we can't get anywhere by treating the sensory systems as black
boxes. I don't see what else anyone can do, since we have only a vague idea
of what these systems do and essentially no idea of how they do it. It's
easy to confuse studying the program that happens to be running in a machine
with the functions that the machine itself carries out. The programs are
optional, but the basic functions are much less so. When we study the way
language is constructed and how it's used, we're studying the programs that
happen to be running. To study the brain, we have to ask what functions are
required for those programs, or any others like them, to run at all.
Consider how visual configurations are perceived. We can see objects in all
sorts of orientations and sizes as being the same, or as being different. We
can also perceive "size" and "orientation" as changing while the object
remains the same, or size as remaining the same while orientation and shape
change, and so forth. We might study interactions among these aspects of
configuration perception, and work up rules that allow us to predict them.
But we will still not know how it is that a batch of sensations like light,
dark, edge, shading, color, and so on are combined to produce these
phenomena. The interior of the black box remains as opaque as ever as long
as we're looking only at what the black box DOES.
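The black-box point can be made concrete. The function below produces a "configuration" signal that stays the same when an object is moved, rotated, or uniformly rescaled; behaviorally it does roughly what a configuration perceiver must do. But its internals (sorted, normalized point-to-point distances) are an arbitrary choice of mine with no claim of neural realism, and nothing in its input-output behavior would tell you which of many possible internal mechanisms is the real one.

```python
import math

def configuration_signal(points):
    """A black-box 'configuration' perception: unchanged under
    translation, rotation, and uniform scaling of the input.
    The internals are one arbitrary guess among many."""
    dists = sorted(
        math.dist(p, q)
        for i, p in enumerate(points)
        for q in points[i + 1:]
    )
    longest = dists[-1]
    return tuple(round(d / longest, 6) for d in dists)

triangle = [(0, 0), (4, 0), (0, 3)]
# Rotate 90 degrees, scale by 2, translate by (1, 1)
transformed = [(-2 * y + 1, 2 * x + 1) for x, y in triangle]

# Same "object," same signal
print(configuration_signal(triangle) == configuration_signal(transformed))
```

Probing such a function with transformed inputs tells us what it does, never how it does it; that is exactly the situation with the visual system.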
Even at the level of configuration perception, the problem of understanding
how it works is only very partially solved. We have neural nets that sort of
work, in limited ways. And that's about it. I don't see any alternative to
the black-box approach for a long time to come, even at the lower levels,
and even more obviously, in spades, at the higher levels.
Anyone who understands the structure of HPCT can see that we are very short
of hard data at the higher levels, "higher" being anything more than level
3, or 2-1/2. All of what we know about the higher levels has to come from
direct experience and from behavioral experiments. Brain data, especially
data on anatomy, is of little use, because the physical locations of
structures give us no clear picture of functional relationships. There are
control loops passing through the cortex that have the same level (judging
by counting synapses and looking for comparators) as those of the spinal
cord. Geometry is not a good guide to level or to function.
If you want to investigate phenomena at the higher levels, that's wonderful.
But if, in interpreting what you find, you decide to abandon the basic
_principles_ of the HPCT structure, you might as well abandon HPCT
altogether. You will only be seeing what some other set of principles allows
you to see, and you will never see whether the HPCT principles would suggest
a different view and perhaps fit better. We're talking about big wrenching
differences of viewpoint here. Trying to merge them is, I think, a fruitless
endeavor. It has never worked before.
Best,
Bill P.