[From: Bruce Nevin (Thu 92044 14:38:31)]

Avery dropped by yesterday, and we had lunch and some time to chat face
to face. A very pleasant visit. He's off to Philadelphia now.

One of his remarks just before leaving was the supposition that the core
of the metalanguage is "in the hardware," so to speak, rather than in
language--neural mechanisms doing things that emerge as language.

Of course this must be so. Equally "of course," important aspects of
the metalanguage are in language ("word" is a metalanguage word). The
question then becomes, what is on each side of the division.

To avoid some predictable objections, I'll put this in considerably more
awkward terms. Read whichever version you prefer.

Words are perceptions, i.e. neural signals entering and leaving
elementary control systems (ECSs) in a hierarchical control system. In
a way that is yet to be understood in the model, word perceptions are
brought into correspondence with non-word perceptions for "object"
language, and into correspondence with word perceptions (and other
"language-internal" perceptions, for lack of a better term) for
metalanguage. So "metalanguage" in this sense consists of neural
signals, a subset of the neural signals for language. "Metalanguage" in
Avery's sense consists of ECSs (and their I/O functions) controlling the
neural signals for language. The question then becomes, what is on each
side of the division.
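
As a toy rendering of the two correspondences (in Python, with plain
dicts standing in for whatever association mechanism the model
eventually provides; every detail below is my assumption, not the
model's):

    # Object language: word perceptions brought into correspondence
    # with non-word perceptions.
    object_language = {
        "dog": "<non-word perception of the dog>",
        "bark": "<non-word perception of the sound>",
    }

    # Metalanguage: word perceptions brought into correspondence with
    # other word perceptions ("word" is itself a metalanguage word).
    metalanguage = {
        "word": ["dog", "bark", "same", "word"],
        "same": [("dog", "dog")],  # asserts two occurrences are the same
    }

    print("dog" in metalanguage["word"])  # True: "dog" is a word
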
I asked a question some time back that seems to me crucial for this
inquiry. That question was: can an ECS control for two perceptual
signals being the same? (Say, the recognition of the particular dog
that carried off my newspaper yesterday.) Recall that this is the
condition for many reductions. A word can be reduced to a pronoun or to
zero, for example, only under this assertion of "you already know what
I'm talking about" sameness.
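
To make the question concrete, here is a minimal sketch (with scalar
signals and an ad hoc similarity measure, all of it my own assumption)
of an ECS whose perceptual input function is a sameness relation
between two lower-level perceptual signals:

    def sameness(p1, p2, scale=1.0):
        """Relation perception: near 1.0 when the two signals match,
        falling toward 0.0 as they diverge."""
        return 1.0 / (1.0 + abs(p1 - p2) / scale)

    class SamenessECS:
        def __init__(self, reference=1.0, gain=0.5):
            self.reference = reference  # "these two should be the same"
            self.gain = gain
            self.output = 0.0

        def step(self, p1, p2):
            perception = sameness(p1, p2)        # input function
            error = self.reference - perception  # comparator
            self.output += self.gain * error     # integrating output
            return perception, error, self.output

    # Yesterday's newspaper thief vs. the dog in the yard now: if error
    # stays near zero, the system is controlling for their sameness.
    ecs = SamenessECS()
    print(ecs.step(p1=0.9, p2=0.95))
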
In Generativist theory, this metalanguage assertion of sameness is
carried by subscript letters after the words in question, cute and
convenient but implausible. In Harrisian operator grammar, it is
carried by a metalanguage assertion. It is useful that many reductions
are constrained to act only on word pairs whose sameness can be asserted
in this way (whereas indexing with subscripts is relatively
unconstrained), but the proposed sources with explicit metalanguage are
implausible.

How could ECSs do this? Obviously, they could control metalanguage
words that never (or rarely) get spoken and are almost always zeroed. We
surely have a relation perception with which we associate the word
"same".

But this perception of sameness would apply not to the two occurrences
of a given word; that would be a different sort of perception, one of
repetition of the word. Nor would it apply to the category perception
with which the word is associated (if indeed that is what is going on);
that would be a perception of something being of like kind. Rather, it
would apply to lower-level nonverbal perceptions satisfying the input
requirement of the category ECS. These perceptions might differ in
detail (Bowser asleep vs. Bowser wagging his tail and panting to go out,
two sides of Mt. Shasta, etc.). The perception of sameness then
overrides these differences.

Perhaps some lower-level perceptions are in common, as a basis for
recognition (the same as a remembered individual).
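
A hedged sketch of that suggestion, with the feature encoding and the
recognition threshold entirely my own invention:

    # Recognition of "the same remembered individual" as overlap between
    # present lower-level perceptions and a remembered set.
    BOWSER = {"size:small", "coat:brown", "bark:yip", "gait:waddle"}

    def same_individual(present, memory=BOWSER, threshold=0.5):
        """Category-level input: true when enough lower-level
        perceptions are shared, despite differences in detail."""
        return len(present & memory) / len(memory) >= threshold

    # Bowser asleep vs. Bowser panting to go out: details differ, but
    # enough perceptions are in common for recognition.
    asleep = {"size:small", "coat:brown", "posture:curled"}
    panting = {"size:small", "coat:brown", "bark:yip", "posture:standing"}
    print(same_individual(asleep), same_individual(panting))
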
We compare present perceptions with remembered and imagined perceptions.
If possible, we test our relationship with the individual (barks like
Bowser when I call his name--must be Bowser).
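
The test could be sketched the same way (the name-calling probe and the
expected response are hypothetical stand-ins):

    def test_identity(observed_response, remembered_response="barks"):
        """Apply a disturbance (call his name) and check whether the
        present individual responds as the remembered one would."""
        return observed_response == remembered_response

    # Call "Bowser!" and he barks--must be Bowser.
    print(test_identity(observed_response="barks"))
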
Is there any other way that a perception of sameness could arise in the
model?

This appears to entail that the perception of an individual perduring
through time and across occasions depends on the category level
precisely to the extent that language is claimed to depend on the
category level. Is this an acceptable consequence?

Bruce
bn@bbn.com