[From Bill Powers (920311.0930)]
Bruce Nevin (920309) --
... it was unclear whether you were offering a game of "let's
you and him fight" or conciliating "Boys! Boys! Don't fight!"
More the latter. The motivation, however, was to challenge you and Avery to
compare the assumptions and methods on which your two approaches rest.
These assumptions and methods must be very different, assuming different
models of language processes in the brain. I'm hoping for some comment on
my comment to the effect that "you can't both be right and either approach
could be wrong if you're trying to describe language universals." I'm
hoping that you will both try to see what you're doing, in this process, as
control of perceptions, and elucidate what those perceptions are. Bruce:
How can you tell when you have a satisfactory expansion? Avery: How can you
tell when you have a satisfactory parsing? That is, what do you look at to
see whether the result meets your intentions? And what are the intentions?
In short, I'm trying to get a discussion going at the next level up, rather
than spinning out more examples that keep the superordinate perceptual
control systems in the background. I understand that you both have
theoretical cranks you can turn which will grind up a sentence and spit out
an analysis according to some procedure that has been fabricated to produce
that analysis. I want to get off the subject of what is spit out and get
our attention onto the grinding machines. It is highly unlikely that I will
be able to contribute to your efforts in linguistics by examining the
outputs of these machines.
Both methods, as far as I can see, depend on some lexicon in which the
characteristic uses of specific words in specific contexts are listed. It
seems to me that this is a level of perception and control that can be
dealt with independently of higher operations that are done once the
lexicon is available. So far it seems to me that the modeler/theorist is
supplying this lexicon out of informal private experience and knowledge
(either you know what a verb is or you don't), instead of from a publicly-
defined model. If a model satisfactory to both parties for the development
of a lexicon can be sketched in, or more than sketched in, it seems to me
that we would have some intermediate parts of a hierarchical model of
language that would have a better chance of universality, at that level,
than the greatly divergent higher-level processes that are applied using
the information in the lexicon. Perhaps by making the lower levels as
explicit as possible we can find reasons for whatever disagreements remain
at higher levels.
Your examples of the way in which physical production of sounds influences
the way phonemes are heard and used point toward a very low-level part of
the model that, I think, can be specified reasonably well (well enough to
go on with). I'm now talking about specifying a slightly higher-level blob
in which we take word production and perception for granted up to the level
where the word is agreed to exist (even though it may be subject to
different higher-level interpretations), and become concerned with the most
elementary level of attaching words to meanings. What kinds of words get
attached to what kinds of perceptions? This is not a complete lexicon,
because if we take the least possible upward
step we will not reach categories such as "noun" or "verb" or "operator"
or "argument." That will come later.
I'm proposing that we use the same method I used in building up a
systematic guess about levels of perception in general. The idea is to peel
off layers, from the bottom up, that seem self-contained enough to become
the units perceived and manipulated by the next higher level. Sometimes,
Bruce, you refer to a back-and-forth interaction between language and
meaning, providing a vague picture of some very busy multileveled process
in which things are going on at many levels at once. I think we can do
better: I think we can pick out those processes that occur at one level,
with higher level processes OF A DIFFERENT KIND going on at the same time.
The higher-level process does not have to handle the processes going on at
lower levels, only the processes that are of a new and superordinate type.
Conversely, if we can find well-defined packages at lower levels, they
will not have to handle aspects of language that higher levels will later
be found to handle. What we will have at any given level of this kind of
peeling-off process will not be language itself, but the foundations of
full-blown language. And as we define the lower levels, what remains to be
handled will become clearer and clearer. As we keep going up by the
smallest steps we can think of, adding the least increment of function that
seems to hang together, the whole structure will come to look more and more
like the language we know.
It may be that a lot of the confusion in linguistics is due to trying to
handle different levels of processes as if they were mixed together at one
level.
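A minimal sketch of the layering I am proposing (in Python, with invented
level names, toy groupings, and made-up data, not a claim about what the
real levels are): each level's perceptual function accepts only the units
delivered by the level below it, so a higher level never has to re-handle
lower-level detail.

    # Toy illustration of peeling off levels from the bottom up.
    # Each level perceives only the units handed up by the level below;
    # the level names and groupings here are placeholders.

    def perceive_sound_units(acoustic_chunks):
        # Level 1: raw acoustic chunks become phoneme-like sound units.
        return list(acoustic_chunks)

    def perceive_words(sound_units):
        # Level 2: sound units become word-sized units.
        # This level never sees the raw acoustics.
        return ["".join(sound_units[i:i + 3])
                for i in range(0, len(sound_units), 3)]

    def perceive_word_object_links(words, nonverbal_scene):
        # Level 3: words are paired with nonverbally perceived objects.
        # This level never sees sound units or acoustics.
        return list(zip(words, nonverbal_scene))

    acoustics = "bawdamaboo"                      # pretend acoustic record
    sounds = perceive_sound_units(acoustics)
    words = perceive_words(sounds)                # ['baw', 'dam', 'abo', 'o']
    links = perceive_word_object_links(words, ["ball", "dog", "milk"])

The point of the toy is only that each function's interface is the output of
the level below, which is what makes a level self-contained.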
--------------------------------------------------------------------
If you accomplish the aim of accounting for what all languages have in
common, and you show that it all comes down to characteristics of the
world of nonverbal perception plus fundamentals of physics and
chemistry in the environment, like the acoustics of the vocal tract--
having reached the state where linguistic universals are trivially
deduced from first principles, what would remain?
Nothing. I think you're pulling back from reductionism, which isn't implied
by my suggestion. If we find true universals, I would expect them to
include such things as the capacity to recognize and execute programs in
which both symbols and continuous variables are arguments and outputs, or
the capacity to generalize and perceive principles. What are you doing in
the search for language universals but trying to perceive principles? How
do you do it, but by applying rules and algorithms at the program level?
And why do you do it, but to construct a system concept of language? The
hierarchy of perception and control contains what the linguist is doing at
many levels, and it probably also contains language itself which is, after
all, something we do with our brains.
I don't claim that the conventions of language will be "simple and
uncontroversial," any more than I could claim that any other human
conventions are simple and uncontroversial. We can think in either simple
or complex ways, and our conventions can be easy or difficult to comprehend
and agree on. But we will find it easier to agree on what the logical
conventions are if we can remove lower-level aspects of language that don't
depend on the program level.
Language shows us the sorts of things that a brain can do. These things
are more universal than language. But the study of language gives us a
window into the higher-level processes of a brain -- if only in the form of
elaborate models constructed by linguists. EVERYTHING ANY HUMAN BEING DOES
IS EVIDENCE FOR A MODEL OF THE BRAIN. The conventions of language tell us
about the human ability to perceive and control for conventions. They tell
us first that human beings in general use conventions, and second that
students of human behavior can also perceive those conventions, and
presumably control for conformity with them. There is no privileged
position from which a linguist can see these conventions without using the
very same capacity of the brain. The linguist is in no better position to
grasp the conventions that others use than to grasp the conventions the
linguist is using. The very perception of "convention" itself demonstrates
a function of the linguist's brain.
---------------------------------------------------------------------
In particular, I believe that operator grammar shows a simple structure
for language--a structure of word dependencies--that is universal and
that accords well with perceptual control,...
I agree that it does, although you will have to agree that it doesn't
completely fit natural language as it is spoken without introducing some
important invisible processes which are in principle unverifiable. One of
the things the brain can do is create plausible sets of rules that appear
to fit what is observed. Often achieving a fit requires imagining
information not actually present in perception. The imagined information is
whatever is required to make the rule fit what is observed.
The most convincing models are those that require us to imagine the least
while still fitting what we actually observe. Operator grammar requires us
to imagine some critical parts of the process of language comprehension.
Avery's approach requires us to imagine other kinds of hidden processes.
But in either case, the rules can be made to work if we agree to imagine as
prescribed.
Given any set of experiences, it is possible to devise a rule that fits
them. This is like curve-fitting, only more complex. We need a way to find
out whether a given "curve" has some underlying justification, or whether
it is simply one of an infinity of curves that would pass through the same
data points. When we compare different sets of rules for dealing with the
same observables, and when neither set of rules fits the observations
without adding some imaginary data, we then have to ask which rules require
the least imagined data to make them work. We have to examine the imagined
data to see if some of it is more believable, or if some is in principle
more testable, or if some seems to be needed not just for these rules, but
for others in different universes of discourse.
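The curve-fitting analogy can be made concrete. In the following sketch
(plain numpy, and the numbers are made up), a straight line and a
fourth-degree polynomial both account for the same five observations; the
polynomial fits them exactly, as would infinitely many other curves, so fit
alone cannot decide between them, and we are thrown back on asking which
rule buys its fit with the least unobserved structure.

    import numpy as np

    # Five observed points (made-up numbers, for illustration only).
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([0.1, 1.1, 1.9, 3.2, 3.9])

    # A simple rule: a straight line fitted to the observations.
    line = np.polyfit(x, y, deg=1)

    # A far more elaborate rule: a degree-4 polynomial that passes
    # exactly through every point, one of infinitely many curves that do.
    wiggly = np.polyfit(x, y, deg=4)

    print("line residual:  ", np.sum((np.polyval(line, x) - y) ** 2))
    print("wiggly residual:", np.sum((np.polyval(wiggly, x) - y) ** 2))
    # Both rules "work"; the second works perfectly. The question is which
    # one carries the least imagined structure, and which keeps working
    # when new observations arrive.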
So I ask both you and Avery: in your models of language structure, which
parts of the phenomenon of language are observed, and which are imagined in
order to make the analysis work?
-----------------------------------------------------------------
Avery Andrews (920310) --
... the pre-adaptation for language is guessing what people are up to
by watching what they are doing. Given this, one can convey intentions
by miming, from which arises manual sign language. Then one has to get
from that to spoken language, a step which strikes me as very
mysterious.
By "what people are up to" I take it you mean "what people are controlling
for" -- that is, what the movements they make are intended to accomplish.
At the miming level, you simply take the movements as controlled variables
and learn to control them for yourself. But once you've mastered the
movements well enough, you have to go up a level and ask what they
accomplish, what higher-level variable is controlled by varying those
movements (or more generally, controlling those lower-level perceptions)
that you now know how to control. So now I can say "da" and "ba" and "ma"
and "baw" and "boo": that was fun, but so what? What do I use them for? Ah,
you're showing me that round red thing and saying "baw." I will show you
the round red thing and say "baw." Now I mime you at a higher level. If I
want to see the round red thing I will say "baw" and see if that works. If
I show you the round red thing you say "baw" -- or something pretty close
to it. So if I want to hear "baw" I can show you the round red thing, or if
I want to see the round red thing I can say "baw." If you were a different
parent, say a deaf one, I wouldn't learn to say "baw" but to make a
configuration with my hands. Then I could learn to use that hand
configuration to get a round red thing from you, or show you the round red
thing to make you do the hand configuration again. Manipulating either
experience at the lower level thus becomes a means of controlling for the
other. The environmental link, in both directions, consists of the
relationship the parent is controlling for such that the word is produced
on seeing the object, and the object is produced on hearing the word. I'm
being taught, but I don't know it. I'm just learning to manipulate some
things in order to control others, which is the most fun there is.
I think this is how we should build up the model for acquiring a lexicon
(see above comments).
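To make the two-way link concrete, here is a toy sketch (the vocabulary, the
parent's rule, and the search-over-acts loop are all inventions for
illustration, not a proposed mechanism): the child controls the perception
of seeing the round red thing by varying a lower-level act it already knows
how to produce, and the parent supplies the environmental link in both
directions.

    # Toy two-level loop for one word-object link (illustrative only).

    def parent(child_act):
        # The environmental link the parent controls for: show the ball
        # on hearing "baw", say "baw" on being shown the ball.
        if child_act == "say baw":
            return "see ball"
        if child_act == "show ball":
            return "hear baw"
        return "nothing"

    def control(reference, known_acts):
        # Higher level: wanting a perception, vary the lower-level acts
        # (already mastered, so simply invoked here) until it appears.
        for act in known_acts:
            perception = parent(act)
            if perception == reference:
                return act, perception
        return None, "error persists"

    # If I want to see the round red thing, I say "baw" and see if it works.
    print(control("see ball", ["say baw", "show ball"]))
    # If I want to hear "baw", I show you the round red thing instead.
    print(control("hear baw", ["say baw", "show ball"]))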
-----------------------------------------------------------------------
Best to all,
Bill P.