Language

[From Bill Powers (920311.0930)]

Bruce Nevin (920309) --

... it was unclear whether you were offering a game of "let's
you and him fight" or conciliating "Boys! Boys! Don't fight!"

More the latter. The motivation, however, was to challenge you and Avery to
compare the assumptions and methods on which your two approaches rest.
These assumptions and methods must be very different, assuming different
models of language processes in the brain. I'm hoping for some comment on
my comment to the effect that "you can't both be right and either approach
could be wrong if you're trying to describe language universals." I'm
hoping that you will both try to see what you're doing, in this process, as
control of perceptions, and elucidate what those perceptions are. Bruce:
How can you tell when you have a satisfactory expansion? Avery: How can you
tell when you have a satisfactory parsing? That is, what do you look at to
see whether the result meets your intentions? And what are the intentions?
In short, I'm trying to get a discussion going at the next level up, rather
than spinning out more examples that keep the superordinate perceptual
control systems in the background. I understand that you both have
theoretical cranks you can turn which will grind up a sentence and spit out
an analysis according to some procedure that has been fabricated to produce
that analysis. I want to get off the subject of what is spit out and get
our attention onto the grinding machines. It is highly unlikely that I will
be able to contribute to your efforts in linguistics by examining the
outputs of these machines.

Both methods, as far as I can see, depend on some lexicon in which the
characteristic uses of specific words in specific contexts are listed. It
seems to me that this is a level of perception and control that can be
dealt with independently of higher operations that are done once the
lexicon is available. So far it seems to me that the modeler/theorist is
supplying this lexicon out of informal private experience and knowledge
(either you know what a verb is or you don't), instead of from a publicly-
defined model. If a model satisfactory to both parties for the development
of a lexicon can be sketched in, or more than sketched in, it seems to me
that we would have some intermediate parts of a hierarchical model of
language that would have a better chance of universality, at that level,
than the greatly divergent higher-level processes that are applied using
the information in the lexicon. Perhaps by making the lower levels as
explicit as possible we can find reasons for whatever disagreements remain
at higher levels.

Your examples of the way in which physical production of sounds influences
the way phonemes are heard and used point toward a very low level part of
the model that, I think, can be specified reasonably well (well enough to
go on with). I'm now talking about specifying a slightly higher-level blob
in which we take word production and perception for granted up to the level
where the word is agreed to exist (even though it may be subject to
different higher-level interpretations), and become concerned with the most
elementary level of attaching words to meanings. What kinds of words get
attached to what kinds of perceptions? This is not a complete lexicon,
because if we take the least possible upward
step we will not reach categories such as "noun" or "verb" or "operator"
or "argument." That will come later.

I'm proposing that we use the same method I used in building up a
systematic guess about levels of perception in general. The idea is to peel
off layers, from the bottom up, that seem self-contained enough to become
the units perceived and manipulated by the next higher level. Sometimes,
Bruce, you refer to a back-and-forth interaction between language and
meaning, providing a vague picture of some very busy multileveled process
in which things are going on at many levels at once. I think we can do
better: I think we can pick out those processes that occur at one level,
with higher level processes OF A DIFFERENT KIND going on at the same time.
The higher-level process does not have to handle the processes going on at
lower levels, only the processes that are of a new and superordinate type.
Conversely, if we can find well-defined packages at lower levels, they
will not have to handle aspects of language that higher levels will later
be found to handle. What we will have at any given level of this kind of
peeling-off process will not be language itself, but the foundations of
full-blown language. And as we define the lower levels, what remains to be
handled will become clearer and clearer. As we keep going up by the
smallest steps we can think of, adding the least increment of function that
seems to hang together, the whole structure will come to look more and more
like the language we know.

It may be that a lot of the confusion in linguistics is due to trying to
handle different levels of processes as if they were mixed together at one
level.

--------------------------------------------------------------------

If you accomplish the aim of accounting for what all languages have in
common, and you show that it all comes down to characteristics of the
world of nonverbal perception plus fundamentals of physics and
chemistry in the environment, like the acoustics of the vocal tract--
having reached the state where linguistic universals are trivially
deduced from first principles, what would remain?

Nothing. I think you're pulling back from reductionism, which isn't implied
by my suggestion. If we find true universals, I would expect them to
include such things as the capacity to recognize and execute programs in
which both symbols and continuous variables are arguments and outputs, or
the capacity to generalize and perceive principles. What are you doing in
the search for language universals but trying to perceive principles? How
do you do it, but by applying rules and algorithms at the program level?
And why do you do it, but to construct a system concept of language? The
hierarchy of perception and control contains what the linguist is doing at
many levels, and it probably also contains language itself which is, after
all, something we do with our brains.

I don't claim that the conventions of language will be "simple and
uncontroversial," any more than I could claim that any other human
conventions are simple and uncontroversial. We can think in either simple
or complex ways, and our conventions can be easy or difficult to comprehend
and agree on. But we will find it easier to agree on what the logical
conventions are if we can remove lower-level aspects of language that don't
depend on the program level.

Language shows us the sorts of things that a brain can do. These things
are more universal than language. But the study of language gives us a
window into the higher-level processes of a brain -- if only in the form of
elaborate models constructed by linguists. EVERYTHING ANY HUMAN BEING DOES
IS EVIDENCE FOR A MODEL OF THE BRAIN. The conventions of language tell us
about the human ability to perceive and control for conventions. They tell
us first that human beings in general use conventions, and second that
students of human behavior can also perceive those conventions, and
presumably control for conformity with them. There is no privileged
position from which a linguist can see these conventions without using the
very same capacity of the brain. The linguist is in no better position to
grasp the conventions that others use than to grasp the conventions the
linguist is using. The very perception of "convention" itself demonstrates
a function of the linguist's brain.
---------------------------------------------------------------------

In particular, I believe that operator grammar shows a simple structure
for language--a structure of word dependencies--that is universal and
that accords well with perceptual control,...

I agree that it does, although you will have to agree that it doesn't
completely fit natural language as it is spoken without introducing some
important invisible processes which are in principle unverifiable. One of
the things the brain can do is create plausible sets of rules that appear
to fit what is observed. Often achieving a fit requires imagining
information not actually present in perception. The imagined information is
whatever is required to make the rule fit what is observed.

The most convincing models are those that require us to imagine the least
while still fitting what we actually observe. Operator grammar requires us
to imagine some critical parts of the process of language comprehension.
Avery's approach requires us to imagine other kinds of hidden processes.
But in either case, the rules can be made to work if we agree to imagine as
prescribed.

Given any set of experiences, it is possible to devise a rule that fits
them. This is like curve-fitting, only more complex. We need a way to find
out whether a given "curve" has some underlying justification, or whether
it is simply one of an infinity of curves that would pass through the same
data points. When we compare different sets of rules for dealing with the
same observables, and when neither set of rules fits the observations
without adding some imaginary data, we then have to ask which rules require
the least imagined data to make them work. We have to examine the imagined
data to see if some of it is more believable, or if some is in principle
more testable, or if some seems to be needed not just for these rules, but
for others in different universes of discourse.
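
To make the curve-fitting analogy concrete, here is a minimal sketch (in
Python, with made-up data, purely for illustration) of the point that many
different "rules" can be made to pass through the same observations, so
exactness of fit by itself does not tell us which rule is justified:

    import numpy as np

    # Five hypothetical data points, made up for illustration only.
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.0, 2.2, 2.9, 4.1, 5.0])

    # Fit "rules" of increasing complexity to the same observations.
    for degree in (1, 2, 4):
        coeffs = np.polyfit(x, y, degree)                  # the rule
        worst = np.max(np.abs(np.polyval(coeffs, x) - y))  # how well it fits
        print(f"degree {degree} polynomial: worst residual = {worst:.3f}")

    # The degree-4 polynomial passes exactly through all five points, yet
    # that exact fit does not show it is the "right" rule; infinitely many
    # other curves pass through the same points.

Asking which rule requires the least imagined data is the analogue of asking
which of these curves has some justification beyond merely passing through
the points.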

So I ask both you and Avery: in your models of language structure, which
parts of the phenomenon of language are observed, and which are imagined in
order to make the analysis work?
-----------------------------------------------------------------
Avery Andrews (920310) --

... the pre-adaptation for language is guessing what people are up to
by watching what they are doing. Given this, one can convey intentions
by miming, from which arises manual sign language. Then one has to get
from that to spoken language, a step which strikes me as very
mysterious.

By "what people are up to" I take it you mean "what people are controlling
for" -- that is, what the movements they make are intended to accomplish.
At the miming level, you simply take the movements as controlled variables
and learn to control them for yourself. But once you've mastered the
movements well enough, you have to go up a level and ask what they
accomplish, what higher-level variable is controlled by varying those
movements (or more generally, controlling those lower-level perceptions)
that you now know how to control. So now I can say "da" and "ba" and "ma"
and "baw" and "boo": that was fun, but so what? What do I use them for? Ah,
you're showing me that round red thing and saying "baw." I will show you
the round red thing and say "baw." Now I mime you at a higher level. If I
want to see the round red thing I will say "baw" and see if that works. If
I show you the round red thing you say "baw" -- or something pretty close
to it. So if I want to hear "baw" I can show you the round red thing, or if
I want to see the round red thing I can say "baw." If you were a different
parent, say a deaf one, I wouldn't learn to say "baw" but to make a
configuration with my hands. Then I could learn to use that hand
configuration to get a round red thing from you, or show you the round red
thing to make you do the hand configuration again. Manipulating either
experience at the lower level thus becomes a means of controlling for the
other. The environmental link, in both directions, consists of the
relationship the parent is controlling for such that the word is produced
on seeing the object, and the object is produced on hearing the word. I'm
being taught, but I don't know it. I'm just learning to manipulate some
things in order to control others, which is the most fun there is.

I think this is how we should build up the model for acquiring a lexicon
(see above comments).
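
Purely as an illustration of that loop (a toy sketch, not a claim about how
the model should finally be written), here are the two cross-linked controls
in code; the names and the "baw" convention come from the example above, and
everything else is hypothetical:

    def parent(heard_word, sees_ball):
        # The parent controls the relationship word <-> object:
        # say "baw" on seeing the ball, show the ball on hearing "baw".
        says = "baw" if sees_ball else None
        shows_ball = (heard_word == "baw")
        return says, shows_ball

    def child(wants_to_see_ball, sees_ball):
        # The child acts only when there is an error: wanting to see the
        # ball (reference) while not seeing it (perception).
        error = wants_to_see_ball and not sees_ball
        return "baw" if error else None

    # The child wants the ball but does not see it; the error drives the
    # utterance, and the parent's controlled relationship closes the loop.
    spoken = child(wants_to_see_ball=True, sees_ball=False)    # -> "baw"
    _, now_sees_ball = parent(heard_word=spoken, sees_ball=False)
    print(spoken, now_sees_ball)                                # baw True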
-----------------------------------------------------------------------
Best to all,

Bill P.


[From Bill Powers (920327.1800)]

Bruce Nevin (920327.1220) --

The linguistics thread is still looking good to me.

It's probably been lost in the mists of time and tangles of verbiage, but
some time ago I proposed that there are two modes of control going on in
language at the same time: control for communication of meaning, and
control for conformity to linguistic rules. These two modes are not
necessarily dependent on each other, but as you've said from time to time
they must have some strong interactions -- how strong depending to some
extent, I would guess, on formal education. Haynes Johnson, on Washington
Week in Review, often gets into a mode in which he gets his meanings across
by a series of disjointed phrases or even single words that are not any
kind of recognizable sentence, although you end up knowing what he means.
Charlie McDowell, on the other hand, speaks in polished sentences that
could be written down and sent to the typesetter. Johnson is a writer of
note, and certainly can put out polished language when he wants to, even
when speaking. But it's clear that there's no one universal connection
between one's style of communicating meaning and one's adherence to
grammatical conventions.

Your extensions of my initial diagram of sentence expansion are quite
acceptable to me. I wasn't intending to "rule out rules" by my diagram --
only to say that it didn't seem adequate to me to have language doing the
work of expansion when we commonly do the expanding in terms of meanings,
directly. After the meaning-level expansion I suggested, I meant to say,
but didn't, that the expanded meanings are then described, and that in the
process of description the adopted rules of sentence construction as well
as those of indicating meanings come into play. As you have retained my
suggested meaning level of expansion, there seems to be no problem here,
unless what I just said has created one.

A thought that's been in the back of my mind for a long time seems a little
closer now to being expressible. You've complained occasionally that I seem
to be demanding the meanings themselves from you in cases where, as you
say, you have to use words to communicate them. What's been bothering me
can be illustrated by the verbal expansions. I've taken it for granted that
these expansions are produced (in principle if not in the heat of
generating a post) from a formal system based strictly on word-occurrences
and empirical judgments of the likelihood of various usages. But the actual
expansions I've seen don't seem to have this formal character. I would
expect, from a formal system, that an input string of words would lead to
_some particular expansion_ that anyone applying the same formal system
would come up with: the VERY SAME expansion, no matter who applied the
rule. But the offered expansions, and revisions and alternatives mentioned
at the time and later, seem to indicate that the expansion rule is not
totally formal -- in fact, that it can't be applied without referring to
meanings.

In a recent post you said something that clarified this for me. In
describing how you produce an expansion, you mentioned that one criterion
was that the expansion HAD TO HAVE THE SAME MEANING as the reduced form or
intermediate forms. A formal rule-based and empirical principle of
expansion would not require that -- as I had been (mis)understanding what
you claimed, it would produce sentences with the right meanings
automatically.

Now I understand that the formal system can at best introduce constraints:
if THAT's what you want the sentence to mean, then THIS is the way you have
to expand it according to the conventions of English.

It seems to me that the problem has now shifted to a subject that we
haven't discussed, but which I've mentioned in passing now and then: how to
analyze the process we call "description." This is the process of turning
an experience into a phrase that means that experience. Or perhaps less
directionally, it's the problem of deciding whether a given phrase is in
fact a description of a given experience, and if it isn't, deciding what
would make it a better description. A valid expansion not only has to obey
the socially-agreed or learned rules of dependency and so on, but it has to
be a valid description of an expanded meaning (for incomplete meanings, I
would agree that the conventional expansions can also suggest missing
meanings).

Behind all this I am still trying to unearth mental processes that are
being used by linguists in the application of their principles, but which
are taken for granted and not included in those principles. I would be
asking similar questions of a mathematician in the course of finding out
how Ax + Bx = C is transformed into x = C/(A+B). The mathematician would,
like the linguist, begin by telling me the theorems that justify the
transformation, which in this case I would already know. What I would be
asking the mathematician would not be what the rules are that justify the
transformation, but HOW THE MATHEMATICIAN GOES ABOUT APPLYING THOSE RULES.
This requires something more than just demonstrating how the rules are
applied: it requires stepping back and trying to notice what it is to be a
mathematician. To do this, the mathematician has to abandon, for the
moment, the point of view from which it's obvious that the rule justifies
the transformation, and to try to see what "justification" is, what a
"rule" is.

When you said that you make reference to meanings while constructing an
expansion, you were telling me something about being a linguist. This, I
think, is the only way we can find out what perceptual processes and what
control processes underlie the linguistic methods that we try to describe
in words. The words themselves are the product of these processes; they are
not doing the work. But by stepping back and watching how the words are
manipulated, it may be possible to see beneath the surface and get a hint
as to the nature of the underlying processes.

I'm not sure you realize that construal of "sentences expressing
background knowledge" are not among the expansions I am talking about.

Yes, I realized it, but just ignored that subject. I'm still thinking of
simpler situations in which the necessary background is in the foreground,
as it were, and no recourse to special sublanguages is involved. I hope
it's clear now that I'm not denying the linguistic process that parallels
the nonlinguistic one. As if any such denial from this quarter would carry
much weight. You're really awfully good to treat my uninformed commentaries
as if they were important.

--------------------------------------------------
By the way, the correct diagram for how a control system achieves food
pellets is

                             /  variable  \
Stimulus --> food pellets -->|  pressings  |
                  ^          \   of bar   /
                  |                 |
                  +-----------------+

The stimulus is something that disturbs the availability of food
pellets. The bar-pressings also affect their availability.
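
As a sketch of that loop in numbers (the gain and the quantities are
illustrative only, not part of any experiment), the control system varies
its pressing so that perceived pellet availability stays near its reference
even when the stimulus disturbs it:

    reference = 10.0     # desired pellet availability
    pressing = 0.0       # output: rate of bar-pressing
    gain = 0.8

    for t in range(8):
        disturbance = -4.0 if t >= 3 else 0.0  # the "stimulus" cuts availability
        pellets = pressing + disturbance       # pressings and stimulus both act
        error = reference - pellets            # compare perception with reference
        pressing += gain * error               # vary pressing to oppose the error
        print(f"t={t}: pellets={pellets:.1f}  pressing={pressing:.1f}")

    # After the disturbance appears at t=3, the pressing rate rises until
    # pellet availability is back near the reference.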
---------------------------------------------------------------
Avery Andrews (920327) --

A negative goal is hard to handle conceptually and in a model -- "I don't
want to see a unicorn." Philosophers seem to make a lot of the fact that in
order to state a negative you implicitly have to recognize it as existent
in some way. "There's no such thing as a 20-pound mouse." As a what? "A
twenty pound mouse." Oh. One of those.

When you cast the same ideas in terms of perceptions, it becomes easier
(real perceptions or imagined ones). Any perception can exist on a scale
from none of it to some maximum amount:

     0 (No lion) --------------------------------------lion

Between the extremes you're perceiving more than no lion at all, but less
than the maximum amount of lion-ness. When I'm in a zoo, my reference level
for perceiving lionness can safely be set rather high:

     0 (No lion) --------------------------------------lion
                                                    ^
                                               ref amount

but if I'm in an African national park where lions roam free, I will set a
considerably lower reference level for the same perception:

     0 (No lion) --------------------------------------lion
                      ^
                 ref amount

Notice that I don't set the reference amount to zero. Lions are dangerous
and they creep around through tall grass. It is much better to see just a
little lion-ness (a tiny image far away) than none at all, because if you
don't see any at all you don't know where the lion is. Same principle for
poison ivy. You want to see the poison ivy, but not right up close, like
underfoot. You're shown what it looks like so you can avoid it, which is
paradoxical until you think in terms of high and low reference levels for a
specific perception.

Avoidance means setting a low or zero reference level for a given
perception. The perception itself is always defined positively. Speaking
this way, you don't have to mention a perception and at the same time
indicate that you don't want it to exist. You can say, I can imagine a
unicorn but I have little desire to treat it as a factual being. My
reference level for the proposition "Unicorns exist" is zero, or not very
much. Speaking of factual beliefs as goals may sound odd, but think of
"black people are inferior."

Try this on. You have a goal for children to live, and one means is to set
your reference level for perceiving them in danger (what you perceive as
danger) to a very low level. Not necessarily zero -- you don't want them to
grow up helpless -- but certainly not so high that they would get hurt.

Your conjectures seem to me to be an exploration of the logic or program
level of perception and control. Even this level can deal in continuous
variables.
----------------------------------------------------------------
Best to all,

Bill P.


[From Bill Powers (920502.1200)]

Avery Andrews (920501) --

control: but how to introduce appropriate disturbances into sentence
generation?

Some kinds occur naturally. "I want the one with green stripes -- I mean
the towel, not the washcloth." This suggests that as the meaning of the
first part was perceived by the speaker, it applied to too many things in
the visual field, requiring a refinement to narrow the choices. If you then
asked the person "What?", the person would probably NOT repeat the original
utterance, but say "I want the towel with green stripes." So you get a look
at the error and the corrected version. Any time a person adds "I mean ..."
there is probably an error correction going on.

Other kinds occur in interactions. You say, "Yoo-hoo" and someone looks
back at you. You say "Sorry, not you, him -- hey, you with the hat!"
meaning not with just any hat, but the hat with the huge purple brim and a
stuffed parakeet on it, of which the wearer is perfectly conscious, as is
everyone else. This isn't a correction OF language, but a correction of a
wrong result from using language.

I think scene descriptions or comments on stories might be a source for
seeing corrections occurring during language production. EXPERIMENTER: John
and Jane went to Alice's house for dinner. John bought a bottle for a
present but Alice doesn't drink. Jane apologizes to John because she knew
John was going to bring a present for Alice and she knew Alice doesn't
drink but John does. What does Jane say? SUBJECT: I should have told you
that Alice didn't drink before you went to the store. I mean doesn't drink.
I mean I knew she doesn't drink and I should have told you that before you
went to the store.

As a linguist you probably have a lot more examples of things that are
somewhat tricky to say or describe. One problem comes up when all you know
about someone is an irrelevant tidbit or two and you want to refer to one
of the tidbits. EXPERIMENTER: You know of a man who has a green car and a
dog, and of another man who has a white car and a dog. How do you state
that the first dog has fleas? Unless the person has already solved this
problem you might get "The man with the green car's dog has fleas." On
asking "Green car's dog?" you will probably get a correction, or you might
get one spontaneously if the person's listening for unwanted meanings.

I don't think you'll find many corrections in sentences like "Jane likes
John" unless it's a factual error. About the only syntactical mistake you
could make would be to reverse the order (in English). Also, you can't know
whether there was a mistake unless the person corrects an utterance. If the
person says "He would have done it if he was wise," this isn't an error of
production unless the person understands subjunctives and corrects "was" to
"were." I think we have to distinguish deviations from social norms in
general from deviations from what the person has ACCEPTED as a norm.

For any one person, general rules of grammar and syntax mean nothing unless
they've been installed in that person's perceptions and reference signals.
This is what's wrong with trying to generalize about "language" as if it
were a single thing with an independent existence. When you study language
in general you get an average over a large number of informants. But this
average usage of language doesn't tell you how a particular person uses it.
It's the same old statistical problem I gripe about occasionally. For any
given person, language is the way THAT person uses words, not "the way
people use words."

If I were a linguist (no remarks, please) I would begin by studying how ONE
person uses language with me. Then another. Then another. In each case I'd
look for the control processes involved, by learning how that person
corrects errors -- misunderstandings, misstatements, unfinishable
sentences, violations of formal rules that the person usually uses, and so
on: errors of all kinds. With each person I'd get some grasp of the rules
that person follows, the phrases that are simply set sequences, and
whatever else I could learn. Only after having done this with a lot of
people would I start to ask what all these people have been doing that's
the same. And by this I don't mean what general concept would cover all the
specific things they've done, but what processes have appeared in EVERY
person in EXACTLY THE SAME WAY.

Language conventions are things people are SUPPOSED to learn, but they
don't all learn all of them and they don't learn them in exactly the same
way. The way around these variations isn't to generalize them away, but to
learn how people manage to communicate in spite of them. The more I think
about it, the more it seems that a study of language as a thing in itself
divorced from individual speakers and listeners is misdirected. But I'm
used to being a minority of one.

-------------------------------------------------------------------
Irrelevant afterthought department:

The Golden Rule can also be stated: what you send around comes around. Or
cast your bread upon the waters and you'll have a soggy sandwich for lunch
(well, that's really Karma). It isn't that you should treat other people in
a certain way to make them treat you that way. That isn't how it works. The
way it works is that people will react to you pretty much the way you
disturb them. Push not lest ye be pushed back upon. As most people cite it,
the Golden Rule is a means of controlling other people.
--------------------------------------------------------------------
Best,

Bill P.