language

[Avery.Andrews 930129.1935]
  (Bill Powers 930128.1230)

Where I keep falling off the bandwagon is always at the same
bump. We start out talking about the relationship between words
and nonwords, and then suddenly all the direct experiences of
nonwords drop out of the conversation and we're back to talking
about the properties of words all by themselves. This feels to me ...

Maybe I can say something vaguely useful here. The reason the
nonword perceptions drop out of the picture so quickly is that
their connection to the word-ish ones seems to be indirect,
mediated by the lexicon. For example, the verb `eat' has one
version (the active voice) whereby the preverbal NP is the eater
and the postverbal one the thing eaten, and another, the `passive',
whereby the preverbal NP is the thing eaten, and the eater is either
left out or expressed postverbally, by a prepositional phrase with
the preposition `by'. So the abstract relational percepts over
the words don't connect in a simple and direct manner to the nonverbal
perceptions, though they are connected in a more complicated way.
And generative linguists at least think they can see quite a lot of
order in the inter-word relations that is somewhat independent of the
nonverbal relations. I actually have a paper, `The Major Functions of
the Noun Phrase', in a 1985 volume edited by Timothy Shopen, _Language
Typology and Syntactic Description_, vol I, where I try to put some
of this on a semi-rational basis (by systematically applying an approach
basically due to John Lyons) - I find the usual presentations to be
pretty unsatisfactory.

As for phrase-structure trees, I'm inclined to think (influenced
substantially by Penni) that they in a sense don't really exist,
but are a bit like a trace or graph of a function. As you move
through the words of an utterance you are producing or understanding,
you are sometimes dealing with information about one entity or
situation, sometimes with information about another, & there is a lot
of subtasking. E.g. you are simultaneously describing the picnic,
something that happened during the picnic, the sandwich whose hitting the
ground constituted that happening, etc. The PS tree then represents
the subtasking relationships. The empirical issue is whether
the *whole thing* is ever represented and stored at once, & I think
it's quite plausible that it isn't, tho there are some prima facie
problems with this, I think.
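
To make the trace/graph idea concrete, here is a minimal sketch in C
(entirely my own invention, not anybody's actual parser): the `tree'
exists only as the trace of nested subtask calls, and nothing ever
stores the whole structure at once.

  #include <stdio.h>

  /* Illustrative only: the PS "tree" as a trace of subtasking.
     Each begin/end pair brackets work on one entity or situation;
     the nesting of the calls, not any stored object, is the tree. */
  static int depth = 0;

  static void begin_task(const char *label) {
      printf("%*s<%s>\n", depth * 2, "", label);
      depth++;
  }

  static void end_task(const char *label) {
      depth--;
      printf("%*s</%s>\n", depth * 2, "", label);
  }

  int main(void) {
      begin_task("the picnic");
      begin_task("a happening at the picnic");
      begin_task("the sandwich that hit the ground");
      end_task("the sandwich that hit the ground");
      end_task("a happening at the picnic");
      end_task("the picnic");
      return 0;
  }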

Bruce Nevin (930128.0830)

I got exposed to Chomsky a bit before Harris (sr. year in high school,
vs. first year college), & somehow Chomsky's goals seemed to make
more sense to me than Harris's, where the two diverged. Wiener, on
the other hand, seemed completely incomprehensible, presumably because
of my mathematical slowness.

Avery.Andrews@anu.edu.au

Re: Bruce Nevin (4 Mar 1992)

Your explanation to David about why you think the Diverian approach is
"all post hoc rationalization" did not explain. Do Diverians reject in
principle any attempt to predict "what sentences are grammatically
acceptable and what they mean" in a way that could be modelled on a
computer?

My reading of Diverians is that they reject *all* attempts to
systematically predict the grammatical and semantic properties
of individual sentences from antecedently specified principles.
What they do instead is `validate' analyses by somehow coming to
the conclusions that the distributions of constructions in texts
ought to be systematically skewed in various ways, and then doing
counting & statistics to see if they are. I see this method as
a useful supplement to other ones, but not much more than that.
And it somehow doesn't seem PCT-ish at all!

On `Jimmy's betrayal': why do you think I or any other Chomskyan
needs a [+abstract] feature here? All I think I need is (a)
a semantic specification for `betrayal' (mostly shared with the
verb `betray') (b) the info that the `treacher' argument can
be expressed as a possessive (this in fact probably being predictable
from more general principles). I'd say that `betrayal' is a noun
and `betraying+NP' is a verb (with gerund inflection) on the basis of
things like:

  the betrayal/ing of Mary cost John his happiness
*the betraying Mary cost John his happiness.

  John's frequent betrayals of Mary were the scandal of the department.
  John's frequent betraying of Mary was ...
*John's frequent betraying Mary was ...

*John's betrayal of Mary frequently is shocking.
*John's betrayals of Mary frequently ...
  John's betraying Mary frequently ...

Chomsky has some more discussion of this sort of thing in `Remarks
on Nominalization'.

Generative grammar of course does not provide any coherent account of
meanings, nor does LFG as a particular flavor of generative grammatical
theory.

I think that Jackendoff provides an account which is perfectly coherent
within its own terms, though it leaves out things that I and my
philosophical friends think ought to be there (e.g. the objective
and social aspects of meaning). & I have this suspicion that the
fact that we continually interact with the world is also important
for the theory of meaning. But I don't see how different flavors
of grammatical theory could have much to do with this.

Bill Powers (920304.1000):

Now the question I have for Bruce Nevin and Avery Andrews is this: are the
rules of grammar that you are helping to develop descriptive, or
prescriptive? In short, is there an underlying system that forces language
to exhibit these rules, or are these rules like Bode's Law -- interesting
but fortuitous approximations with no necessary basis in the natural world?

A bit of both is my answer, but it's a complicated issue that I'm
actually writing a paper about at the moment. They are (a) descriptive
because we don't know how the mechanisms that produce them work
(b) more interesting than Bode's law because they can be seen
to interact in complicated ways, so it is probably the case that
they correspond in some systematic way to actual facts about
brain structure (the Peacocke/Davies concept of `description at
level 1.5', in case any of you are into the philosophical literature).

Bill Powers (920227.0900):

< on language development & development of control in PCT >

This requires a fair amount of actual work to answer, so the
answer is I just don't know. There has been heaps of work
(about which I know virtually nothing) on language & cognitive
development, but presumably not using PCT categories (I'd guess
that Piaget would be the closest one could get). And presumably
people didn't notice the right things, so it would all have to
be done all over again. My inclination is to ignore development
for the moment & continue thinking about adult grammar & how to
get a beer.

  Avery.Andrews@anu.edu.au

[From: Bruce Nevin (Thu 920305 08:22:03)]

Bill Powers (920304.1000) --

There's too much going on to track and report it all at once. I think
we are perhaps each reporting what goes on in one hemisphere. Both
aspects are necessary and each supports the other. And questions of
which comes first are, I think, ill-conceived. The two processes jostle
and interfere with and support one another in pandaemonium parallel.

Another factor is sublanguage, specializations of language for
particular subject-matter domains. A game such as baseball or chess, or
a particular technical domain such as immunology or pharmacology
(studies of which I have mentioned) or HPCT or sailing or auto
mechanics, each such domain is characterized in part by a specialized
form of the language used by its participants. These sublanguages
differ both in syntax and semantics. Even the constitution of words may
differ. I have mentioned that "the beating of the heart" is a single
"word" of the Symptom class in a sublanguage of pharmacology, even
though it is a phrase made up of many words in other usage, especially
in the sublanguage of physiology whence it is borrowed into that of
pharmacology.

So, "hit" has associated with it particular syntactic and semantic
possibilities in a sublanguage of baseball that differ from those in
other domains. If you hear "Rino Sanders hit the umpire" you must shift
from the sublanguage of baseball to a kind of language appropriate for
talking about a fight. Having done so, you don't expect to see Sanders
placidly walk toward first base upon hearing "the umpire told him to
take a walk."

Yes, this has very much to do with the world of input and imagined
perceptions (I must never forget that), but it also has very much to do
with institutionalizations of the perceptual world that depend in great
measure upon learned linguistic patterning (you must never forget that).

I will try to respond more fully next week.


-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-

An initial paraphrase for clarity:

A descriptive rule is not enforced by anything. A violation merely
shows that there are exceptions to the description, or that the
description needs refinement. Your example is Bode's Law. Rules of
this sort are regularities discovered about or attributed to nature.

A prescriptive rule is enforced by something. A violation is resisted,
presumably by some control system. Your example is the definition of
how a knight can move in chess.

My version: a knight can move from corner to corner of a 2x3 rectangle.
He gallops across country. These are two different descriptive
statements of the prescriptive rule.
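
As a toy illustration (my own, purely for concreteness): either
descriptive statement cashes out as the same legality check, since
`corner to corner of a 2x3 rectangle' just means the move's offsets
are 1 and 2 in some order.

  #include <stdlib.h>
  #include <stdio.h>

  /* One descriptive statement of the prescriptive knight's-move rule:
     corner to corner of a 2x3 rectangle, i.e. |dx| and |dy| are 1 and 2
     in some order. */
  static int knight_move_ok(int dx, int dy) {
      dx = abs(dx);
      dy = abs(dy);
      return (dx == 1 && dy == 2) || (dx == 2 && dy == 1);
  }

  int main(void) {
      printf("%d %d\n", knight_move_ok(1, 2), knight_move_ok(2, 2));
      /* prints: 1 0 */
      return 0;
  }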

In talking about language, the prescriptive/descriptive dichotomy is
entirely in the realm of descriptive statements of prescriptive rules
(in your sense of the latter).

A prescriptive<L> grammar is a description of prescriptive regularities in
language that are socially esteemed plus an injunction that you should
obey them, or a description of prescriptive regularities in language
that mark one as socially inferior plus the injunction to avoid them.

A descriptive<L> grammar is a description of the prescriptive regularities
in language. It should refer to the (sensu stricto) descriptive
regularities in language as understood background. (These are never
mentioned in a [linguistic sense] prescriptive<L> grammar precisely because
they cannot be violated, insofar as the description is adequate.
There's a minor epistemological rathole there, but a small one of
rapidly diminishing returns, so we'll ignore it.) A descriptive<L> grammar
should also describe the different socially-marked variants of the
language and the values of each for the personal and social image of its
users. A descriptive<L> grammar should also describe the subject-matter
sublanguage specializations of the language and their intersections,
borrowings (e.g. for metaphor like "take a different tack"), and other
relations. A descriptive<L> grammar should describe similar relations of
borrowing and sometimes deeper intersection with different, perhaps
unrelated languages used by neighbors, immigrants, invaders, merchants,
and so on. A descriptive<L> grammar traditionally describes a time-slice
of the language in its process of never-stopping change. However, by
describing all these things, it describes the sources and bases for
ongoing change processes. Furthermore, it must describe constructions
that are marginal to the language at a particular time, but were
better established in the past, or will become more established and normal
in the future. The synchronic/diachronic dichotomy is therefore
artificial.

Needless to say, there are no descriptive<L> grammars that meet all these
criteria.

But an important point is that the descriptive<L> grammar describes what is
available to that other hemisphere (metaphorically speaking, but perhaps
literally as well) that is controlling and imagining nonverbal
perceptions, and it describes the institutionalizations embodied in
language that in-form those processes of the control of perception. The
descriptive<L> grammar or a model of these aspects of the control of
language does not describe that world of perception or those processes
of control. Another description or model does that.

Each of these models requires the other.

are the
rules of grammar that you are helping to develop descriptive, or
prescriptive? In short, is there an underlying system that forces language
to exhibit these rules, or are these rules like Bode's Law -- interesting
but fortuitous approximations with no necessary basis in the natural world?

Language has both aspects. The descriptive (your sense) aspects are
linguistic universals referred to as background for a descriptive<L>
grammar. An example is the way acoustic properties of the speech
apparatus define regions at which articulatory deviations make little
acoustic difference, and at which formants are easier to identify
because they are clustered or merged. These regions are the familiar
labial, alveolo-dental, velar, and back velar articulatory positions. A
descriptive<L> grammar describes prescriptive (your sense) regularities
reflecting norms or conventions that constitute a particular language at
a particular time. Within the constraints defined by prescriptive (your
sense) factors or strong linguistic universals, nothing forces language
to exhibit particular forms or rules except the historical contingencies
of evolving human cultural institutions.

The existence of zero-order words and of operators classified as to the
argument requirement of their argument words appears to be universal and
I suspect is a byproduct of hierarchical perceptual control. The
existence of reductions and their general types and even certain
generalizations about their form also appear to be universal. Some
aspects of discourse structure appear to be universal, though a great
deal of work needs to be done even to find out what the issues are.
Details of word shape and particular reductions of word shapes are
historically contingent. Particular sublanguages, social valuations of
language, and so on, are historically contingent and not universal,
though some responses to these evidenced in historical change appear to
be universal.

Mostly we don't know what the facts are, partly because people are still
parroting the inspired guesses of brilliant pioneers as proven truths.
For example, Roman Jakobson proposed that infants' babbling is a form of
play that ranges over all the possible speech sounds of any possible
human language, and that as they learn a language those specialized
innate devices that are not needed atrophy. This turns out not to be
true. Yet on this foundation was built an even more speculative edifice
of innate language perception and production mechanisms for syntax and
semantics as well as for phonology, such as that Pinker is talking
about.

Avery --
Date: Thu, 5 Mar 1992 14:38:50 EST

I see your post in my mailbox now; I have no time, but will comment
only very quickly.

Thanks for the additional comments on Diver; no, statistical
skewing does not seem PCTish at all.

verb `betray') (b) the info that the `treacher' argument can

I don't understand "treacher".

I know about the kinds of examples that are in Remarks on
Nominalization, though it is more than 20 years since I first read it.
Cross-paradigmatic communication is difficult, not least because both
parties must be aware that this is what is going on and committed
nonetheless to success in it. The second can't happen without the
first, and I'm not sure you're yet aware that you have to buy into
the first if we are to talk productively here.

from more general principles). I'd say that `betrayal' is a noun
and `betraying+NP' is a verb (with gerund inflection) on the basis of

I didn't ask a question to which this is an answer. I asked how you
distinguish betrayal and glass, both nouns. (I suggested a +abstract
feature, as has been used in the past and is still used in some
quarters.) Then I asked how you would describe the relationship of the
betrayal-betraying-betray series of sentences. I said this relationship
is transparent in OG and useful in getting to a semantic representation.
I suggested that their relationship is not transparent in LFG or other
PSG-bound theories, and that their relationship has no direct bearing on
any sort of semantic representation in these theories. I asked
(implicitly or explicitly, I don't remember which) whether these
suggestions are true.

Gotta run.

  Bruce
  bn@bbn.com

re Bruce (5 Mar 92)

`treacher' = betrayer

I asked how you
distinguish betrayal and glass, both nouns.

By giving them different semantic representations, with different
argument-specifications (none for glass, various optional ones
for `betrayal'). What I have in mind is hitching something like
Jackendoff's 1990 `conceptual structures' to an LFG, though I admit that
I haven't actually implemented this yet.

The betrayal-betraying-betray sentences will then get similar
semantic structures due to having similar semantics for their
lexical entries (just like Jackendoff would do it). I get the
impression that in the `standard' view (common to LFG, GPSG,
and at least some GB work), lexical entries do a lot of the
work that Harris would do by reductions. In effect, rather than
`expand' whole sentences as syntactic structures, one `expands'
lexical items into their semantic structures, plugging arguments
into appropriate positions (my LFG does do this, except that the
`semantic structures' of lexical items are just strings with places
to put indexes into, like the first argument of an fprintf
statement in C).
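
For concreteness, here is roughly what I mean, as a toy sketch in C
(all names invented for illustration; this is not my actual LFG code):

  #include <stdio.h>

  /* Toy sketch: a lexical entry's "semantic structure" as a string
     with places to put indexes into, like fprintf's first argument. */
  typedef struct {
      const char *word;
      const char *sem;    /* %d slots are filled with argument indexes */
  } LexEntry;

  static const LexEntry betray = { "betray", "BETRAY(agent:%d, patient:%d)" };

  int main(void) {
      char buf[64];
      /* "John (index 1) betrayed Mary (index 2)" */
      snprintf(buf, sizeof buf, betray.sem, 1, 2);
      printf("%s\n", buf);    /* prints: BETRAY(agent:1, patient:2) */
      return 0;
  }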

  Avery.Andrews@anu.edu.au

[From: Bruce Nevin ()]

Bill Powers (920305) --

There are so many hypotheticals in this post, it was a bit difficult to
digest. Also, it was unclear whether you were offering a game of "let's
you and him fight" or conciliating "Boys! Boys! Don't fight!" Setting
all that aside I will focus on the conceptual variables that seem to
matter most to you. Please correct my aim if I miss.

anything common to all languages will be found at the level of
nonverbal experience, not in language conventions (except as these
conventions are inherited from other languages).

If we can trace certain structural constraints to the world of
nonverbal perception, then they will no longer have to be explained in
terms of rules relating words as words.

I think any linguist would agree with this, though there is much
controversy about where the boundaries lie, how to determine such
boundaries, how to characterize the different kinds of orderliness on
either side of such boundaries, and so on--mutually interdependent
questions.

There are contributors other than HPCT to linguistic universals, but it
seems to me that they are necessarily all mediated by perceptual
control. An example is the set of acoustic properties of the vocal
tract that results in favoring certain places of articulation for
consonants, which I have sketched a couple of times. In part, the
infant encounters utterances in which consonantal bursts, transients,
etc. are mimicable only by configuring the tongue so as to constrict the
vocal tract in these favored regions; in part, the exploratory
self-unfoldment of the control hierarchy finds experientially that in
these regions articulatory error makes less acoustic difference than
does a similar difference of configuration and effort at other regions
of the vocal tract. It is doubtful whether the infant would make this
sort of discovery without the prior existence of language (conforming to
these constraints) as a conventional social artifact in the environment.
Even with an environment providing many experiences of language use,
the evidence is that infants do not develop the requisite control without the
motivation of being able to use language to engage others in
interpersonal communication and to accomplish personal goals through
cooperative social means. Children brought up to age 4 with little
human interaction but with their cribs next to a TV that was constantly
on were severely impaired in their linguistic and social skills, though
presumably exposed to a great deal of very sophisticated use of such
skills that happened not to engage them in interpersonal ways.
(Of course, Bruner's LASS has a much more active role than I imply here.)

If you accomplish the aim of accounting for what all languages have in
common, and you show that it all comes down to characteristics of the
world of nonverbal perception plus fundamentals of physics and chemistry
in the environment, like the acoustics of the vocal tract--having
reached the state where linguistic universals are trivially deduced from
first principles, what would remain? In your hopeful estimation, the
conventional aspects of language would be simple and uncontroversial,
and the different systems for describing it would converge. I believe
that it would remain quite complex. And I believe that using some
existing systems for describing language (both aspects together) makes
an approach to the derivation of linguistic universals from first
principles unlikely.

In particular, I believe that operator grammar shows a simple structure
for language--a structure of word dependencies--that is universal and
that accords well with perceptual control, plus a more or less complex
and arbitrary, institutionalized system of conventions whereby words and
word dependencies may be given different shapes in utterances. The
principles and some of the patterns of the reduction system are
universal, but the detailed reductions and the particular word shapes
are not.

You object to what you call the "expansions" of operator grammar as
being unnatural and not corresponding to your introspective "feel" for
what you are doing when you use language. Part of the problem is some
confusion about the status of these changes of form. In the example of
analyzing a sentence I quoted from your prior post, I hinted at part of
the resolution when I said that a particular expansion need not be
carried out but only had to be available, and that the expanded and
reduced forms were alternative forms for the same words and word
dependencies. Another part of the problem concerns the difficulty of
introspection and what is available to awareness, compounded by the fact
that you are using the thing you are analyzing and analyzing it even as
you use it.

I'll try to address both aspects.

Martin Taylor (920304 17:00) --
Rick Marken (920305) --

consciousness of means vs ends depending on level
of disturbance

This is relevant to the discussion of how "natural" a model of language
control appears to us as we use language.

if the actions that allow high-level
control are easy, then what subjects see themselves as doing is what
we would call satisfying the high-level reference. But if the lower level
control structure is disturbed or not well structured (the actions are more
difficult), then people see themselves as "doing" the low-level things.

The things that are difficult in language control, and which therefore
obtrude themselves on conscious awareness, are mostly larger discourse
structures across series of sentences. Even when a single sentence must
be recast, it is typically due to relations in a larger discourse
context, and involves reducing to one complex sentence constructions
that could also be articulated as two or more sentences. This is the
problem of parcelling global, nonlinear word/percept dependencies out
into a linear sequence of linearized dependencies constituting
sentences. In my master's thesis in 1969 I called this periphrasis, as
distinct from a paraphrase process within the scope of a sentence.
These paraphrase processes seldom rise to awareness.

Lower-level changes of word shape within these paraphrase processes--
what linguists call morphophonemic alternations--rise to consciousness
for the average language user only when they become socially marked as
shibboleths of region, community, or social class. Things like "ain't"
and "She don't know no better." These constitute the tiniest, though
most visible, fraction of what is going on.

The reductions of operator grammar account for sentence-paraphrase
processes. They account at present only for those aspects of discourse
periphrasis that are closest to sentence paraphrase, by reductions of
conjoined sentences and reductions to pronouns and other referentials.
It is important to understand that the reductions include and are no
different from morphophonemic alternations of word shape. Let's look at
that.

We feel that geese is the same meaning/word as goose plus the same
plural meaning/element as the -s of picnics, the -es (that is, -iz) of
foxes, the -en of children, and the zero of fish (alongside fishes) or
of series. We say that went is the same meaning/word as go plus the
same -t that is found in swept, which is none other than the past-tense
meaning/suffix that also takes the shape of -ed in braided, the vowel
difference of break/broke, the zero of . . . well, you get the picture.
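
To put the same point in toy computational terms (my own illustration,
using only the forms listed above): one plural meaning/element, many
shapes, with the shape selected by the word it combines with.

  #include <stdio.h>
  #include <string.h>

  /* One plural meaning/element realized in different shapes,
     depending on the word (forms from the examples above). */
  struct pair { const char *stem; const char *plural; };

  static const struct pair plurals[] = {
      { "picnic", "picnics"  },    /* -s            */
      { "fox",    "foxes"    },    /* -es (-iz)     */
      { "child",  "children" },    /* -en           */
      { "goose",  "geese"    },    /* vowel change  */
      { "fish",   "fish"     },    /* zero          */
  };

  static const char *pluralize(const char *stem) {
      for (size_t i = 0; i < sizeof plurals / sizeof plurals[0]; i++)
          if (strcmp(plurals[i].stem, stem) == 0)
              return plurals[i].plural;
      return NULL;    /* not in this toy lexicon */
  }

  int main(void) {
      printf("goose -> %s\n", pluralize("goose"));    /* goose -> geese */
      return 0;
  }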

Some changes of form are optional.

  John came with Alice, but Alice didn't leave with John, Alice
  left with Frank.

  John came with Alice, but Alice didn't leave with John, she
  left with Frank.

  John came with Alice, but she didn't leave with John, Alice
  left with Frank.

  John came with Alice, but she didn't leave with John, she
  left with Frank.

  John came with Alice, but Alice didn't leave with him, Alice
  left with Frank.

  (etc.)

The differences here are differences of emphasis, not of meaning.

The words in a sentence may be given different forms when combined in
specifiable ways with particular other words. That is what the
reductions of operator grammar are about.

You, Bill, want to say that it is the *meanings* that are given
different word-forms under different conditions. But the conditions
("environments" as linguists say) are not specifiable in terms of other
meanings, but only in terms of other words representing meanings.

It's worse than that. You may recall that I asked some time back how an
elementary control system (ECS) could control for two input signals
being repetitious or redundant with respect to each other ("the same").
You can say she instead of Alice in the above examples only if both
words refer to the same individual. Hearing she amounts to hearing an
assertion that the individual who arrived is the same as the individual
who left. The reduction to she is one form in which that metalinguistic
assertion can be uttered. Since I don't know how an ECS can control for
sameness (same reference) of its own inputs or outputs, I think some
other ECS controls for an *assertion* of sameness. In other words,
absent an answer to my question (above) it appears to me that perception
of sameness requires metalanguage (as part of language) or a precursor
very much like it in prelinguistic control of "metasymbol" perceptions
about symbol perceptions (control of one sort of perception as a symbol
for another).
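
A minimal sketch of what I have in mind (the reference-tag
representation is entirely invented for illustration): a separate unit
perceives not the sameness of its inputs directly, but whether an
assertion of co-reference holds between two symbol percepts.

  #include <stdio.h>

  /* Invented illustration: "symbol" percepts carry reference tags;
     a separate unit perceives the metalinguistic assertion that two
     symbols refer to the same individual. */
  typedef struct { const char *form; int referent; } Symbol;

  static int same_reference(Symbol a, Symbol b) {
      return a.referent == b.referent;    /* assertion of sameness */
  }

  int main(void) {
      Symbol alice = { "Alice", 7 };
      Symbol she   = { "she",   7 };  /* reduction licensed only if tags match */
      printf("reduction to \"she\" licensed: %s\n",
             same_reference(alice, she) ? "yes" : "no");
      return 0;
  }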

What could that be? Possibly the first step toward language is the
ability to perceive repetition (one instance or token of a category and
another instance or token of that category). Some imagined perception
is taken as a symbol for any instance of the category. Pictographs,
hieroglyphics, and ideograms work like this. Rebuses do also, but in an
ad hoc way that has not been institutionalized. Withal, we must be
careful to avoid identifying the evolution of writing systems too
closely with the evolution of language. However suggestive the
parallels may be, they are still separated by a great span of
evolutionary time.

A next step toward language must be the perception that two symbols
(category perceptions) refer to the same individual instance or token.
This "sameness" perception relative to the category perceptions is not a
category perception, but rather is about the category perceptions in
precisely the same way that a linguistic utterance is about the
perceptions to which it refers. (And indeed, by that relation it may
appear to create the act of reference, and the relation of reference
between a category perception and the perception of an individual token of
the category, but that is rugged epistemological terrain that I can only
look at for now, not enter upon.) With these two evolutionary steps,
category and metacategory, you have the first requisites for language.
(Metacategory has nothing to do with the question of categories of
categories we have discussed in the past.)

This metalanguage referring to the words of language is itself a part of
the language, using a subset of its words and a limited portion of its
syntax. Metalanguage assertions are almost always reduced to morphemes
like pronouns and articles. They are thereby made especially difficult
to notice.

[from Joel Judd]

Apologies for the garbage message that predates this one. The @&!*^$@)_$*&
server for students and recently ex-students conveniently goes down every
Friday at 5:00pm when there's no one around to complain to, so I tried
switching to the faculty server to send.


******
Bill C.--please send me a post so I can copy your address, I just replied
without copying it yesterday.
******
Avery, Bruce (920501)

The aside comment to Rick about where to REALLY look for language
control reminded me of something I think about when it comes up in a
language class, but never write down.

A big and I think overlooked aspect of native language use is how native
speakers know to ask questions about utterances they didn't quite get, or
want repeated. As a quick example, suppose I said something like the
following:

"Yesterday I walked too close to a construction site and got my ear lopped
off."

Now, depending on the context, of course, you might ask something like:

"You got your WHAT lopped off?" or
"You did WHAT?"

or something else which reflects what we know about the relationship of the
constituents. Almost unfailingly (and this is why I should keep notes),
early learners, middle learners, even some pretty proficient learners will
do things like just start to repeat back the sentence:

"Yesterday I walked....."

or just start picking salient words out and repeating them:

"Yesterday....?" "Construction what...?"

Do you know what I'm talking about? I can come up with some other examples
if it's not clear. This communicative technique seems to be one which takes
a long time to develop, regardless of the proficiency of the learner. Is
there a germ of an assessment experiment here?

(Penni Sibun 920820.2000)

   [Martin Taylor 920819 15:00]

   >i am deeply suspicious of hierarchical organization from my experience
   >in modelling language (as well as other behavior). language is
   >supposed to have all sorts of hierarchy in it (eg, a text is composed
   >of paragraphs are composed of sentences are composed of words) and i
   >just don't think this is a very useful way of looking at language, and
   >am working to show this.

   I would be very interested to know more. I have evolved in exactly the
   opposite direction. My theory of Layered Protocols seems to me to hold
   promise of clarifying many of the difficult problems of language, and I
   think that arguments from information theory and feedback stability theory
   almost demand that dialogue be conducted in a hierarchic way. If you have
   counterarguments, I'd be most interested in them.

i was talking about any text, written or spoken. are you talking
about just dialog? i agree that dialog has properties that
distinguish it from eg written text. but i quite disagree that dialog
is hierarchic. (that's been the model in computational linguistics
for nearly 20 years, and it has had approximately zero efficacy.) can
you tell me what forms the structure of the hierarchy?

   But if all you mean is that one cannot describe language as the simple
   combination of unitary elements of different sizes, then I totally agree.
   Your use of the word "composed" suggests that this may be your
   meaning.

yes, i mean that--and more!

   All the elements, of whatever size, change their functions dramatically
   depending on situational and linguistic context, and the contextual effects
   can be very long-lasting.

i'm going to suggest that the elements aren't anything we might think.
eg, in spoken language, ``word'' is not a useful concept. in written
language, it's only useful because ``word'' refers to elements of our
encoding system--not of language itself. because we are literate
people in a literate society, we tend to think of ``word'' as the
stuff between spaces on a page, but these words don't particularly
seem to be what we use when we utter stuff.

  Language is not a concatenative process.

i'm not exactly sure what you mean here, beyond a reaffirmation of
hierarchy.

cheers.

        --penni

I have to do some other stuff before attending to the language thread
I seem to have started; meanwhile, thanks to Eileen for giving a
substantive answer to Gary's questions about dative variants.

Avery.Andrews@anu.edu.au