[Martin Taylor 2004.08.03.1756]
[From Bruce Nevin (08.03.2004 17:30 EDT)]
Martin Taylor 2004.08.03.0955--
Fascinating linguistic history, but I don't follow the steps
leading from the historical discussion to the claim in the last
paragraph. It's not obvious to me that there is, in principle, "no
explanation."
I keep harping on the ambiguity of the word "meaning" as used by
you and Bill P. Perhaps if its "meaning" were to be clarified, the
two of you might have an easier time coming to an agreement.
I think that Bill and I have both taken the position that meaning is
any perceptions that one associates with an utterance.
And I think your eloquent discussion is beginning to bring out the
complexities inherent in using the term "meaning" as if it had some
meaning of its own!
OK. So "meaning" (singular) is the set of perceptions (plural) that
one associates with an utterance (not a word). It is at least a
vector, then. That's useful to know.
Can "the same" utterance ever be repeated? In other words, is the
transform between an utterance and its meaning invertible? Or is the
fact that the situational context is inevitably different the second
time going to change the meaning of the utterance?
Can any third party guess the meaning of an utterance in a dialogue
to either of the parties involved in the feedback loops of the
dialogue? Does an utterance have a "meaning" that can be
distinguished from its function in a transaction between speaker and
listener?
I have distinguished an aspect of meaning called by Harris
linguistic information. The structure of language results from
several contributory layers of redundancy -- departures from
equiprobability in the combination of elements. Not all phoneme
sequences constitute morphemes. Not all morpheme sequences
constitute words and sentences. Not all sentence sequences
constitute coherent discourse.
The departures from equiprobability at the level of word combination
and sentence combination of course accord roughly with what we think
of as the meanings of what is said (roughly: incommodities of the
correspondence pass mostly without notice).
Of course, these probabilities change on all scales, from "spoken
English" (which differs from "written English") to, say, "cooking
English", to "my family cooking English", and between themes such as
"cooking", "banking", "canoeing" and so forth. The changes involve
not only the substantives related to the themes, but probably many of
the structural features. Structural features provide the redundancy
necessary when the listener is less well known to the speaker, either
personally or in thematic understanding, than when the speaker has
worked closely with the listener on the current topic.
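Just to make "departure from equiprobability" concrete, here is a toy
sketch (Python, with an invented three-sentence corpus; none of the
numbers mean anything in themselves) of one standard way of scoring
it, pointwise mutual information between word pairs. A score well
above zero marks a combination that occurs far more often than chance
would predict, which is the kind of redundancy at issue.

    import math
    from collections import Counter
    from itertools import combinations

    # Invented mini-corpus, purely for illustration.
    corpus = [
        "whisk the eggs and fold in the flour",
        "the bank raised the interest rate",
        "fold the egg whites into the batter",
    ]

    sentences = [s.split() for s in corpus]
    word_counts = Counter(w for s in sentences for w in s)
    pair_counts = Counter(frozenset(p) for s in sentences
                          for p in combinations(set(s), 2))
    n_words = sum(word_counts.values())
    n_pairs = sum(pair_counts.values())

    def pmi(a, b):
        """Log of observed vs. chance-expected co-occurrence of two words."""
        p_ab = pair_counts[frozenset((a, b))] / n_pairs
        p_a, p_b = word_counts[a] / n_words, word_counts[b] / n_words
        return math.log2(p_ab / (p_a * p_b)) if p_ab else float("-inf")

    print(pmi("fold", "flour"))     # co-occur more than chance: positive
    print(pmi("fold", "interest"))  # never co-occur here: -inf

The same kind of table, computed separately for "cooking English" and
"banking English", would show the probabilities shifting between
themes in the way I described.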
However, there is great consistency in the structure of a language
from one fluent language user to another,
(given similar circumstances)
and by contrast there is notoriously no such consistency in the
broader types of meaning that different language users associate
with what is said and written in their language.
Another way to see the distinction between linguistic information
and meaning more generally is to consider that you can know a
language without knowing all the meanings that can be associated
with utterances in that language (the meanings that, as we fondly
say, can be "expressed in" the language).
Is "meaning" a categoric perception or an analogue perception?
Both, although language is categoric ("digital").
Two perceptions, at least, then. And at different levels of the
"classic" HPCT hierarchy.
Is it determined by the relations of a word to other words
within a finite network such as is provided by an extensive
dictionary?
No. Although this is to a great extent true of dictionary
definitions, definitions are not meanings (as in the previous item,
the map is not the territory).
OK. I had been getting the impression Bill P. thought that dictionary
definition had something to do with meaning.
Or by the relation of a word to other words as encountered in an
unbounded network consisting of the words encountered by a
listener/speaker during a lifetime?
Distributional analysis of a corpus comparable to this can identify
the redundancies in a language that correspond to meanings. The
problem then is how to represent those redundancies. Such a
representation is not itself the meanings, it is a representation of
the information immanent in language, and is thereby related to the
more structured aspects of meaning.
I don't think this answer is responsive. I was asking about the kind
of approach computational analysis often uses, of the tendency for
words of, say, financial connotations to occur together more often
than those words co-occur with words relating to cooking or
paleontology. You talk a bit about this when you mention sublanguages
below, but even there you don't answer this question.
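Since I raised it, here is a minimal sketch of the kind of
computational analysis I mean (Python, with an invented four-sentence
corpus; everything in it is illustrative, not data): build a
co-occurrence vector for each word and compare the vectors by cosine
similarity, so that the "financial" words end up close to each other
and far from the "cooking" ones.

    from collections import Counter, defaultdict
    from math import sqrt

    # Invented mini-corpus with two themes, purely for illustration.
    corpus = [
        "the bank set the loan interest rate",
        "the loan rate at the bank rose",
        "simmer the broth then season the soup",
        "season the soup and simmer the broth slowly",
    ]

    # Co-occurrence vectors: which other words each word shares a sentence with.
    vectors = defaultdict(Counter)
    for sentence in corpus:
        words = set(sentence.split())
        for w in words:
            for other in words - {w}:
                vectors[w][other] += 1

    def cosine(a, b):
        """Cosine similarity between the co-occurrence vectors of two words."""
        dot = sum(vectors[a][k] * vectors[b][k] for k in vectors[a])
        norm_a = sqrt(sum(v * v for v in vectors[a].values()))
        norm_b = sqrt(sum(v * v for v in vectors[b].values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    print(cosine("loan", "interest"))  # same theme: high (about 0.8 here)
    print(cosine("loan", "soup"))      # different themes: low (only "the" is shared)

Scaled up to a real corpus, this is the distributional picture;
whether such redundancies are the meanings themselves or only, as you
say, a representation related to them is a separate question.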
Is "meaning" simply the effect of a word in context when offered
to the sensory apparatus of another?
If by "effect" you mean the perceptions that the recipient
associates with the utterance (word plus context), this is a
restatement of the position that I stated at the outset, on which I
think Bill and I agree: meaning is any perceptions that one
associates with an utterance.
"One" being independently the speaker, the intended hearer, and a
third party? To me the answer to this sub-question is definitively
"Yes, separate and distinct for all three, and different again for
different third-parties".
If the latter, can the "meaning" of a word be extracted from its
verbal and situational context?
What does such an "extract" look like?
You answered this above, by asserting that a word has no meaning, but
an utterance does.
Is the "meaning" of a word I utter (or write) the effect I
anticipate it having on my perception of my interlocutor?
It depends upon whether your perception of anticipated effects on
the other exhausts the perceptions that you associate with the word.
OK, we are back to "meaning" as being a long vector.
However, I have to disagree with your answer, in that I would argue
that meaning is related to the interpersonal transaction quite
separately from any intrapersonal transaction. More simply, I may
want to get something across, but to myself the word has many other
connotations that I am not interested in getting across. The
"meaning", to me, would involve only those elements I want to get
across to my interlocutor. The "meaning" to my interlocutor would
not involve all her perceptions evoked by the word, but only those
that she perceives me to have intended to evoke.
If you say that it does, then you are saying that your perception of
such anticipated effects is the meaning of the word for you at that
time. Few would agree that this completed the definition of the
word, and the reason of course is that the word is a public
property, while these particular aspects of meaning are very private
to you.
AHAAAAH! "The word is a public property!!!" Now we come to the core
of the contention between you and Bill P., I think.
For most subject-matter domains, the combinability of any given word
with other words can be represented as a distribution function. In
the successive periods (roughly, predication structures) of a
discourse, recurrences of the given word in combination with
specific other words restricts the subsequent co-occurrence
possibilities for the word.
All of this is true, but we do have to come back to the notion of the
word as "public property."
Let's analyze what this means, because it is at the core of my
contention (with which I think you probably agree) that language,
along with other cultural features, can be treated as a perceptual
artifact in the same way as can a rock. We've talked at length about
this before.
What is "public property", in my view, is that in a given situation I
can use similar forms of behaviour (including using similar language)
with more or less predictable effect, when interacting with more or
less any member of what might be called a cultural group. In fact,
the interchangeability of people with whom I can use similar forms of
behaviour to get reasonably predictable effects is what defines "the
group."
As my friend Stephen Johnson (Director of Medical Informatics at
Columbia Presbyterian) put it in a recent email reply to both of us,
Contrary to what many lay persons believe, words do not have
precisely defined meanings. Instead, each word has a distribution
with regard to likelihood of occurrence with other words in the
dependence structure. As a rough analogy, each word is a wave. As
words are used together in sentences and discourse, they
interfere, and become increasingly particle like: the distribution
collapses to a point.
The wavelike nature of words allows for enormous flexibility, and
the ability to be increasingly precise as needed by adding more
words that are mutually constraining. The key point is that these
waveforms are largely fixed by social convention. These behaviors
are observable, allowing one to investigate empirically the
transmission of objective information.
I like this description. And it corresponds to the notion of "coarse
coding", which is an effective way of getting precision out of
low-resolution descriptions. (And incidentally, it corresponds to a
proposed weapon of the Second World War--see below).
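For concreteness, here is a toy sketch of coarse coding as I
understand it (Python; the three "units" and their tuning widths are
arbitrary choices of mine): each unit is deliberately broad and
imprecise, yet the joint pattern of their activities identifies the
stimulus value far more precisely than any one of them could.

    from math import exp

    # Three broadly tuned "units": wide, overlapping Gaussian detectors.
    centres = [2.0, 5.0, 8.0]
    width = 3.0   # deliberately low resolution for each unit on its own

    def responses(value):
        """Activity of each coarse unit in response to a single stimulus value."""
        return [exp(-((value - c) ** 2) / (2 * width ** 2)) for c in centres]

    # Decode by finding the value whose predicted pattern best matches the observed one.
    observed = responses(6.3)
    candidates = [i / 100 for i in range(1001)]   # 0.00 to 10.00
    best = min(candidates,
               key=lambda x: sum((r - o) ** 2
                                 for r, o in zip(responses(x), observed)))
    print(best)   # 6.3 -- recovered precisely, though no single unit is precise

The collapse Stephen describes is the same move: many broad, mutually
constraining distributions jointly narrowing to something close to a
point.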
At the end of this, I understand Stephen Johnson's analogy to be a
way to arrive at any one of the possibly many perceptions that you
say constitute the "meaning" of an utterance. To what he says, I
would argue that the process is one that occurs in the listener. The
speaker is attempting to generate the appropriate pattern of waves.
The proposed Second World War weapon was based on the fact that when
a stone is thrown into a pool, it excites a circular pattern of
ripples that lap against the shore. Reversing that pattern of
"shore-laps" in time ought to produce waves that coalesce at the
point where the stone splashed, and should recreate a splash. The idea
of the weapon was to line the English Channel coast with flapper
boards, compute the wave patterns that would arrive at the shore if a
shell were to burst under an enemy ship, and to invert that pattern
to create a burst of water to up-end the ship! Of course, it was
never actually built.
I don't know if any of this is helping to clarify the dialogue between
you and Bill P., but it does make clear to me that it would be very
hard, if not impossible, to create two utterances with "the same"
meaning.
Martin