Bill, Rick, Chuck, Avery

[From: Bruce Nevin (Wed 931013 12:22:50 EDT)]

Bill Powers (931012.2000 MDT)

Yes, Martin's proposal accounts for "coarticulation effects" by which a
given speaker fails to reach what appear to be the articulatory and/or
acoustic targets for phonemes. The other factors I mentioned remain.

Maybe think of my proposal as n vectors in acoustic/articulatory "space",
where the apparent targets (the directions of the vectors) are determined
by the characteristics of the space, by control that optimizes their
equidistance from one another, and by the size n of the set of vectors.
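
To make the picture concrete, here is a minimal sketch of that idea in
Python. Everything in it is illustrative invention rather than part of the
proposal: n "targets" start at random positions in a bounded two-dimensional
stand-in for the acoustic/articulatory space and repel one another (the
repulsion standing in for control of contrast) until they settle into a
roughly equidistant, maximally separated arrangement. The only givens are
the space, the repulsion, and n.

import numpy as np

def settle_targets(n, steps=2000, step_size=0.01, seed=0):
    # Let n randomly placed targets repel one another inside the unit square.
    rng = np.random.default_rng(seed)
    targets = rng.uniform(0.1, 0.9, size=(n, 2))          # arbitrary starting positions
    for _ in range(steps):
        diff = targets[:, None, :] - targets[None, :, :]  # pairwise offsets
        dist = np.linalg.norm(diff, axis=-1) + np.eye(n)  # ones on the diagonal avoid /0
        push = (diff / dist[..., None] ** 3).sum(axis=1)  # inverse-square mutual repulsion
        targets = np.clip(targets + step_size * push, 0.0, 1.0)  # stay inside the "space"
    return targets

print(settle_targets(5))    # five mutually well-separated "phoneme targets"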

"Hyperarticulation" is a term for pronunciation that actually attains the
postulated target for a phoneme, or nearly so. It is exaggerated,
super-distinct pronunciation. It occurs when the effects of (1) timing
wrt adjacent elements are reduced as close to nil as possible, and (2)
loop gain for achieving phoneme-targets (or, in my proposal, loop gain
for contrast) is maximized (using numbers 1 and 2 as in your post).

I like your idea of projecting what the "targets" must have been from the
enacted trajectory, as successive phonemes prevent one another from
actually attaining their "targets".

It may be that my notion of vectors applies to the language-learning
process, that is, to the origin and definition of the "targets", and that
the "targets" then become straightforward reference signals, as you say.
However, I think the vector notion is still needed, in an ongoing way
throughout life, to account for our easy accommodation to one another's
different dialects. That's straightforward if targets are predicted from
contrasts, complicated if you have to map from one set
of phonetically specific targets to another.

Your prediction about the terminal phoneme runs into the fact that, at
least in English, terminal syllables tend to be unstressed. (There is a
very strong tendency, across languages, for word margins to be
deemphasized, so it is not just English.) It also runs into the fact
that the preceding phoneme also "imposes" coarticulation effects -- the
state of the articulators after their trajectory towards targets for the
preceding phoneme also interferes with attainment of the current phoneme.
The only time you get free of coarticulation effects is in pronouncing a
phoneme in isolation, and this is physically possible only for vowels and
for continuants like English m n ng l r.

It appears to me that the notion of targets, or phonetically specified
reference perceptions that the "trajectories" of words only approximate,
is an instance of what Rick denies that you are claiming:

Rick Marken (931008.1400) --

    Neither Bill nor I was saying that "the relation between phonemes and
    the actual sounds that people pronounce" is a byproduct of controlling
    phonetic perceptions. We are saying that "the appearance of phonemic
    contrast" is a byproduct of controlling phonemic perceptions.

But I confess I still don't know what you understand "phonemic
perceptions" to be, Rick, if they are not specified in terms of phonetic
intensities, sensations, configurations, and transitions. The
hyperarticulated targets discussed above are specified in terms of these
lower-level perceptions. An actual pronunciation that fails to meet the
target for a phoneme is nonetheless recognized as an instance of that
phoneme because the trajectory is moving toward that target and no other.

    there are sounds out there (s1, s2, .. sn) that map into perceptions
    called phonemes (p1, p2, .. pn). [...] A person controls
    phonemic perceptions by setting a reference for the desired phoneme.
    If the reference specifies the perception of phoneme p1, phoneme p1
    is produced (with whatever sound maps to p1); if the reference is
    set to p2 we get phoneme p2 (again, via the appropriate sound).

You believe you are talking about sounds s1, s2, .. sn, but I believe
that you perceive only sound categories p1, p2, .. pn. This is the
meaning of the paragraph you said you had trouble with. And this is why
I have made an issue of the peculiarity of stop consonants after s, as in
spin, so that in at least this case you would be compelled, by the
evidence of your own senses, to perceive at least one kind of sound below
the level of phonemic categorization, and so that you then would become
aware of the difference between phonemes and sounds.

A variety of sounds {s1a, .. s1z} map to p1. Another set {s2a, .. s2z}
maps to p2. Which sound is output when the person controls p1
varies depending upon disturbances. Among these disturbances are those
that influence the configurations of articulators prior to and after p1
in the intended utterance. But these adjacent configurations are
behavioral outputs due to the speaker controlling other phonemes adjacent
to p1. These are the coarticulation effects, and they result in the
"trajectory" Martin described. So p1 does not predict output of any
particular sound in the set {s1a, .. s1z}.
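
As a toy illustration (my own construction, not a claim about anyone's
working model): a single control loop holds a scalar "phonemic" perception
near its reference while a coarticulatory disturbance contributed by the
neighboring phonemes takes different values. The perception comes out near
the reference in every context, but the output that produces it, the member
of {s1a, .. s1z} actually emitted, changes with the disturbance. The names,
gains, and numbers below are all arbitrary.

def control_phoneme(reference, disturbance, gain=50.0, slowing=0.01, steps=500):
    # One scalar control loop: the perceived value is output plus disturbance,
    # and the output is a slowed (leaky) integration of the amplified error.
    output = 0.0
    for _ in range(steps):
        perception = output + disturbance            # environment adds the context effect
        error = reference - perception
        output += slowing * (gain * error - output)  # slowed integration of amplified error
    return perception, output

p1_reference = 1.0                                   # the reference for "phoneme p1"
for context, d in [("after s:", 0.3), ("word-initial:", -0.2)]:
    perception, sound = control_phoneme(p1_reference, d)
    print(context, "perception", round(perception, 3), "sound produced", round(sound, 3))

With the disturbance set to zero, the same loop also caricatures the
hyperarticulation discussed above: the perception then sits essentially on
its reference, and the more nearly so the higher the (stable) loop gain.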

Worse than that, {s1a, .. s1z} is not a closed, well defined set. And
some of the sounds that map to p1 also might map to other phonemes in
different contexts (trajectories), so that e.g. s1g maps to p2 as well as
to p1. An example is the middle consonant in "latter", in "ladder" (in
American English, not British), and the consonant after the th sound in
"throw" (in many dialects of English). But we'll leave such
complications out of the picture here. They can be accounted for as
coarticulation disturbances.

I have proposed that we model pronunciation of a consonant phoneme as
control of an intended articulatory gesture perception, which hearers
reconstruct from the sounds (behavioral outputs) because there is a
well-known (publicly known) finite set of these gestures, because they
are maximally different from one another in a thoroughly familiar
environment (one's own mouth, reliably assumed to be identical in all
relevant respects to the other person's mouth), and because the hearer
assumes that both she and the speaker control the same number of them,
differentiated from each other in the same or nearly the same ways.
Recognition is then orders of magnitude easier than recognizing what
dropped on the floor in the next room, even when the precise targets used
by the other person differ from one's own.
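
Here is an equally minimal sketch of the hearer's side, again my own toy
rather than anyone's model: the hearer knows a small, shared set of gesture
targets (the coordinates below are made up) and classifies an incoming,
coarticulated pronunciation as the nearest target. Because the targets are
few and far apart, the classification survives both a disturbance and a
modest dialect offset in the speaker's own targets.

import numpy as np

hearer_targets = {                       # hypothetical coordinates in a 2-D "gesture space"
    "p": np.array([0.1, 0.9]),
    "b": np.array([0.1, 0.1]),
    "m": np.array([0.9, 0.1]),
    "f": np.array([0.9, 0.9]),
}

def recognize(pronunciation):
    # nearest-target classification over the hearer's own finite target set
    return min(hearer_targets,
               key=lambda g: np.linalg.norm(pronunciation - hearer_targets[g]))

speaker_p = np.array([0.2, 0.8])            # the speaker's "p" target, offset from the hearer's
heard = speaker_p + np.array([0.05, -0.1])  # plus a coarticulatory disturbance
print(recognize(heard))                     # -> p, despite the offset and the disturbance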

I have proposed that we think of a vowel phoneme as an intended
configuration of formants in the acoustic signal, achieved by
articulatory means that have coarticulation effects on control of
adjacent phonemes. This is much closer to your identification of
phonemes with sounds.

All of this points to identifying phonemes with phonetically defined
target articulations (with their acoustic consequences heard especially
on neighboring vowels) and target formant-configurations (achieved by
articulatory means that disturb control of neighboring consonants).
Consonants are controlled articulation-perceptions with acoustic
consequences, and vowels are controlled acoustic perceptions with
articulatory consequences.

Beyond this, I have suggested that the targets be determined not as fixed
perceptions but as outcomes of controlling the relationships among points
of contrast. This makes for a much simpler account of language learning,
inter-dialect communication, inter-dialect accommodation (where speakers
of two dialects adjust their targets, or rather where their phonetic
outputs appear to converge toward intermediate targets after starting out
with separate targets), and so on. But certainly it is much less awkward to
talk about the means for representing contrasts as the phonemes, rather
than the contrasts themselves.
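
A one-line caricature of the accommodation case (it shows only the
convergence of two targets and leaves out the contrast-maintenance that the
full proposal depends on; the numbers are arbitrary): each speaker nudges a
target toward the pronunciation just heard from the other, and the two drift
to an intermediate value.

a_target, b_target = 0.2, 0.4     # the "same" phoneme in two dialects (arbitrary units)
rate = 0.1                        # how strongly each speaker accommodates per exchange
for exchange in range(30):
    a_heard, b_heard = b_target, a_target      # each hears the other's output
    a_target += rate * (a_heard - a_target)    # and drifts a little toward it
    b_target += rate * (b_heard - b_target)
print(round(a_target, 3), round(b_target, 3))  # both end up near the intermediate value 0.3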

We can observe the resistance to disturbance of these controlled
variables in the coarticulation effects that we observe in speech.
However, to do this we have to observe the details of speech sounds (not
satisfied with recognizing the intended phoneme) and speech
articulations. And yes, linguists have been doing this for quite a while.
They haven't been interpreting their observations in terms of controlled
perceptions and disturbances, but that doesn't diminish the value of
these observations for us.

    So contrast perceptions are category perceptions, the perceptions
    determined by the pair test? So the pair test is testing for the
    perception of contrast? So I perceive contrast when I hear pin and
    then bin and can tell that they are different? I sure hope this
    definition of a contrast perception doesn't change again. Is this
    what you mean by a contrast perception??

Yes, and the perception that you experience when you hear pride and
bride, and when you hear apple and label, and when you hear sop and sob,
and so on. And when you hear win, ninny, minimum, thin, kin, gin, lint,
fin, sin, chin, din, tin, gimpy, and so on, as opposed to pin (or bin).
Then (assuming the point of view of one who doesn't know English yet) you
represent the contrast between pin and bin by a difference in two sets of
phonetic perceptions. You and I write them as p and b, but our observer
might not have those particular tools. An infant doesn't. Maybe at
first the set of phonetic gestures includes what you and I would write
as min together with bin, and includes what you and I would write as fin
(or "pfin") together with pin. Then our observer experiences
a contrast between me and bee.

I've got to stop here (despite the order, this is the last item added to
this file). I hope this indicates how representations of contrasts,
phonemes, might develop from perceptions of word-contrast.

CHUCK TUCKER 931012 --

I think your interest in language, as for practically everybody, starts
with words and meanings. Worrying about language perceptions below words
and their meanings is then no use to you at all, yes. A usual pitfall in
talking about children's speech is to assume that, because you heard the
child say "fish" and the parents heard the child say "fish" and the
child's nonverbal controlled perceptions while saying this seem to include
what you all agree is a fish in your shared environment -- that just
because of all of this, the child in fact said "fish", composed of speech
sounds that we can represent as f-i-sh. It may be that for this child there
is no difference in pronunciation between "fish" and "fist", or that the
child controls a phonetic event-perception that sounds like our "fish"
but is not doing this by controlling a succession of speech sounds that
are also used in other combinations to make other words.

When you said "Sondra, leave your juice in the kitchen, don't take it into
the living room," maybe she recognized word-events corresponding to Sondra,
don't, juice, kitchen, liv-room, recognized "don't" as a prohibition of
whatever she was doing, which happened to be walking into the living
room, remembered scenes about prohibited food in the living room, and
understood the prohibition without having to do much processing of the
other words in your sentence.

But by attributing understanding to them, we guide children's learning.

    "The fools! Don't they know you make of a man what you say of him?"
                Dostoevsky, _The Idiot_

Avery Andrews 931013.0600 --

I like that connection to mood disorders. I think reorganization below
the program level is perceived as "problem solving" and goes unnoticed
because we don't get stuck in it, or those who do are diagnosed as having
intelligence deficits rather than mood disorders.

Reorganization on the system concept level is perceived as conversion,
revelation, convincement, etc. It surely involves one or a few system
concepts at a given time, not all.

The same applies to mood disorders as reorg on program and principle
levels. So why are they so disabling? Why do the programs or principles
that are the seat of error seem to overwhelm those that are OK? Some
kind of looping through imagination seems to take off in a manic fugue.
Pandaemonium indeed! Some kind of double bind feedback (can't jump,
can't not jump) seems to suck up all candidate programs and lower-level
systems like a black hole during a depressive phase. An architect friend
years ago was
enormously enamored of his manic phases, which he identified with his
creativity, and during his depressive phases got very cynical and thought
of himself as terribly realistic in contrast to the pitiful naivete of
others around him. A pain in the neck either way. Then there is the
cycling from one to the other in manic-depressive phases, which reminds
me of the growth/assimilation cycles of child development and learning.
It would take someone with direct experience, like your correspondent,
Bill, to test these speculations.