Language perception and production

[Martin Taylor 921026 17:45]

A quick note on a couple of items of interest I encountered during my trip
out West last week. Both have to do with language effects that the people
studying them believe to be due to the perception of language, though they
are manifest in performance. Neither researcher has come across PCT before,
so far as I know.

(1) Sound shifts. John Ohala, U of Alberta, Edmonton.

As most people are aware, the sounds, words, and syntax of language change
and drift over the generations. Ohala is studying the kind of sound shift
that leads to pater-vater-father-père relations among words in a language
family. He has studied such relations in many different families all
over the world. His thesis is that there are many ways of producing sounds
that are perceptually very similar, and that a listener cannot always tell
which of these mechanisms is used. A particular person will probably settle
on one way of making a sound most of the time, but a listener trying to
reproduce it may not use the same mechanism. Different minor articulatory
errors make perceptually different sounds, so that, for example, a /p/ said
with a slightly loose lip tension may not have a full closure, and a
continuous fricative may be heard, like /f/. If a listener hears that as
the correct form, he or she may articulate it with a lip-to-tooth
(labiodental) constriction rather than a bilabial closure. Similar errors
in voice onset time may lead to
perception of /v/ instead of /f/. (These are my examples, not his, but look
at what follows, on Elzbieta Slawinski's work).

As I understand him, Ohala claims that there is a pattern all around the
world in which the observed sound shifts can be accounted for by changes
between articulatory mechanisms that have very similar perceptual effects
when either is slightly perturbed by execution error.
The acoustic percept is controlled and the desired sound is produced, but not
necessarily by the same muscular actions as are used by the older generation.
Particularly if there is explicit training ("Put your lips tight together and
blow" as opposed to "Put your bottom lip near your front teeth and blow"), an
articulatory method that differs across families or communities could be
propagated, like a mutation in a genetic system. But it can also be propagated
without explicit training, by simple copying of the visual as well as the
auditory perception.

The central point is that Ohala believes that it is the perception of the
sound that is the constant across a sound shift: a change of articulatory
mechanism shifts the range of error (which happens when control is not very
tight--lowish gain), and this in turn shifts the central reference for the
acoustic perception. When the latter has occurred
in enough members of a community, the shifted percept becomes the norm, and
someone who used the old articulation would be considered a foreigner.
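
To make the mechanism concrete, here is a toy simulation of the drift
argument. It is my own sketch, not Ohala's model: each generation controls
an acoustic percept toward the community reference (with residual execution
error shrinking as loop gain rises), and the next generation adopts as its
reference the centroid of the tokens it actually heard.

import random

def simulate_drift(generations=200, tokens=50, gain=2.0, seed=1):
    """Toy model of a sound shift under perceptual control.

    Speakers control an acoustic percept toward a shared reference;
    the residual execution error scales as 1/gain.  Learners set their
    own reference to the mean of the tokens they heard, so whatever
    error survives control is passed on, and the reference can wander.
    """
    rng = random.Random(seed)
    reference = 0.0              # community reference for the percept
    trajectory = [reference]
    for _ in range(generations):
        heard = [reference + rng.gauss(0, 1.0 / gain)
                 for _ in range(tokens)]
        reference = sum(heard) / len(heard)   # next generation's norm
        trajectory.append(reference)
    return trajectory

# Low gain (loose control) lets the reference wander much farther than
# high gain does -- the "shift in the range of error" described above.
for g in (0.5, 5.0):
    drift = simulate_drift(gain=g)[-1]
    print(f"gain={g}: final reference drift = {drift:+.3f}")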

(2) Changes of speech with age and hearing handicap. Elzbieta Slawinski, U of
Calgary.

I did not get this one so clearly, but as I understand it, there is a shift
with age in the transition rate of formants, and in voice onset time. Dr
Slawinski believes that this is due to the perceptual ability of people to
detect the transitions involved, and is trying to relate the speech of
individuals to their ability to detect formant transitions and envelope
transitions (voicing affects the amplitude envelope as well as the spectrum).
She is considering very specific sound pairs, such as /ba/-/wa/, which
differ in their transition rates.
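
To illustrate what differing transition rates look like, here is a small
sketch of my own (the 60 ms category boundary and the formant frequencies
are illustrative placeholders, not Slawinski's values): it labels a token
/ba/ or /wa/ purely by how quickly its F1 transition reaches the vowel
target.

import numpy as np

def f1_trajectory(transition_ms, f_start=400.0, f_vowel=900.0,
                  total_ms=200, step_ms=5):
    """Piecewise-linear F1 contour: a rise over transition_ms, then steady."""
    t = np.arange(0, total_ms, step_ms)
    rise = np.clip(t / transition_ms, 0.0, 1.0)
    return t, f_start + rise * (f_vowel - f_start)

def classify(t, f1, boundary_ms=60.0):
    """Hear /ba/ if the F1 transition finishes before the boundary."""
    reached = t[f1 >= f1[-1] - 1.0]   # times at which F1 is at the vowel
    transition = reached[0] if reached.size else t[-1]
    return "/ba/" if transition < boundary_ms else "/wa/"

# A fast (40 ms) transition falls on the /ba/ side of the boundary, a
# slow (120 ms) one on the /wa/ side; a listener whose boundary has
# shifted would partition the same continuum differently.
for ms in (40, 120):
    t, f1 = f1_trajectory(ms)
    print(f"{ms:3d} ms transition heard as {classify(t, f1)}")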

I gave Slawinski (and I think Ohala, but I'm not sure) the contact information
for CSG-L. You may be hearing from them.

Martin

Ohala would be a *very major* figure to get interested in PCT. E.g. if you
were to ask people to name three top phoneticians, I'm sure he would appear
on almost everybody's list.

Let's keep our cool.

Avery.Andrews@anu.edu.au

A late comment on sound shifts. I was tempted to write to the net along
the following lines earlier, but I had missed this posting. Perhaps the
issues are illuminated by the kind of argument George Sacher used to use
in evolutionary biology. Roughly speaking, it goes as follows for the
sound shift case:

There is a tradeoff between the cost of clear articulation, which needs
precise conscious control, and the error rate on the part of listeners.

We can classify conversations into classes by where this tradeoff lies.
Obviously a commencement address has a different optimum from ordinary
conversation, and that in turn differs from the optimum for a small LRP
group way out in enemy territory in the jungle.
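
One way to make this concrete (my own sketch; the cost and error curves
are arbitrary stand-ins, not measured quantities) is to give each class of
conversation a weight on listener errors and solve for the articulation
precision that minimizes the combined cost:

import math

def total_cost(precision, error_weight):
    """Effort rises with precision; listener error rate falls with it.

    Both curves are stand-ins chosen only to make the tradeoff concrete:
    effort is linear in precision, error rate decays exponentially.
    """
    return precision + error_weight * math.exp(-precision)

def optimum(error_weight):
    # d/dp [p + w*e**(-p)] = 0  =>  p* = ln(w), for w >= 1
    return max(0.0, math.log(error_weight))

# Higher stakes for a misheard word push the optimum toward clearer,
# more carefully controlled articulation.
for setting, w in [("ordinary conversation", 2),
                   ("commencement address", 20),
                   ("LRP group in enemy territory", 200)]:
    p = optimum(w)
    print(f"{setting:30s} optimum precision = {p:.2f} "
          f"(total cost {total_cost(p, w):.2f})")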

Within a class there is still variation, and the usual stuff in
population genetics says that the rate of evolution or divergence is
proportional to variance. I don't remember if this is Fisher's
fundamental theorem or something else, but could probably look it up.

So, it pays to look at redundancy and the error rate in the perception of
phonemes and do a Shannon-style analysis to estimate the acceptable
variance.
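
In miniature, such an analysis might look like this (my sketch; the
confusion counts are invented): the mutual information between the phoneme
sent and the phoneme heard measures the redundancy left in the contrast,
and hence roughly how much more articulatory variance it can absorb
before collapsing.

import math

def mutual_information(confusion):
    """I(sent; heard) in bits from a confusion-count matrix.

    confusion[i][j] = number of times phoneme i was sent and j was heard.
    """
    total = sum(sum(row) for row in confusion)
    p_sent = [sum(row) / total for row in confusion]
    p_heard = [sum(row[j] for row in confusion) / total
               for j in range(len(confusion[0]))]
    mi = 0.0
    for i, row in enumerate(confusion):
        for j, n in enumerate(row):
            if n:
                p = n / total
                mi += p * math.log2(p / (p_sent[i] * p_heard[j]))
    return mi

# Invented counts for a /p/-/f/ contrast; off-diagonal cells are
# listener errors.  More confusion means fewer bits per token, i.e.
# less room left for further articulatory variance.
for name, matrix in [("tight articulation", [[95, 5], [5, 95]]),
                     ("sloppy articulation", [[70, 30], [30, 70]])]:
    print(f"{name}: I = {mutual_information(matrix):.3f} bits per token")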

Refs: see the net posting above for the work of Slawinski and of Ohala.