[From Bruce Nevin (2003.10.28 15:22 EST)]
Bill Powers (2003.10.28.0718 MST)–
The problem is that we can’t independently observe the shape of something in the environment.
this is a theory about the nature of the very mind we are using to think about these things, so at some point we have to address it. You’re making a claim about the objective structure of the environment.
I claim that it exists, not what it is. Our effects on it are our means for controlling our perceptions of it. We can observe those effects by means that are independent of our perceptual input functions for language, which give us our subjective experience of language in use. From observations from a variety of perspectives, extending our senses by instrumental means, we can infer properties of the environment, without ever having any claim to direct knowledge bypassing perception. This is no different from any science.
If you’d be just as happy to say you’re talking about the structure of human perceptual input functions, so it really doesn’t matter whether there’s anything in correspondence to those perceptions in the hypothetical “shared environment,” we could go on from there. But it does seem to matter to you that language have an objective existence somewhere outside human beings, so we can’t just let this go.
I will be perfectly content to let it go, once there is an alternative
physical explanation for the ability of one person to repeat what another
has said (which is not the same as imitating the sounds they make).
Posing the nature of this problem has been the main thrust of what I have
written. I will lay it out more carefully so that perhaps it will be a
little harder to ignore this time.
As Rick has kindly reminded us

p = f(qi)

You and he seem to be arguing that

p = f()
But what is going on in qi is certainly relevant – the input functions f
could produce no perceptual input p without it! – and that which is
relevant in it is put there by the behavioral outputs qo of a speaker of
the language.
The behavioral outputs qo of the speaker have effects on the environment that are perceived by the hearer

qi = qo + d

The hearer constructs

p = f(qi)

The hearer constructs quite a number of perceptions controlled in parallel strata or layers of linguistic structure – phonemes, syllables, morphemes, words, etc.
The speaker A and hearer B now reverse roles. B undertakes to repeat what A said. The behavioral outputs qo of the speaker B have perceptible effects on the environment

qi = qo + d

The hearer A constructs

p = f(qi)

A constructs quite a number of perceptions controlled in parallel strata or layers of linguistic structure – phonemes, syllables, morphemes, words, etc.
A perceives a repetition of what he just said – the same phonemes,
syllables, morphemes, words, phrases, clauses, sentences, etc. All the
structure-perceptions of the utterance are absolutely identical to the
structure-perceptions that A was controlling by speaking to B just a
moment ago.
Now A repeats what B just said. And B perceives that all the structure of
A’s utterance is absolutely identical to what A had said before, which B
had just repeated.
To be sure, the outputs qo of B are not identical to qo of A. The aspects
that are different, such as voice pitch, precise phonetic shape of
phonemes, duration of syllables, amplitude, and so forth, are not
elements of the structure of the utterance. It’s not that they are
disturbances, they just don’t make a difference in the input functions f
of the hearer.
Nor of course are the same disturbances d present in the environment.
These are additional non-structural variations that the hearer ignores.
Assume a quiet, acoustically protected environment for the present. We
can talk about how this ignoring is done later, but it is a
separate discussion.
Now,

p = f(qi)

In the usual situation that we diagram, the CV is some independent variable in the environment. However, in this case the CVs in the environment are the speech outputs of the other person

qi = qo + d

Therefore, by substitution

p = f(qo + d)

and since we have abstracted d for the moment,

p = f(qo)
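The substitution can be sketched numerically. A minimal illustration, with an invented scalar input function and arbitrary values (none of this is from the original post):

```python
# Minimal sketch of the substitution above (illustrative values only).
# The hearer's input quantity qi is the speaker's output qo plus any
# disturbance d; under quiet laboratory conditions we abstract d = 0.

def f(qi):
    """A stand-in perceptual input function: here just a scaling."""
    return 2.0 * qi

qo = 1.5           # speaker's acoustic output (arbitrary units)
d = 0.0            # disturbance abstracted away for the moment

qi = qo + d        # environmental link: qi = qo + d
p = f(qi)          # hearer's perception: p = f(qo + d)

assert p == f(qo)  # with d = 0, p = f(qo)
```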
For clarity I should distinguish qiA from qoB and the converse, but I have put it this way because this is the form of the environmental feedback by which the speaker monitors her own speech. Substituting, as above, qo for qi, the speaker, A, is monitoring

pA = fA(qoA)

even as the hearer, B, is also listening to qiB = qoA, so that

pB = fB(qoA)
Then they reverse roles. Since B intends to repeat what A said, and since all parties agree that B is successful in doing so, the references for the CVs to be repeated are set the same as the perceptual inputs that B just heard from A:

rB = pB

By substitution of pB = fB(qoA), then

rB = fB(qoA)

As speaker, B monitors

pB = fB(qoB)

even as the hearer, A, is attending to

pA = fA(qoB)
They reverse roles again. Since A intends to repeat what B said, and since all parties agree that A is successful in doing so, the references for the CVs to be repeated are set the same as the perceptual inputs that A just heard from B:

rA = pA

And, as above

rA = fA(qoB)

A has assented that B was successful in the repetition. This means that

rA = fA(qoB) = fB(qoB) = rB

rA = rB
We have already agreed that A and B have come to be organized in the same way in respect to this control of language, so

fA = fB
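The whole exchange can be sketched in a few lines, assuming (purely for illustration) a shared input function f that categorizes a continuous acoustic value and thereby discards non-structural detail such as exact pitch. All the values below are invented:

```python
# Sketch of the repetition argument (hypothetical values and functions).
# The shared input function f_A = f_B = f categorizes a continuous
# acoustic value into a discrete structural perception, discarding
# non-structural variation within a category.

def f(acoustic_value):
    """Categorize: differences within a band make no difference."""
    return round(acoustic_value)   # e.g. 4.1 and 3.9 -> same category 4

qo_A = 4.1             # A's output, with A's particular voice detail
p_B = f(qo_A)          # B hears A: p_B = f_B(qo_A)
r_B = p_B              # B adopts what was heard as reference: r_B = p_B

qo_B = 3.9             # B's repetition: different in detail, same category
assert f(qo_B) == r_B  # B's control succeeds: f_B(qo_B) = r_B

p_A = f(qo_B)          # A hears B: p_A = f_A(qo_B)
r_A = p_A              # A assents: the repetition matches

assert r_A == r_B      # same structural perception on both sides
```

The non-structural difference between 4.1 and 3.9 plays the role of voice pitch or precise phonetic shape: it is present in the environment but makes no difference in the input functions.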
Now, closing the loop through the environment we have

qo = qi - d

or, abstracting d under laboratory conditions

qo = qi
These physical effects in the environment are the means by which both the
speaker and the hearer control their perceptual input p.
Trivially, if

qi = 0

(no information in the environment), then f(qi) gives the same value for p no matter what effect the speaker has on the environment

qo = qi

Or even

qo = qi - d

(For surely unpredictable disturbances do not enable f to construct p.)

In other words, if there is no information in the environment, then p is imaginary.

p = f()
Suppose by some actions I control some independent CV in the environment,
meaning, one that is not dependent upon this sort of reciprocal
correspondence that is a necessary and distinguishing characteristic of
language. Let’s say I paint something green. The perception that it is
green is a perception that is constructed by a perceptual input function
f:
f(qi) = p
But we do not doubt for a moment that there is something really in the
environment to be perceived. We can even determine, by instrumental
extensions of our senses, how that something is structured so that most
light energy is absorbed and only that energy is reflected to the eye
which falls within a range that we perceive as green. We’re still talking
about perceptions, to be sure, however extended. But it is by such means
that physics, chemistry, and the other sciences claim to infer (and test)
objective knowledge of reality.
In the same way, we can instrumentally identify variables in the acoustic
properties of the speech signal as A speaks and as B speaks. We can
identify changes in amplitude, including silences, concentrations of
periodic energy at different levels of the acoustic spectrum, changes in
these, different spectral distributions of aperiodic noise (of nasality,
sibilants, fricatives, affricates, stop releases, etc.) and so on. As
observers with our instruments, we see that in the speech of B these
variables are articulated the same as in the speech of A, and vice versa,
as each repeats what the other has said. This is quite apart from
anyone’s input functions for speech, pre-set in speaker and hearer (and
observer) by evolution and by epigenetic reorganization. Again, we’re
still talking about perceptions, however extended. But this kind of
concordance of disparate instrumental and perceptual means is one basis
for the claims of science to construct and verify objective knowledge of
reality.
The acoustic signal is the result of the speaker controlling all of these
layers of structure simultaneously. The speech signal is like a merger of
overlays. A very rough analogy: several sine waves can be combined
forming a single complex wave. By Fourier analysis we can recover the
constituent simple sine waves. All are present in the complex wave.
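The analogy can be run directly. A small sketch with arbitrary frequencies (nothing here is from the original post):

```python
# The "merged overlays" analogy: combine a few sine waves into one
# complex wave, then recover their frequencies by Fourier analysis.
import numpy as np

fs = 1000                # samples per second
t = np.arange(fs) / fs   # one second of samples
freqs = [5, 40, 120]     # constituent frequencies in Hz (arbitrary)

# Complex wave = sum of simple sine waves
wave = sum(np.sin(2 * np.pi * f0 * t) for f0 in freqs)

# Recover the constituents: peaks in the magnitude spectrum
spectrum = np.abs(np.fft.rfft(wave))
peaks = sorted(int(i) for i in np.argsort(spectrum)[-3:])

# Over exactly one second, bin index equals frequency in Hz
assert peaks == freqs
```

All three constituents are present in, and recoverable from, the single complex wave.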
As you work to develop your perceptual input functions for Chinese, you
find that the input functions that you established for words in isolation
are not recognizing the same words in running speech. This is because in
running speech the speaker is simultaneously controlling other structures
in addition to syllables, morphemes, and words. You might have already noticed the same sort of thing even within words (as does the fluent listener). The same phoneme is pronounced differently in different
syllables, the same syllable is pronounced differently when stressed or
unstressed, and so on, and the difference in each case is predictable.
The predictability is somehow built into the input functions as
differences that don’t make any difference.
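One way to picture this building-in is as normalization before categorization: the input function undoes the predictable contextual shift, so the shifted and unshifted pronunciations yield the same perception. The contexts and offsets below are invented for illustration:

```python
# Sketch: predictable variation built into an input function as a
# difference that makes no difference. Hypothetical offsets only.

# Invented predictable shift: the same phoneme is realized 0.5 units
# higher when the syllable is stressed.
PREDICTABLE_SHIFT = {"stressed": 0.5, "unstressed": 0.0}

def input_function(acoustic_value, context):
    """Undo the predictable contextual shift, then categorize."""
    normalized = acoustic_value - PREDICTABLE_SHIFT[context]
    return round(normalized)

# Different pronunciations, same perceived phoneme category:
assert input_function(4.5, "stressed") == input_function(4.0, "unstressed")
```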
If the intonation contour for the sentence calls for voice pitch to go
up, then the entire range from high tone to low tone is constrained to a
narrower band of pitch frequencies, because the lower frequencies are not
available within that portion of the intonation contour. At a part of the
utterance where the intonation contour is high, low tone may possibly be
higher in frequency than high tone at a part of the utterance where the
intonation contour is low.
The first language structures that a baby learns are the intonation
contours. Charles Ferguson, at UCLA, used to recommend that language
learners spend a long time (a month or two, maybe) just listening, not
attempting to speak. Then another period babbling, just producing the
acoustic outlines of assertions, questions, side comments and
subordinated clauses within these, and so on. These intonation contours
almost always remain foreign in a foreign accent, no matter how
proficient an adult learner has become. Then begin putting syllables and
some of the simpler words into the babbling.
The acoustic signal qo = qi - d (or in our laboratory qo = qi) is the
result of controlling all of these layers of structure simultaneously.
For example, there are physically identifiable traces in the speech
signal of the boundaries between syllables. These may be identified quite
independently of the perceptual input functions f that have been
established (by whatever means) in the two participants A and B in our
laboratory experiment. This is a physical transform of perceptual
variables controlled by A and by B.
I’m starting to get an impression that you
don’t think there are any
nonverbal perceptions above approximately what I call the
“relationship”
level – that above that level, it’s all language.
This is a mistaken impression. It is just that verbal reports are not
trustworthy by themselves. Even verbal rejoinders are not sufficient
evidence of resistance to disturbance for purposes of identifying controlled
variables. For one thing, as we see demonstrated here from time to time,
control of the structures of language and of argumentation can be a quite
beguiling alternative to control of the variables to which they may
refer. For another, there is the little matter of awareness. In the old
distinction between precept and example, I’ll take example any day as
evidence of what someone is really controlling, as opposed to what they
tell you they are controlling, or even what they tell themselves
(convincingly!) that they are controlling.
How is this different from one person
presenting qualia like green to
another person and the other person assenting that this is the same
color
he experienced?
The word qualia is in contrast to quanta, and refers to the subjective
experience of one’s perceptions. Your subjective experience is as
inaccessible to me as is the Real “thing in itself,” the Ding an sich, in the environment, and for the very same reason.
On the other hand, we could set up an exchange between A and B, analogous
to the language demonstration outlined above, in which each had a set of
color chips. A picks up a green chip and shows it to B. B picks up a chip
and shows it to A. A, comparing the two chips, either nods yes that it is
the same color or no that it is not. (We need to exclude use of language
for obvious reasons.) If the color differentiation among the chips is
great – just the six primary and secondary colors, say – then they
would reach agreement. But this is because we have put the work of input
functions and output functions into the environment and done it for them.
This gives the appearance of repetition vs. imitation. If they have a
full range of color chips, then it depends on which chip they pick up,
and how close is close enough, and that kind of discrimination and
imitation will differ from one individual to another and from one
occasion to another, idiosyncratically. This is because none of this is
pre-set by lifelong practice according to pre-established social
convention.
In all of this, as for language, the qualia of the subjective experience have nothing to do with it.
There is no place where we can draw a line and
say, "above this level, we
simply know that we are having the same experience." The problem
extends to
all levels of experience.
For subjective experience, yes. The process outlined above establishes
that two individuals are controlling the same CVs in the same way,
whatever their subjective experience of their perceptions may be.
But back to what I see as the big point,
exemplified in the discussion of
meaning and words like “fairness.” Is there a level of
perception at which
we sense the presence of a principle without using any words?
I think yes, there is, and further that the stories we tell about those
perceptions are unreliable guides as to what they are.
Is it really true that when you hear the word
“fairness,” the only thing it
means to you is a relationship of equality?
Not at all. But in the examples given I saw no evidence of control at any
higher levels, and that is what I said and why I said it.
/Bruce
At 09:05 AM 10/28/2003 -0700, Bill Powers wrote: