[From: Bruce Nevin (Tue 920317 09:14:28)]
New Interleaf software is being installed today, so my workstation is
down and I have some time to catch up on some responses I've been
wanting to make.
···
-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-
Randy Beer (3/16/92 12:04) talks about cockroach escape routines that
appear to run open loop.
One perspective on emotion is that it invokes or is invoked together
with quick, "canned" responses to stereotyped situations. The
inaccuracy of the stereotyping (category level, I assume) is offset by
the survival value of immediate fight/flight response.
The stereotyped situations can be social, involving but not
necessarily limited to conspecifics. My daughters just went this
weekend with friends to a place called Wolf Hollow, where they have a
pack of wolves and teach the public about them. In teaching about
social hierarchy, the woman demonstrated that she was lowest in the
hierarchy: when she approached while they were feeding, every tail
would rise, with growls. She explained that this is what happens if a
wolf approaches before a higher-status wolf has finished eating, and the
lower-status wolf's tail goes down in submission as she/he retreats.
She said that her husband is #3 in the pack hierarchy, after the head
male and head female, but could not demonstrate this because he was
pumping a visiting animal ethologist. The threat to survival here is
less immediate than an attacking predator, but real. Given more
predictability of the other players, one can focus more attention
(higher gain) on perceptions whose control contributes in more obvious
ways to fitness and survival. I suspect this is why assessment and
communication of relationship is universally of high importance among
mammals, and probably lower orders as well.
-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-
Avery (3/4/92, 3/6/92) --
You speak of semantic representations for words, overlapping for pairs
like betray and betrayal. You refer to Jackendoff's 1990 "conceptual
structures." I have not read this, and am unlikely to given my
situation. Please sketch what this looks like for the kinds of examples
we have been discussing. Am I correct in assuming that it is the
semantic "conceptual structures" that get correlated somehow with
nonverbal perceptions in the perceptual hierarchy, rather than the words
for which these semantic representations are provided?
I speak of semantic representations as well. For me, the semantic
representations use only the words and word relationships that are
characteristic of ordinary language. For the sake of refining the
semantic representations of discourses, the method extends some word
relationships beyond the domains that are normal (conventional, natural)
for ordinary, everyday usage of language.
In effect, rather than
`expand' whole sentences as syntactic structures, one `expands'
lexical items into their semantic structures, plugging arguments
into appropriate positions
Since the reductions take place at the time of word entry, this is
essentially what I am doing. See more detail in my response to Bill
later in this post.
(3/5/92) --
On `Jimmy's betrayal': why do you think I or any other Chomskyan
needs a [+abstract] feature here? All I think I need is (a)
A recent query on the Linguist Digest used semantic features of this
sort, viz. Michael Newman <MNEHC@CUNYVM.bitnet>, posted 3/6/92 and
appearing in Linguist Digest 3.233 of 3/9/92:
The problem is related to the semantic distinction between referents
which are +specific (or +referential) or more informally real, existing
out there in some way, and those which are -specific, hypothetical or
generic. Thus there is a clear distinction between two readings of . . .
"Peter is going to marry the richest woman in town." So far so good. The
issue seems relatively clear when you are using example sentences, but
when you use real language, things do get messy sometimes. For
example, I would be reluctant to use the + or - specific label for the
following example from my corpus (which is based on TV talk shows, by
the way). The problematic antecedent-anaphor pair are in caps:

(1) I have become involved with a consumer advocacy group called s.h.a.m.e.
it stands for Stop Hospital and Medical Errors, and it is a group that
was formed by MALPRACTICE VICTIMS and THEIR families.

In this case there was indeed a concrete set of people who formed this
group, yet neither speaker nor hearer were in any position to specify
that set any further. In addition it is certainly conceivable that there
might be some dispute as to who exactly belongs in this set or not. So
my solution was to label this type as semi-specific (actually I use the
term 'semi-solid', reserving 'solid' for specific and 'non-solid' for
-specific, but I don't want to get into that here). Cases like (1) where
there are sets which none of the interlocutors are in any position to
identify are fairly common, and using this semi label, I have managed to
reduce the number of problematic tokens by more than half.
Words like abstract, specific, semi-specific, solid, etc. are words of
English and depend upon the background vernacular of ordinary English
for us to interpret and use them. Yet the claim in such systems of
semantic representation, so far as I am aware, is that they are a
universal vocabulary of a universal semantic metalanguage apart from
English (or some other language being described). This metalanguage has
its own syntax involving constructions such as trees and tables.
Generally not noticed is the fact that this semantic metalanguage in
turn requires a semantic interpretation, and that its seeming
explanatory power depends in a circular manner upon the background
vernacular of natural language. Recourse to nonverbal perceptions in
the perceptual hierarchy as the universe of "meanings" promises a way
out of this circularity. The question remains: if the work can be done
within language (with slight extensions to enhance regularity), why have
recourse to a metalanguage purportedly outside the familiar language of
words organized in sentences and texts?
-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-
Bill Powers (920310.1700) --
Here's my diagram of the operational definition of a controlled variable,
with those "low correlations" shown:------- low correlation ------
> >
------------ | ------------- --------
> > > > > > >
> BEHAVING |----->ACTION-->|ENVIRONMENTAL|----->|OBSERVED|
> SYSTEM | | TRANSFORMA- | |OUTCOME |
> > > TIONS | | |
------------ -------------- --------
> ^
--low correlation-- |
> >
> >
VARYING INDEPENDENT DISTURBANCE -->--
Is there not a high negative correlation between the action and the
disturbance, both measured in units of effect on the observed outcome?
And is it not that we infer from this, on the basis of coherence of
theoretical explanation and on the basis of our introspections, that the
behaving system has within itself (hidden from us) a perception of a
goal, the reference perception, and that the above high negative
correlation is a pretty exact mirror of a high positive correlation
between the goal and the outcome?
I suggested (3/9 midday--I didn't finish putting the date in!) that
correlation of an outcome with a purpose or goal smells unscientific
from a conventional perspective because the goal is a perception hidden
within the black box of the behaving system and the controlled outcome
is that subset of the possible sensory inputs to the behaving system
that are relevant to that goal.
I understand that we tease out which aspects of the outcome are
controlled and which are accidental byproducts by observing whether or
not the behaving system resists disturbance to particular perceptual
inputs (taken from the perspective of the behaving system). To
undertake this surprisingly challenging work, poking around in the dark
far from the seeming light under the lamppost, one must first have
grasped and at least suspended disbelief in the theory that both
motivates and guides it. Thus:
This invariant, the observable outcome, is not directly observable as a
first-order observation. It is a second-order observation, observable
only in context of certain expectations derived from hierarchical
perceptual control theory.
I was concerned with what might constitute stumbling blocks in getting
an adherent of the traditional perspective to leave off stumbling around
under that lamppost with its burnt-out bulb and come look where the key
is more likely to be found. I tried to identify "unfamiliar steps"
including:
isolating an "invariant" outcome in the
organism/environment system, selecting those aspects of the outcome
that are relevant from the point of view of the organism, and applying
the Test to verify that your guess as to what is being controlled is
correct.
Have I missed an essential point, as you suggest (920310.1700):
You guys are going all around the point here. I have a distinct feeling
that you're avoiding it.
In your reiteration of basics, you say:
Ordinarily, we would expect action and disturbance to correlate
with the outcome.
Do you mean the algebraic sum of action and disturbance correlates with
the outcome, while neither correlates well, taken separately?
When there is control, the variance of the observed outcome is
significantly less than the sum of the variances of the action and the
disturbance.
I assume we are looking at the summation of variances of action and
disturbance. Don't they cancel when there is control, but not
otherwise? Doesn't the algebraic sum of variances of action and
variances of disturbance approach identity with the variance of the
controlled outcome, more or less nearly depending upon gain? I don't
understand how the variance of the observed outcome can be significantly
less than this sum of the variances of disturbance and action. What am
I misunderstanding here?
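While waiting for your answer I tried to check my own arithmetic with a
toy simulation of a single integrating control loop (a sketch only: the
gain, step size, and random-walk disturbance below are arbitrary
assumptions, not taken from any of your models). It bears out both the
near-perfect negative action-disturbance correlation asked about above
and your statement that the variance of the outcome comes out far below
the sum of the variances of action and disturbance:

# Toy control-loop simulation (all parameter values are arbitrary assumptions).
import numpy as np

rng = np.random.default_rng(0)
steps = 10_000
gain = 50.0        # loop gain (assumed)
dt = 0.01          # integration step (assumed)
reference = 0.0    # the system's internal goal

# Slowly varying independent disturbance: a random walk.
disturbance = np.cumsum(rng.normal(0.0, 0.1, steps))

action = 0.0
actions = np.empty(steps)
outcomes = np.empty(steps)
for t in range(steps):
    outcomes[t] = action + disturbance[t]            # environment adds action and disturbance
    actions[t] = action
    action += gain * (reference - outcomes[t]) * dt  # integrating controller opposes the error

print("var(disturbance)           =", disturbance.var())
print("var(action)                =", actions.var())
print("var(outcome)               =", outcomes.var())   # far smaller than the sum above
print("corr(action, disturbance)  =", np.corrcoef(actions, disturbance)[0, 1])  # close to -1
print("corr(action, outcome)      =", np.corrcoef(actions, outcomes)[0, 1])     # low
print("corr(disturbance, outcome) =", np.corrcoef(disturbance, outcomes)[0, 1]) # low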
-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-
I should say that I have never been invested in S-R models and have
always deeply distrusted them, and that my field, linguistics, as I have
learned it has not depended on S-R models. Even famous forebear Leonard
Bloomfield, who is often caricatured in conventional histories of the
field as a behaviorist, did not embrace S-R psychology so much as say
(in 1933, abandoning Wundt for Watson) "this is as much as scientific
psychology can tell us about meaning, which is not much, so let's put it
over here in a black box labelled 'semantics' and get on with what
language can tell us in its formal structure, which is a lot." What I
have of an S-R perspective must be in unconscious preconceptions taken
in as part of the general American cultural package, but linguistics has
been pretty free of it. Really. Chomsky's famous, withering review of
Skinner's _Verbal Behavior_ was helpful to linguists who, as in
Bloomfield's day, felt some obligation to go along with what the
psychologists had to tell them, but it met with no resistance to speak
of within the field of linguistics, only among (some) psychologists.
-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-
Bill Powers (920311.0930) --
If you accomplish the aim of accounting for what all languages have in
common, and you show that it all comes down to characteristics of the
world of nonverbal perception plus fundamentals of physics and
chemistry in the environment, like the acoustics of the vocal tract--
having reached the state where linguistic universals are trivially
deduced from first principles, what would remain?Nothing. I think you're pulling back from reductionism, which isn't implied
No, I was trying to get at something else. There are some aspects of
language that appear to be universal. There are other aspects that are
defined by social convention. These conventional aspects of language
are not universal.
The means in the perceptual control hierarchy for learning, maintaining,
and orienting one's behavioral outputs to social conventions or norms
are presumably universal. Those universal means may impose some
universal characteristics or universal constraints on what is a possible
social convention of language. Some language norms may be widely shared
among different languages through the historical contingencies of
language change (related languages) and contact ("genetically" unrelated
languages, to use the standard metaphor). But the conventional aspects
of language are not universal.
And having reached the desired "state where linguistic universals
are trivially deduced from first principles," PCT still has to describe
or account for those remaining aspects of language that are socially
inherited and which do NOT follow from first principles. From the point
of view of any universal theory, they are arbitrary. We say horse
instead of Pferd or jahhom, we say went instead of goed, we say I ate
the fish instead of I the fish ate or ate I the fish, and all the rest,
because of historical contingencies that must ultimately be taken as
irreducibly arbitrary facts of social convention.
How can you tell when you have a satisfactory expansion?
It uses at each point of word entry (operator entry) only the relatively
small stock of words and word relations that are attested in many
perfectly ordinary sentences of the language. It uses at each such
point only reductions of word shape that are attested as minimal
sentence-differences between pairs of perfectly ordinary sentences, such
that the sentence-pairs informally meet my judgment of saying the same
thing and, as a check of validity, meet the stated formal criterion for
transformation (e.g. preservation of acceptability-difference over a
graded set of such sentence-pairs).
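As a toy illustration of that formal criterion (the sentences, the
transform, and the acceptability scores below are invented for the
purpose, not drawn from an actual graded set), a reduction passes the
check when every pairwise acceptability-difference keeps its direction
between the source forms and their transformed counterparts:

# Each entry: (source sentence, its transform, source score, transform score).
# Sentences and acceptability scores are invented for illustration.
GRADED_SET = [
    ("Jimmy betrayed Mary",         "Jimmy's betrayal of Mary",         1.0, 1.0),
    ("The rock betrayed Mary",      "The rock's betrayal of Mary",      0.4, 0.4),
    ("Sincerity betrayed the rock", "Sincerity's betrayal of the rock", 0.1, 0.1),
]

def ordering_preserved(graded_set) -> bool:
    """True if every pairwise acceptability difference keeps its sign
    when we move from the source forms to their transformed counterparts."""
    for i in range(len(graded_set)):
        for j in range(i + 1, len(graded_set)):
            src_diff = graded_set[i][2] - graded_set[j][2]
            trn_diff = graded_set[i][3] - graded_set[j][3]
            if src_diff * trn_diff < 0:   # the ordering reversed for this pair
                return False
    return True

print(ordering_preserved(GRADED_SET))   # True: acceptability-difference preserved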
In particular, I believe that operator grammar shows a simple structure
for language--a structure of word dependencies--that is universal and
that accords well with perceptual control,...
I agree that it does, although you will have to agree that it doesn't
completely fit natural language as it is spoken without introducing some
important invisible processes which are in principle unverifiable.
The departures from ordinary usage involve minimally extending the
domain of well-established reductions or other word relations, for the
sake of attaining a more regular semantic representation. By "minimal"
I mean the least amount necessary to attain that aim. By "regular"
I mean such that each difference of form correlates with one and only
one characteristic difference of meaning.
The discussion of "expansions" suggests that the semantic representation
is a string of words that results from all the reductions being
"expanded". "That which is a product of one discussing something which
is a result of one expanding something . . ." for the first four words
of the preceding sentence, for example. This is not the case. In the
typical case (see note * below on "typical"), a sentence occurs with
other sentences in a discourse (text). It is the nonlinear structure of
the discourse that constitutes the semantic representation. To bring
out that structure in its most regular form (see above), one changes the
form of most or all of the sentences so that they are all instances of
sequences of word classes that recur through the discourse. This may
involve undoing some but not all reductions, or undoing some and
replacing them with others. (For the unreduced semantic primitives
(words) are still present in the reduced form of a sentence, albeit in
the form of affixes or even in zero form.) Strings that appear to be
constructions of many words may turn out to be single "words" in the
sublanguage grammar used for discourses in a particular subject matter.
Operator grammar is not the end product. It is a tool for analysis of
discourse. A semantic representation for a discourse is an end product
of that analysis. The word-classes of a sublanguage grammar correlate
with high-level category perceptions for that subject-matter domain
(e.g. symptom, patient, drug, etc., in a sublanguage of pharmacology).
The "words" that are members of these word-classes may look like phrases
comprising a number of words for the grammar for some other domain.
("The beating of the heart" is the example I have given previously, a
member of the "symptom" class for pharmacology.)
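Here is a toy sketch of that rewriting (the lexicon, the operator
classes, and the example sentences are invented for illustration; only
symptom, patient, and drug come from the discussion above): each
sentence of a discourse is rewritten as a sequence of word-class
symbols, and a multiword string such as "the beating of the heart" is
entered as a single member of the symptom class:

# Toy sublanguage lexicon: multiword strings can be single "words" (class
# members) in the grammar of a particular subject-matter domain.
# The classes and entries below are invented for illustration.
SUBLANGUAGE_LEXICON = {
    "the beating of the heart": "SYMPTOM",
    "irregular pulse": "SYMPTOM",
    "the patient": "PATIENT",
    "digitalis": "DRUG",
    "slows": "V_EFFECT",     # operator class: DRUG V_EFFECT SYMPTOM
    "reported": "V_REPORT",  # operator class: PATIENT V_REPORT SYMPTOM
}

def as_class_sequence(sentence: str) -> list[str]:
    """Rewrite a sentence as a sequence of word-class symbols,
    matching the longest lexicon entry at each point."""
    words = sentence.lower().rstrip(".").split()
    classes, i = [], 0
    while i < len(words):
        for j in range(len(words), i, -1):           # longest match first
            chunk = " ".join(words[i:j])
            if chunk in SUBLANGUAGE_LEXICON:
                classes.append(SUBLANGUAGE_LEXICON[chunk])
                i = j
                break
        else:
            classes.append("?")                      # not in this sublanguage
            i += 1
    return classes

print(as_class_sequence("The patient reported irregular pulse."))
# ['PATIENT', 'V_REPORT', 'SYMPTOM']
print(as_class_sequence("Digitalis slows the beating of the heart."))
# ['DRUG', 'V_EFFECT', 'SYMPTOM']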
Does this help to alleviate some of your discomfort with the reductions
of operator grammar? They are tools for changing the form of sentences
to a more regular form relative to other sentences of a sublanguage
exemplified by a set of discourses in the same subject matter, and for
achieving a semantic representation of those discourses that correlates
1-1 with nonverbal perceptions of one conversant with that
subject-matter domain.
A person's knowledge of a domain can be expressed in a set of discourses
about the domain, or in a set of semantic representations (double
arrays) for those discourses, or in the union of those semantic
representations, or in the memory and imagination of nonverbal
perceptions corresponding to the words and word classes in these. (*
Note on "typical," above: It is in this context of knowledge that the
atypical case of a single sentence or sentence fragment is interpreted.
Hence the discussion of zeroed dictionary sentences, etc.)
So I ask both you and Avery: in your models of language structure, which
parts of the phenomenon of language are observed, and which are imagined in
order to make the analysis work?
The words and word relations (including reductions) are observed, not
imagined. Some extensions of word relations beyond their observed
domains could be said to be imagined, though the bases for the
extensions are well within attested variation for language. The
regularity achieved by these extensions, and which motivates them, is
observed, not imagined. The correlation of elements, relations, etc. in
regularized discourses with nonverbal perceptions is observed. Even
departures from regularity, and the development of different
regularities replacing previously established regularities (as knowledge
in a field changes over time) have their directly observable
interpretation in the world of nonverbal perception.
-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-
(Bill Powers 920311.1100) to Martin --
Suppose you want to produce a sequence like "now is the time for all good
men to come to the aid of their country." By the wasteful pandemonium
postulate, this is the province of just one sequence-recognizer and control
system.
There might be one sequence recognizer for a familiar quotation like
this one, as for a proverb, an idiom, or some other fixed expression.
But in general, there cannot be a single sequence recognizer for each
possible sentence in a language.
I think there must be a sequence detector for each operator word,
satisfied by any word or words that meet the argument requirement of the
operator, modulo reductions of those words, and within the bounds of a
sentence (or sentence fragment) defined by intonation or punctuation.
This intonation or punctuation is taken as a reduction of "I say" as
highest (last-entering) operator (or "I ask," etc.).
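A toy rendering of that suggestion (the argument-requirement table, the
classes N and O, and the example are my own invented illustration, not a
worked-out grammar): the detector for an operator word is satisfied when
each of its arguments is of the required class, with the sentence-final
period standing in for the reduced highest operator "I say":

# Toy sketch: one "sequence detector" per operator word, satisfied when the
# operator's argument requirement is met. Classes: N = a noun argument,
# O = an operator (a lower sentence) as argument. Lexicon and example invented.
ARGUMENT_REQUIREMENT = {
    "say":    ["N", "O"],   # "I say (that) S" -- reduced to the final period
    "betray": ["N", "N"],   # "Jimmy betrays Mary"
    "know":   ["N", "O"],   # "Max knows (that) S"
    "sleep":  ["N"],        # "Mary sleeps"
}
NOUNS = {"I", "Jimmy", "Mary", "Max"}

def arg_class(item) -> str:
    """An argument is either a noun (N) or a nested operator structure (O)."""
    return "N" if item in NOUNS else "O"

def detector_satisfied(tree) -> bool:
    """tree is (operator, arg1, arg2, ...). The operator's detector is
    satisfied if each argument has the required class and every argument
    that is itself an operator is satisfied in turn."""
    operator, *args = tree
    required = ARGUMENT_REQUIREMENT[operator]
    if len(args) != len(required):
        return False
    for arg, req in zip(args, required):
        if arg_class(arg) != req:
            return False
        if req == "O" and not detector_satisfied(arg):
            return False
    return True

# "Jimmy betrayed Mary."  -- the period is the reduced highest operator "I say".
print(detector_satisfied(("say", "I", ("betray", "Jimmy", "Mary"))))  # True
print(detector_satisfied(("betray", "Jimmy")))  # False: argument requirement unmet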
-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-
That's all the news that fits, for now. Don't know when I'll have an
opportunity like this again.
Be well,
Bruce