Language, models, misc

[From Bill Powers (920515.2000)]

Avery (& anyone else):

My mailing address is
73 Ridge Place, CR 510 (that's "County Road 510")
Durango, CO 81301


-------------------------------------------------------------------
David Goldstein (920515)

Ann Landers has said, and I agree, that people usually are wrong when they
conceal a patient's true condition from the patient. They're really
controlling for their own feelings, not those of the other person.

If the older person has gone through some extensive medical testing, it
must be clear that there was a reason for it. I don't think it would be too
difficult to find out if the person wants to know the bad news. Ask the
person. "Do you think anything really serious might be wrong with you?"
That ought to be enough of a disturbance to elicit denials if the person
doesn't want to hear about anything bad. A more likely reply would be "I
don't know -- nobody around here will tell me a damn thing."

Does anyone really think he or she could visit a beloved relative in a
hospital and conceal the knowledge that this relative is dying? Denial
exists on both sides of the question.
----------------------------------------------------------------------
Greg Williams (920515a) --

RE: Fine kettle of fish..

Good point about the adjunct to HPCT in the form of proposals about free
will. HPCT is about control, nothing else. It's not even about
consciousness or awareness, much less "volition" in the usual sense. Any
remarks I've made on such subjects have come from an attempt to understand
personal experience in ways that HPCT doesn't help with. Of course knowing
that the vehicle is a hierarchy of control does make more sense of some
experiences.

According to PCT, we are CYBERNETIC beings, behaving/acting in certain
ways because of the history of each of our control structures IN
INTERACTION WITH each of our environments (or more exactly, those
portions of our environments which affect our individual control
structures -- what Maturana terms our "niches").

We are also reorganizing beings. Reorganization is fundamentally a
cybernetic process, of course, but it isn't necessarily totally "random" or
"statistical" in nature. All that's meant by saying that a process is
random is that we know of no algorithm that will predict what it will do
next. A random reorganizing system is powerful because it doesn't take
anything for granted. But reorganization could be quite systematic in some
way that is too subtle, or too advanced, for us to find order in it.
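
To make that concrete, here is a toy sketch of E. coli-style random
reorganization: keep a parameter change when it reduces error, and "tumble"
to a new random direction when it doesn't. The function, step size, and the
example error measure are my own illustrative choices, nothing dictated by
the model.

import random

def reorganize(error_of, params, steps=1000, step_size=0.05):
    # Start off in a random direction in parameter space.
    direction = [random.uniform(-1, 1) for _ in params]
    best = error_of(params)
    for _ in range(steps):
        trial = [p + step_size * d for p, d in zip(params, direction)]
        err = error_of(trial)
        if err < best:            # keep moving in a direction that helps
            params, best = trial, err
        else:                     # "tumble": pick a new random direction
            direction = [random.uniform(-1, 1) for _ in params]
    return params, best

# Example: reorganizing two gain parameters toward a (hypothetical) optimum.
print(reorganize(lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2, [0.0, 0.0]))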

If determinism (of any sort) is an article of faith, then the idea that
there may be system behind our own reorganizations will cut no ice: one
will say, "Well, whatever that system may be, it must be deterministic and
result from interactions between organism and environment." But if one is
open-minded, it's not hard to entertain the possibility that reorganization
may have a systematic but not deterministic component. After all, until
you've found the deterministic links, all things remain possible -- unless
you've ruled them out in advance.
------------------------------------------------------------------
I've now heard ouches from two owners of linguistic toes.

Bruce Nevin (920514) --

I said

If it seems that there is structure in language, then a model that
explains this phenomenon should not contain that structure, but only
components that lead to phenomena which can be seen as having that
structure.

And you said

If it seems that there are words in language, then a model that explains
this phenomenon should not contain words, but only components that lead to
phenomena which can be seen as being words.

-- and so on to word dependencies, dependencies on dependencies, etc.

This isn't quite the direction of my thought, although I can't guarantee
that there is any direction to my thought. I think you're talking about
phenomena, observed order, facts. I'm talking about explaining those
phenomena by using a model.

In the case of words, I would wonder "What is a word? What is the system
doing when it produces and hears or sees those things we call words?" When
I examine words closely, they seem to be just familiar chunks of sound or
objects on paper. The meanings they seem to have (for those words that can
individually designate meanings) turn out to be ordinary perceptions. So
both words and their meanings are ordinary perceptions. The model,
therefore, would not explain words in terms of words, but in terms of the
way one perception can be used to indicate another, regardless of the
classification of the perception. The process of indication, evocation,
association, or whatever you want to call it, is what needs modeling,
because we already have a start on a model for how ordinary perceptions of
different levels depend on other perceptions hierarchically.

When it comes to apparent word dependencies, I don't deny that such
dependencies exist phenomenally. But I want to know why they exist and how
they CAN exist, in terms of a model. The dependencies themselves are just
observations and interpretations. Because words are just ordinary
perceptions, the observed dependencies between words must also exist
between perceptions of other kinds (although perhaps not exactly the same
dependencies). Dependency itself, therefore, is what needs modeling, not
any particular dependencies. When you understand dependency itself as a
type of controlled perception, probably as an example of a larger type such
as relationship or sequence, you will understand not only word dependencies
but all kinds of dependencies.

By remaining at the level of phenomena, we can only observe and record
apparent co-occurrences and dependencies of particular things. We can't
explain why they are related as they are. The method of modeling attempts
to go beneath that level to the level of underlying operations, looking for
operations that could produce both the phenomena and the observed
dependencies among them.

Avery Andrews (920515)

I said

A Chomskyite evidently proposes "because I can perceive a certain
structure of relationships in language, that structure produces
language."

I wouldn't really say this, but that something is producing the
structure, and knowing a reasonable amount about the structure ought to
help in identifying the something.

I'll give you the same comment I give to Bruce. The question from the
standpoint of modeling is not WHAT structure is perceived in language, but
WHAT PERCEIVES STRUCTURE ITSELF. Bruce (with Harris) proposes that the
structure we perceive consists of words and their dependencies etc. You
propose that the structure can be represented as a program, a network of
contingencies. But structures can be of all kinds. What operation of the
brain underlies these symptoms we call structures? Surely, the ability to
perceive structure, to alter one's actions so as to create and change
structures that are perceived, is a more fundamental aspect of organization
than is the ability to perceive and control for a particular example of
structure.

Actually, in the case of grammatical generalizations, I suspect that there
really are things corresponding to them (i.e., for the generalizations
that lead people to postulate noun-phrases, there is a noun-phrase
detector).

You can think of a noun-phrase detector in several ways. The conventional
way is to say that there really are such things as noun-phrases in some
objective world, and that all we need to do is develop detectors that can
respond to them. The CT way is to say that because we have developed
perceptual systems that respond to things in terms of noun-phrases, we can
create and control the occurrence of noun-phrases.

So in the CT model, a noun phrase exists because it is perceived and
controlled; in the same collection of words, something else might have been
perceived and controlled instead. But the CT model wouldn't specify noun-
phrases: it would look for the kind of operation that underlies the
perception of noun phrases; for example, the ability to recognize and
reproduce sequences (of any kinds of perceptions).
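
Just as an illustration of the kind of operation I mean (the word
categories and the pattern are arbitrary choices for the example, not a
claim about the brain), a sequence-recognizing perceptual function might
look like this:

CATEGORY = {"the": "DET", "a": "DET", "big": "ADJ", "red": "ADJ",
            "dog": "NOUN", "ball": "NOUN", "ran": "VERB"}

def perceives_noun_phrase(words):
    """True if the word sequence is perceived as DET, any ADJs, then NOUN."""
    cats = [CATEGORY.get(w, "?") for w in words]
    if len(cats) < 2 or cats[0] != "DET" or cats[-1] != "NOUN":
        return False
    return all(c == "ADJ" for c in cats[1:-1])

print(perceives_noun_phrase(["the", "big", "red", "ball"]))   # True
print(perceives_noun_phrase(["ran", "the", "dog"]))           # False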

Also I suggest again that if you can perceive a structure, it is that
perception that is controlled by VARYING utterances. I know you're
skeptical about the possibility of a closed-loop organization here, but of
course if you don't try to invent one that would work, it will remain only
a possibility. If, on the other hand, you could find one, that might be
rather an important event in linguistics.

I suspect this because grammar does not seem to be real-world
interactive in the manner that most things studied in psychology are, so
the structures in grammar can't be coming from simple interactions with a
complex environment.

But grammar is real-world interactive during the time it develops. No
skill, after it becomes habitual, is real-world interactive to the same
extent it is while it's being learned. We reduce principles and reasoning
to slogans at the drop of a hat. We reduce slogans to slurred and run-
together events even more readily: ISWEARTOTELLLTHETHRUT
HANDNOTHINGBUTTHETRUTHSOHELPMEGOD. By the time we're adults, "grammar" is
just how we say things; most people say the same things the same way every
time, without considering whether it's grammatical or not. When you lose
the real-world interactiveness of grammar, you've also lost grammar. If you
hear "Every man for itself," the "itself" isn't grammatically wrong, it's
wrong because you said the memorized phrase wrong; you made the wrong sound
at the end. We no longer think of "Every man for himself" as a sentence in
which the words have individual meaning, sexist or otherwise. It's just
something you emit under certain circumstances. It's like a shaped grunt.

This is probably a good answer to your objection about treating rules as
controlled perceptions. As long as there is a possibility of making
mistakes, the rules are perceived and the utterances are adjusted until the
right rules are perceived. But once a person has settled on utterances that
will reliably fit the rules, the utterances are reduced to phrases and no
longer are treated as having internal structure. Then those utterances are
no longer rule-driven. They're just the way you talk.

This would apply to the Harris approach, too.

-----------------------------------------------------------------------
Cindy Cochrane (920515) --

(Bruce)
more than one form of behavioral outputs can accomplish the same
controlled perception

(you)

in speech and some other social/psychological behaviors. This is indeed
one of the ways language use differs from many of the CSG behaviors which
I've seen demonstrated.

A basic concept of CT is that you VARY your actions in order to keep
producing the SAME perceived result. You have to do this because the
environment keeps changing (and your own actions keep changing) and
disturbing the controlled result.

For practical reasons, the disturbances we've been using in experiments are
such as to require changing the AMOUNT or DIRECTION of action but not the
KIND. So if a disturbance alters the cursor position in a tracking
experiment, the handle has to be moved to a different place to keep the
cursor in the same place, but you don't have to drop the handle and pick up
a microphone or a hammer.
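
A toy version of this situation, with my own arbitrary gain, time step, and
disturbance, shows the idea: the handle must keep changing just to keep the
perceived cursor the same.

import math

reference = 0.0                      # where the cursor should stay
handle = 0.0                         # the action
gain, dt = 50.0, 0.01

for step in range(500):
    t = step * dt
    disturbance = 10.0 * math.sin(2 * math.pi * 0.5 * t)
    cursor = handle + disturbance    # environment: handle and disturbance add
    error = reference - cursor       # compare perception with reference
    handle += gain * error * dt      # output integrates the error
    if step % 100 == 0:
        print(f"t={t:4.2f}  cursor={cursor:7.3f}  handle={handle:8.3f}")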

In general, higher systems can control their perceptions not only by
varying the amount of reference signal sent to a given set of lower
systems, but by changing WHICH LOWER SYSTEMS are provided with reference
signals by that higher system. This is changing the kind of behavioral
action rather than just the amount. This is tricky to implement in a model
when you don't already have a lot of control systems available in the
model, which is about where we are.
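
A crude sketch of what such switching might look like (the two lower
systems, "hand" and "voice", and the routing rule are purely illustrative
assumptions of mine):

def lower_system(perception, reference, gain=0.5):
    """One step of a lower-level control system; returns its output."""
    return gain * (reference - perception)

def higher_system(goal, world):
    # The higher system routes a reference signal to whichever lower
    # system can currently affect the controlled perception.
    if world["hand_free"]:
        return {"hand": goal, "voice": None}      # act by reaching
    else:
        return {"hand": None, "voice": goal}      # act by asking instead

world = {"hand_free": False, "object_position": 0.0}
refs = higher_system(goal=1.0, world=world)
for name, ref in refs.items():
    if ref is not None:
        out = lower_system(world["object_position"], ref)
        print(f"{name} system active, output = {out}")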

Another way that it differs is that at some levels of language use, such
as decisions about wording, the target perception itself is neither
particularly quantifiable (e.g., I want my hearers/readers to think I am
witty) nor is it simple (e.g., I want my hearers/readers to think a) I
am witty, b) I know what I am talking about, c) see the relevance of what I
am saying to their own perceived needs, d) think my wording and grammar
are acceptable, e) see me as a colleague, or whatever).

I disagree along two dimensions. First, "witty" is certainly quantifiable
by the person doing the controlling and perceiving. Some sayings are
wittier than others. Some sayings intended to be witty leave you with a red
face because they fall so flat. Wittiness is a perception that varies along
a scale from zero to hilarious. It's your call as to where the wittiness
falls on this scale, but that's because it's your perception under your
control (subject to disturbance).

Second, the HPCT model is anything but "simple." You can have a given act
serving many different goals at once; a given goal being satisfied by a
changing mix of different lower-level actions. You can have many different
goals being served by many different actions, no one of which is under the
exclusive jurisdiction of any one higher-level system. The HPCT model is
just about as complex as real people are. Or it's intended to be so.

Somehow, as we speak or write -- and perhaps when we read and listen --
we edit for the differences between our perceived goals and our perceived
(and constructed) meaning associated with the words we are perceiving.

(I edited "Somewho", illustrating that we also edit as we read). I agree
with this concept: we're varying our actions to reduce the difference
between the words we emit and the meanings we intend them to have until the
difference is as small as we can make it. That's the basic CT picture of
language production in the domain of meaning.

when I try to think of all of the variables that I'd have to quantify
in order to test the model at a level of granularity similar to the
little man (or the baby) I lose control. :->

                                          ^^
Me too. But Rome wasn't built in a day. OO
                                        (----)

As to your suggestion about basic research, I applaud. That's exactly
what's needed. The problem, however, is to keep the standards high enough
to get what I would admit to be data. Traditional statistical analysis is
based on very low standards of acceptance and extremely noisy data. I would
rather see less data and higher standards: say, correlations above 0.95 and
p < 0.000001. This should reduce the literature to a readable size and make
its contents worth reading.

I don't think you can get data like that without using a good model. The
model implied by most statistical analyses is that behavior is a linear
function of inputs: y = ax + b. That model is so wrong that the data are
very bad.
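
Here is a rough illustration (a toy simulation of mine, not data) of why
that linear model fails for a control system: the output correlates almost
perfectly with the disturbance it opposes, while a regression of output on
the "stimulus" -- the controlled cursor -- finds almost nothing to work with.

import math, random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

random.seed(1)
handle, d = 0.0, 0.0
cursors, handles, disturbances = [], [], []
for _ in range(5000):
    d = 0.99 * d + random.gauss(0, 0.3)      # slowly drifting disturbance
    cursor = handle + d                      # both influences add at the cursor
    cursors.append(cursor); handles.append(handle); disturbances.append(d)
    handle += 0.5 * (0.0 - cursor)           # control: keep cursor near zero

print("corr(handle, disturbance) =", round(pearson(handles, disturbances), 3))
print("corr(handle, cursor)      =", round(pearson(handles, cursors), 3))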
-------------------------------------------------------------------
Penni Sibun (920512) --

RE: looking for ways people correct errors.

the way to study this is to get lotsa data and look for patterns. you
can't claim to know what's going on in any particular case, but out of
the regularities you can develop a set of hypotheses about types of
errors and conditions under which they occur.

I don't like this approach unless it's done under the right model and with
great awareness of the pitfalls of mass studies. If you find that EVERYONE
corrects a certain kind of error, that's one thing. But if you find that 80
percent correct it and 20 percent don't, you don't have a scientific fact,
because you can't explain why the 20 percent don't. Since most data obtained
in statistical studies leave large amounts of behavior unexplained, the
custom has grown up of ignoring the counterexamples and saying "people
correct this kind of error" when what you mean is "some people correct this
kind of error and some don't." This custom is the reason for the lack of
progress generally in the behavioral sciences. Hypotheses are accepted when
they ought to be rejected. The variance is blamed on the innate cussedness
of organisms rather than on the use of an inadequate model.

also, because language use just isn't a thing-in-isolation; it's a
social phenomenon, so you're missing the same things the gb'ers miss
if you focus on one person.

I don't like this either (I'm sure I would like you if we met, so don't
take this personally). Social phenomena have to have a place to live, and
where they live is in individuals. If you understand how each individual
deals with the surrounding world (including the other people in it), you
can deduce the social phenomena. If you study only mass phenomena, you can
say something about similar-sized masses of people, but nothing useful
about any particular person. See my article in Wayne Hershberger's
_Volitional Action_, in which I show that a mass measure of the
characteristics of a population of 4000 simulated people produces a
relationship between two variables that has the opposite slope to its true form
in EVERY INDIVIDUAL.
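
The effect is easy to reproduce in miniature (this is not the simulation
from that chapter, just an illustration of the same aggregation trap): give
every simulated person a negative x-y slope, let baselines differ between
persons, and the pooled slope comes out positive.

import random

random.seed(0)
xs, ys = [], []
for person in range(4000):
    base = random.uniform(0, 10)             # individual differences
    for _ in range(20):                      # observations within one person
        x = base + random.uniform(0, 1)
        y = 2.0 * base - 0.5 * (x - base)    # within-person slope is -0.5
        xs.append(x); ys.append(y)

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
         / sum((a - mx) ** 2 for a in xs))
print("pooled slope across 4000 simulated people:", round(slope, 2))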

Working with individuals takes a lot more time and trouble than doing
surveys of thousands of people in parallel. But if you test a model over
and over against individual behavior, you get a picture of the distribution
of characteristics that doesn't generalize a slight preponderance to a
universal. You don't throw out any exceptions to the model: you change the
model. What you end up with is a hell of a lot more impressive than
anything you can get out of any kind of mass measure.

---------------------------------------------------------------------

Best to all,

Bill P.

[Avery Andrews (920520.2000)]

(Bill Powers 920515.2000)

Actually, in the case of grammatical generalizations, I suspect that there
really are things corresponding to them (i.e., for the generalizations
that lead people to postulate noun-phrases, there is a noun-phrase
detector).

You can think of a noun-phrase detector in several ways. The conventional
way is to say that there really are such things as noun-phrases in some
objective world, and that all we need to do is develop detectors that can
respond to them. The CT way is to say that because we have developed
perceptual systems that respond to things in terms of noun-phrases, we can
create and control the occurrence of noun-phrases.

I don't think that current people would say anything different to this.
In Situation Semantics, for example, noun-phrases would exist *because*
other people respond to them in regular ways.

Surely the ability to
perceive structure, to alter one's actions so as to create and change
structures that are perceived, is a more fundamental aspect of organization
than is the ability to perceive and control for a particular example of
structure.

Indeed, but we are still amazingly ignorant about the general nature of the
funny kinds of structure that there are out there in languages, like a
language called Kayardild in Northern Australia where the words in some
noun-phrases can take up to four levels of morphological case-marking,
expressing, among other things, the tense and modality of the clause,
and certain aspects of subordination. I go for the descriptive schemes
(commonly though probably wrongly called `theories') that I think are
most likely to give *sharp* descriptions of these things. Sharp means
that I can describe the scheme to a programmer who can go away and
implement it without doing any serious linguistic theorizing (e.g., it
is well-defined in fact, even if it doesn't bristle with horrible-looking
symbols) & students can then use the resulting system to implement their
answers to descriptive problems (this means that it expresses the basic
organization of the data in a clear and straightforward manner).

The involvement with descriptive schemes isn't because I don't believe
in real modelling, but because I wouldn't have a clue as to how to
actually do it in this area in such a way as to do justice to the
aspects of grammar that I like to think about. And a model that
doesn't explain how the various weirdo grammatical structures I'm
interested in can arise will just be wrong, so the results of the
descriptive scheme project ought to be useful as constraints on
model building, when somebody is clever enough to figure out how to
actually do it.

Grammar-learning & interaction: children say lots of stuff they've
never said before as fluently as adults say things, & I doubt we really
lose these skills. I've never denied that grammar-learning is
interactional, & suggested various formulations for what the system
driving acquisition might be (something like: perceive people as making sense;
e.g., when people are not perceived as making sense, the grammatical
system starts getting Reorganized). Knowing what the grammatical systems
of a wide variety of languages are like should help in figuring out
what is being Reorganized, & how.

Avery.Andrews@anu.edu.au
(currently andrews@csli.stanford.edu)