Generative models vs generative grammar

[From Bill Powers (921227.0700)]

Trying to squeeze another post in before the system goes down for
three days.

Eileen Prince (921227) --

Omigod, another usage for "generative" that I had overlooked.

In theory, pure generative approaches are subject to the checks
recently outlined. However, the latest version of Chomsky's
theory with which I am familiar allows this to be gotten
around: the generative system is allowed to overgenerate based
on universal rather than language-specific rules.

Chomsky's generative grammar is not a generative model in the
sense being used on this net. It does not produce a behavior, out
of its own rules, that can be matched moment by moment against
the behavior of a real human speaker. As far as I know, Chomsky's
model doesn't produce any behavior at all: it produces an
analysis, a structure, to which speech is supposed to conform, or
of which specific instances of speech are supposed to be valid
examples. People like Avery Andrews who are madly writing
programs that use the principles of generative grammar are trying
to supply generative (my meaning) models in the form of computer
programs that will in fact generate realistic human utterances.

In fact, Chomsky's system is really a generalization, an
abstraction. This is one of the difficulties I see in it (dimly,
not being a linguist). The forms that this grammar produces are
not specific instances of speech, but classes to which instances
of speech are supposed to belong (like noun phrases). How does a
specification for a class produce the specific muscle tensions
that will produce an actual utterance belonging to that class?
The fact that there is an infinity of utterances
that would qualify means that this can't be a pure top-down
system for generating speech -- there simply isn't enough
information in the name of a class to pin down the details to a
unique utterance.
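
To see the one-to-many problem in miniature, here is a toy
sketch in Python (rules of my own invention, not anybody's
serious grammar of English): a handful of rewrite rules filed
under the single label "NP" fan out into indefinitely many
distinct strings, so the label alone cannot select a unique
output.

    import random

    # Toy phrase-structure rules -- purely illustrative, not any
    # linguist's actual grammar.
    RULES = {
        "NP":   [["Det", "N"], ["Det", "AdjP", "N"], ["NP", "PP"]],
        "AdjP": [["Adj"], ["Adj", "AdjP"]],
        "PP":   [["P", "NP"]],
        "Det":  [["the"], ["a"]],
        "Adj":  [["old"], ["red"], ["small"]],
        "N":    [["dog"], ["house"], ["idea"]],
        "P":    [["near"], ["inside"]],
    }

    def expand(symbol, depth=0):
        # Expand one category label into one of its many instances.
        if symbol not in RULES:
            return [symbol]                  # terminal word
        options = RULES[symbol]
        # Past a modest depth, take the first (non-recursive)
        # option so the illustration always terminates.
        choice = options[0] if depth > 8 else random.choice(options)
        return [w for part in choice for w in expand(part, depth + 1)]

    # Ten different strings, every one a legitimate "NP": the label
    # by itself is one-to-many and pins down no unique utterance.
    for _ in range(10):
        print(" ".join(expand("NP")))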

This WILL work, however, if the forms of which Chomsky speaks are
considered to be perceptual patterns, not output patterns. Now,
in order to produce a sense that a noun phrase is being uttered,
all that is necessary is to produce any specific utterance that
is perceived as belonging to this class. In other words,
generative grammar can actually work only if for "generative" we
substitute "perceptual." A person does not generate general
output forms that are elaborated into more specific instances.
It's the other way around: the person generates specific
instances such as to create a perception of a more general form.
In short, an HPCT model of this concept of grammar will work, but
a top-down model will not.

To make a generative model of this process (our meaning), it
would be necessary to design the actual control systems with
the perceptual functions implied, and run them to produce real
utterances.
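
As the crudest possible sketch of what I mean (my own
simplification -- trial-and-error variation rather than a proper
continuous control loop), the "noun phrase" form below lives
entirely in the perceptual function; the output side just keeps
varying specific utterances until the perception matches the
reference.

    import random

    VOCAB = ["the", "a", "old", "red", "dog", "house", "idea", "near"]

    def perceive_noun_phrase(words):
        # Crude stand-in for the perceptual function: report 1.0 if
        # the string "looks like" a noun phrase, 0.0 otherwise.
        return 1.0 if (len(words) >= 2
                       and words[0] in ("the", "a")
                       and words[-1] in ("dog", "house", "idea")) else 0.0

    def control_for_np(reference=1.0, max_tries=10000):
        # Vary the specific output until the PERCEPTION matches the
        # reference. The general form ("noun phrase") lives in the
        # perceptual function, not in the output: any utterance
        # perceived as an NP will do.
        utterance = [random.choice(VOCAB) for _ in range(random.randint(2, 4))]
        for _ in range(max_tries):
            error = reference - perceive_noun_phrase(utterance)
            if error == 0:
                return utterance             # perception matches reference
            # Crude "output function": perturb the utterance and retry.
            utterance = [random.choice(VOCAB) for _ in range(random.randint(2, 4))]
        return None

    result = control_for_np()
    print(" ".join(result) if result else "(no NP found)")

Any of the many utterances that satisfy the perceptual function
will end the search, which is the point: the class is controlled
as a perception, not issued as an output.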

-----------------------------------------------------
Your comments on defending theories are right on the mark.
-------------------------------------------------------
Best,

Bill P.