encodingism

[Avery Andrews 950229]
{from Joel Judd 950224.1100 CST}

> Also, if learning and development can't create
>foundational encodings, evolution can't either, because evolution can't
>solve the infinite regress.

Not really. Learning is much harder than evolution. Evolution can work
even if only a small percentage of the experiments survive (around one
in a hundred for frogs, maybe half or so for humans), while in the
case of learning, every single experiment has to be survived by the
learner, who also has to become competent enough to reproduce before
some mishap kills it. Evolution, on the other hand, has no
deadlines (just big piles of dead organisms). I think this is a good
argument for very strong innate constraints on what is learned. Finding
useful foundational codings is very hard, so young organisms surely
don't find them.

What's actually wrong with encodingism is that it assumes that the
constraints on what can be learned can be usefully
described as a language (Transformational Grammar, Franz Lisp,
or whatever), which is completely unfounded. Learning is at bottom
several different kinds of growth: there is no reason at all to
believe that the possible outcomes of this growth can be usefully
described as the set of formulas in some formal specification language.

Admittedly, this kind of technique does work surprisingly well
for certain aspects of linguistic structure (there are notations in
which you can write tiny grammars that work, sort of), but it comes
nowhere near being a serious theory of language learning - there
are always gobs of actual languages excluded by, and ridiculous ones
included in, what the notational framework allows. These frameworks
don't work anywhere near as well as the concepts of Ptolemaic
astronomy did, for example.

Avery.Andrews@anu.edu.au

[From: Bruce Nevin (Thu 92079 12:21:32)]

(Joel Judd 920630) --

Martin (920729 & 0630) and Bill (920629 & 0701),

>Knowledge can certainly be represented, but interactions do not require
>the kind of representation that implies regress.

Right; *transmission* requires the kind of representation that involves
regress. Transmitting encodings requires that both receiver and sender
understand what the encodings represent.

I regret that I don't have the available bandwidth just now to show why
"encodingism" and "representationalism" don't work. As Martin said, we
covered this ground in some depth a year or so ago.

Briefly: distinctions between elements of language must be established
beforehand as learned social conventions in sender and receiver. Both
must have had experiences in which perceptions of words are associated
with at least some nonverbal perceptions on which they (the language
users) can successfully presume agreement--they might not hit on the
"same" associations immediately, but they can reach them through the
negotiation toward agreement that constitutes much of communication.
But sentences do not "represent" nonverbal experiences. They
incorporate or, better, constitute linguistic information on the basis
of which people can represent nonverbal experiences to themselves--if
memory and imagination are taken to be "representations." From a PCT
perspective, I believe that too is an inappropriate use of the word
"represent."

  Bruce
  bn@bbn.com