[Avery Andrews 950229]
{from Joel Judd 950224.1100 CST}
> Also, if learning and development can't create
>foundational encodings, evolution can't either, because evolution can't
>solve the infinite regress.
Not really. Learning is much harder than evolution. Evolution can
work even if only a small percentage of its experiments survive
(around one in a hundred for frogs, maybe half or so for humans),
whereas in the case of learning every single experiment has to be
survived by the learner, who also has to become competent enough to
reproduce before probably dying of some mishap anyway. Evolution, on
the other hand, has no deadlines (just big piles of dead organisms).
I think this is a good argument for very strong innate constraints on
what is learned. Finding useful foundational encodings is very hard,
so young organisms surely don't find them on their own.
What's actually wrong with encodingism is that it assumes that the
constraints on what can be learned can be usefully described as a
language (Transformational Grammar, Franz Lisp, or whatever), an
assumption that is completely unfounded. Learning is at bottom
several different kinds of growth: there is no reason at all to
believe that the possible outcomes of this growth can be usefully
described as the set of formulas in some formal specification language.
Amazingly, this kind of technique does seem to work fairly well for
certain aspects of linguistic structure (there are notations in which
you can write tiny grammars that work, sort of), but it comes nowhere
near being a serious theory of language learning: there are always
gobs of actual languages excluded by, and ridiculous ones included
in, what the notational framework allows. These frameworks don't work
anywhere near as well as the concepts of Ptolemaic astronomy did, for
example.
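Just to make concrete what "a notation in which you can write tiny
grammars" means, here is a sketch of my own, in Python rather than
any of the linguists' notations: a toy phrase-structure grammar
written as a dictionary, plus a naive random generator. The grammar
and vocabulary are invented purely for illustration; note that the
same notation will just as happily encode ridiculous languages, which
is part of the point above.

import random

# Toy phrase-structure grammar: each nonterminal maps to a list of
# possible expansions (invented for illustration, not a real grammar
# of any language).
TOY_GRAMMAR = {
    "S":    [["NP", "VP"]],
    "NP":   [["Det", "N"], ["Name"]],
    "VP":   [["V", "NP"], ["V"]],
    "Det":  [["the"], ["a"]],
    "N":    [["frog"], ["grammar"]],
    "V":    [["sees"], ["croaks"]],
    "Name": [["Avery"]],
}

def generate(symbol="S"):
    """Expand a symbol by recursively picking rules until only words
    remain."""
    if symbol not in TOY_GRAMMAR:   # a terminal word
        return [symbol]
    words = []
    for part in random.choice(TOY_GRAMMAR[symbol]):
        words.extend(generate(part))
    return words

if __name__ == "__main__":
    for _ in range(3):
        print(" ".join(generate()))   # e.g. "the frog sees Avery"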
Avery.Andrews@anu.edu.au