[From Rick Marken (920604 14:00)]
Eric Harnden (920604) says:
>but i feel compelled by my own combination of interests

(that's what it's like, being a control system)

>to point out that people who engage in AI are not fools.
Nobody said they were. In fact, I've heard nothing but praise for
their intellectual skills.
>it is well known that rule-based systems can model (or simulate - the
>usage doesn't affect the issue) only a limited subset of what might be
>termed intelligent behaviors.
I don't think the PCT position is that AI-type models claim to do more
than they really can. Our position is simply that these models are not
usually organized as closed-loop control systems. No problem there, as
long as your goal is not the imitation of the behavior of organisms.
>they do, however, provide useful tools for information processing, and
>in fact are alive and well as pre-processors, frontends, output
>analyzers, and other adjuncts to advanced work in neural networks.
No doubt. I don't think PCTers are against software engineering. But we
believe that if you want to engineer systems that mimic life processes
(like intelligent behavior), you will have to take into account some of the
"facts of life" -- and one is that the behaviors being imitated are controlled
consequences of action. If the term "intelligent behavior" does not refer to
controlled consequences of action, then current AI models are just fine, both
practically and theoretically.
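To make "controlled consequence of action" concrete, here is a minimal
sketch of a single closed-loop control system (my own toy Python example
with invented constants, not anything from an existing PCT simulation).
The action keeps a perceived variable near a reference value even though
an unpredictable disturbance is also acting on that variable -- it is the
perception, not the action, that stays regular.

import random

reference = 10.0    # intended value of the perceived variable
output = 0.0        # the system's action
gain = 5.0
slowing = 0.1       # the output changes gradually, which keeps the loop stable
disturbance = 0.0

for t in range(1000):
    disturbance += random.uniform(-0.1, 0.1)   # slowly drifting environmental push
    perception = output + disturbance          # perceived consequence of action + disturbance
    error = reference - perception             # compare perception to reference
    output += slowing * gain * error           # act so as to reduce the error

# perception ends up near 10.0; output ends up mirroring the disturbance
print(round(perception, 2), round(output, 2))

The point of the sketch is only the loop structure: the perception is
compared to a reference, and the output that affects that perception is
driven by the error.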
Andy Papanicolaou and Tom Bourbon [920604 13:50] say:
>The hope behind the selection of this specific topic (acquisition
>of, or learning of the skill or the habit of producing correctly
>and reliably new speech sounds) was to elicit specific comments
>that would help us construct a CT model of the process.
This could be a very interesting exercise, especially if you have the
tools to do it. I think the first part of this modelling would involve
building a vocal tract model -- one that produces an acoustic output,
as the actual vocal tract does. There has been a lot of work in this
area, so it shouldn't be too hard (theoretically, anyway) to build.
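Just as a placeholder for that part of the job, here is a toy stand-in
(mine, not anything from the articulatory-synthesis literature -- a real
vocal tract model involves tube acoustics, not these invented
coefficients) that maps a few articulator variables onto two
formant-like frequencies:

def vocal_tract(jaw_opening, tongue_height, tongue_frontness, lip_rounding):
    """Return rough (F1, F2) formant values in Hz for articulator settings
    given in the range 0.0 - 1.0.  All coefficients are invented."""
    f1 = 250.0 + 600.0 * jaw_opening - 200.0 * tongue_height
    f2 = 800.0 + 1400.0 * tongue_frontness - 400.0 * lip_rounding
    return f1, f2

# e.g. a closed-jaw, high, front, unrounded posture gives something /i/-ish:
print(vocal_tract(0.1, 0.9, 0.9, 0.0))   # roughly (130.0, 2060.0)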
The next part is to decide which variables of this vocal tract model
are to be controlled -- or, at least, perceived, so that they can
potentially be controlled. I think you will want to control perceived
lip positions, tongue positions, larynx opening, etc. -- there are many
possible vocal tract variables to perceive and control. Also, of course,
you will want to perceive and control aspects of the acoustic results of
vocal variations -- things like energy level in the formants, relative
energy across formants, spectral width, etc. Again, there are a lot of
possibilities, but it seems to me that the acoustic linguists know a lot
about which acoustic and vocal variables are the important ones in
speech. All you have to do is have a control system controlling each of
these variables and determine how these systems should be hierarchically
arranged (which determines whose output determines whose reference).
This, of course, is the BIG modelling problem. The highest level references
in the model specify what, ultimately, the model is to DO -- perhaps
generate phonemes; so the magnitude of each of the highest order references
determines the degree to which a phoneme is to be present in perception.
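Here is a sketch of what such a hierarchical arrangement might look like,
continuing the toy vocal tract above (again, the variables, gains and
targets are all invented for illustration): a higher-level system
perceives and controls the formant values by adjusting the references of
lower-level systems that perceive and control articulator positions.

import random

def vocal_tract(jaw, height, front, lips):          # the toy plant sketched earlier
    return 250 + 600*jaw - 200*height, 800 + 1400*front - 400*lips

# Level-2 references: the acoustic result the model is "trying to hear"
# (made-up numbers, vaguely /i/-like).
F1_REF, F2_REF = 300.0, 2100.0

# Level-1 state: actual articulator positions and their adjustable references.
positions = {"jaw": 0.5, "height": 0.5, "front": 0.5, "lips": 0.5}
refs = dict(positions)

for t in range(2000):
    # Level 2: perceive the formants and adjust the level-1 references.
    f1, f2 = vocal_tract(positions["jaw"], positions["height"],
                         positions["front"], positions["lips"])
    f1_err, f2_err = F1_REF - f1, F2_REF - f2
    refs["jaw"]    += 0.00002 * f1_err    # more jaw opening raises F1
    refs["height"] -= 0.00002 * f1_err    # a higher tongue lowers F1
    refs["front"]  += 0.00001 * f2_err    # a fronter tongue raises F2
    refs["lips"]   -= 0.00001 * f2_err    # lip rounding lowers F2

    # Level 1: each articulator system controls its own perceived position.
    for name in positions:
        disturbance = random.uniform(-0.01, 0.01)          # muscle noise, loads, etc.
        perceived = positions[name] + disturbance
        positions[name] += 0.2 * (refs[name] - perceived)  # move toward the reference

print(round(f1), round(f2))   # ends up near the F1_REF / F2_REF targets

The design point is that the higher level never commands articulator
movements directly; it only tells the lower systems what positions to
perceive, and they deal with the local disturbances.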
Have fun.
Oded Maler (920604) says:
The "Reinforcement" in the machine learning context is used in a very
technical sense without any metaphysical assumption about what "behavior"
is and all the rest of the S-R psychology you dislike.
Actually, I'll let Bill P. deal with this one. I've got to get outta here.
Suffice it to say that it is precisely reinforcement in the "technical sense"
that makes no sense as a model of learning or purposive behavior in
autonomous agents.
Best regards
Rick
···
**************************************************************
Richard S. Marken                    USMail: 10459 Holman Ave
The Aerospace Corporation            Los Angeles, CA 90024
E-mail: marken@aero.org
(310) 336-6214 (day)
(310) 474-0313 (evening)